eegdash.dataset package#
Public API for dataset helpers and dynamically generated datasets.
- class eegdash.dataset.DS001785(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds001785. Modality: Tactile | Type: Perception | Subjects: Healthy
This dataset contains 18 subjects with 54 recordings across 3 tasks. Total duration: 14.644 hours. Dataset size: 27.86 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)    Duration(H)  Size
ds001785  18     63     3         1000, 1024  14.644       27.86 GB

Short overview of dataset ds001785; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds001785 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS001785
>>> dataset = DS001785(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS001785(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS001787(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds001787. Modality: Auditory | Type: Attention | Subjects: Healthy
This dataset contains 24 subjects with 40 recordings across 1 task. Total duration: 27.607 hours. Dataset size: 5.69 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds001787  24     64     1         256       27.607       5.69 GB

Short overview of dataset ds001787; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds001787 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS001787
>>> dataset = DS001787(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS001787(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS001810(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds001810. Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 47 subjects with 263 recordings across 1 task. Total duration: 91.205 hours. Dataset size: 109.70 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds001810  47     64     1         512       91.205       109.70 GB

Short overview of dataset ds001810; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds001810 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS001810
>>> dataset = DS001810(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS001810(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS001849(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds001849. Modality: Multisensory | Type: Clinical/Intervention | Subjects: Healthy
This dataset contains 20 subjects with 120 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 44.51 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds001849  20     30     1         5000      0            44.51 GB

Short overview of dataset ds001849; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds001849 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS001849
>>> dataset = DS001849(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS001849(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS001971(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds001971. Modality: Auditory | Type: Motor | Subjects: Healthy
This dataset contains 20 subjects with 273 recordings across 1 task. Total duration: 46.183 hours. Dataset size: 31.98 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds001971  20     108    1         512       46.183       31.98 GB

Short overview of dataset ds001971; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds001971 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS001971
>>> dataset = DS001971(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS001971(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002034(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002034. Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 14 subjects with 167 recordings across 4 tasks. Total duration: 37.248 hours. Dataset size: 10.10 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002034  14     64     4         512       37.248       10.10 GB

Short overview of dataset ds002034; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002034 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002034
>>> dataset = DS002034(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002034(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002094(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002094. Modality: Resting State | Type: Resting state | Subjects: n/a
This dataset contains 20 subjects with 43 recordings across 3 tasks. Total duration: 18.593 hours. Dataset size: 39.45 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002094  20     30     3         5000      18.593       39.45 GB

Short overview of dataset ds002094; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002094 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002094
>>> dataset = DS002094(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002094(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002158(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002158. Modality: Visual | Type: Affect | Subjects: Healthy
This dataset contains 20 subjects with 117 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 428.59 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002158  20     —      1         —         0            428.59 GB

Short overview of dataset ds002158; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002158 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002158
>>> dataset = DS002158(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002158(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002181(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002181. Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 226 subjects with 226 recordings across 1 task. Total duration: 7.676 hours. Dataset size: 150.89 MB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002181  226    125    1         500       7.676        150.89 MB

Short overview of dataset ds002181; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002181 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002181
>>> dataset = DS002181(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002181(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002218(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002218. Modality: Multisensory | Type: Perception | Subjects: Healthy
This dataset contains 18 subjects with 18 recordings across 1 task. Total duration: 16.52 hours. Dataset size: 1.95 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002218  18     0      1         256       16.52        1.95 GB

Short overview of dataset ds002218; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002218 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002218
>>> dataset = DS002218(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002218(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002336(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002336. Modality: Visual | Type: Motor | Subjects: Healthy
This dataset contains 10 subjects with 54 recordings across 6 tasks. Total duration: 0.0 hours. Dataset size: 17.98 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002336  10     —      6         5000      0            17.98 GB

Short overview of dataset ds002336; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002336 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002336
>>> dataset = DS002336(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002336(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002338(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002338. Modality: Visual | Type: Motor | Subjects: Healthy
This dataset contains 17 subjects with 85 recordings across 4 tasks. Total duration: 0.0 hours. Dataset size: 25.89 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002338  17     —      4         5000      0            25.89 GB

Short overview of dataset ds002338; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002338 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002338
>>> dataset = DS002338(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002338(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002578(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002578. Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 2 subjects with 2 recordings across 1 task. Total duration: 1.455 hours. Dataset size: 1.33 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002578  2      256    1         256       1.455        1.33 GB

Short overview of dataset ds002578; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002578 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002578
>>> dataset = DS002578(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002578(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002680(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002680. Modality: Visual | Type: Motor | Subjects: Healthy
This dataset contains 14 subjects with 350 recordings across 1 task. Total duration: 21.244 hours. Dataset size: 9.22 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002680  14     31     1         1000      21.244       9.22 GB

Short overview of dataset ds002680; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002680 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002680
>>> dataset = DS002680(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002680(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002691(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002691. Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 20 subjects with 20 recordings across 1 task. Total duration: 6.721 hours. Dataset size: 776.76 MB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002691  20     32     1         250       6.721        776.76 MB

Short overview of dataset ds002691; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002691 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002691
>>> dataset = DS002691(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002691(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002718(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002718. Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 18 subjects with 18 recordings across 1 task. Total duration: 14.844 hours. Dataset size: 4.31 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002718  18     74     1         250       14.844       4.31 GB

Short overview of dataset ds002718; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002718 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002718
>>> dataset = DS002718(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002718(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002720(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002720. Modality: Auditory | Type: Affect | Subjects: Healthy
This dataset contains 18 subjects with 165 recordings across 10 tasks. Total duration: 0.0 hours. Dataset size: 2.39 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds002720  18     19     10        1000      0            2.39 GB

Short overview of dataset ds002720; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002720 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002720
>>> dataset = DS002720(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002720(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002721(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002721. Modality: Auditory | Type: Affect | Subjects: Healthy
This dataset contains 31 subjects with 185 recordings across 6 tasks. Total duration: 0.0 hours. Dataset size: 3.35 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds002721 | 31 | 19 | 6 | 1000 | 0 | 3.35 GB |
Short overview of dataset ds002721; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002721 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002721
>>> dataset = DS002721(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002721(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002722(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002722. Modality: Auditory | Type: Affect | Subjects: Healthy
This dataset contains 19 subjects with 94 recordings across 5 tasks. Total duration: 0.0 hours. Dataset size: 6.10 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds002722 | 19 | 32 | 5 | 1000 | 0 | 6.10 GB |
Short overview of dataset ds002722; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002722 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002722
>>> dataset = DS002722(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002722(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002723(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002723. Modality: Auditory | Type: Affect | Subjects: Healthy
This dataset contains 8 subjects with 44 recordings across 6 tasks. Total duration: 0.0 hours. Dataset size: 2.60 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds002723 | 8 | 32 | 6 | 1000 | 0 | 2.60 GB |
Short overview of dataset ds002723; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002723 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002723
>>> dataset = DS002723(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002723(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002724(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002724. Modality: Auditory | Type: Affect | Subjects: Healthy
This dataset contains 10 subjects with 96 recordings across 4 tasks. Total duration: 0.0 hours. Dataset size: 8.52 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds002724 | 10 | 32 | 4 | 1000 | 0 | 8.52 GB |
Short overview of dataset ds002724; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002724 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002724
>>> dataset = DS002724(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002724(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002725(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002725. Modality: Auditory | Type: Affect | Subjects: Healthy
This dataset contains 21 subjects with 105 recordings across 5 tasks. Total duration: 0.0 hours. Dataset size: 15.32 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds002725 | 21 | 30 | 5 | 1000 | 0 | 15.32 GB |
Short overview of dataset ds002725; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002725 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002725
>>> dataset = DS002725(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002725(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002778(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002778. Modality: Resting State | Type: Resting state | Subjects: Parkinson’s
This dataset contains 31 subjects with 46 recordings across 1 task. Total duration: 2.518 hours. Dataset size: 545.00 MB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds002778 | 31 | 40 | 1 | 512 | 2.518 | 545.00 MB |
Short overview of dataset ds002778; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002778 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002778
>>> dataset = DS002778(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002778(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
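"MongoDB-style filters" means records are matched field by field, with all query keys implicitly ANDed. A toy matcher illustrating this behavior on plain dicts (the record fields below are illustrative, not the actual eegdash metadata schema):

```python
def matches(record: dict, query: dict) -> bool:
    """True if every query key equals the record's value (implicit AND)."""
    return all(record.get(key) == value for key, value in query.items())

# Two hypothetical recording records from the same dataset.
records = [
    {"dataset": "ds002778", "subject": "pd1", "task": "rest"},
    {"dataset": "ds002778", "subject": "hc2", "task": "rest"},
]

# Adding a subject key narrows the selection within the dataset.
query = {"dataset": "ds002778", "subject": "hc2"}
selected = [r["subject"] for r in records if matches(r, query)]
print(selected)  # → ['hc2']
```

A key absent from the query places no constraint on that field, which is why the pre-configured `dataset` filter composes cleanly with any additional criteria.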
- class eegdash.dataset.DS002814(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002814. Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 21 subjects with 168 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 48.57 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds002814 | 21 | 68 | 1 | 1200 | 0 | 48.57 GB |
Short overview of dataset ds002814; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002814 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002814
>>> dataset = DS002814(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002814(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS002833(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002833. Modality: Auditory | Type: Decision-making | Subjects: n/a
This dataset contains 20 subjects with 80 recordings across 1 task. Total duration: 11.604 hours. Dataset size: 39.77 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds002833 | 20 | 257 | 1 | 1000 | 11.604 | 39.77 GB |
Short overview of dataset ds002833; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002833 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002833
>>> dataset = DS002833(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002833(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
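The Duration (H) figures in these tables follow directly from sample counts and sampling rate. A quick sketch of the arithmetic (the sample count below is illustrative, chosen to be consistent with the 11.604 h listed for ds002833 at 1000 Hz):

```python
def duration_hours(n_samples: int, sfreq: float) -> float:
    """Recording length in hours, given total samples and sampling rate (Hz)."""
    return n_samples / sfreq / 3600.0

# ~41.77 million samples at 1000 Hz comes to 11.604 hours.
print(round(duration_hours(41_774_400, 1000.0), 3))  # → 11.604
```

The same relation applies per recording: `raw.n_times / raw.info['sfreq']` gives the length in seconds once a recording is loaded.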
- class eegdash.dataset.DS002893(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds002893. Modality: Multisensory | Type: Attention | Subjects: Healthy
This dataset contains 49 subjects with 52 recordings across 1 task. Total duration: 36.114 hours. Dataset size: 7.70 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds002893 | 49 | 33 | 1 | 250, 250.0293378038558 | 36.114 | 7.70 GB |
Short overview of dataset ds002893; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds002893 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS002893
>>> dataset = DS002893(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS002893(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003004(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003004. Modality: Auditory | Type: Affect | Subjects: Healthy
This dataset contains 34 subjects with 34 recordings across 1 task. Total duration: 49.072 hours. Dataset size: 35.63 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds003004 | 34 | 134,180,189,196,201,206,207,208,209,211,212,213,214,215,218,219,220,221,222,223,224,226,227,229,231,232,235 | 1 | 256 | 49.072 | 35.63 GB |
Short overview of dataset ds003004; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003004 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003004
>>> dataset = DS003004(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003004(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003039(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003039. Modality: Motor | Type: Motor | Subjects: Healthy
This dataset contains 16 subjects with 16 recordings across 1 task. Total duration: 14.82 hours. Dataset size: 7.82 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds003039 | 16 | 64 | 1 | 500 | 14.82 | 7.82 GB |
Short overview of dataset ds003039; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003039 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003039
>>> dataset = DS003039(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003039(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003061(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003061. Modality: Auditory | Type: Perception | Subjects: n/a
This dataset contains 13 subjects with 39 recordings across 1 task. Total duration: 8.196 hours. Dataset size: 2.26 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds003061 | 13 | 79 | 1 | 256 | 8.196 | 2.26 GB |
Short overview of dataset ds003061; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003061 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003061
>>> dataset = DS003061(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003061(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003190(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003190. Modality: Visual | Type: Perception | Subjects: n/a
This dataset contains 19 subjects with 280 recordings across 1 task. Total duration: 29.891 hours. Dataset size: 1.27 GB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds003190 | 19 | 0 | 1 | 256 | 29.891 | 1.27 GB |
Short overview of dataset ds003190; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003190 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003190
>>> dataset = DS003190(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003190(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003194(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003194. Modality: Resting State | Type: Clinical/Intervention | Subjects: Parkinson’s
This dataset contains 15 subjects with 29 recordings across 2 tasks. Total duration: 7.178 hours. Dataset size: 189.15 MB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds003194 | 15 | 19, 21 | 2 | 200 | 7.178 | 189.15 MB |
Short overview of dataset ds003194; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003194 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003194
>>> dataset = DS003194(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003194(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003195(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003195. Modality: Resting State | Type: Clinical/Intervention | Subjects: Parkinson’s
This dataset contains 10 subjects with 20 recordings across 2 tasks. Total duration: 4.654 hours. Dataset size: 121.08 MB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds003195 | 10 | 19 | 2 | 200 | 4.654 | 121.08 MB |
Short overview of dataset ds003195; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003195 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003195
>>> dataset = DS003195(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003195(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003343(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003343. Modality: Tactile | Type: Perception | Subjects: Healthy
This dataset contains 20 subjects with 59 recordings across 1 task. Total duration: 6.551 hours. Dataset size: 663.50 MB.
| dataset | #Subj | #Chan | #Classes | Freq (Hz) | Duration (H) | Size |
| --- | --- | --- | --- | --- | --- | --- |
| ds003343 | 20 | 16 | 1 | 500 | 6.551 | 663.50 MB |
Short overview of dataset ds003343; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003343 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003343
>>> dataset = DS003343(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003343(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003421(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003421.
Modality: Multisensory | Type: Decision-making | Subjects: Healthy
This dataset contains 20 subjects with 80 recordings across 1 task. Total duration: 11.604 hours. Dataset size: 76.77 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003421  20     257    1         1000      11.604       76.77 GB
Short overview of dataset ds003421; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003421 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003421
>>> dataset = DS003421(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003421(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003458(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003458.
Modality: Visual | Type: Affect | Subjects: Healthy
This dataset contains 23 subjects with 23 recordings across 1 task. Total duration: 10.447 hours. Dataset size: 4.72 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003458  23     64     1         500       10.447       4.72 GB
Short overview of dataset ds003458; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003458 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003458
>>> dataset = DS003458(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003458(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003474(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003474.
Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 122 subjects with 122 recordings across 1 task. Total duration: 36.61 hours. Dataset size: 16.64 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003474  122    64     1         500       36.61        16.64 GB
Short overview of dataset ds003474; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003474 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003474
>>> dataset = DS003474(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003474(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003478(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003478.
Modality: Resting State | Type: Resting state | Subjects: Healthy
This dataset contains 122 subjects with 243 recordings across 1 task. Total duration: 23.57 hours. Dataset size: 10.65 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003478  122    64     1         500       23.57        10.65 GB
Short overview of dataset ds003478; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003478 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003478
>>> dataset = DS003478(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003478(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003490(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003490.
Modality: Auditory | Type: Attention | Subjects: Parkinson’s
This dataset contains 50 subjects with 75 recordings across 1 task. Total duration: 12.76 hours. Dataset size: 5.85 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003490  50     64     1         500       12.76        5.85 GB
Short overview of dataset ds003490; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003490 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003490
>>> dataset = DS003490(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003490(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003505.
Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 19 subjects with 37 recordings across 2 tasks. Total duration: 0.0 hours. Dataset size: 90.13 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003505  19     128    2         2048      0            90.13 GB
Short overview of dataset ds003505; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003505 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003505
>>> dataset = DS003505(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003505(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003506(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003506.
Modality: Visual | Type: Decision-making | Subjects: Parkinson’s
This dataset contains 56 subjects with 84 recordings across 1 task. Total duration: 35.381 hours. Dataset size: 16.21 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003506  56     64     1         500       35.381       16.21 GB
Short overview of dataset ds003506; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003506 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003506
>>> dataset = DS003506(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003506(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003509(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003509.
Modality: Visual | Type: Learning | Subjects: Parkinson’s
This dataset contains 56 subjects with 84 recordings across 1 task. Total duration: 48.535 hours. Dataset size: 22.34 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003509  56     64     1         500       48.535       22.34 GB
Short overview of dataset ds003509; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003509 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003509
>>> dataset = DS003509(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003509(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003516(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003516.
Modality: Auditory | Type: Attention | Subjects: Healthy
This dataset contains 25 subjects with 25 recordings across 1 task. Total duration: 22.57 hours. Dataset size: 13.46 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003516  25     47     1         500       22.57        13.46 GB
Short overview of dataset ds003516; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003516 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003516
>>> dataset = DS003516(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003516(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003517(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003517.
Modality: Visual | Type: Learning | Subjects: Healthy
This dataset contains 17 subjects with 34 recordings across 1 task. Total duration: 13.273 hours. Dataset size: 6.48 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003517  17     64     1         500       13.273       6.48 GB
Short overview of dataset ds003517; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003517 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003517
>>> dataset = DS003517(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003517(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003518(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003518.
Modality: Visual | Type: Clinical/Intervention | Subjects: Healthy
This dataset contains 110 subjects with 137 recordings across 1 task. Total duration: 89.888 hours. Dataset size: 39.51 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003518  110    64     1         500       89.888       39.51 GB
Short overview of dataset ds003518; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003518 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003518
>>> dataset = DS003518(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003518(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003519(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003519.
Modality: Visual | Type: Clinical/Intervention | Subjects: Healthy
This dataset contains 27 subjects with 54 recordings across 1 task. Total duration: 20.504 hours. Dataset size: 8.96 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003519  27     64     1         500       20.504       8.96 GB
Short overview of dataset ds003519; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003519 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003519
>>> dataset = DS003519(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003519(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003522(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003522.
Modality: Auditory | Type: Decision-making | Subjects: TBI
This dataset contains 96 subjects with 200 recordings across 1 task. Total duration: 57.079 hours. Dataset size: 25.36 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003522  96     64     1         500       57.079       25.36 GB
Short overview of dataset ds003522; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003522 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003522
>>> dataset = DS003522(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003522(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003523(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003523.
Modality: Visual | Type: Memory | Subjects: TBI
This dataset contains 91 subjects with 221 recordings across 1 task. Total duration: 84.586 hours. Dataset size: 37.54 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003523  91     64     1         500       84.586       37.54 GB
Short overview of dataset ds003523; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003523 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003523
>>> dataset = DS003523(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003523(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003555(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003555.
Modality: Resting State | Type: Clinical/Intervention | Subjects: Epilepsy
This dataset contains 30 subjects with 30 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 28.27 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003555  30     -      1         1024      0            28.27 GB
Short overview of dataset ds003555; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003555 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003555
>>> dataset = DS003555(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003555(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003570(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003570.
Modality: Auditory | Type: Decision-making | Subjects: Healthy
This dataset contains 40 subjects with 40 recordings across 1 task. Total duration: 26.208 hours. Dataset size: 36.12 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds003570  40     64     1         2048      26.208       36.12 GB
Short overview of dataset ds003570; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds003570 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003570
>>> dataset = DS003570(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003570(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003574(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003574.
Modality: Visual | Type: Affect | Subjects: Healthy
This dataset contains 18 subjects with 18 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 14.79 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003574 | 18 | 64 | 1 | 500 | 0 | 14.79 GB
Short overview of dataset ds003574; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003574 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003574
>>> dataset = DS003574(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003574(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003602(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003602.
Modality: Visual | Type: Decision-making | Subjects: Other
This dataset contains 118 subjects with 699 recordings across 6 tasks. Total duration: 159.35 hours. Dataset size: 73.21 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003602 | 118 | 35 | 6 | 1000 | 159.35 | 73.21 GB
Short overview of dataset ds003602; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003602 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003602
>>> dataset = DS003602(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003602(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003626(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003626.
Modality: Visual | Type: Motor | Subjects: Healthy
This dataset contains 10 subjects with 30 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 24.99 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003626 | 10 | — | 1 | — | 0 | 24.99 GB
Short overview of dataset ds003626; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003626 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003626
>>> dataset = DS003626(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003626(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003638(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003638.
Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 57 subjects with 57 recordings across 1 task. Total duration: 40.597 hours. Dataset size: 16.31 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003638 | 57 | 64 | 1 | 512 | 40.597 | 16.31 GB
Short overview of dataset ds003638; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003638 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003638
>>> dataset = DS003638(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003638(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003645(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003645.
Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 18 subjects with 108 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 105.89 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003645 | 18 | — | 1 | — | 0 | 105.89 GB
Short overview of dataset ds003645; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003645 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003645
>>> dataset = DS003645(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003645(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003655(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003655.
Modality: Visual | Type: Memory | Subjects: Healthy
This dataset contains 156 subjects with 156 recordings across 1 task. Total duration: 130.923 hours. Dataset size: 20.26 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003655 | 156 | 19 | 1 | 500 | 130.923 | 20.26 GB
Short overview of dataset ds003655; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003655 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003655
>>> dataset = DS003655(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003655(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003670(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003670.
Modality: Visual | Type: Attention | Subjects: not specified
This dataset contains 25 subjects with 62 recordings across 1 task. Total duration: 72.772 hours. Dataset size: 97.53 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003670 | 25 | 32 | 1 | 2000 | 72.772 | 97.53 GB
Short overview of dataset ds003670; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003670 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003670
>>> dataset = DS003670(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003670(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003690(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003690.
Modality: Auditory | Type: Decision-making | Subjects: Healthy
This dataset contains 75 subjects with 375 recordings across 3 tasks. Total duration: 46.771 hours. Dataset size: 21.46 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003690 | 75 | 64,66 | 3 | 500 | 46.771 | 21.46 GB
Short overview of dataset ds003690; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003690 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003690
>>> dataset = DS003690(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003690(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003702(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003702.
Modality: Visual | Type: Memory | Subjects: Healthy
This dataset contains 47 subjects with 47 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 60.93 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003702 | 47 | 61 | 1 | 500 | 0 | 60.93 GB
Short overview of dataset ds003702; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003702 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003702
>>> dataset = DS003702(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003702(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003710(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003710.
Modality: Multisensory | Type: Perception | Subjects: Healthy
This dataset contains 13 subjects with 48 recordings across 1 task. Total duration: 9.165 hours. Dataset size: 10.18 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003710 | 13 | 32 | 1 | 5000 | 9.165 | 10.18 GB
Short overview of dataset ds003710; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003710 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003710
>>> dataset = DS003710(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003710(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003739(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003739.
Modality: Motor | Type: Perception | Subjects: Healthy
This dataset contains 30 subjects with 120 recordings across 4 tasks. Total duration: 20.574 hours. Dataset size: 10.94 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003739 | 30 | 128 | 4 | 256 | 20.574 | 10.94 GB
Short overview of dataset ds003739; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003739 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003739
>>> dataset = DS003739(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003739(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003751(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003751.
Modality: Multisensory | Type: Affect | Subjects: Healthy
This dataset contains 38 subjects with 38 recordings across 1 task. Total duration: 19.95 hours. Dataset size: 4.71 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003751 | 38 | 128 | 1 | 250 | 19.95 | 4.71 GB
Short overview of dataset ds003751; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003751 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003751
>>> dataset = DS003751(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003751(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003753(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003753.
Modality: Visual | Type: Learning | Subjects: Healthy
This dataset contains 25 subjects with 25 recordings across 1 task. Total duration: 10.104 hours. Dataset size: 4.62 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003753 | 25 | 64 | 1 | 500 | 10.104 | 4.62 GB
Short overview of dataset ds003753; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003753 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003753
>>> dataset = DS003753(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003753(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003766(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003766.
Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 31 subjects with 124 recordings across 4 tasks. Total duration: 39.973 hours. Dataset size: 152.77 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003766 | 31 | 129 | 4 | 1000 | 39.973 | 152.77 GB
Short overview of dataset ds003766; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003766 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003766
>>> dataset = DS003766(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003766(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003768(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003768.
Modality: Sleep | Type: Sleep | Subjects: Healthy
This dataset contains 33 subjects with 255 recordings across 2 tasks. Total duration: 0.0 hours. Dataset size: 89.24 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003768 | 33 | — | 2 | — | 0 | 89.24 GB
Short overview of dataset ds003768; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003768 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003768
>>> dataset = DS003768(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003768(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003801(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003801.
Modality: Auditory | Type: Attention | Subjects: Healthy
This dataset contains 20 subjects with 20 recordings across 1 task. Total duration: 13.689 hours. Dataset size: 1.15 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003801 | 20 | 24 | 1 | 250 | 13.689 | 1.15 GB
Short overview of dataset ds003801; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003801 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation.
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003801
>>> dataset = DS003801(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS003801(
...     cache_dir="./data",
...     query={"task": "RestingState"},  # if applicable
... )
- class eegdash.dataset.DS003805(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset
ds003805
.Modality: Multisensory | Type: Learning | Subjects: Healthy
This dataset contains 1 subjects with 1 recordings across 1 tasks. Total duration: 0.033 hours. Dataset size: 16.96 MB.
dataset
#Subj
#Chan
#Classes
Freq(Hz)
Duration(H)
Size
ds003805
1
19
1
500
0.033
16.96 MB
Short overview of dataset ds003805; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003805 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003805
>>> dataset = DS003805(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003805(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003810(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003810. Modality: Motor | Type: Clinical/Intervention | Subjects: Healthy
This dataset contains 10 subjects with 50 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 69.31 MB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003810 | 10 | 15 | 1 | 125 | 0 | 69.31 MB
Short overview of dataset ds003810; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003810 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003810
>>> dataset = DS003810(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003810(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003816(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003816. Modality: Other | Type: Affect | Subjects: Healthy
This dataset contains 48 subjects with 1077 recordings across 8 tasks. Total duration: 159.313 hours. Dataset size: 53.97 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003816 | 48 | 127 | 8 | 1000 | 159.313 | 53.97 GB
Short overview of dataset ds003816; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003816 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003816
>>> dataset = DS003816(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003816(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003822(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003822. Modality: Visual | Type: Affect | Subjects: Healthy
This dataset contains 25 subjects with 25 recordings across 1 task. Total duration: 12.877 hours. Dataset size: 5.82 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003822 | 25 | 64 | 1 | 500 | 12.877 | 5.82 GB
Short overview of dataset ds003822; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003822 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003822
>>> dataset = DS003822(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003822(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003825(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003825. Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 50 subjects with 50 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 55.34 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003825 | 50 | 63,128 | 1 | 1000 | 0 | 55.34 GB
Short overview of dataset ds003825; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003825 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003825
>>> dataset = DS003825(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003825(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003838(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003838. Modality: Auditory | Type: Memory | Subjects: Healthy
This dataset contains 65 subjects with 130 recordings across 2 tasks. Total duration: 136.757 hours. Dataset size: 253.29 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003838 | 65 | 63 | 2 | 1000 | 136.757 | 253.29 GB
Short overview of dataset ds003838; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003838 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003838
>>> dataset = DS003838(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003838(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003846(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003846. Modality: Multisensory | Type: Decision-making | Subjects: Healthy
This dataset contains 19 subjects with 60 recordings across 1 task. Total duration: 24.574 hours. Dataset size: 11.36 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003846 | 19 | 64 | 1 | 500 | 24.574 | 11.36 GB
Short overview of dataset ds003846; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003846 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003846
>>> dataset = DS003846(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003846(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003885(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003885. Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 24 subjects with 24 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 82.21 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003885 | 24 | 128 | 1 | 1000 | 0 | 82.21 GB
Short overview of dataset ds003885; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003885 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003885
>>> dataset = DS003885(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003885(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003887(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003887. Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 24 subjects with 24 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 80.10 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003887 | 24 | 128 | 1 | 1000 | 0 | 80.10 GB
Short overview of dataset ds003887; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003887 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003887
>>> dataset = DS003887(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003887(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003944(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003944. Modality: Resting State | Type: Clinical/Intervention | Subjects: Schizophrenia/Psychosis
This dataset contains 82 subjects with 82 recordings across 1 task. Total duration: 6.999 hours. Dataset size: 6.15 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003944 | 82 | 61 | 1 | 1000,3000.0003 | 6.999 | 6.15 GB
Short overview of dataset ds003944; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003944 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003944
>>> dataset = DS003944(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003944(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003947(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003947. Modality: Resting State | Type: Clinical/Intervention | Subjects: Schizophrenia/Psychosis
This dataset contains 61 subjects with 61 recordings across 1 task. Total duration: 5.266 hours. Dataset size: 12.54 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003947 | 61 | 61 | 1 | 1000,3000.0003 | 5.266 | 12.54 GB
Short overview of dataset ds003947; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003947 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003947
>>> dataset = DS003947(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003947(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS003969(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003969. Modality: Auditory | Type: Attention | Subjects: Healthy
This dataset contains 98 subjects with 392 recordings across 4 tasks. Total duration: 66.512 hours. Dataset size: 54.46 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003969 | 98 | 64 | 4 | 1024,2048 | 66.512 | 54.46 GB
Short overview of dataset ds003969; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003969 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003969
>>> dataset = DS003969(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003969(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
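Because the dataset filter is pre-configured, a query containing the key dataset violates the constructor contract. A small hypothetical guard can make that constraint explicit before construction; the helper name validate_extra_query is illustrative and not part of the EEGDash API:

```python
def validate_extra_query(query: dict) -> dict:
    """Reject extra filters that try to override the pinned dataset key.

    Hypothetical helper: the docs only state that `query` must not
    contain "dataset"; this guard simply enforces that rule early.
    """
    if "dataset" in query:
        raise ValueError(
            "'dataset' is pre-configured by the class; remove it from query"
        )
    return query

# A valid extra filter passes through unchanged.
assert validate_extra_query({"task": "RestingState"}) == {"task": "RestingState"}
```

The validated dict would then be passed as `DS003969(cache_dir="./data", query=validate_extra_query(query))`.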
- class eegdash.dataset.DS003987(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds003987. Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 23 subjects with 69 recordings across 1 task. Total duration: 52.076 hours. Dataset size: 26.41 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds003987 | 23 | 64 | 1 | 500.093 | 52.076 | 26.41 GB
Short overview of dataset ds003987; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds003987 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS003987
>>> dataset = DS003987(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS003987(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004000(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004000. Modality: Multisensory | Type: Decision-making | Subjects: Schizophrenia/Psychosis
This dataset contains 43 subjects with 86 recordings across 2 tasks. Total duration: 0.0 hours. Dataset size: 22.50 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004000 | 43 | 128 | 2 | 2048 | 0 | 22.50 GB
Short overview of dataset ds004000; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004000 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004000
>>> dataset = DS004000(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004000(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004010(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004010. Modality: Multisensory | Type: Attention | Subjects: Healthy
This dataset contains 24 subjects with 24 recordings across 1 task. Total duration: 26.457 hours. Dataset size: 23.14 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004010 | 24 | 64 | 1 | 1000 | 26.457 | 23.14 GB
Short overview of dataset ds004010; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004010 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004010
>>> dataset = DS004010(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004010(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004015(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004015. Modality: Auditory | Type: Attention | Subjects: Healthy
This dataset contains 36 subjects with 36 recordings across 1 task. Total duration: 47.29 hours. Dataset size: 6.03 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004015 | 36 | 18 | 1 | 500 | 47.29 | 6.03 GB
Short overview of dataset ds004015; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004015 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004015
>>> dataset = DS004015(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004015(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004018(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004018. Modality: Visual | Type: Learning | Subjects: Healthy
This dataset contains 16 subjects with 32 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 10.56 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004018 | 16 | 63 | 1 | 1000 | 0 | 10.56 GB
Short overview of dataset ds004018; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004018 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004018
>>> dataset = DS004018(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004018(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004022(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004022.
Modality: Visual | Type: Motor | Subjects: Other
This dataset contains 7 subjects with 21 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 634.93 MB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004022 | 7 | 16,18 | 1 | 500 | 0 | 634.93 MB
Short overview of dataset ds004022; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004022 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
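The "AND with the dataset selection" behavior can be illustrated with plain dictionaries: the class contributes a fixed dataset filter, and user-supplied keys are merged in so that all conditions must match. A minimal sketch, assuming flat key/value filter documents; these helpers are hypothetical and do not reflect eegdash's internal query handling.

```python
def combine_filters(dataset_id, extra=None):
    """Merge a fixed dataset filter with optional user filters (AND semantics).
    Illustrative only; hypothetical helper, not an eegdash API."""
    combined = {"dataset": dataset_id}
    if extra:
        combined.update(extra)
    return combined

def matches(record, flt):
    """True when every filter key/value pair is present in the record."""
    return all(record.get(k) == v for k, v in flt.items())

flt = combine_filters("ds004022", {"task": "motor"})
print(matches({"dataset": "ds004022", "task": "motor"}, flt))  # True
```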
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004022
>>> dataset = DS004022(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004022(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004024(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004024.
Modality: Visual | Type: Clinical/Intervention | Subjects: Healthy
This dataset contains 13 subjects with 497 recordings across 3 tasks. Total duration: 55.503 hours. Dataset size: 1021.22 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004024 | 13 | 64 | 3 | 20000 | 55.503 | 1021.22 GB
Short overview of dataset ds004024; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004024 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004024
>>> dataset = DS004024(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004024(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004033(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004033.
Modality: Motor | Type: Motor | Subjects: n/a
This dataset contains 18 subjects with 36 recordings across 2 tasks. Total duration: 42.645 hours. Dataset size: 19.81 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004033 | 18 | 64 | 2 | 500 | 42.645 | 19.81 GB
Short overview of dataset ds004033; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004033 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004033
>>> dataset = DS004033(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004033(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004040(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004040.
Modality: Auditory | Type: Other | Subjects: Healthy
This dataset contains 2 subjects with 4 recordings across 1 task. Total duration: 4.229 hours. Dataset size: 11.59 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004040 | 2 | 64 | 1 | 512 | 4.229 | 11.59 GB
Short overview of dataset ds004040; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004040 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004040
>>> dataset = DS004040(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004040(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004043(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004043.
Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 20 subjects with 20 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 30.44 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004043 | 20 | 63 | 1 | 1000 | 0 | 30.44 GB
Short overview of dataset ds004043; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004043 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004043
>>> dataset = DS004043(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004043(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004067(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004067.
Modality: Visual | Type: Affect | Subjects: Healthy
This dataset contains 80 subjects with 84 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 100.79 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004067 | 80 | 63 | 1 | 2000 | 0 | 100.79 GB
Short overview of dataset ds004067; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004067 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004067
>>> dataset = DS004067(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004067(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004075(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004075.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 29 subjects with 116 recordings across 4 tasks. Total duration: 0.0 hours. Dataset size: 7.39 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004075 | 29 | n/a | 4 | 1000 | 0 | 7.39 GB
Short overview of dataset ds004075; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004075 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004075
>>> dataset = DS004075(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004075(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004117(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004117.
Modality: Visual | Type: Memory | Subjects: Healthy
This dataset contains 23 subjects with 85 recordings across 1 task. Total duration: 15.941 hours. Dataset size: 5.80 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004117 | 23 | 69 | 1 | 1000,250,500,500.059 | 15.941 | 5.80 GB
Short overview of dataset ds004117; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004117 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
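The summary table for ds004117 lists several sampling rates (1000, 250, 500 and 500.059 Hz), so recordings may need to be grouped by rate before pooled analysis. A minimal sketch, assuming you have already collected each recording's sfreq into a list; the helper below is hypothetical and not an eegdash API.

```python
def indices_near_sfreq(sfreqs, target, tol=0.5):
    """Return indices of recordings whose sampling rate is within
    tol Hz of target (hypothetical helper, not an eegdash API)."""
    return [i for i, f in enumerate(sfreqs) if abs(f - target) <= tol]

# The rates listed for ds004117: 500 and 500.059 both fall within 0.5 Hz of 500.
rates = [1000.0, 250.0, 500.0, 500.059]
print(indices_near_sfreq(rates, 500.0))  # [2, 3]
```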
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004117
>>> dataset = DS004117(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004117(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004152(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004152.
Modality: Multisensory | Type: Learning | Subjects: Healthy
This dataset contains 21 subjects with 21 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 4.77 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004152 | 21 | 31 | 1 | 1000 | 0 | 4.77 GB
Short overview of dataset ds004152; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004152 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004152
>>> dataset = DS004152(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004152(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004196(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004196.
Modality: Visual | Type: Clinical/Intervention | Subjects: Healthy
This dataset contains 4 subjects with 4 recordings across 1 task. Total duration: 1.511 hours. Dataset size: 9.33 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004196 | 4 | 64 | 1 | 512 | 1.511 | 9.33 GB
Short overview of dataset ds004196; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004196 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004196
>>> dataset = DS004196(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004196(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004200(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004200.
Modality: Multisensory | Type: Attention | Subjects: Healthy
This dataset contains 20 subjects with 20 recordings across 1 task. Total duration: 14.123 hours. Dataset size: 7.21 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004200 | 20 | 37 | 1 | 1000 | 14.123 | 7.21 GB
Short overview of dataset ds004200; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004200 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004200
>>> dataset = DS004200(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004200(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004252(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004252.
Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 1 subject with 1 recording across 1 task. Total duration: 0.0 hours. Dataset size: 4.31 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004252 | 1 | n/a | 1 | n/a | 0 | 4.31 GB
Short overview of dataset ds004252; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004252 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004252
>>> dataset = DS004252(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004252(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004256(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004256.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 53 subjects with 53 recordings across 2 tasks. Total duration: 42.337 hours. Dataset size: 18.18 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004256 | 53 | 64 | 2 | 500 | 42.337 | 18.18 GB
Short overview of dataset ds004256; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004256 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004256
>>> dataset = DS004256(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004256(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004262(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004262.
Modality: Visual | Type: Learning | Subjects: Healthy
This dataset contains 21 subjects with 21 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 3.48 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004262 | 21 | 31 | 1 | 1000 | 0 | 3.48 GB
Short overview of dataset ds004262; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004262 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004262
>>> dataset = DS004262(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004262(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004264(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004264.
Modality: Visual | Type: Learning | Subjects: Healthy
This dataset contains 21 subjects with 21 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 3.30 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004264 | 21 | 31 | 1 | 1000 | 0 | 3.30 GB
Short overview of dataset ds004264; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004264 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004264
>>> dataset = DS004264(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004264(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004279(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004279.
Modality: Auditory | Type: Perception | Subjects: Healthy
This dataset contains 56 subjects with 60 recordings across 1 task. Total duration: 53.729 hours. Dataset size: 25.22 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004279 | 56 | 64 | 1 | 1000 | 53.729 | 25.22 GB
Short overview of dataset ds004279; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004279 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004279
>>> dataset = DS004279(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004279(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004284(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004284. Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 18 subjects with 18 recordings across 1 task. Total duration: 9.454 hours. Dataset size: 16.49 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004284 | 18 | 129 | 1 | 1000 | 9.454 | 16.49 GB
Short overview of dataset ds004284; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004284 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004284
>>> dataset = DS004284(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004284(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004295(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004295. Modality: Multisensory | Type: Learning | Subjects: Healthy
This dataset contains 26 subjects with 26 recordings across 1 task. Total duration: 34.313 hours. Dataset size: 31.51 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004295 | 26 | 66 | 1 | 1024, 512 | 34.313 | 31.51 GB
Short overview of dataset ds004295; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004295 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004295
>>> dataset = DS004295(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004295(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004306(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004306. Modality: Multisensory | Type: Perception | Subjects: Healthy
This dataset contains 12 subjects with 15 recordings across 1 task. Total duration: 18.183 hours. Dataset size: 79.11 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004306 | 12 | 124 | 1 | 1024 | 18.183 | 79.11 GB
Short overview of dataset ds004306; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004306 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004306
>>> dataset = DS004306(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004306(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004315(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004315. Modality: Multisensory | Type: Affect | Subjects: Healthy
This dataset contains 50 subjects with 50 recordings across 1 task. Total duration: 21.104 hours. Dataset size: 9.81 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004315 | 50 | 60 | 1 | 500 | 21.104 | 9.81 GB
Short overview of dataset ds004315; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004315 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004315
>>> dataset = DS004315(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004315(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004317(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004317. Modality: Multisensory | Type: Affect | Subjects: Healthy
This dataset contains 50 subjects with 50 recordings across 1 task. Total duration: 37.767 hours. Dataset size: 18.29 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004317 | 50 | 60 | 1 | 500 | 37.767 | 18.29 GB
Short overview of dataset ds004317; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004317 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004317
>>> dataset = DS004317(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004317(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004324(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004324. Modality: Multisensory | Type: Affect | Subjects: Healthy
This dataset contains 26 subjects with 26 recordings across 1 task. Total duration: 19.216 hours. Dataset size: 2.46 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004324 | 26 | 28 | 1 | 500 | 19.216 | 2.46 GB
Short overview of dataset ds004324; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004324 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004324
>>> dataset = DS004324(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004324(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004347(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004347. Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 24 subjects with 48 recordings across 1 task. Total duration: 6.389 hours. Dataset size: 2.69 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004347 | 24 | 64 | 1 | 128, 512 | 6.389 | 2.69 GB
Short overview of dataset ds004347; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004347 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004347
>>> dataset = DS004347(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004347(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004348(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004348. Modality: Sleep | Type: Sleep | Subjects: Healthy
This dataset contains 9 subjects with 18 recordings across 2 tasks. Total duration: 35.056 hours. Dataset size: 12.30 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004348 | 9 | 34 | 2 | 200 | 35.056 | 12.30 GB
Short overview of dataset ds004348; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004348 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004348
>>> dataset = DS004348(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004348(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
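Since ds004348 spans two tasks, a MongoDB-style $in clause can select both at once. The simplified matcher below illustrates how such a filter is ANDed across keys; it is a sketch of MongoDB query semantics, not EEGDash internals, and the task names are hypothetical:

```python
def matches(record, query):
    """Simplified MongoDB-style matching: equality and $in, ANDed over keys."""
    for key, cond in query.items():
        value = record.get(key)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

query = {"task": {"$in": ["sleep", "resting"]}}
records = [
    {"subject": "01", "task": "sleep"},
    {"subject": "01", "task": "nap"},
]
# Keep only records whose task appears in the $in list.
selected = [r for r in records if matches(r, query)]
```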
- class eegdash.dataset.DS004350(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004350. Modality: Visual | Type: Memory | Subjects: Healthy
This dataset contains 24 subjects with 240 recordings across 5 tasks. Total duration: 41.265 hours. Dataset size: 26.83 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004350 | 24 | 64 | 5 | 256 | 41.265 | 26.83 GB
Short overview of dataset ds004350; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004350 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004350
>>> dataset = DS004350(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004350(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004356(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004356. Modality: Auditory | Type: Perception | Subjects: Healthy
This dataset contains 22 subjects with 24 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 213.08 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004356 | 22 | 34 | 1 | 10000 | 0 | 213.08 GB
Short overview of dataset ds004356; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004356 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004356
>>> dataset = DS004356(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004356(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004357(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004357. Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 16 subjects with 16 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 69.56 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004357 | 16 | 63 | 1 | 1000 | 0 | 69.56 GB
Short overview of dataset ds004357; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004357 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004357
>>> dataset = DS004357(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004357(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004362(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004362. Modality: Visual | Type: Motor | Subjects: Healthy
This dataset contains 109 subjects with 1526 recordings across 1 task. Total duration: 48.592 hours. Dataset size: 11.14 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004362 | 109 | 64 | 1 | 128, 160 | 48.592 | 11.14 GB
Short overview of dataset ds004362; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004362 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004362
>>> dataset = DS004362(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004362(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
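With 1526 recordings from 109 subjects, ds004362 has roughly 14 recordings per subject, so per-subject bookkeeping is often needed before splitting data. A sketch of tallying recordings per subject from collected labels (the labels here are placeholders; real ones come from each recording's BIDS metadata):

```python
from collections import Counter

# Hypothetical per-recording subject labels, as one might collect while
# iterating a dataset's metadata.
subjects = ["sub-001", "sub-001", "sub-002", "sub-001", "sub-002"]

per_subject = Counter(subjects)          # recordings per subject
avg = len(subjects) / len(per_subject)   # mean recordings per subject
```

Grouping by subject like this is also the usual first step for subject-wise train/test splits, which avoid leaking a subject's recordings across folds.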
- class eegdash.dataset.DS004367(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004367. Modality: Visual | Type: Perception | Subjects: Schizophrenia/Psychosis
This dataset contains 40 subjects with 40 recordings across 1 task. Total duration: 24.81 hours. Dataset size: 27.98 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004367 | 40 | 68 | 1 | 1200 | 24.81 | 27.98 GB
Short overview of dataset ds004367; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004367 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004367
>>> dataset = DS004367(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004367(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004368(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004368. Modality: Visual | Type: Perception | Subjects: Schizophrenia/Psychosis
This dataset contains 39 subjects with 40 recordings across 1 task. Total duration: 0.033 hours. Dataset size: 997.14 MB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004368 | 39 | 63 | 1 | 128 | 0.033 | 997.14 MB
Short overview of dataset ds004368; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004368 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004368
>>> dataset = DS004368(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004368(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004369(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004369. Modality: Auditory | Type: Perception | Subjects: Healthy
This dataset contains 41 subjects with 41 recordings across 1 task. Total duration: 37.333 hours. Dataset size: 8.01 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004369 | 41 | 4 | 1 | 500 | 37.333 | 8.01 GB
Short overview of dataset ds004369; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004369 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004369
>>> dataset = DS004369(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004369(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004381(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004381. Modality: Other | Type: Other | Subjects: Surgery
This dataset contains 18 subjects with 437 recordings across 1 task. Total duration: 11.965 hours. Dataset size: 12.36 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004381 | 18 | 4, 5, 7, 8, 10 | 1 | 20000 | 11.965 | 12.36 GB
Short overview of dataset ds004381; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004381 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004381
>>> dataset = DS004381(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004381(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
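Channel counts in ds004381 vary across recordings (4, 5, 7, 8, or 10), so arrays from different recordings cannot be stacked blindly. A small sketch of grouping recordings by channel count before batching (the counts list is illustrative, not read from the dataset):

```python
from itertools import groupby

# Illustrative per-recording channel counts for a heterogeneous dataset.
n_channels = [4, 5, 7, 8, 10, 4, 5]

# Map each channel count to the number of recordings sharing it, so each
# group can be stacked into a rectangular array separately.
groups = {k: len(list(g)) for k, g in groupby(sorted(n_channels))}
```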
- class eegdash.dataset.DS004388(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset
ds004388
.Modality: nan | Type: nan | Subjects: nan
This dataset contains 40 subjects with 399 recordings across 3 tasks. Total duration: 43.327 hours. Dataset size: 682.54 GB.
dataset
#Subj
#Chan
#Classes
Freq(Hz)
Duration(H)
Size
ds004388
40
67
3
10000
43.327
682.54 GB
Short overview of dataset ds004388 more details in the NeMAR documentation.
This dataset class provides convenient access to the
ds004388
dataset through the EEGDash interface. It inherits all functionality fromEEGDashDataset
with the dataset filter pre-configured.- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004388
>>> dataset = DS004388(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004388(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004389(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004389. Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 26 subjects with 260 recordings across 4 tasks. Total duration: 30.932 hours. Dataset size: 376.50 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004389 | 26 | 42 | 4 | 10000 | 30.932 | 376.50 GB
Short overview of dataset ds004389; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004389 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004389
>>> dataset = DS004389(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004389(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004408(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004408. Modality: Auditory | Type: Other | Subjects: Healthy
This dataset contains 19 subjects with 380 recordings across 1 task. Total duration: 20.026 hours. Dataset size: 18.70 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004408 | 19 | 128 | 1 | 512 | 20.026 | 18.70 GB
Short overview of dataset ds004408; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004408 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004408
>>> dataset = DS004408(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004408(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004444(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004444. Modality: Visual | Type: Motor | Subjects: Healthy
This dataset contains 30 subjects with 465 recordings across 1 task. Total duration: 55.687 hours. Dataset size: 48.62 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004444 | 30 | 129 | 1 | 1000 | 55.687 | 48.62 GB
Short overview of dataset ds004444; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004444 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004444
>>> dataset = DS004444(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004444(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004446(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004446. Modality: Visual | Type: Motor | Subjects: Healthy
This dataset contains 30 subjects with 237 recordings across 1 task. Total duration: 33.486 hours. Dataset size: 29.23 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004446 | 30 | 129 | 1 | 1000 | 33.486 | 29.23 GB
Short overview of dataset ds004446; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004446 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004446
>>> dataset = DS004446(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004446(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004447(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004447. Modality: Visual | Type: Motor | Subjects: Healthy
This dataset contains 22 subjects with 418 recordings across 1 task. Total duration: 23.554 hours. Dataset size: 20.73 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004447 | 22 | 128,129 | 1 | 1000 | 23.554 | 20.73 GB
Short overview of dataset ds004447; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004447 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004447
>>> dataset = DS004447(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004447(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004448(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004448. Modality: Visual | Type: Motor | Subjects: Healthy
This dataset contains 56 subjects with 280 recordings across 1 task. Total duration: 43.732 hours. Dataset size: 38.17 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004448 | 56 | 129 | 1 | 1000 | 43.732 | 38.17 GB
Short overview of dataset ds004448; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004448 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004448
>>> dataset = DS004448(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004448(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004460(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004460. Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 20 subjects with 40 recordings across 1 task. Total duration: 27.494 hours. Dataset size: 61.36 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004460 | 20 | 160 | 1 | 1000 | 27.494 | 61.36 GB
Short overview of dataset ds004460; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004460 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004460
>>> dataset = DS004460(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004460(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004475(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004475. Modality: Motor | Type: Motor | Subjects: Healthy
This dataset contains 30 subjects with 30 recordings across 1 task. Total duration: 26.899 hours. Dataset size: 112.74 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004475 | 30 | 113,115,118,119,120,122,123,124,125,126,127,128 | 1 | 512 | 26.899 | 112.74 GB
Short overview of dataset ds004475; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004475 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004475
>>> dataset = DS004475(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004475(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004477(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004477. Modality: Multisensory | Type: Decision-making | Subjects: Healthy
This dataset contains 9 subjects with 9 recordings across 1 task. Total duration: 13.557 hours. Dataset size: 22.34 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004477 | 9 | 79 | 1 | 2048 | 13.557 | 22.34 GB
Short overview of dataset ds004477; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004477 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004477
>>> dataset = DS004477(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004477(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
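Because ds004477 is recorded at 2048 Hz, recordings are often downsampled before analysis. A minimal sketch of integer-factor decimation on a raw NumPy array (real pipelines should low-pass filter first, e.g. via MNE's Raw.resample; the array shape here is illustrative):

```python
import numpy as np

# Naive integer-factor downsampling of a (channels, samples) array
# from 2048 Hz to 256 Hz. Illustration only: proper resampling
# applies an anti-aliasing low-pass filter before decimating.
sfreq_in, sfreq_out = 2048, 256
factor = sfreq_in // sfreq_out  # 8
data = np.random.randn(79, 2048 * 10)  # 10 s of fake 79-channel EEG
decimated = data[:, ::factor]
print(decimated.shape)  # (79, 2560)
```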
- class eegdash.dataset.DS004504(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004504. Modality: Resting State | Type: Clinical/Intervention | Subjects: Dementia
This dataset contains 88 subjects with 88 recordings across 1 task. Total duration: 19.608 hours. Dataset size: 5.38 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004504 | 88 | 19 | 1 | 500 | 19.608 | 5.38 GB
Short overview of dataset ds004504; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004504 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004504
>>> dataset = DS004504(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004504(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
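The summary totals for ds004504 (one recording per subject) imply the average recording length directly, which is useful when planning epoching windows. Pure arithmetic on the table values above, not an API call:

```python
# Average recording duration for ds004504, derived from the summary table:
# 19.608 total hours over 88 recordings (one per subject).
total_hours = 19.608
n_recordings = 88
avg_minutes = total_hours * 60 / n_recordings
print(f"{avg_minutes:.1f} minutes per recording")  # 13.4 minutes per recording
```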
- class eegdash.dataset.DS004505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004505. Modality: Motor | Type: Motor | Subjects: Healthy
This dataset contains 25 subjects with 25 recordings across 1 task. Total duration: 30.398 hours. Dataset size: 522.56 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004505 | 25 | 120 | 1 | 250 | 30.398 | 522.56 GB
Short overview of dataset ds004505; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004505 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004505
>>> dataset = DS004505(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004505(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004511(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004511. Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 45 subjects with 134 recordings across 3 tasks. Total duration: 48.922 hours. Dataset size: 202.28 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004511 | 45 | 139 | 3 | 3000 | 48.922 | 202.28 GB
Short overview of dataset ds004511; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004511 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004511
>>> dataset = DS004511(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004511(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004515(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004515. Modality: Visual | Type: Affect | Subjects: Other
This dataset contains 54 subjects with 54 recordings across 1 task. Total duration: 20.61 hours. Dataset size: 9.48 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004515 | 54 | 64 | 1 | 500 | 20.61 | 9.48 GB
Short overview of dataset ds004515; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004515 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004515
>>> dataset = DS004515(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004515(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004519(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004519. Modality: Visual | Type: Attention | Subjects: n/a
This dataset contains 40 subjects with 40 recordings across 1 task. Total duration: 0.067 hours. Dataset size: 12.56 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004519 | 40 | 62 | 1 | 250 | 0.067 | 12.56 GB
Short overview of dataset ds004519; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004519 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004519
>>> dataset = DS004519(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004519(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004520(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004520. Modality: Visual | Type: Memory | Subjects: n/a
This dataset contains 33 subjects with 33 recordings across 1 task. Total duration: 0.055 hours. Dataset size: 10.41 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004520 | 33 | 62 | 1 | 250 | 0.055 | 10.41 GB
Short overview of dataset ds004520; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004520 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004520
>>> dataset = DS004520(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004520(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004521(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds004521. Modality: Visual | Type: Motor | Subjects: n/a
This dataset contains 34 subjects with 34 recordings across 1 task. Total duration: 0.057 hours. Dataset size: 10.68 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004521 | 34 | 62 | 1 | 250 | 0.057 | 10.68 GB
Short overview of dataset ds004521; more details in the NeMAR documentation.
This dataset class provides convenient access to the ds004521 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004521
>>> dataset = DS004521(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004521(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004532(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004532.
Modality: Visual | Type: Learning | Subjects: Healthy
This dataset contains 110 subjects with 137 recordings across 1 task. Total duration: 49.651 hours. Dataset size: 22.09 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004532 | 110 | 64 | 1 | 500 | 49.651 | 22.09 GB
Short overview of dataset ds004532; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004532 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004532
>>> dataset = DS004532(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004532(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004554(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004554.
Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 16 subjects with 16 recordings across 1 task. Total duration: 0.024 hours. Dataset size: 8.79 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004554 | 16 | 99 | 1 | 1000 | 0.024 | 8.79 GB
Short overview of dataset ds004554; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004554 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004554
>>> dataset = DS004554(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004554(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004561(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004561.
Modality: Motor | Type: Perception | Subjects: Healthy
This dataset contains 23 subjects with 23 recordings across 1 task. Total duration: 11.379 hours. Dataset size: 97.96 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004561 | 23 | 62 | 1 | 10000 | 11.379 | 97.96 GB
Short overview of dataset ds004561; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004561 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004561
>>> dataset = DS004561(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004561(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004572(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004572.
Modality: Auditory | Type: Perception | Subjects: n/a
This dataset contains 52 subjects with 516 recordings across 10 tasks. Total duration: 52.624 hours. Dataset size: 43.56 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004572 | 52 | 58 | 10 | 1000 | 52.624 | 43.56 GB
Short overview of dataset ds004572; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004572 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004572
>>> dataset = DS004572(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004572(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
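The per-dataset totals quoted above (e.g. 52.624 hours across 516 recordings for ds004572) can be recomputed from the loaded recordings. A minimal sketch, assuming each item exposes the `load()` method shown in the examples and returns an MNE `Raw` object with `n_times` and `info["sfreq"]`:

```python
def duration_hours(n_times, sfreq):
    """Duration of a recording in hours, from its sample count and rate."""
    return n_times / sfreq / 3600.0

# Sketch (requires network access and the eegdash package):
# total = sum(
#     duration_hours(raw.n_times, raw.info["sfreq"])
#     for raw in (rec.load() for rec in dataset)
# )
```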
- class eegdash.dataset.DS004574(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004574.
Modality: Multisensory | Type: Clinical/Intervention | Subjects: Parkinson’s
This dataset contains 146 subjects with 146 recordings across 1 task. Total duration: 31.043 hours. Dataset size: 13.48 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004574 | 146 | 63,64,66 | 1 | 500 | 31.043 | 13.48 GB
Short overview of dataset ds004574; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004574 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004574
>>> dataset = DS004574(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004574(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004577(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004577.
Modality: Sleep | Type: Clinical/Intervention | Subjects: Healthy
This dataset contains 103 subjects with 130 recordings across 1 task. Total duration: 22.974 hours. Dataset size: 652.76 MB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004577 | 103 | 19,21,24 | 1 | 200 | 22.974 | 652.76 MB
Short overview of dataset ds004577; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004577 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004577
>>> dataset = DS004577(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004577(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004579(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004579.
Modality: Visual | Type: Decision-making | Subjects: Parkinson’s
This dataset contains 139 subjects with 139 recordings across 1 task. Total duration: 55.703 hours. Dataset size: 24.12 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004579 | 139 | 63,64,66 | 1 | 500 | 55.703 | 24.12 GB
Short overview of dataset ds004579; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004579 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004579
>>> dataset = DS004579(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004579(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004580(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004580.
Modality: Visual | Type: Decision-making | Subjects: Parkinson’s
This dataset contains 147 subjects with 147 recordings across 1 task. Total duration: 36.514 hours. Dataset size: 15.84 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004580 | 147 | 63,64,66 | 1 | 500 | 36.514 | 15.84 GB
Short overview of dataset ds004580; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004580 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004580
>>> dataset = DS004580(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004580(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004582(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004582.
Modality: Visual | Type: Affect | Subjects: Healthy
This dataset contains 73 subjects with 73 recordings across 1 task. Total duration: 34.244 hours. Dataset size: 294.22 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004582 | 73 | 59 | 1 | 10000 | 34.244 | 294.22 GB
Short overview of dataset ds004582; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004582 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004582
>>> dataset = DS004582(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004582(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
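At 10,000 Hz, recordings from this dataset are large in memory (the full dataset is 294.22 GB on disk); downsampling after load is a common way to keep working sets manageable. A hedged sketch using MNE's `Raw.resample` (the 250 Hz target is an arbitrary choice for illustration, not a recommendation from the dataset authors):

```python
def resampled_n_times(n_times, sfreq, target_sfreq):
    """Approximate sample count after resampling to target_sfreq."""
    return round(n_times * target_sfreq / sfreq)

# Sketch (assumes a loaded MNE Raw object, as in the examples above):
# raw = recording.load()
# raw.resample(250)  # 10000 Hz -> 250 Hz, a 40x reduction in samples
```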
- class eegdash.dataset.DS004584(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004584.
Modality: Resting State | Type: Clinical/Intervention | Subjects: Parkinson’s
This dataset contains 149 subjects with 149 recordings across 1 task. Total duration: 6.641 hours. Dataset size: 2.87 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004584 | 149 | 63,64,66 | 1 | 500 | 6.641 | 2.87 GB
Short overview of dataset ds004584; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004584 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004584
>>> dataset = DS004584(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004584(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004587(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004587.
Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 103 subjects with 114 recordings across 1 task. Total duration: 25.491 hours. Dataset size: 219.34 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004587 | 103 | 59 | 1 | 10000 | 25.491 | 219.34 GB
Short overview of dataset ds004587; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004587 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004587
>>> dataset = DS004587(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004587(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004588(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004588.
Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 42 subjects with 42 recordings across 1 task. Total duration: 4.957 hours. Dataset size: 601.76 MB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004588 | 42 | 24 | 1 | 300 | 4.957 | 601.76 MB
Short overview of dataset ds004588; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004588 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004588
>>> dataset = DS004588(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004588(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004595(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004595.
Modality: Visual | Type: Decision-making | Subjects: Other
This dataset contains 53 subjects with 53 recordings across 1 task. Total duration: 17.078 hours. Dataset size: 7.89 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004595 | 53 | 64 | 1 | 500 | 17.078 | 7.89 GB
Short overview of dataset ds004595; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004595 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004595
>>> dataset = DS004595(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004595(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004598(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004598.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 9 subjects with 20 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 26.66 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004598 | 9 | – | 1 | 10000 | 0 | 26.66 GB
Short overview of dataset ds004598; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004598 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004598
>>> dataset = DS004598(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004598(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004602(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004602.
Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 182 subjects with 546 recordings across 3 tasks. Total duration: 87.11 hours. Dataset size: 73.91 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004602 | 182 | 128 | 3 | 250,500 | 87.11 | 73.91 GB
Short overview of dataset ds004602; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004602 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004602
>>> dataset = DS004602(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004602(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004603(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004603.
Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 37 subjects with 37 recordings across 1 task. Total duration: 30.653 hours. Dataset size: 39.13 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004603 | 37 | 64 | 1 | 1024 | 30.653 | 39.13 GB
Short overview of dataset ds004603; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004603 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004603
>>> dataset = DS004603(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004603(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004621(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004621. Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 42 subjects with 167 recordings across 4 tasks. Total duration: 0.0 hours. Dataset size: 77.39 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004621 | 42 | - | 4 | 1000 | 0 | 77.39 GB

Short overview of dataset ds004621; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004621 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004621
>>> dataset = DS004621(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004621(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004625(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004625. Modality: Motor | Type: Attention | Subjects: not specified
This dataset contains 32 subjects with 543 recordings across 9 tasks. Total duration: 28.397 hours. Dataset size: 62.46 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004625 | 32 | 120 | 9 | 500 | 28.397 | 62.46 GB

Short overview of dataset ds004625; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004625 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004625
>>> dataset = DS004625(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004625(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
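Because query accepts MongoDB-style operators, several criteria can be combined in one dict. The sketch below only builds and inspects such a filter locally; the field names (task, subject) follow the examples above, but the exact metadata schema keys are an assumption:

```python
# MongoDB-style filter combining an exact match with an $in clause
# (field names are assumed; adjust to the actual metadata schema).
query = {
    "task": "RestingState",
    "subject": {"$in": ["sub-01", "sub-02", "sub-03"]},
}

# The dataset class ANDs this with its own {"dataset": "ds004625"} filter,
# so the effective selection is: dataset AND task AND subject-in-list.
assert "dataset" not in query          # required by the constructor
assert query["subject"]["$in"][0] == "sub-01"
```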
- class eegdash.dataset.DS004626(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004626. Modality: Visual | Type: Attention | Subjects: Other
This dataset contains 52 subjects with 52 recordings across 1 task. Total duration: 21.359 hours. Dataset size: 19.87 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004626 | 52 | 68 | 1 | 1000 | 21.359 | 19.87 GB

Short overview of dataset ds004626; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004626 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004626
>>> dataset = DS004626(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004626(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004635(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004635. Modality: Multisensory | Type: Attention | Subjects: Healthy
This dataset contains 55 subjects with 55 recordings across 1 task. Total duration: 20.068 hours. Dataset size: 30.56 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004635 | 55 | 129 | 1 | 1000 | 20.068 | 30.56 GB

Short overview of dataset ds004635; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004635 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004635
>>> dataset = DS004635(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004635(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004657(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004657. Modality: Motor | Type: Decision-making | Subjects: not specified
This dataset contains 24 subjects with 119 recordings across 1 task. Total duration: 27.205 hours. Dataset size: 43.06 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004657 | 24 | 64 | 1 | 1024,8192 | 27.205 | 43.06 GB

Short overview of dataset ds004657; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004657 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004657
>>> dataset = DS004657(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004657(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004660(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004660. Modality: Multisensory | Type: Attention | Subjects: Healthy
This dataset contains 21 subjects with 42 recordings across 1 task. Total duration: 23.962 hours. Dataset size: 7.25 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004660 | 21 | 32 | 1 | 2048,512 | 23.962 | 7.25 GB

Short overview of dataset ds004660; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004660 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004660
>>> dataset = DS004660(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004660(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004661(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004661. Modality: Multisensory | Type: Memory | Subjects: not specified
This dataset contains 17 subjects with 17 recordings across 1 task. Total duration: 10.137 hours. Dataset size: 1.40 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004661 | 17 | 64 | 1 | 128 | 10.137 | 1.40 GB

Short overview of dataset ds004661; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004661 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004661
>>> dataset = DS004661(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004661(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
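The s3_bucket parameter overrides where the raw files are fetched from. As an illustration only — the exact layout is an assumption, though OpenNeuro's public bucket is s3://openneuro.org — dataset files are typically keyed under a prefix matching the accession number:

```python
def dataset_s3_prefix(bucket: str, dataset_id: str) -> str:
    """Join a base bucket and an OpenNeuro accession number
    (illustrative helper, not part of the eegdash API)."""
    return f"{bucket.rstrip('/')}/{dataset_id}"

print(dataset_s3_prefix("s3://openneuro.org", "ds004661"))
# -> s3://openneuro.org/ds004661
```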
- class eegdash.dataset.DS004718(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004718. Modality: Auditory | Type: Learning | Subjects: Healthy
This dataset contains 51 subjects with 51 recordings across 1 task. Total duration: 21.836 hours. Dataset size: 108.98 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004718 | 51 | 64 | 1 | 1000 | 21.836 | 108.98 GB

Short overview of dataset ds004718; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004718 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004718
>>> dataset = DS004718(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004718(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004745(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004745. Modality: not specified | Type: not specified | Subjects: not specified
This dataset contains 6 subjects with 6 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 242.08 MB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004745 | 6 | - | 1 | 1000 | 0 | 242.08 MB

Short overview of dataset ds004745; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004745 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004745
>>> dataset = DS004745(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004745(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004752(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004752. Modality: Auditory | Type: Memory | Subjects: Epilepsy
This dataset contains 15 subjects with 136 recordings across 1 task. Total duration: 0.302 hours. Dataset size: 11.95 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004752 | 15 | 0,8,10,19,20,21,23 | 1 | 200,2000,4000,4096 | 0.302 | 11.95 GB

Short overview of dataset ds004752; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004752 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004752
>>> dataset = DS004752(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004752(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004771(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004771. Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 61 subjects with 61 recordings across 1 task. Total duration: 0.022 hours. Dataset size: 1.36 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004771 | 61 | 34 | 1 | 256 | 0.022 | 1.36 GB

Short overview of dataset ds004771; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004771 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004771
>>> dataset = DS004771(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004771(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004784(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004784. Modality: Motor | Type: Attention | Subjects: Healthy
This dataset contains 1 subject with 6 recordings across 6 tasks. Total duration: 0.518 hours. Dataset size: 10.82 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004784 | 1 | 128 | 6 | 512 | 0.518 | 10.82 GB

Short overview of dataset ds004784; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004784 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004784
>>> dataset = DS004784(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004784(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004785(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004785. Modality: Motor | Type: Motor | Subjects: Healthy
This dataset contains 17 subjects with 17 recordings across 1 task. Total duration: 0.019 hours. Dataset size: 351.17 MB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004785 | 17 | 32 | 1 | 500 | 0.019 | 351.17 MB

Short overview of dataset ds004785; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004785 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004785
>>> dataset = DS004785(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004785(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004796(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004796. Modality: Visual/Resting State | Type: Memory/Resting state | Subjects: Other
This dataset contains 79 subjects with 235 recordings across 3 tasks. Total duration: 0.0 hours. Dataset size: 240.21 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004796 | 79 | - | 3 | 1000 | 0 | 240.21 GB

Short overview of dataset ds004796; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004796 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004796
>>> dataset = DS004796(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004796(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004802(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004802. Modality: Visual | Type: Affect | Subjects: Other
This dataset contains 38 subjects with 38 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 29.34 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004802 | 38 | 65 | 1 | 2048,512 | 0 | 29.34 GB

Short overview of dataset ds004802; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004802 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004802
>>> dataset = DS004802(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004802(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004816(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004816. Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 20 subjects with 20 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 23.31 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004816 | 20 | 63 | 1 | 1000 | 0 | 23.31 GB

Short overview of dataset ds004816; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004816 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004816
>>> dataset = DS004816(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004816(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004817(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004817. Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 20 subjects with 20 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 25.34 GB.

dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004817 | 20 | 63 | 1 | 1000 | 0 | 25.34 GB

Short overview of dataset ds004817; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004817 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset – Base dataset class with full API documentation.
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004817
>>> dataset = DS004817(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004817(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004840(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004840. Modality: Auditory | Type: Clinical/Intervention | Subjects: Other
This dataset contains 9 subjects with 51 recordings across 3 tasks. Total duration: 11.306 hours. Dataset size: 1.75 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004840 | 9 | 8 | 3 | 1024,256,512 | 11.306 | 1.75 GB
Short overview of dataset ds004840; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004840 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004840
>>> dataset = DS004840(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004840(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004841(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004841. Modality: Multisensory | Type: Attention | Subjects: n/a
This dataset contains 20 subjects with 147 recordings across 1 task. Total duration: 29.054 hours. Dataset size: 7.31 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004841 | 20 | 64 | 1 | 256 | 29.054 | 7.31 GB
Short overview of dataset ds004841; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004841 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004841
>>> dataset = DS004841(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004841(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004842(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004842. Modality: Multisensory | Type: Attention | Subjects: n/a
This dataset contains 14 subjects with 102 recordings across 1 task. Total duration: 20.102 hours. Dataset size: 5.21 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004842 | 14 | 64 | 1 | 256 | 20.102 | 5.21 GB
Short overview of dataset ds004842; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004842 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004842
>>> dataset = DS004842(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004842(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004843(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004843. Modality: Visual | Type: Attention | Subjects: n/a
This dataset contains 14 subjects with 92 recordings across 1 task. Total duration: 29.834 hours. Dataset size: 7.66 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004843 | 14 | 64 | 1 | 256 | 29.834 | 7.66 GB
Short overview of dataset ds004843; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004843 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004843
>>> dataset = DS004843(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004843(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004844(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004844. Modality: Multisensory | Type: Decision-making | Subjects: n/a
This dataset contains 17 subjects with 68 recordings across 1 task. Total duration: 21.252 hours. Dataset size: 22.33 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004844 | 17 | 64 | 1 | 1024 | 21.252 | 22.33 GB
Short overview of dataset ds004844; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004844 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004844
>>> dataset = DS004844(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004844(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004849(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004849. Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 1 subject with 1 recording across 1 task. Total duration: 0.535 hours. Dataset size: 79.21 MB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004849 | 1 | 64 | 1 | 128 | 0.535 | 79.21 MB
Short overview of dataset ds004849; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004849 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004849
>>> dataset = DS004849(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004849(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004850(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004850. Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 1 subject with 1 recording across 1 task. Total duration: 0.535 hours. Dataset size: 79.21 MB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004850 | 1 | 64 | 1 | 128 | 0.535 | 79.21 MB
Short overview of dataset ds004850; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004850 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004850
>>> dataset = DS004850(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004850(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004851(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004851. Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 1 subject with 1 recording across 1 task. Total duration: 0.535 hours. Dataset size: 56.59 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004851 | 1 | 64 | 1 | 128 | 0.535 | 56.59 GB
Short overview of dataset ds004851; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004851 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004851
>>> dataset = DS004851(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004851(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004852(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004852. Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 1 subject with 1 recording across 1 task. Total duration: 0.535 hours. Dataset size: 79.21 MB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004852 | 1 | 64 | 1 | 128 | 0.535 | 79.21 MB
Short overview of dataset ds004852; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004852 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004852
>>> dataset = DS004852(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004852(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004853(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004853. Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 1 subject with 1 recording across 1 task. Total duration: 0.535 hours. Dataset size: 79.21 MB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004853 | 1 | 64 | 1 | 128 | 0.535 | 79.21 MB
Short overview of dataset ds004853; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004853 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004853
>>> dataset = DS004853(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004853(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004854(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004854. Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 1 subject with 1 recording across 1 task. Total duration: 0.535 hours. Dataset size: 79.21 MB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004854 | 1 | 64 | 1 | 128 | 0.535 | 79.21 MB
Short overview of dataset ds004854; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004854 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004854
>>> dataset = DS004854(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004854(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004855(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004855. Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 1 subject with 1 recording across 1 task. Total duration: 0.535 hours. Dataset size: 79.21 MB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004855 | 1 | 64 | 1 | 128 | 0.535 | 79.21 MB
Short overview of dataset ds004855; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004855 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004855
>>> dataset = DS004855(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004855(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004860(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004860. Modality: Auditory | Type: Decision-making | Subjects: Healthy
This dataset contains 31 subjects with 31 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 3.79 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004860 | 31 | 32 | 1 | 2048,512 | 0 | 3.79 GB
Short overview of dataset ds004860; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004860 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004860
>>> dataset = DS004860(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004860(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004883(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004883. Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 172 subjects with 516 recordings across 3 tasks. Total duration: 137.855 hours. Dataset size: 122.80 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004883 | 172 | 128 | 3 | 500 | 137.855 | 122.80 GB
Short overview of dataset ds004883; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004883 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004883
>>> dataset = DS004883(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004883(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004902(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004902. Modality: Resting State | Type: Resting state | Subjects: Healthy
This dataset contains 71 subjects with 218 recordings across 2 tasks. Total duration: 18.118 hours. Dataset size: 8.29 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004902 | 71 | 61 | 2 | 500,5000 | 18.118 | 8.29 GB
Short overview of dataset ds004902; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004902 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004902
>>> dataset = DS004902(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004902(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004917(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004917. Modality: Multisensory | Type: Decision-making | Subjects: Healthy
This dataset contains 24 subjects with 24 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 36.47 GB.
dataset | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds004917 | 24 | — | 1 | — | 0 | 36.47 GB
Short overview of dataset ds004917; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds004917 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004917
>>> dataset = DS004917(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS004917(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
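The reserved-key constraint on query applies to every generated dataset class. A small guard can catch it before any request is built; build_query below is a hypothetical helper sketch, not part of the eegdash API:

```python
def build_query(**filters):
    """Return a MongoDB-style filter dict, rejecting the reserved 'dataset' key.

    Hypothetical convenience helper: the generated dataset classes
    pre-configure the 'dataset' filter themselves, so passing it again
    in `query` is an error.
    """
    if "dataset" in filters:
        raise ValueError(
            "'dataset' is pre-configured by the class; remove it from the query"
        )
    return filters

# Extra filters are AND-ed with the pre-set dataset selection:
query = build_query(task="RestingState", subject="01")
```

The result would then be passed as, e.g., DS004917(cache_dir="./data", query=query).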
- class eegdash.dataset.DS004942(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004942.
Modality: Visual | Type: Memory | Subjects: Healthy
This dataset contains 62 subjects with 62 recordings across 1 task. Total duration: 28.282 hours. Dataset size: 25.05 GB.
dataset: ds004942 | #Subj: 62 | #Chan: 65 | #Classes: 1 | Freq(Hz): 1000 | Duration(H): 28.282 | Size: 25.05 GB
Short overview of dataset ds004942; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004942 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004942
>>> dataset = DS004942(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS004942(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004951(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004951.
Modality: Tactile | Type: Learning | Subjects: n/a
This dataset contains 11 subjects with 23 recordings across 1 task. Total duration: 29.563 hours. Dataset size: 22.00 GB.
dataset: ds004951 | #Subj: 11 | #Chan: 63 | #Classes: 1 | Freq(Hz): 1000 | Duration(H): 29.563 | Size: 22.00 GB
Short overview of dataset ds004951; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004951 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004951
>>> dataset = DS004951(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS004951(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS004952(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004952.
Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 10 subjects with 245 recordings across 1 task. Total duration: 123.411 hours. Dataset size: 696.72 GB.
dataset: ds004952 | #Subj: 10 | #Chan: 128 | #Classes: 1 | Freq(Hz): 1000 | Duration(H): 123.411 | Size: 696.72 GB
Short overview of dataset ds004952; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004952 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004952
>>> dataset = DS004952(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS004952(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
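The summary row above is enough for a rough storage and loading budget before any download. Plain arithmetic on the published ds004952 figures:

```python
# Rough budgeting from the ds004952 summary (published figures only):
# 10 subjects, 245 recordings, 123.411 hours of EEG, 696.72 GB on disk.
n_subjects, n_recordings = 10, 245
total_hours, total_gb = 123.411, 696.72

hours_per_subject = total_hours / n_subjects    # about 12.34 h per subject
gb_per_recording = total_gb / n_recordings      # about 2.84 GB per recording

print(f"{hours_per_subject:.2f} h/subject, {gb_per_recording:.2f} GB/recording")
```

At roughly 2.84 GB per recording, caching even a small subset of this dataset is a substantial download, so filtering with query before loading is worthwhile.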
- class eegdash.dataset.DS004980(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004980.
Modality: Visual | Type: Perception | Subjects: Healthy
This dataset contains 17 subjects with 17 recordings across 1 task. Total duration: 36.846 hours. Dataset size: 15.82 GB.
dataset: ds004980 | #Subj: 17 | #Chan: 64 | #Classes: 1 | Freq(Hz): 499.991 to 500 (varies per recording) | Duration(H): 36.846 | Size: 15.82 GB
Short overview of dataset ds004980; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004980 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004980
>>> dataset = DS004980(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS004980(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
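ds004980 reports many per-recording rates just below 500 Hz, which looks like hardware clock drift rather than distinct acquisition settings. When grouping recordings by rate, snapping to a nominal value can help; nominal_sfreq is a hypothetical helper sketch, not an eegdash function:

```python
def nominal_sfreq(sfreq, tol=0.5):
    """Snap a measured sampling rate to the nearest integer if within tol Hz.

    Hypothetical helper for grouping recordings whose reported rates
    differ only by clock drift (e.g. 499.991... vs 500).
    """
    nearest = round(sfreq)
    return nearest if abs(sfreq - nearest) <= tol else sfreq

# A few of ds004980's reported rates collapse to a single nominal 500 Hz:
rates = [499.9911824, 499.9915179, 499.9923795, 500.0]
print({nominal_sfreq(r) for r in rates})  # {500}
```

The tolerance is deliberately tight (0.5 Hz by default) so genuinely different rates such as 512 Hz are left untouched.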
- class eegdash.dataset.DS004995(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds004995.
Modality: Visual | Type: Attention | Subjects: n/a
This dataset contains 20 subjects with 20 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 27.60 GB.
dataset: ds004995 | #Subj: 20 | #Chan: n/a | #Classes: 1 | Freq(Hz): n/a | Duration(H): 0.0 | Size: 27.60 GB
Short overview of dataset ds004995; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds004995 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS004995
>>> dataset = DS004995(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS004995(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005021(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005021.
Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 36 subjects with 36 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 83.20 GB.
dataset: ds005021 | #Subj: 36 | #Chan: 64 | #Classes: 1 | Freq(Hz): 1024 | Duration(H): 0.0 | Size: 83.20 GB
Short overview of dataset ds005021; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005021 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005021
>>> dataset = DS005021(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005021(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005028(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005028.
Modality: Visual | Type: Motor | Subjects: n/a
This dataset contains 11 subjects with 66 recordings across 3 tasks. Total duration: 0.0 hours. Dataset size: 1.46 GB.
dataset: ds005028 | #Subj: 11 | #Chan: n/a | #Classes: 3 | Freq(Hz): n/a | Duration(H): 0.0 | Size: 1.46 GB
Short overview of dataset ds005028; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005028 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005028
>>> dataset = DS005028(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005028(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005034(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005034.
Modality: Visual | Type: Memory | Subjects: Healthy
This dataset contains 25 subjects with 100 recordings across 2 tasks. Total duration: 37.525 hours. Dataset size: 61.36 GB.
dataset: ds005034 | #Subj: 25 | #Chan: 129 | #Classes: 2 | Freq(Hz): 1000 | Duration(H): 37.525 | Size: 61.36 GB
Short overview of dataset ds005034; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005034 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005034
>>> dataset = DS005034(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005034(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005048(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005048.
Modality: Auditory | Type: Attention | Subjects: Dementia
This dataset contains 35 subjects with 35 recordings across 1 task. Total duration: 5.203 hours. Dataset size: 355.91 MB.
dataset: ds005048 | #Subj: 35 | #Chan: n/a | #Classes: 1 | Freq(Hz): 250 | Duration(H): 5.203 | Size: 355.91 MB
Short overview of dataset ds005048; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005048 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005048
>>> dataset = DS005048(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005048(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005079(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005079.
Modality: Multisensory | Type: Affect | Subjects: Healthy
This dataset contains 1 subject with 60 recordings across 15 tasks. Total duration: 3.25 hours. Dataset size: 1.68 GB.
dataset: ds005079 | #Subj: 1 | #Chan: 65 | #Classes: 15 | Freq(Hz): 500 | Duration(H): 3.25 | Size: 1.68 GB
Short overview of dataset ds005079; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005079 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005079
>>> dataset = DS005079(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005079(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005089(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005089.
Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 36 subjects with 36 recordings across 1 task. Total duration: 68.82 hours. Dataset size: 68.01 GB.
dataset: ds005089 | #Subj: 36 | #Chan: 63 | #Classes: 1 | Freq(Hz): 1000 | Duration(H): 68.82 | Size: 68.01 GB
Short overview of dataset ds005089; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005089 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005089
>>> dataset = DS005089(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005089(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005095(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005095.
Modality: Visual | Type: Memory | Subjects: Healthy
This dataset contains 48 subjects with 48 recordings across 1 task. Total duration: 16.901 hours. Dataset size: 14.28 GB.
dataset: ds005095 | #Subj: 48 | #Chan: 63 | #Classes: 1 | Freq(Hz): 1000 | Duration(H): 16.901 | Size: 14.28 GB
Short overview of dataset ds005095; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005095 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005095
>>> dataset = DS005095(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005095(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005106(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005106.
Modality: Visual | Type: Attention | Subjects: Healthy
This dataset contains 42 subjects with 42 recordings across 1 task. Total duration: 0.012 hours. Dataset size: 12.62 GB.
dataset: ds005106 | #Subj: 42 | #Chan: 32 | #Classes: 1 | Freq(Hz): 500 | Duration(H): 0.012 | Size: 12.62 GB
Short overview of dataset ds005106; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005106 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005106
>>> dataset = DS005106(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005106(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005114(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005114.
Modality: Visual | Type: Attention | Subjects: TBI
This dataset contains 91 subjects with 223 recordings across 1 task. Total duration: 125.701 hours. Dataset size: 56.47 GB.
dataset: ds005114 | #Subj: 91 | #Chan: 64 | #Classes: 1 | Freq(Hz): 500 | Duration(H): 125.701 | Size: 56.47 GB
Short overview of dataset ds005114; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005114 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005114
>>> dataset = DS005114(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005114(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005121(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005121.
Modality: Sleep | Type: Memory | Subjects: Healthy
This dataset contains 34 subjects with 39 recordings across 1 task. Total duration: 41.498 hours. Dataset size: 9.04 GB.
dataset: ds005121 | #Subj: 34 | #Chan: 58 | #Classes: 1 | Freq(Hz): 512 | Duration(H): 41.498 | Size: 9.04 GB
Short overview of dataset ds005121; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005121 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005121
>>> dataset = DS005121(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005121(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005131(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005131.
Modality: Auditory | Type: Attention/Memory | Subjects: Healthy
This dataset contains 58 subjects with 63 recordings across 2 tasks. Total duration: 52.035 hours. Dataset size: 22.35 GB.
dataset: ds005131 | #Subj: 58 | #Chan: 64 | #Classes: 2 | Freq(Hz): 500 | Duration(H): 52.035 | Size: 22.35 GB
Short overview of dataset ds005131; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005131 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005131
>>> dataset = DS005131(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005131(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005170(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005170.
Modality: Visual | Type: other | Subjects: n/a
This dataset contains 5 subjects with 225 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 261.77 GB.
dataset: ds005170 | #Subj: 5 | #Chan: n/a | #Classes: 1 | Freq(Hz): n/a | Duration(H): 0.0 | Size: 261.77 GB
Short overview of dataset ds005170 more details in the NeMAR documentation.
This dataset class provides convenient access to the
ds005170
dataset through the EEGDash interface. It inherits all functionality fromEEGDashDataset
with the dataset filter pre-configured.- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key
dataset
.s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005170
>>> dataset = DS005170(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005170(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005185(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005185.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 20 subjects with 356 recordings across 3 tasks. Total duration: 0.0 hours. Dataset size: 783.25 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005185  20     8      3         500       0            783.25 GB
Short overview of dataset ds005185; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005185 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005185
>>> dataset = DS005185(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005185(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005189(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005189.
Modality: Visual | Type: Memory | Subjects: Healthy
This dataset contains 30 subjects with 30 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 17.03 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005189  30     61     1         1000      0            17.03 GB
Short overview of dataset ds005189; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005189 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005189
>>> dataset = DS005189(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005189(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005207(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005207.
Modality: Sleep | Type: Sleep | Subjects: Healthy
This dataset contains 20 subjects with 39 recordings across 1 task. Total duration: 422.881 hours. Dataset size: 69.12 GB.
dataset   #Subj  #Chan                   #Classes  Freq(Hz)  Duration(H)  Size
ds005207  20     6,10,12,14,15,16,17,18  1         128,250   422.881      69.12 GB
Short overview of dataset ds005207; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005207 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005207
>>> dataset = DS005207(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005207(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
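Because ds005207 mixes sampling rates (128 and 250 Hz), analyses that pool recordings usually resample to a common rate first. One common choice, sketched below, is the lowest rate present so that no recording is upsampled; pick_target_sfreq is a hypothetical helper, and the commented raw.resample call assumes recording.load() returns an MNE Raw object as in the example above.

```python
def pick_target_sfreq(rates):
    """Pick the lowest sampling rate as the shared target, so every
    recording is downsampled (or left alone) rather than upsampled."""
    return min(rates)

target = pick_target_sfreq([128.0, 250.0])  # -> 128.0

# Hedged usage with a loaded recording:
# raw = dataset[0].load()
# if raw.info["sfreq"] != target:
#     raw.resample(target)
```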
- class eegdash.dataset.DS005262(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005262.
Modality: Visual | Type: other | Subjects: Healthy
This dataset contains 12 subjects with 186 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 688.75 MB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005262  12     n/a    1         n/a       0            688.75 MB
Short overview of dataset ds005262; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005262 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005262
>>> dataset = DS005262(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005262(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005273(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005273.
Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 33 subjects with 33 recordings across 1 task. Total duration: 58.055 hours. Dataset size: 44.42 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005273  33     63     1         1000      58.055       44.42 GB
Short overview of dataset ds005273; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005273 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005273
>>> dataset = DS005273(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005273(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005274(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005274.
Modality: n/a | Type: n/a | Subjects: Healthy
This dataset contains 22 subjects with 22 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 71.91 MB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005274  22     6      1         500       0            71.91 MB
Short overview of dataset ds005274; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005274 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005274
>>> dataset = DS005274(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005274(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005296(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005296.
Modality: Multisensory | Type: Decision-making | Subjects: Healthy
This dataset contains 62 subjects with 62 recordings across 1 task. Total duration: 37.205 hours. Dataset size: 8.53 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005296  62     n/a    1         500       37.205       8.53 GB
Short overview of dataset ds005296; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005296 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005296
>>> dataset = DS005296(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005296(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005305(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005305.
Modality: Visual | Type: Decision-making | Subjects: Healthy
This dataset contains 165 subjects with 165 recordings across 1 task. Total duration: 14.136 hours. Dataset size: 6.41 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005305  165    64     1         2048,512  14.136       6.41 GB
Short overview of dataset ds005305; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005305 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005305
>>> dataset = DS005305(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005305(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005307(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005307.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 7 subjects with 73 recordings across 1 task. Total duration: 1.335 hours. Dataset size: 18.59 GB.
dataset   #Subj  #Chan   #Classes  Freq(Hz)  Duration(H)  Size
ds005307  7      72,104  1         10000     1.335        18.59 GB
Short overview of dataset ds005307; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005307 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005307
>>> dataset = DS005307(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005307(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005340(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005340.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 15 subjects with 15 recordings across 1 task. Total duration: 35.297 hours. Dataset size: 19.14 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005340  15     2      1         10000     35.297       19.14 GB
Short overview of dataset ds005340; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005340 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005340
>>> dataset = DS005340(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005340(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005342(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005342.
Modality: Visual | Type: Motor | Subjects: Healthy
This dataset contains 32 subjects with 32 recordings across 1 task. Total duration: 33.017 hours. Dataset size: 2.03 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005342  32     17     1         250       33.017       2.03 GB
Short overview of dataset ds005342; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005342 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005342
>>> dataset = DS005342(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005342(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005345(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005345.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 26 subjects with 26 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 405.13 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005345  26     64     1         500       0            405.13 GB
Short overview of dataset ds005345; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005345 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005345
>>> dataset = DS005345(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005345(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005363(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005363.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 43 subjects with 43 recordings across 1 task. Total duration: 43.085 hours. Dataset size: 17.71 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005363  43     64     1         1000      43.085       17.71 GB
Short overview of dataset ds005363; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005363 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005363
>>> dataset = DS005363(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005363(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005383(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005383.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 30 subjects with 240 recordings across 1 task. Total duration: 8.327 hours. Dataset size: 17.43 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005383  30     30     1         200       8.327        17.43 GB
Short overview of dataset ds005383; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005383 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005383
>>> dataset = DS005383(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005383(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005385(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005385.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 608 subjects with 3264 recordings across 2 tasks. Total duration: 169.62 hours. Dataset size: 74.07 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005385  608    64     2         1000      169.62       74.07 GB
Short overview of dataset ds005385; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005385 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005385
>>> dataset = DS005385(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005385(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005397(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005397.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 26 subjects with 26 recordings across 1 task. Total duration: 27.923 hours. Dataset size: 12.10 GB.
dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005397  26     64     1         500       27.923       12.10 GB
Short overview of dataset ds005397; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005397 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters ANDed with the dataset selection. Must not contain the key "dataset".
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005397
>>> dataset = DS005397(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005397(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005403(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005403. Modality: nan | Type: nan | Subjects: nan
This dataset contains 32 subjects with 32 recordings across 1 task. Total duration: 13.383 hours. Dataset size: 135.65 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005403 | 32    | 62    | 1        | 10000    | 13.383     | 135.65 GB

Short overview of dataset ds005403; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005403 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005403
>>> dataset = DS005403(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005403(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005406(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005406. Modality: nan | Type: nan | Subjects: nan
This dataset contains 29 subjects with 29 recordings across 1 task. Total duration: 15.452 hours. Dataset size: 13.26 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005406 | 29    | 63    | 1        | 1000     | 15.452     | 13.26 GB

Short overview of dataset ds005406; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005406 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005406
>>> dataset = DS005406(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005406(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005410(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005410. Modality: nan | Type: nan | Subjects: nan
This dataset contains 81 subjects with 81 recordings across 1 task. Total duration: 22.976 hours. Dataset size: 19.76 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005410 | 81    | 63    | 1        | 1000     | 22.976     | 19.76 GB

Short overview of dataset ds005410; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005410 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005410
>>> dataset = DS005410(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005410(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005416(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005416. Modality: nan | Type: nan | Subjects: nan
This dataset contains 23 subjects with 23 recordings across 1 task. Total duration: 24.68 hours. Dataset size: 21.30 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005416 | 23    | 64    | 1        | 1000     | 24.68      | 21.30 GB

Short overview of dataset ds005416; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005416 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005416
>>> dataset = DS005416(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005416(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005420(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005420. Modality: nan | Type: nan | Subjects: nan
This dataset contains 37 subjects with 72 recordings across 2 tasks. Total duration: 5.485 hours. Dataset size: 372.11 MB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005420 | 37    | 20    | 2        | 500      | 5.485      | 372.11 MB

Short overview of dataset ds005420; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005420 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005420
>>> dataset = DS005420(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005420(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005429(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005429. Modality: nan | Type: nan | Subjects: nan
This dataset contains 15 subjects with 61 recordings across 3 tasks. Total duration: 14.474 hours. Dataset size: 16.47 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz)  | Duration(H) | Size
ds005429 | 15    | 64    | 3        | 2500,5000 | 14.474     | 16.47 GB

Short overview of dataset ds005429; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005429 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005429
>>> dataset = DS005429(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005429(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005486(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005486. Modality: nan | Type: nan | Subjects: nan
This dataset contains 159 subjects with 445 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 371.04 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz)   | Duration(H) | Size
ds005486 | 159   |       | 1        | 25000,5000 | 0          | 371.04 GB

Short overview of dataset ds005486; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005486 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005486
>>> dataset = DS005486(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005486(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005505. Modality: nan | Type: nan | Subjects: nan
This dataset contains 136 subjects with 1342 recordings across 10 tasks. Total duration: 125.366 hours. Dataset size: 103.11 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005505 | 136   | 129   | 10       | 500      | 125.366    | 103.11 GB

Short overview of dataset ds005505; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005505 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005505
>>> dataset = DS005505(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005505(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005506(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005506. Modality: nan | Type: nan | Subjects: nan
This dataset contains 150 subjects with 1405 recordings across 10 tasks. Total duration: 127.896 hours. Dataset size: 111.88 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005506 | 150   | 129   | 10       | 500      | 127.896    | 111.88 GB

Short overview of dataset ds005506; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005506 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005506
>>> dataset = DS005506(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005506(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005507(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005507. Modality: nan | Type: nan | Subjects: nan
This dataset contains 184 subjects with 1812 recordings across 10 tasks. Total duration: 168.649 hours. Dataset size: 139.37 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005507 | 184   | 129   | 10       | 500      | 168.649    | 139.37 GB

Short overview of dataset ds005507; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005507 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005507
>>> dataset = DS005507(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005507(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005508(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005508. Modality: nan | Type: nan | Subjects: nan
This dataset contains 324 subjects with 3342 recordings across 10 tasks. Total duration: 269.281 hours. Dataset size: 229.81 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005508 | 324   | 129   | 10       | 500      | 269.281    | 229.81 GB

Short overview of dataset ds005508; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005508 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005508
>>> dataset = DS005508(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005508(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005509(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005509. Modality: nan | Type: nan | Subjects: nan
This dataset contains 330 subjects with 3326 recordings across 10 tasks. Total duration: 274.559 hours. Dataset size: 224.17 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005509 | 330   | 129   | 10       | 500      | 274.559    | 224.17 GB

Short overview of dataset ds005509; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005509 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005509
>>> dataset = DS005509(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005509(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005510(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005510. Modality: nan | Type: nan | Subjects: nan
This dataset contains 135 subjects with 1227 recordings across 10 tasks. Total duration: 112.464 hours. Dataset size: 90.80 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005510 | 135   | 129   | 10       | 500      | 112.464    | 90.80 GB

Short overview of dataset ds005510; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005510 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005510
>>> dataset = DS005510(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005510(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005511(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005511. Modality: nan | Type: nan | Subjects: nan
This dataset contains 381 subjects with 3100 recordings across 10 tasks. Total duration: 285.629 hours. Dataset size: 244.83 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005511 | 381   | 6,129 | 10       | 500      | 285.629    | 244.83 GB

Short overview of dataset ds005511; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005511 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005511
>>> dataset = DS005511(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005511(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005512(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005512. Modality: nan | Type: nan | Subjects: nan
This dataset contains 257 subjects with 2320 recordings across 10 tasks. Total duration: 196.205 hours. Dataset size: 157.19 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005512 | 257   | 129   | 10       | 500      | 196.205    | 157.19 GB

Short overview of dataset ds005512; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005512 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005512
>>> dataset = DS005512(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005512(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005514(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005514. Modality: nan | Type: nan | Subjects: nan
This dataset contains 295 subjects with 2885 recordings across 10 tasks. Total duration: 213.008 hours. Dataset size: 185.03 GB.

dataset  | #Subj | #Chan | #Classes | Freq(Hz) | Duration(H) | Size
ds005514 | 295   | 129   | 10       | 500      | 213.008    | 185.03 GB

Short overview of dataset ds005514; more details in the NEMAR documentation.
This dataset class provides convenient access to the ds005514 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005514
>>> dataset = DS005514(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")

Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")

Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005514(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005515(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005515.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 533 subjects with 2516 recordings across 8 tasks. Total duration: 198.849 hours. Dataset size: 160.55 GB.
dataset: ds005515 | #Subj: 533 | #Chan: 129 | #Classes: 8 | Freq (Hz): 500 | Duration (h): 198.849 | Size: 160.55 GB
Short overview of dataset ds005515; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005515 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005515
>>> dataset = DS005515(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005515(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005516(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005516.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 430 subjects with 3397 recordings across 8 tasks. Total duration: 256.932 hours. Dataset size: 219.39 GB.
dataset: ds005516 | #Subj: 430 | #Chan: 129 | #Classes: 8 | Freq (Hz): 500 | Duration (h): 256.932 | Size: 219.39 GB
Short overview of dataset ds005516; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005516 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005516
>>> dataset = DS005516(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005516(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005520(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005520.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 23 subjects with 69 recordings across 3 tasks. Total duration: 60.73 hours. Dataset size: 275.98 GB.
dataset: ds005520 | #Subj: 23 | #Chan: 67 | #Classes: 3 | Freq (Hz): 1000 | Duration (h): 60.73 | Size: 275.98 GB
Short overview of dataset ds005520; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005520 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005520
>>> dataset = DS005520(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005520(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005530(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005530.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 17 subjects with 21 recordings across 1 task. Total duration: 154.833 hours. Dataset size: 6.47 GB.
dataset: ds005530 | #Subj: 17 | #Chan: 10 | #Classes: 1 | Freq (Hz): 500 | Duration (h): 154.833 | Size: 6.47 GB
Short overview of dataset ds005530; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005530 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005530
>>> dataset = DS005530(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005530(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005540(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005540.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 59 subjects with 103 recordings across 1 task. Total duration: 0.0 hours. Dataset size: 70.40 GB.
dataset: ds005540 | #Subj: 59 | #Chan: 64 | #Classes: 1 | Freq (Hz): 1200,600 | Duration (h): 0 | Size: 70.40 GB
Short overview of dataset ds005540; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005540 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005540
>>> dataset = DS005540(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005540(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005555(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005555.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 128 subjects with 256 recordings across 1 task. Total duration: 2002.592 hours. Dataset size: 33.45 GB.
dataset: ds005555 | #Subj: 128 | #Chan: 2,8,9,11,12,13 | #Classes: 1 | Freq (Hz): 256 | Duration (h): 2002.59 | Size: 33.45 GB
Short overview of dataset ds005555; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005555 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005555
>>> dataset = DS005555(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005555(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
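Several of the datasets above are large (ds005555 alone is listed at 33.45 GB), so it can be worth checking free space under cache_dir before instantiating a class. A small helper (hypothetical, not part of eegdash) to convert the size strings from the tables above into byte counts:

```python
# Hypothetical helper (not part of eegdash): convert a size string
# from the dataset tables, e.g. "33.45 GB", into a byte count.
def size_to_bytes(size: str) -> int:
    value, unit = size.split()
    factor = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
    return round(float(value) * factor[unit])

# ds005555 above is listed at 33.45 GB:
print(size_to_bytes("33.45 GB"))  # 33450000000
```

The result can be compared against shutil.disk_usage(cache_dir).free from the standard library before starting a download.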
- class eegdash.dataset.DS005565(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005565.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 24 subjects with 24 recordings across 1 task. Total duration: 11.436 hours. Dataset size: 2.62 GB.
dataset: ds005565 | #Subj: 24 | #Chan: n/a | #Classes: 1 | Freq (Hz): 500 | Duration (h): 11.436 | Size: 2.62 GB
Short overview of dataset ds005565; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005565 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005565
>>> dataset = DS005565(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005565(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005571(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005571.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 24 subjects with 45 recordings across 2 tasks. Total duration: 0.0 hours. Dataset size: 62.77 GB.
dataset: ds005571 | #Subj: 24 | #Chan: 64 | #Classes: 2 | Freq (Hz): 5000 | Duration (h): 0 | Size: 62.77 GB
Short overview of dataset ds005571; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005571 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005571
>>> dataset = DS005571(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005571(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005586(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005586.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 23 subjects with 23 recordings across 1 task. Total duration: 33.529 hours. Dataset size: 28.68 GB.
dataset: ds005586 | #Subj: 23 | #Chan: 60 | #Classes: 1 | Freq (Hz): 1000 | Duration (h): 33.529 | Size: 28.68 GB
Short overview of dataset ds005586; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005586 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005586
>>> dataset = DS005586(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005586(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005594(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005594.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 16 subjects with 16 recordings across 1 task. Total duration: 12.934 hours. Dataset size: 10.89 GB.
dataset: ds005594 | #Subj: 16 | #Chan: 64 | #Classes: 1 | Freq (Hz): 1000 | Duration (h): 12.934 | Size: 10.89 GB
Short overview of dataset ds005594; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005594 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005594
>>> dataset = DS005594(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005594(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005620(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005620.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 21 subjects with 202 recordings across 3 tasks. Total duration: 21.811 hours. Dataset size: 77.30 GB.
dataset: ds005620 | #Subj: 21 | #Chan: 64,65 | #Classes: 3 | Freq (Hz): 5000 | Duration (h): 21.811 | Size: 77.30 GB
Short overview of dataset ds005620; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005620 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005620
>>> dataset = DS005620(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005620(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005672(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005672.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 3 subjects with 3 recordings across 1 task. Total duration: 4.585 hours. Dataset size: 4.23 GB.
dataset: ds005672 | #Subj: 3 | #Chan: 65,69 | #Classes: 1 | Freq (Hz): 1000 | Duration (h): 4.585 | Size: 4.23 GB
Short overview of dataset ds005672; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005672 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005672
>>> dataset = DS005672(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005672(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005688(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005688.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 20 subjects with 89 recordings across 5 tasks. Total duration: 2.502 hours. Dataset size: 8.42 GB.
dataset: ds005688 | #Subj: 20 | #Chan: 4 | #Classes: 5 | Freq (Hz): 10000,20000 | Duration (h): 2.502 | Size: 8.42 GB
Short overview of dataset ds005688; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005688 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005688
>>> dataset = DS005688(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005688(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005692(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005692.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 30 subjects with 59 recordings across 1 task. Total duration: 112.206 hours. Dataset size: 92.81 GB.
dataset: ds005692 | #Subj: 30 | #Chan: 24 | #Classes: 1 | Freq (Hz): 5000 | Duration (h): 112.206 | Size: 92.81 GB
Short overview of dataset ds005692; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005692 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005692
>>> dataset = DS005692(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005692(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005697(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005697.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 50 subjects with 50 recordings across 1 task. Total duration: 77.689 hours. Dataset size: 66.58 GB.
dataset: ds005697 | #Subj: 50 | #Chan: 65,69 | #Classes: 1 | Freq (Hz): 1000 | Duration (h): 77.689 | Size: 66.58 GB
Short overview of dataset ds005697; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005697 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005697
>>> dataset = DS005697(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005697(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005779(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases: EEGDashDataset
OpenNeuro dataset ds005779.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 19 subjects with 250 recordings across 16 tasks. Total duration: 16.65 hours. Dataset size: 88.67 GB.
dataset: ds005779 | #Subj: 19 | #Chan: 64,67,70 | #Classes: 16 | Freq (Hz): 5000 | Duration (h): 16.65 | Size: 88.67 GB
Short overview of dataset ds005779; see the NEMAR documentation for more details.
This dataset class provides convenient access to the ds005779 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details are available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005779
>>> dataset = DS005779(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get a subset with a specific task or subject
>>> filtered_dataset = DS005779(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005787(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds005787.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 19 subjects with 448 recordings across 1 task. Total duration: 23.733 hours. Dataset size: 27.09 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005787  19     64,66  1         1000,500  23.733       27.09 GB

Short overview of dataset ds005787; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds005787 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005787
>>> dataset = DS005787(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005787(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005795(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds005795.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 34 subjects with 39 recordings across 2 tasks. Total duration: 0.0 hours. Dataset size: 6.43 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005795  34     72     2         500       0            6.43 GB

Short overview of dataset ds005795; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds005795 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005795
>>> dataset = DS005795(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005795(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005811(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds005811.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 19 subjects with 448 recordings across 1 task. Total duration: 23.733 hours. Dataset size: 24.12 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005811  19     62     1         1000,500  23.733       24.12 GB

Short overview of dataset ds005811; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds005811 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005811
>>> dataset = DS005811(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005811(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005815(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds005815.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 26 subjects with 137 recordings across 4 tasks. Total duration: 38.618 hours. Dataset size: 9.91 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005815  26     30     4         1000,500  38.618       9.91 GB

Short overview of dataset ds005815; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds005815 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005815
>>> dataset = DS005815(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005815(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005863(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds005863.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 127 subjects with 357 recordings across 4 tasks. Total duration: 0.0 hours. Dataset size: 10.59 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005863  127    27     4         500       0            10.59 GB

Short overview of dataset ds005863; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds005863 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005863
>>> dataset = DS005863(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005863(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005866(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds005866.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 60 subjects with 60 recordings across 1 task. Total duration: 15.976 hours. Dataset size: 3.57 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005866  60     —      1         500       15.976       3.57 GB

Short overview of dataset ds005866; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds005866 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005866
>>> dataset = DS005866(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005866(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005868(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds005868.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 48 subjects with 48 recordings across 1 task. Total duration: 13.094 hours. Dataset size: 2.93 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005868  48     —      1         500       13.094       2.93 GB

Short overview of dataset ds005868; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds005868 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005868
>>> dataset = DS005868(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005868(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005873(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds005873.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 125 subjects with 2850 recordings across 1 task. Total duration: 11935.09 hours. Dataset size: 117.21 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005873  125    2      1         256       11935.1      117.21 GB

Short overview of dataset ds005873; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds005873 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005873
>>> dataset = DS005873(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005873(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.DS005876(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]
Bases:
EEGDashDataset
OpenNeuro dataset ds005876.
Modality: n/a | Type: n/a | Subjects: n/a
This dataset contains 29 subjects with 29 recordings across 1 task. Total duration: 16.017 hours. Dataset size: 7.61 GB.

dataset   #Subj  #Chan  #Classes  Freq(Hz)  Duration(H)  Size
ds005876  29     32     1         1000      16.017       7.61 GB

Short overview of dataset ds005876; more details are available in the NEMAR documentation.
This dataset class provides convenient access to the ds005876 dataset through the EEGDash interface. It inherits all functionality from EEGDashDataset with the dataset filter pre-configured.
- Parameters:
cache_dir (str) – Directory to cache downloaded data.
query (dict, optional) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str, optional) – Base S3 bucket used to locate the data.
**kwargs – Additional arguments passed to the base dataset class.
See also
EEGDashDataset
Base dataset class with full API documentation
Notes
More details available in the NEMAR documentation.
Examples
Basic usage:
>>> from eegdash.dataset import DS005876
>>> dataset = DS005876(cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
Load a specific recording:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
Filter by additional criteria:
>>> # Get subset with specific task or subject
>>> filtered_dataset = DS005876(
...     cache_dir="./data",
...     query={"task": "RestingState"}  # if applicable
... )
- class eegdash.dataset.EEGChallengeDataset(release: str, cache_dir: str, mini: bool = True, query: dict | None = None, s3_bucket: str | None = 's3://nmdatasets/NeurIPS25', **kwargs)[source]
Bases:
EEGDashDataset
A dataset helper for the EEG 2025 Challenge.
This class simplifies access to the EEG 2025 Challenge datasets. It is a specialized version of EEGDashDataset that is pre-configured for the challenge’s data releases. It automatically maps a release name (e.g., “R1”) to the corresponding OpenNeuro dataset and handles the selection of subject subsets (e.g., the “mini” release).
- Parameters:
release (str) – The name of the challenge release to load. Must be one of the keys in RELEASE_TO_OPENNEURO_DATASET_MAP (e.g., “R1”, “R2”, …, “R11”).
cache_dir (str) – The local directory where the dataset will be downloaded and cached.
mini (bool, default True) – If True, the dataset is restricted to the official “mini” subset of subjects for the specified release. If False, all subjects for the release are included.
query (dict, optional) – An additional MongoDB-style query to apply as a filter. This query is combined with the release and subject filters using a logical AND. The query must not contain the dataset key, as this is determined by the release parameter.
s3_bucket (str, optional) – The base S3 bucket URI where the challenge data is stored. Defaults to the official challenge bucket.
**kwargs – Additional keyword arguments that are passed directly to the EEGDashDataset constructor.
- Raises:
ValueError – If the specified release is unknown, or if the query argument contains a dataset key. Also raised if mini is True and a requested subject is not part of the official mini-release subset.
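The release-to-dataset mapping and the ValueError for unknown releases can be sketched as follows. This is a simplified, self-contained illustration: the mapping values and the helper name `resolve_release` are assumptions for the sketch, not the real eegdash internals (the actual map ships with the package as RELEASE_TO_OPENNEURO_DATASET_MAP).

```python
# Hypothetical sketch of mapping a challenge release name to an OpenNeuro
# dataset ID; the dataset IDs below are illustrative placeholders.
RELEASE_TO_OPENNEURO_DATASET_MAP = {
    "R1": "ds005505",  # illustrative ID, not the real mapping
    "R2": "ds005506",  # illustrative ID, not the real mapping
}

def resolve_release(release: str) -> str:
    """Return the OpenNeuro dataset ID for a challenge release name."""
    try:
        return RELEASE_TO_OPENNEURO_DATASET_MAP[release]
    except KeyError:
        # Mirrors the documented behavior: unknown releases raise ValueError.
        raise ValueError(
            f"Unknown release {release!r}; expected one of "
            f"{sorted(RELEASE_TO_OPENNEURO_DATASET_MAP)}"
        ) from None
```

Validating the release name before any download starts gives users a fast, clear failure instead of a missing-data error later.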
See also
EEGDashDataset
The base class for creating datasets from queries.
Examples
Basic usage, loading the mini subset of a challenge release:
>>> from eegdash.dataset import EEGChallengeDataset
>>> dataset = EEGChallengeDataset(release="R1", cache_dir="./data")
>>> print(f"Number of recordings: {len(dataset)}")
The base class can also be used directly with dataset and subject filtering:
>>> from eegdash import EEGDashDataset
>>> dataset = EEGDashDataset(
...     cache_dir="./data",
...     dataset="ds002718",
...     subject="012"
... )
Filter by multiple subjects and a specific task:
>>> subjects = ["012", "013", "014"]
>>> dataset = EEGDashDataset(
...     cache_dir="./data",
...     dataset="ds002718",
...     subject=subjects,
...     task="RestingState"
... )
Load and inspect EEG data from recordings:
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
...     print(f"Duration: {raw.times[-1]:.1f} seconds")
Advanced filtering with raw MongoDB queries:
>>> query = {
...     "dataset": "ds002718",
...     "subject": {"$in": ["012", "013"]},
...     "task": "RestingState"
... }
>>> dataset = EEGDashDataset(cache_dir="./data", query=query)
Working with dataset collections and braindecode integration:
>>> # EEGDashDataset is a braindecode BaseConcatDataset
>>> for i, recording in enumerate(dataset):
...     if i >= 2:  # limit output
...         break
...     print(f"Recording {i}: {recording.description}")
...     raw = recording.load()
...     print(f"  Channels: {len(raw.ch_names)}, Duration: {raw.times[-1]:.1f}s")
- eegdash.dataset.register_openneuro_datasets(summary_file: str | Path, *, base_class=None, namespace: Dict[str, Any] | None = None, add_to_all: bool = True) Dict[str, type] [source]
Dynamically create and register dataset classes from a summary file.
This function reads a CSV file containing summaries of OpenNeuro datasets and dynamically creates a Python class for each dataset. These classes inherit from a specified base class and are pre-configured with the dataset’s ID.
- Parameters:
summary_file (str or pathlib.Path) – The path to the CSV file containing the dataset summaries.
base_class (type, optional) – The base class from which the new dataset classes will inherit. If not provided, eegdash.api.EEGDashDataset is used.
namespace (dict, optional) – The namespace (e.g., globals()) into which the newly created classes will be injected. Defaults to the local globals() of this module.
add_to_all (bool, default True) – If True, the names of the newly created classes are added to the __all__ list of the target namespace, making them importable with from … import *.
- Returns:
A dictionary mapping the names of the registered classes to the class types themselves.
- Return type:
dict[str, type]
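The registration mechanism described above can be sketched with Python's built-in type() constructor. This is a simplified stand-in, not the real implementation: the stand-in base class, the CSV column name "dataset", and the uppercase naming rule are assumptions for illustration.

```python
import csv
import io

class EEGDashDataset:
    """Stand-in for eegdash.api.EEGDashDataset (illustration only)."""
    def __init__(self, cache_dir, **kwargs):
        self.cache_dir = cache_dir

def register_openneuro_datasets_sketch(summary_csv, base_class=EEGDashDataset,
                                       namespace=None, add_to_all=True):
    """Create one subclass per CSV row and inject it into `namespace`."""
    namespace = namespace if namespace is not None else {}
    registered = {}
    for row in csv.DictReader(io.StringIO(summary_csv)):
        dataset_id = row["dataset"]            # e.g. "ds005787"
        cls_name = dataset_id.upper()          # e.g. "DS005787"
        # Dynamically build a subclass pre-configured with the dataset ID.
        cls = type(cls_name, (base_class,), {
            "_dataset": dataset_id,
            "__doc__": f"OpenNeuro dataset {dataset_id}.",
        })
        namespace[cls_name] = cls
        if add_to_all:
            namespace.setdefault("__all__", []).append(cls_name)
        registered[cls_name] = cls
    return registered

summary = "dataset,n_subjects\nds005787,19\nds005811,19\n"
classes = register_openneuro_datasets_sketch(summary)
```

After registration, each generated class behaves like a normal subclass: `classes["DS005787"](cache_dir="./data")` constructs an instance with the dataset filter already baked in.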