NM000232: EEG dataset, 10 subjects#
THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition
Access recordings and metadata through EEGDash.
Citation: Alessandro T. Gifford, Kshitij Dwivedi, Gemma Roig, Radoslaw M. Cichy (2022). THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition. 10.17605/OSF.IO/3JK45
Modality: EEG · Subjects: 10 · Recordings: 638 · License: CC-BY 4.0 · Source: NeMAR
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import NM000232
dataset = NM000232(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = NM000232(cache_dir="./data", subject="01")
Advanced query
dataset = NM000232(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
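The `query` argument uses MongoDB-style operator syntax. As a minimal sketch of the filter semantics (a plain-Python illustration only, not EEGDash's actual matching code), equality conditions and the `$in` operator select metadata records like this:

```python
def matches(record, query):
    """Return True if a metadata record satisfies a MongoDB-style query.

    Supports plain equality and the $in operator, which is all the
    advanced-query example above uses.
    """
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
print([r["subject"] for r in selected])  # ['01', '02']
```

EEGDash ANDs such a query with the dataset's own filter, so only fields permitted for this dataset can further narrow the selection.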
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{nm000232,
title = {THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition},
author = {Alessandro T. Gifford and Kshitij Dwivedi and Gemma Roig and Radoslaw M. Cichy},
doi = {10.17605/OSF.IO/3JK45},
url = {https://doi.org/10.17605/OSF.IO/3JK45},
}
About This Dataset#
THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition
Overview
EEG dataset of 10 subjects who viewed 16,540 distinct training images and 200 test images (each repeated ~80 times) using rapid serial visual presentation (RSVP) at 5 Hz, recorded on a BrainVision actiCHamp system at 1000 Hz. The source files store 63 EEG channels (the online reference electrode is not stored). Stimuli are drawn from the THINGS database (Hebart et al. 2019). Each subject completed 4 separate sessions; each session contained:
5 training runs (~3,360 trials each) covering ~16,540 unique images
1 test run (~4,080 trials) of 200 images repeated 20× per session
2 resting-state runs (one before, one after the main experiment)
Total per subject across 4 sessions: ~66,160 training trials (16,540 images × 4 repetitions) + ~16,000 test trials (200 images × 80 repetitions).
Recording setup
Manufacturer: Brain Products (actiCHamp)
63 EEG channels (one electrode served as online reference and is not stored in the source files)
10-10 cap layout
Sampling rate: 1000 Hz
Online band-pass: 0.01-100 Hz
Triggers recorded as BrainVision stimulus annotations (not as a dedicated stim channel)
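Because triggers are stored as BrainVision stimulus annotations rather than a dedicated stim channel, event times arrive as onsets in seconds and must be converted to sample indices at the 1000 Hz sampling rate. In practice MNE's `mne.events_from_annotations` does this conversion for you; the arithmetic itself is just (the onset values below are made up for illustration):

```python
SFREQ = 1000.0  # Hz, per the recording setup above

def onset_to_sample(onset_sec, first_samp=0):
    """Convert an annotation onset in seconds to an absolute sample index."""
    return first_samp + int(round(onset_sec * SFREQ))

# With 5 Hz RSVP, consecutive stimulus onsets are ~200 ms (200 samples) apart.
onsets = [10.0, 10.2, 10.4]
print([onset_to_sample(t) for t in onsets])  # [10000, 10200, 10400]
```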
Tasks (BIDS labels)
task-train: training run (RSVP of unique images)
task-test: test run (RSVP of repeated test images)
task-rest: resting state (eyes open, fixation cross)
Run numbering
task-train: run-01..run-05 per session (5 training parts)
task-test: single run per session
task-rest: run-01 (before main task) and run-02 (after main task)
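Given the task labels and run numbering above, individual recordings can be identified from their BIDS filenames. A small sketch that parses the entities (the example filename is constructed from the documented pattern, not taken from the dataset; the run entity is optional since test runs carry no run label per session):

```python
import re

BIDS_PATTERN = re.compile(
    r"sub-(?P<subject>\d+)_ses-(?P<session>\d+)_task-(?P<task>[a-z]+)"
    r"(?:_run-(?P<run>\d+))?_eeg"
)

def parse_bids_name(name):
    """Extract subject/session/task/run entities from a BIDS EEG filename."""
    m = BIDS_PATTERN.search(name)
    return m.groupdict() if m else None

print(parse_bids_name("sub-01_ses-02_task-train_run-03_eeg.bdf"))
# {'subject': '01', 'session': '02', 'task': 'train', 'run': '03'}
```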
Events
events.tsv columns:
- onset, duration, sample, value, trial_type
- tot_img_number - global image ID (1-16540 for train; 1-200 for test; 'n/a' for target catch trials)
- img_category - integer category index
- category_name - human-readable category, e.g. "01175_roller_coaster"
- block, sequence - hierarchical position within the run
- img_in_sequence - image position within its 20-image sequence
- soa - actual stimulus onset asynchrony (~200 ms)
trial_type values:
- image - normal training/test image presentation
- target - random catch trial (subject must press a button)
- rest_marker - resting-state start/end marker
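Per-trial metadata can be read straight from each run's events.tsv. A minimal sketch using the documented columns on made-up rows (the values below are illustrative, not taken from the dataset):

```python
import csv
import io

# Two hypothetical rows using the documented events.tsv columns.
EVENTS_TSV = (
    "onset\tduration\tsample\tvalue\ttrial_type\ttot_img_number\t"
    "img_category\tcategory_name\tblock\tsequence\timg_in_sequence\tsoa\n"
    "10.0\t0.1\t10000\t1\timage\t42\t3\t01175_roller_coaster\t1\t1\t1\t0.2\n"
    "10.2\t0.1\t10200\t2\ttarget\tn/a\tn/a\tn/a\t1\t1\t2\t0.2\n"
)

rows = list(csv.DictReader(io.StringIO(EVENTS_TSV), delimiter="\t"))
# Keep only genuine image presentations, dropping catch trials.
images = [r for r in rows if r["trial_type"] == "image"]
print(len(images), images[0]["category_name"])  # 1 01175_roller_coaster
```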
Subject information
participants.tsv contains age and sex (both extracted from the behavioural .mat files in the source data).
Folder layout
- /sub-XX/ses-YY/eeg/ - main BIDS data (BDF + sidecars)
- /sourcedata/ - original BrainVision .eeg/.vhdr/.vmrk and behavioural .mat files
- /derivatives/preprocessed_eeg/ - authors' preprocessed train/test epochs
- /derivatives/resting_state/ - authors' preprocessed resting state
- /stimuli/ - image set (training_images.zip, test_images.zip) plus image_metadata.npy
- /code/ - this conversion script
Reference
Gifford, A.T., Dwivedi, K., Roig, G., & Cichy, R.M. (2022). A large and rich EEG dataset for modeling human visual object recognition. NeuroImage, 264, 119754. https://doi.org/10.1016/j.neuroimage.2022.119754
Code: https://github.com/gifale95/eeg_encoding
OSF: https://osf.io/3jk45/
Dataset Information#
Dataset ID: NM000232
Title: THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition
Author (year): Gifford2019
Canonical: —
Importable as: NM000232, Gifford2019
Year: 2022
Authors: Alessandro T. Gifford, Kshitij Dwivedi, Gemma Roig, Radoslaw M. Cichy
License: CC-BY 4.0
Citation / DOI: 10.17605/OSF.IO/3JK45
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 10
Recordings: 638
Tasks: 5
Channels: 63
Sampling rate (Hz): 1000
Duration (hours): 87.28
Pathology: Not specified
Modality: —
Type: —
Size on disk: 203.9 GB
File count: 638
Format: BIDS
License: CC-BY 4.0
DOI: doi:10.17605/OSF.IO/3JK45
API Reference#
Use the NM000232 class to access this dataset programmatically.
class eegdash.dataset.NM000232(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)#
Bases: EEGDashDataset
THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition
- Study: nm000232 (NeMAR)
- Author (year): Gifford2019
- Canonical: —
- Also importable as: NM000232, Gifford2019
- Modality: eeg. Subjects: 10; recordings: 638; tasks: 5.
Parameters:
- cache_dir (str | Path) – Directory where data are cached locally.
- query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
- s3_bucket (str | None) – Base S3 bucket used to locate the data.
- **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
Attributes:
- data_dir (Path) – Local dataset cache directory (cache_dir / dataset_id).
- query (dict) – Merged query with the dataset filter applied.
- records (list[dict] | None) – Metadata records used to build the dataset, if pre-fetched.
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/nm000232
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000232
DOI: https://doi.org/10.17605/OSF.IO/3JK45
Examples
>>> from eegdash.dataset import NM000232
>>> dataset = NM000232(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset