NM000133: EEG dataset, 8 subjects#
Alljoined1
Access recordings and metadata through EEGDash.
Citation: Jonathan Xu, Si Kai Lee, Wangshu Jiang (2024). Alljoined1. 10.82901/nemar.nm000133
Modality: eeg | Subjects: 8 | Recordings: 13 | License: CC-BY-NC-ND-4.0 | Source: nemar
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import NM000133
dataset = NM000133(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = NM000133(cache_dir="./data", subject="01")
Advanced query
dataset = NM000133(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{nm000133,
  title  = {Alljoined1},
  author = {Jonathan Xu and Si Kai Lee and Wangshu Jiang},
  year   = {2024},
  doi    = {10.82901/nemar.nm000133},
  url    = {https://doi.org/10.82901/nemar.nm000133},
}
About This Dataset#
Alljoined1: EEG Responses to Natural Images
Overview
Alljoined1 is an EEG dataset of neural responses to rapid serial visual presentation (RSVP) of natural images, designed for EEG-to-image decoding research. Eight healthy right-handed adults (6 male, 2 female; mean age 22 +/- 0.64 years, normal or corrected-to-normal vision) each viewed 10,000 natural images across two recording sessions on separate days. The original data were recorded in BioSemi Data Format (BDF) via a 64-channel BioSemi ActiveTwo system with 24-bit A/D conversion, digitized at 512 Hz. This BIDS-formatted version preserves the BDF format to maintain full 24-bit data fidelity. Reference: Xu, J., Lee, S. K., & Jiang, W. (2024). Alljoined – A dataset for EEG-to-Image decoding. https://doi.org/10.48550/arXiv.2404.05553
Recording Setup
Equipment: BioSemi ActiveTwo, 64 Ag/AgCl sintered electrodes
Montage: International 10-20 system
Sampling rate: 512 Hz
Reference: CMS/DRL (BioSemi default); average reference applied in preprocessing
Electrode offset: kept below 40 mV
Power line: 60 Hz notch filter applied during preprocessing
Task Paradigm
Participants viewed natural images in a rapid serial visual presentation (RSVP) paradigm with an oddball detection task. Each trial consisted of an image presented for 300 ms, followed by 300 ms of black screen, plus 0-50 ms of random jitter. Participants pressed the space bar when two consecutive trials contained the same image (oddball detection). Oddball trials (24 per block) were excluded from analysis.
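Given the timing above, each trial lasts between 600 and 650 ms (300 ms image + 300 ms blank + 0-50 ms jitter). A minimal sketch of that trial structure, using only the standard library (the function name and constants are illustrative, not part of the dataset code):

```python
import random

IMAGE_MS, BLANK_MS, JITTER_MAX_MS = 300, 300, 50

def trial_duration_ms(rng: random.Random) -> int:
    """Duration of one RSVP trial: image + blank + uniform random jitter."""
    return IMAGE_MS + BLANK_MS + rng.randint(0, JITTER_MAX_MS)

rng = random.Random(0)
durations = [trial_duration_ms(rng) for _ in range(1000)]
print(min(durations), max(durations))  # always within [600, 650]
```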
Stimulus Set
10,000 natural images per participant drawn from the Natural Scenes Dataset (NSD), which itself is sourced from MS-COCO:
- 1,000 shared images: the first 960 images from the NSD “shared1000” subset, shown to all participants (each image repeated 4 times per participant)
- 9,000 unique images: different for each participant
Each image was shown 4 times per participant across blocks and sessions (presented twice per block, with blocks repeated within sessions).
Subjects and Sessions
8 subjects, 1-2 sessions each (13 sessions total):
| Subject | Sessions | Notes |
|---------|----------|-------|
| sub-01 | ses-01, ses-02 | |
| sub-02 | ses-01 | |
| sub-03 | ses-01, ses-02 | Epoched file missing for ses-01 |
| sub-04 | ses-01, ses-02 | |
| sub-05 | ses-01, ses-02 | |
| sub-06 | ses-01, ses-02 | |
| sub-07 | ses-01 | |
| sub-08 | ses-01 | |
Total: approximately 46,080 epochs across all participants (approximately 3,839 events per session after oddball exclusion).
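The subject/session layout in the table can be encoded as a plain mapping for programmatic filtering (the dictionary below is derived from the table for illustration; it is not an artifact shipped with the dataset):

```python
# Sessions available per subject, per the table above.
SESSIONS = {
    "01": ["01", "02"], "02": ["01"], "03": ["01", "02"], "04": ["01", "02"],
    "05": ["01", "02"], "06": ["01", "02"], "07": ["01"], "08": ["01"],
}

# Subjects recorded in two sessions:
two_session_subjects = sorted(s for s, ses in SESSIONS.items() if len(ses) == 2)
print(two_session_subjects)                    # ['01', '03', '04', '05', '06']
print(sum(len(v) for v in SESSIONS.values()))  # 13 sessions total
```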
Data Format
Raw continuous EEG recordings are stored as BDF files (BioSemi Data Format, 24-bit resolution). The original data were distributed as MNE-Python FIF files; conversion to BDF was performed to preserve the native 24-bit precision of the BioSemi ActiveTwo system. Round-trip validation confirmed data integrity to within 1.55e-8 V (sub-nanovolt), and event onsets match exactly (zero timing error). Per-session files:
| Path | Description |
|------|-------------|
| `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_eeg.bdf` | Raw EEG |
| `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_events.tsv` | Event markers |
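The per-session layout above is regular enough to generate paths programmatically; a small helper following the pattern in the table (the function and the root path are illustrative, not part of EEGDash):

```python
from pathlib import Path

def session_paths(root: str, subject: str, session: str) -> dict:
    """Build the raw EEG and events paths for one subject/session,
    following the BIDS layout shown in the table above."""
    eeg_dir = Path(root) / f"sub-{subject}" / f"ses-{session}" / "eeg"
    stem = f"sub-{subject}_ses-{session}_task-images"
    return {
        "raw": eeg_dir / f"{stem}_eeg.bdf",
        "events": eeg_dir / f"{stem}_events.tsv",
    }

paths = session_paths("./data/nm000133", "01", "02")
print(paths["raw"])
print(paths["events"])
```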
Shared sidecar files (root level, BIDS inheritance principle):
| File | Description |
|------|-------------|
| `task-images_eeg.json` | Recording parameters |
| `task-images_channels.tsv` | Channel descriptions (64 EEG channels) |
| `task-images_electrodes.tsv` | Electrode positions (standard 10-20, CapTrak) |
| `task-images_coordsystem.json` | Coordinate system specification |
Event values in the events.tsv files represent image indices (1-960+) corresponding to NSD image identifiers. The `trial_type` column uses the format `image/{index}`.
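The image/{index} convention makes it easy to recover NSD image indices from an events table; a sketch using only the standard library (the sample rows below are fabricated for illustration and follow the column layout described above):

```python
import csv
import io

# A fabricated two-row excerpt in the events.tsv shape described above.
sample_tsv = (
    "onset\tduration\ttrial_type\tvalue\n"
    "0.000\t0.3\timage/17\t17\n"
    "0.635\t0.3\timage/402\t402\n"
)

def image_indices(tsv_text: str) -> list:
    """Extract NSD image indices from the trial_type column (format image/{index})."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [int(row["trial_type"].split("/", 1)[1]) for row in reader]

print(image_indices(sample_tsv))  # [17, 402]
```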
Derivatives
The derivatives/epoched/ directory contains preprocessed and epoched data provided by the original authors, stored in MNE-Python FIF format (.fif).
Preprocessing pipeline applied by the original authors:
1. Band-pass filter: 0.5-125 Hz
2. Notch filter: 60 Hz (power line)
3. Independent Component Analysis (ICA): FastICA, retaining 95% of variance
4. Epoch extraction: -50 ms to 600 ms relative to stimulus onset
5. Artifact rejection: AutoReject algorithm (mean 130.75 epochs dropped per subject, SD 260.44)
6. Baseline correction
7. Average re-referencing
These epoched files are derivative products, not raw recordings, and are stored separately per BIDS conventions. Note: the epoched file for sub-03 ses-01 was not available in the source distribution.
Code
The code/ directory contains the original Alljoined1 analysis code, cloned from https://github.com/Alljoined/alljoined-dataset1.
BIDS Conversion
Converted to BIDS by Yahya Shirazi (Swartz Center for Computational Neuroscience, UC San Diego) using MNE-Python and custom scripts.
- Source data: OSF repository https://osf.io/kqgs8/
- Conversion validated with round-trip integrity checks (data, channels, sampling frequency, event count, event values, and event timing)
License and Terms of Use
This dataset is distributed under CC-BY-NC-ND-4.0 (Creative Commons Attribution-NonCommercial-NoDerivatives 4.0). The Alljoined team imposes additional terms on their datasets. By using this dataset you agree to all conditions below.
1. Researcher shall use the Dataset only for non-commercial research and educational purposes, in accordance with Alljoined’s Terms of Use.
2. No Warranties: Alljoined makes no representations or warranties regarding the Dataset, including but not limited to warranties of non-infringement or fitness for a particular purpose.
3. Full Responsibility: Researcher accepts full responsibility for his or her use of the Dataset and shall defend and indemnify Alljoined, including their employees, officers and agents, against any and all claims arising from Researcher’s use of the Dataset.
4. Privacy Compliance: Researcher shall comply with Alljoined’s Privacy Policy and ensure that any use of the Dataset respects the privacy rights of individuals whose data may be included.
5. Sharing Rights: Researcher may provide research associates and colleagues with access to the Dataset provided that they first agree to be bound by these terms and conditions.
6. Termination Rights: Alljoined reserves the right to terminate Researcher’s access to the Dataset at any time.
7. Commercial Entity Binding: If Researcher is employed by a for-profit, commercial entity, Researcher’s employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
8. Governing Law: The law of the State of California shall apply to all disputes under this agreement.
Note: The original Alljoined1 dataset on OSF (https://osf.io/kqgs8/) does not specify an explicit license. The terms above are from the Alljoined-1.6M HuggingFace distribution and the Alljoined website; they are included here as the best available guidance. Contact the Alljoined team (team@alljoined.com) for clarification on redistribution rights.
Full terms: https://www.alljoined.com/terms-of-use
Privacy policy: https://www.alljoined.com/privacy-policy
References
Xu, J., Lee, S. K., & Jiang, W. (2024). Alljoined – A dataset for EEG-to-Image decoding. https://doi.org/10.48550/arXiv.2404.05553
Dataset Information#
| Field | Value |
|-------|-------|
| Dataset ID | nm000133 |
| Title | Alljoined1 |
| Author (year) | Xu2024 |
| Canonical | Alljoined1, Alljoined |
| Importable as | NM000133, Xu2024, Alljoined1, Alljoined |
| Year | 2024 |
| Authors | Jonathan Xu, Si Kai Lee, Wangshu Jiang |
| License | CC-BY-NC-ND-4.0 |
| Citation / DOI | 10.82901/nemar.nm000133 |
| Source links | OpenNeuro, NeMAR, Source URL |
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 8
Recordings: 13
Tasks: 1
Channels: 64
Sampling rate (Hz): 512
Duration (hours): Not calculated
Pathology: Not specified
Modality: —
Type: —
Size on disk: 7.6 GB
File count: 13
Format: BIDS
License: CC-BY-NC-ND-4.0
DOI: 10.82901/nemar.nm000133
API Reference#
Use the NM000133 class to access this dataset programmatically.
- class eegdash.dataset.NM000133(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases: EEGDashDatasetAlljoined1

- Study: nm000133 (NeMAR)
- Author (year): Xu2024
- Canonical: Alljoined1, Alljoined
- Also importable as: NM000133, Xu2024, Alljoined1, Alljoined
- Modality: eeg. Subjects: 8; recordings: 13; tasks: 1.

Parameters:
- cache_dir (str | Path) – Directory where data are cached locally.
- query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
- s3_bucket (str | None) – Base S3 bucket used to locate the data.
- **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
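The query parameter accepts MongoDB-style operators that are ANDed with the class's own dataset filter. A pure-Python illustration of those semantics (the merge shown is a sketch of the documented behavior, not the library's actual implementation):

```python
# Dataset-level filter applied by the NM000133 class (illustrative).
base = {"dataset": "nm000133"}

# User-supplied MongoDB-style filter: subjects 01 or 02, task "images".
user = {"subject": {"$in": ["01", "02"]}, "task": "images"}

# The two are combined with AND; per the docs, the user filter
# must not redefine the "dataset" key.
assert "dataset" not in user
merged = {**base, **user}
print(merged)
```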
Attributes:

- data_dir (Path) – Local dataset cache directory (cache_dir / dataset_id).
- query (dict) – Merged query with the dataset filter applied.
- records (list[dict] | None) – Metadata records used to build the dataset, if pre-fetched.
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/nm000133
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000133
DOI: https://doi.org/10.82901/nemar.nm000133
Examples
>>> from eegdash.dataset import NM000133
>>> dataset = NM000133(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset