DS006593#
cBCI Matrix Multimodal Dataset
Access recordings and metadata through EEGDash.
Citation: Basak Celik, Tab Memmott, Matthew Lawhead, Srikar Ananthoju, Deniz Erdogmus (2025). cBCI Matrix Multimodal Dataset. 10.18112/openneuro.ds006593.v1.0.0
Modality: eeg | Subjects: 21 | Recordings: 214 | License: CC0 | Source: openneuro
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS006593
dataset = DS006593(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS006593(cache_dir="./data", subject="01")
Advanced query
dataset = DS006593(
cache_dir="./data",
query={"subject": {"$in": ["01", "02"]}},
)
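To build intuition for what such a query selects, here is a minimal, purely local sketch of `$in` matching. EEGDash evaluates these MongoDB-style filters against its metadata itself; the `matches` helper and the sample records below are invented for illustration only.

```python
# Illustrative only: a tiny local evaluator for the {"$in": [...]} filter
# shape used in the query above. EEGDash applies such filters to its
# metadata; this sketch just shows which records such a query would match.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r["subject"] for r in records if matches(r, query)]
print(selected)  # ['01', '02']
```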
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds006593,
title = {cBCI Matrix Multimodal Dataset},
author = {Basak Celik and Tab Memmott and Matthew Lawhead and Srikar Ananthoju and Deniz Erdogmus},
doi = {10.18112/openneuro.ds006593.v1.0.0},
url = {https://doi.org/10.18112/openneuro.ds006593.v1.0.0},
}
About This Dataset#
Multimodal Sensor Fusion for EEG-Based BCI Typing Systems
Dataset Overview
This dataset contains recordings of EEG and EyeTracking for a BCI spelling task. The data were collected in 2023 at Northeastern University.
N=21
The calibration task was proctored using BciPy [1].
The dataset is organized in accordance with the Brain Imaging Data Structure (BIDS) specification (version 1.7.0).
Methodology
Calibration data were collected from control participants (n=21, mean age 23.6 ± 3.1 years) in a quiet lab room at Northeastern University. EEG data were collected with the DSI-24 dry-electrode cap (Wearable Sensing, San Diego, CA) at a sampling rate of 300 Hz. The device employs a hardware filter permitting a collection bandwidth of 0.003–150 Hz. Data were recorded from Fp1/2, Fz, F3/4, F7/8, Cz, C3/4, T7/T8 (T3/T4 in the older 10–20 nomenclature), Pz, P3/P4, P7/P8 (T5/T6), and O1/2, with linked-ear reference (A1 and A2) and ground at A1. All data were collected on a Lenovo Legion 5 Pro laptop running Windows 11, with an Intel Core i7-11800H @ 2.30 GHz, 16 GB DDR4 RAM, and an NVIDIA GeForce RTX 3050. Trigger fidelity on the experiment laptop was verified using the Matrix Time Test Task in BciPy together with a photodiode. The results of this timing test were used to determine static offsets between hardware and to prevent experimentation with any timing violations greater than ±10 ms. Eye-tracking data were collected with a portable eye tracker (Tobii Pro Nano) at a sampling rate of 60 Hz.
The matrix paradigm and the data acquisition modules were developed in BciPy [1], a standalone application for experimental data collection. This work focuses on a specific BCI paradigm, single-character-presentation (SCP) visual presentation, in which symbols arranged in matrix form are individually highlighted in randomized order. The calibration task presented letter characters at a rate of 4 Hz, with 100 inquiries of 10 letters each (1 target, 9 non-target). In 10% of the inquiries, only non-target characters were shown. The stimuli included all 26 letters of the English alphabet, as well as the characters “_” for space and “<” for backspace. The order of target stimuli was randomly distributed among the inquiries. Between inquiries, there was a two-second blank screen.
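The ±10 ms trigger-fidelity criterion above can be sketched as a simple tolerance check. The offset values below are made up for illustration; in the actual protocol, static offsets come from the BciPy Matrix Time Test with a photodiode.

```python
# Hedged sketch of the +/-10 ms timing-violation criterion described
# above. The measured offsets here are hypothetical; in practice they
# would come from a photodiode-based timing test.
TOLERANCE_S = 0.010  # maximum tolerated |offset| between hardware and triggers

def timing_violations(offsets_s, tolerance_s=TOLERANCE_S):
    """Return the measured offsets whose magnitude exceeds the tolerance."""
    return [o for o in offsets_s if abs(o) > tolerance_s]

measured = [0.004, -0.006, 0.012, 0.003, -0.011]  # hypothetical offsets (s)
print(timing_violations(measured))  # [0.012, -0.011]
```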
Each inquiry consisted of a one-second prompt showing the target letter, followed by a 0.5s fixation, and then the presentation of 10 letters. The letters were displayed in the center of the screen, in white on a black background. Target prompts and stimuli were presented in white, while fixation crosses were rendered in red.
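Putting the paradigm numbers above together, a back-of-envelope sketch of per-session stimulus counts and nominal timing (all constants are taken from the text; this is simple arithmetic, not a property extracted from the recordings):

```python
# Back-of-envelope session arithmetic for the calibration task described
# above. All constants come from the protocol description; the derived
# counts and durations are nominal, ignoring jitter and breaks.
N_INQUIRIES = 100         # inquiries per calibration session
LETTERS_PER_INQUIRY = 10  # stimuli shown per inquiry
CATCH_FRACTION = 0.10     # fraction of inquiries with no target shown
STIM_RATE_HZ = 4.0        # letter presentation rate
PROMPT_S, FIXATION_S, BLANK_S = 1.0, 0.5, 2.0

catch = int(N_INQUIRIES * CATCH_FRACTION)            # 10 catch inquiries
targets = N_INQUIRIES - catch                        # one target per remaining inquiry
nontargets = N_INQUIRIES * LETTERS_PER_INQUIRY - targets

per_inquiry_s = PROMPT_S + FIXATION_S + LETTERS_PER_INQUIRY / STIM_RATE_HZ + BLANK_S
session_s = N_INQUIRIES * per_inquiry_s

print(targets, nontargets)       # 90 910
print(per_inquiry_s, session_s)  # 6.0 600.0
```

So a nominal calibration session contains 90 target and 910 non-target presentations and lasts about ten minutes.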
The experimental protocol was approved by the Northeastern University Institutional Review Board (IRB). All participants provided written informed consent prior to participation.
Directory Structure
The dataset follows the BIDS convention with the following structure: /sub-[subject]/ses-[session]/[eeg or et]. To load the BIDS-formatted data into the BciPy Simulator, see the directory /sourcedata/bcipy_metadata, which contains the raw BciPy parameter files as well as the output of the matrix display (matrix.png) for eye-tracking visualization.
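The layout quoted above can be sketched with pathlib. The root directory and the subject/session values below are hypothetical, and this helper is illustrative only, not part of EEGDash or BciPy.

```python
from pathlib import Path

# Illustrative helper for the /sub-[subject]/ses-[session]/[eeg or et]
# layout described above; the root path and labels are assumptions.
def recording_dir(root, subject: str, session: str, modality: str) -> Path:
    if modality not in {"eeg", "et"}:
        raise ValueError("modality must be 'eeg' or 'et'")
    return Path(root) / f"sub-{subject}" / f"ses-{session}" / modality

print(recording_dir("data/ds006593", "01", "01", "eeg").as_posix())
# data/ds006593/sub-01/ses-01/eeg
```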
Contact Information
For questions or issues regarding this dataset, please contact the corresponding author, Basak Celik, via email.
[1] Memmott T, Koçanaoğulları A, Lawhead M, Klee D, Dudy S, Fried-Oken M, Oken B. BciPy: brain-computer interface software in Python. Brain-Computer Interfaces, 8(4), 137-53, 2021.
Dataset Information#
| Field | Value |
| --- | --- |
| Dataset ID | ds006593 |
| Title | cBCI Matrix Multimodal Dataset |
| Year | 2025 |
| Authors | Basak Celik, Tab Memmott, Matthew Lawhead, Srikar Ananthoju, Deniz Erdogmus |
| License | CC0 |
| Citation / DOI | 10.18112/openneuro.ds006593.v1.0.0 |
| Source links | OpenNeuro, NeMAR, Source URL |
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 21
Recordings: 214
Tasks: 1
Channels: 19
Sampling rate (Hz): 300.0
Duration (hours): 0.0
Pathology: Healthy
Modality: Visual
Type: Attention
Size on disk: 441.9 MB
File count: 214
Format: BIDS
License: CC0
DOI: doi:10.18112/openneuro.ds006593.v1.0.0
API Reference#
Use the DS006593 class to access this dataset programmatically.
- class eegdash.dataset.DS006593(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)#
Bases: EEGDashDataset
OpenNeuro dataset ds006593. Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 21; recordings: 214; tasks: 1.
- Parameters:
- cache_dir (str | Path) – Directory where data are cached locally.
- query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
- s3_bucket (str | None) – Base S3 bucket used to locate the data.
- **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- Attributes:
- data_dir (Path) – Local dataset cache directory (cache_dir / dataset_id).
- query (dict) – Merged query with the dataset filter applied.
- records (list[dict] | None) – Metadata records used to build the dataset, if pre-fetched.
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/ds006593
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds006593
Examples
>>> from eegdash.dataset import DS006593
>>> dataset = DS006593(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset