NM000147: EEG dataset, 22 subjects#
RomaniBF2025ERP
Access recordings and metadata through EEGDash.
Citation: Michele Romani, Devis Zanoni, Elisabetta Farella, Luca Turchet (2025). RomaniBF2025ERP. 10.48550/arXiv.2510.10169
Modality: eeg | Subjects: 22 | Recordings: 120 | License: CC-BY-4.0 | Source: NeMAR
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import NM000147
dataset = NM000147(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = NM000147(cache_dir="./data", subject="01")
Advanced query
dataset = NM000147(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
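Recording-level metadata can also be browsed without loading any signals: the Notes in the API Reference below mention that they are exposed via dataset.description. A minimal sketch, assuming description behaves like a pandas DataFrame with one row per recording (column names may differ in your installation):

# Inspect recording-level metadata; assumed to be a pandas DataFrame
# with one row per recording (see the Notes in the API Reference below)
print(dataset.description.head())
# "subject" is an assumed column name, used here for illustration only
print(dataset.description["subject"].unique())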
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{nm000147,
  title  = {RomaniBF2025ERP},
  author = {Michele Romani and Devis Zanoni and Elisabetta Farella and Luca Turchet},
  year   = {2025},
  doi    = {10.48550/arXiv.2510.10169},
  url    = {https://doi.org/10.48550/arXiv.2510.10169},
}
About This Dataset#
RomaniBF2025ERP
MOABB class for BrainForm event-related potentials (ERP) dataset.
Dataset Overview
Code: RomaniBF2025ERP
Paradigm: p300
DOI: 10.48550/arXiv.2510.10169
Subjects: 22
Sessions per subject: 2
Events: Target=1, NonTarget=2
Trial interval: [-0.1, 1.0] s
File format: EDF
Contributing labs: University of Trento, Fondazione Bruno Kessler
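The event codes and trial interval listed above are enough to cut ERP epochs with MNE. A minimal sketch, assuming the Target/NonTarget markers are stored as annotations in the raw EDF (exact annotation names may differ in your local copy):

import mne
from eegdash.dataset import NM000147

dataset = NM000147(cache_dir="./data")
raw = dataset.datasets[0].raw

# Build an events array from the annotations; assumes Target/NonTarget markers
events, event_id = mne.events_from_annotations(raw)

# Epoch using the trial interval listed in the overview: [-0.1, 1.0] s
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.1, tmax=1.0, baseline=(None, 0), preload=True)
print(epochs)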
Acquisition
Sampling rate: 250.0 Hz
Number of channels: 8
Channel types: eeg=8
Channel names: Fz, C3, Cz, C4, Pz, PO7, Oz, PO8
Montage: standard_1020
Hardware: g.tec Unicorn Hybrid Black
Reference: right mastoid
Ground: left mastoid
Sensor type: EEG
Line frequency: 50.0 Hz
Cap manufacturer: g.tec
Cap model: Unicorn Hybrid Black
Electrode type: conductive gel
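Given the standard_1020 montage and 50 Hz line frequency listed above, a basic preparation step might look like the sketch below. Whether the montage is already attached to the raw object depends on the BIDS sidecar files, and the band-pass values are illustrative choices, not taken from the dataset:

# Reuse the dataset object from the Quickstart / previous sketch
raw = dataset.datasets[0].raw
raw.load_data()                                       # data must be in memory before filtering
raw.set_montage("standard_1020", on_missing="warn")   # Fz, C3, Cz, C4, Pz, PO7, Oz, PO8
raw.notch_filter(50.0)                                # suppress 50 Hz line noise
raw.filter(0.5, 30.0)                                 # illustrative ERP band-pass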
Participants
Number of subjects: 22
Health status: healthy
Age: mean=21.87, std=3.22
Gender distribution: female=10, male=12
BCI experience: naive
Experimental Protocol
Paradigm: p300
Number of classes: 2
Class labels: Target, NonTarget
Trial duration: 0.9 s
Tasks: Complex Task (5 colored laser beams), Speller Task (10 color targets)
Study design: Within-subject study with two main sessions separated by a visual texture swap (counterbalanced). Each session: calibration, tutorial, practice run with Complex Task (5 targets) and Speller Task (10 targets). Optional free-play third session for 16 participants.
Study domain: BCI training, serious gaming, skill acquisition
Feedback type: visual
Stimulus type: flickering
Stimulus modalities: visual
Primary modality: visual
Synchronicity: synchronous
Mode: online
Training/test split: True
Instructions: minimize movement during recording to reduce motion artifacts; focus on flickering targets for calibration and task completion
HED Event Annotations
Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Target
NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Non-target
Paradigm-Specific Parameters
Detected paradigm: p300
Number of targets: 10
Stimulus onset asynchrony: 100.0 ms
Data Structure
Trials: 600
Trials context: Per calibration session: 600 total stimulus events (60 target + 540 non-target from 10 unique targets); ~1 minute duration.
Preprocessing
Data state: raw
Preprocessing applied: False
Signal Processing
Classifiers: LDA
Cross-Validation
Method: cross-validation
Evaluation type: within-subject
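The original study reports LDA with within-subject cross-validation. The sketch below is a simplified, hedged reproduction with scikit-learn, reusing the epochs from the earlier MNE example; flattening each epoch into a feature vector is an illustrative choice, not the authors' exact pipeline:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = epochs.get_data()              # shape (n_trials, n_channels, n_times)
y = epochs.events[:, -1]           # event codes (Target=1, NonTarget=2 per the overview)
X = X.reshape(len(X), -1)          # naive flattening into feature vectors

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
# Balanced accuracy is used because targets are much rarer than non-targets
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print(scores.mean())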
Performance (Original Study)
Task Accuracy Complex Median T2A: 0.833
Task Accuracy Speller Median T3B: 0.833
ITR Complex Mean T2A: 10.76
ITR Speller Mean T3B: 21.95
Calibration Attempts Session 1 Mean: 2.64
Calibration Attempts Session 2 Mean: 2.68
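ITR values like the ones above are commonly computed with the Wolpaw formulation. The helper below is a generic sketch (not the authors' evaluation code) for recomputing information transfer rate from accuracy, number of classes, and time per selection; the selection time in the usage line is hypothetical:

import math

def itr_bits_per_min(accuracy, n_classes, seconds_per_selection):
    """Wolpaw information transfer rate in bits per minute."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * (60.0 / seconds_per_selection)

# Example: 10-target speller, 83.3% accuracy, hypothetical 6 s per selection
print(itr_bits_per_min(0.833, 10, 6.0))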
BCI Application
Applications: speller, gaming
Environment: laboratory
Online feedback: True
Tags
Pathology: Healthy
Modality: ERP
Type: P300
Documentation
Description: BrainForm: a Serious Game for BCI Training and Data Collection - gamified BCI training system designed for scalable data collection using consumer hardware
DOI: 10.48550/arXiv.2510.10169
License: CC-BY-4.0
Investigators: Michele Romani, Devis Zanoni, Elisabetta Farella, Luca Turchet
Senior author: Luca Turchet
Institution: University of Trento
Address: 38122, Trento, Italy
Country: IT
Repository: GitHub
Data URL: https://zenodo.org/records/17225966
Publication year: 2025
Keywords: Brain-Computer Interfaces, Event-Related Potentials, Machine Learning, Serious Games, Human factors
Abstract
BrainForm is a gamified Brain-Computer Interface (BCI) training system designed for scalable data collection using consumer hardware and a minimal setup. We investigated (1) how users develop BCI control skills across repeated sessions and (2) perceptual and performance effects of two visual stimulation textures. Game Experience Questionnaire (GEQ) scores for Flow, Positive Affect, Competence and Challenge were strongly positive, indicating sustained engagement. A within-subject study with multiple runs, two task complexities, and post-session questionnaires revealed no significant performance differences between textures but increased ocular irritation over time. Online metrics—Task Accuracy, Task Time, and Information Transfer Rate—improved across sessions, confirming learning effects for symbol spelling, even under pressure conditions. Our results highlight the potential of BrainForm as a scalable, user-friendly BCI research tool and offer guidance for sustained engagement and reduced training fatigue.
Methodology
Structured protocol consisting of: (1) introductory tutorial, (2) two practice runs involving calibration and control with up to ten flickering targets, (3) final timed challenge. Two main sessions separated by short break and visual texture swap (counterbalanced). Calibration: 60 trials focusing on single flashing target (~1 minute), repeated until 80%+ accuracy. Tasks: Complex Task (5 colored laser beams, game-oriented) and Speller Task (10 color targets, BCI-oriented symbol spelling). Optional free-play run for 16 participants. Data collection: raw EEG, performance metrics, in-game metadata, and questionnaires (demographic, session questionnaire, GEQ).
References
M. Romani, D. Zanoni, E. Farella, and L. Turchet, “BrainForm: a Serious Game for BCI Training and Data Collection,” Oct. 14, 2025, arXiv:2510.10169. doi: 10.48550/arXiv.2510.10169.
M. Romani, F. Paissan, A. Fossà, and E. Farella, “Explicit modelling of subject dependency in BCI decoding,” Sept. 27, 2025, arXiv:2509.23247. doi: 10.48550/arXiv.2509.23247.
Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8
Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks): https://github.com/NeuroTechX/moabb
Dataset Information#
Dataset ID | NM000147
Title | RomaniBF2025ERP
Author (year) | RomaniBF2025
Canonical | Romani2025
Importable as | NM000147, RomaniBF2025, Romani2025
Year | 2025
Authors | Michele Romani, Devis Zanoni, Elisabetta Farella, Luca Turchet
License | CC-BY-4.0
Citation / DOI | 10.48550/arXiv.2510.10169
Source links | OpenNeuro | NeMAR | Source URL
Copy-paste BibTeX
@dataset{nm000147,
  title  = {RomaniBF2025ERP},
  author = {Michele Romani and Devis Zanoni and Elisabetta Farella and Luca Turchet},
  year   = {2025},
  doi    = {10.48550/arXiv.2510.10169},
  url    = {https://doi.org/10.48550/arXiv.2510.10169},
}
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 22
Recordings: 120
Tasks: 1
Channels: 8
Sampling rate (Hz): 250.0
Duration (hours): 6.28
Pathology: Healthy
Modality: Visual
Type: Learning
Size on disk: 134.3 MB
File count: 120
Format: BIDS
License: CC-BY-4.0
DOI: 10.48550/arXiv.2510.10169
API Reference#
Use the NM000147 class to access this dataset programmatically.
- class eegdash.dataset.NM000147(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)#
Bases: EEGDashDataset
RomaniBF2025ERP
- Study: nm000147 (NeMAR)
- Author (year): RomaniBF2025
- Canonical: Romani2025
Also importable as: NM000147, RomaniBF2025, Romani2025.
Modality: eeg; Experiment type: Learning; Subject type: Healthy. Subjects: 22; recordings: 120; tasks: 1.
- Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir#
Local dataset cache directory (cache_dir / dataset_id).
- Type: Path
- query#
Merged query with the dataset filter applied.
- Type: dict
- records#
Metadata records used to build the dataset, if pre-fetched.
- Type: list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description.
query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter.
Dataset-specific caveats are not provided in the summary metadata.
References
OpenNeuro dataset: https://openneuro.org/datasets/nm000147
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000147
DOI: https://doi.org/10.48550/arXiv.2510.10169
Examples
>>> from eegdash.dataset import NM000147
>>> dataset = NM000147(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset