NM000120: EEG dataset, 11 subjects#

Oikonomou2016 – SSVEP MAMEM 2 dataset

Access recordings and metadata through EEGDash.

Citation: Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris (2016). Oikonomou2016 – SSVEP MAMEM 2 dataset.

Modality: eeg | Subjects: 11 | Recordings: 55 | License: ODC-By-1.0 | Source: nemar

Metadata: Complete (90%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import NM000120

dataset = NM000120(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = NM000120(cache_dir="./data", subject="01")

Advanced query

dataset = NM000120(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{nm000120,
  title = {Oikonomou2016 – SSVEP MAMEM 2 dataset},
  author = {Vangelis P. Oikonomou and Georgios Liaros and Kostantinos Georgiadis and Elisavet Chatzilari and Katerina Adam and Spiros Nikolopoulos and Ioannis Kompatsiaris},
  year = {2016},
}

About This Dataset#

SSVEP MAMEM 2 dataset

Dataset Overview

  • Code: MAMEM2

  • Paradigm: ssvep

  • DOI: 10.48550/arXiv.1602.00904

  • Subjects: 11

  • Sessions per subject: 1

  • Events (flicker frequency in Hz → event code): 6.66=1, 7.50=2, 8.57=3, 10.00=4, 12.00=5 (an epoching sketch follows this list)

  • Trial interval: [1, 4] s

  • Runs per session: 5

  • File format: MAT
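
As a rough illustration of how these event codes and the [1, 4] s trial interval could be used, here is a minimal epoching sketch with MNE. It assumes the loaded recordings carry annotations matching the code mapping above; the event_id returned by mne.events_from_annotations should be checked against that mapping.

import mne
from eegdash.dataset import NM000120

dataset = NM000120(cache_dir="./data")
raw = dataset.datasets[0].raw  # MNE Raw object, as in the Quickstart above

# Map annotation descriptions to integer event codes; verify the result against
# the "6.66=1 ... 12.00=5" mapping listed above (assumes annotations are present).
events, event_id = mne.events_from_annotations(raw)
print(event_id)

epochs = mne.Epochs(
    raw,
    events,
    event_id=event_id,
    tmin=1.0,        # start of the [1, 4] s trial interval, relative to stimulus onset
    tmax=4.0,        # end of the trial interval
    baseline=None,   # no baseline correction for SSVEP analysis
    preload=True,
)
print(epochs)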

Acquisition

  • Sampling rate: 250.0 Hz

  • Number of channels: 256

  • Channel types: eeg=256

  • Channel names: E1, E10, E100, E101, E102, E103, E104, E105, E106, E107, E108, E109, E11, E110, E111, E112, E113, E114, E115, E116, E117, E118, E119, E12, E120, E121, E122, E123, E124, E125, E126, E127, E128, E129, E13, E130, E131, E132, E133, E134, E135, E136, E137, E138, E139, E14, E140, E141, E142, E143, E144, E145, E146, E147, E148, E149, E15, E150, E151, E152, E153, E154, E155, E156, E157, E158, E159, E16, E160, E161, E162, E163, E164, E165, E166, E167, E168, E169, E17, E170, E171, E172, E173, E174, E175, E176, E177, E178, E179, E18, E180, E181, E182, E183, E184, E185, E186, E187, E188, E189, E19, E190, E191, E192, E193, E194, E195, E196, E197, E198, E199, E2, E20, E200, E201, E202, E203, E204, E205, E206, E207, E208, E209, E21, E210, E211, E212, E213, E214, E215, E216, E217, E218, E219, E22, E220, E221, E222, E223, E224, E225, E226, E227, E228, E229, E23, E230, E231, E232, E233, E234, E235, E236, E237, E238, E239, E24, E240, E241, E242, E243, E244, E245, E246, E247, E248, E249, E25, E250, E251, E252, E253, E254, E255, E256, E26, E27, E28, E29, E3, E30, E31, E32, E33, E34, E35, E36, E37, E38, E39, E4, E40, E41, E42, E43, E44, E45, E46, E47, E48, E49, E5, E50, E51, E52, E53, E54, E55, E56, E57, E58, E59, E6, E60, E61, E62, E63, E64, E65, E66, E67, E68, E69, E7, E70, E71, E72, E73, E74, E75, E76, E77, E78, E79, E8, E80, E81, E82, E83, E84, E85, E86, E87, E88, E89, E9, E90, E91, E92, E93, E94, E95, E96, E97, E98, E99

  • Montage: GSN-HydroCel-256 (a montage and filtering sketch follows this list)

  • Hardware: EGI 300 Geodesic EEG System (GES 300)

  • Reference: Cz

  • Line frequency: 50.0 Hz

  • Impedance threshold: 80.0 kOhm

  • Cap manufacturer: EGI

  • Cap model: HydroCel Geodesic Sensor Net (HCGSN)
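
Building on the acquisition details above, the following is a minimal preparation sketch with MNE. The assumption that the E1-E256 channel names align with MNE's built-in GSN-HydroCel-256 template, and the choice to band-pass to the 5-48 Hz range analyzed in the original study, are illustrative rather than prescribed.

import mne
from eegdash.dataset import NM000120

dataset = NM000120(cache_dir="./data")
raw = dataset.datasets[0].raw.copy()
raw.load_data()

# Assumption: the E1-E256 channel names match MNE's built-in 256-channel EGI template.
montage = mne.channels.make_standard_montage("GSN-HydroCel-256")
raw.set_montage(montage, on_missing="warn")

# Remove the 50 Hz mains component listed above and keep the 5-48 Hz band
# analyzed in the original study (illustrative choice, not a prescribed step).
raw.notch_filter(freqs=50.0)
raw.filter(l_freq=5.0, h_freq=48.0)
print(raw.info)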

Participants

  • Number of subjects: 11

  • Health status: healthy

  • Age: min=24, max=39

  • Gender distribution: male=8, female=3

  • Handedness: right=10, left=1

Experimental Protocol

  • Paradigm: ssvep

  • Number of classes: 5

  • Class labels: 6.66, 7.50, 8.57, 10.00, 12.00

  • Trial duration: 5.0 s

  • Study design: Subjects focus attention on visual stimuli flickering at different frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz) to select commands. Each stimulus presented for 5 seconds followed by 5 seconds rest.

  • Feedback type: none

  • Stimulus type: flickering box

  • Stimulus modalities: visual

  • Primary modality: visual

  • Synchronicity: synchronous

  • Mode: offline

  • Stimulus presentation: SoftwareName=Microsoft Visual Studio 2010 with OpenGL, device=22 inch LCD monitor, refresh_rate=60 Hz, resolution=1680x1080

HED Event Annotations

Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser

6.66
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/6_66

7.50
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/7_50

8.57
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/8_57

10.00
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/10_00

12.00
     ├─ Sensory-event
     ├─ Experimental-stimulus
     ├─ Visual-presentation
     └─ Label/12_00
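
For convenience, the HED trees above can be restated as flat tag strings keyed by event label. The dictionary below simply mirrors the annotations listed here and is not an official export of the dataset's HED sidecars.

# Flat restatement of the HED annotations above, keyed by flicker-frequency label.
HED_TAGS = {
    "6.66": "Sensory-event, Experimental-stimulus, Visual-presentation, Label/6_66",
    "7.50": "Sensory-event, Experimental-stimulus, Visual-presentation, Label/7_50",
    "8.57": "Sensory-event, Experimental-stimulus, Visual-presentation, Label/8_57",
    "10.00": "Sensory-event, Experimental-stimulus, Visual-presentation, Label/10_00",
    "12.00": "Sensory-event, Experimental-stimulus, Visual-presentation, Label/12_00",
}
for label, tags in HED_TAGS.items():
    print(f"{label} Hz -> {tags}")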

Paradigm-Specific Parameters

  • Detected paradigm: ssvep

  • Stimulus frequencies: [6.66, 7.5, 8.57, 10.0, 12.0] Hz (a CCA reference-signal sketch follows this list)

  • Number of targets: 5

  • Number of repetitions: 3
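
CCA, listed among the classifiers under Signal Processing below, is typically scored for SSVEP against sinusoidal reference signals at these stimulus frequencies. A minimal sketch follows, assuming trials shaped (n_channels, n_times) at the 250 Hz sampling rate; the harmonic count and correlation-based scoring are illustrative choices, not the original study's exact configuration.

import numpy as np
from sklearn.cross_decomposition import CCA

SFREQ = 250.0                               # sampling rate listed above
STIM_FREQS = [6.66, 7.5, 8.57, 10.0, 12.0]  # Hz, stimulus frequencies listed above

def make_reference(freq, n_times, sfreq=SFREQ, n_harmonics=2):
    """Sin/cos references at `freq` and its harmonics, shape (n_times, 2 * n_harmonics)."""
    t = np.arange(n_times) / sfreq
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs, axis=1)

def cca_predict(trial):
    """Pick the stimulus frequency whose references correlate best with `trial`
    (shape: n_channels x n_times). Illustrative scoring, not the study's pipeline."""
    X = trial.T  # (n_times, n_channels)
    scores = []
    for freq in STIM_FREQS:
        Y = make_reference(freq, X.shape[0])
        cca = CCA(n_components=1)
        Xc, Yc = cca.fit_transform(X, Y)
        scores.append(np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])
    return STIM_FREQS[int(np.argmax(scores))]

# Example on random data (replace with a real epoch, e.g. epochs.get_data()[i]):
print(cca_predict(np.random.randn(256, 750)))  # 3 s of 256-channel data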

Data Structure

  • Trials: 1104

  • Trials context: Each session includes 23 trials (8 adaptation trials excluded from analysis). 5 sessions per subject (with exceptions: S001=3 sessions, S003=4 sessions, S004=4 sessions). Total: 1104 trials of 5 seconds each.

Preprocessing

  • Data state: raw

  • Preprocessing applied: False

Signal Processing

  • Classifiers: LDA, SVM, Random Forest, kNN, Naive Bayes, AdaBoost, Decision Trees, CCA

  • Feature extraction: PWelch, Periodogram, FFT, Goertzel, PYULEAR (Yule-AR), STFT, DWT, PSD, Wavelet, Spectrogram (a Welch-PSD sketch follows this list)

  • Frequency bands: analyzed=[5.0, 48.0] Hz

  • Spatial filters: CAR, CSP, Minimum Energy
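
As a sketch of Welch-based feature extraction (one of the methods listed above), one might compute band-limited PSD features per trial with SciPy. The nperseg, overlap, and nfft values follow those reported in the Methodology section below, but the pipeline here is only illustrative.

import numpy as np
from scipy.signal import welch

SFREQ = 250.0  # Hz, sampling rate listed above

def welch_features(trial, fmin=5.0, fmax=48.0):
    """PSD features for one trial (n_channels x n_times), restricted to the
    5-48 Hz band analyzed in the original study. Segment parameters follow the
    Methodology section below; treat them as approximate."""
    freqs, psd = welch(trial, fs=SFREQ, nperseg=350, noverlap=int(350 * 0.75), nfft=512, axis=-1)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band], psd[:, band]

# Example on random data (replace with a real epoch):
freqs, psd = welch_features(np.random.randn(256, 1250))  # one 5 s trial
print(freqs.shape, psd.shape)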

Cross-Validation

  • Method: leave-one-subject-out

  • Evaluation type: cross_subject

Performance (Original Study)

  • Accuracy: 74.42%

  • Mean accuracy (default configuration): 72.47%

  • Mean accuracy (optimal configuration): 74.42%

  • Processing time: 68 ms

BCI Application

  • Applications: command_selection

  • Environment: laboratory

  • Online feedback: False

Tags

  • Pathology: Healthy

  • Modality: Visual

  • Type: Research

Documentation

  • DOI: 10.48550/arXiv.1602.00904

  • Associated paper: arXiv:1602.00904v2

  • License: ODC-By-1.0

  • Investigators: Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris

  • Institution: Centre for Research and Technology Hellas (CERTH)

  • Country: GR

  • Repository: GitHub

  • Data URL: https://figshare.com/articles/dataset/3153409

  • Publication year: 2016

  • Funding: H2020-ICT-2014-644780

  • Ethics approval: Approved by ethics committee of Centre for Research and Technology Hellas, date 3/7/2015, grant H2020-ICT-2014-644780

  • Keywords: SSVEP, BCI, brain-computer interface, EEG, visual evoked potentials, signal processing, feature extraction, classification

Abstract

Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. This study focuses on SSVEP-based BCIs and performs a comparative evaluation of state-of-the-art algorithms for filtering, artifact removal, feature extraction, feature selection, and classification. The dataset consists of 256-channel EEG signals from 11 subjects recorded with 5 flickering frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz).

Methodology

Leave-one-subject-out cross-validation was used to evaluate a general-purpose BCI system without subject-specific training. Algorithms were compared systematically across all signal-processing stages: (1) signal filtering: FIR vs. IIR filters; (2) artifact removal: AMUSE vs. FastICA; (3) feature extraction: PWelch, Periodogram, PYULEAR, DWT, STFT, Goertzel; (4) feature selection: entropy-based methods and PCA/SVD; (5) classification: SVM, LDA, KNN, Naive Bayes, Random Forest, AdaBoost. The optimal configuration achieved 74.42% mean accuracy using an IIR elliptic filter, AMUSE artifact removal, and PWelch feature extraction with nfft=512, segment length=350, overlap=0.75, and channel 138.
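
A minimal sketch of the leave-one-subject-out evaluation described above, using scikit-learn's LeaveOneGroupOut with an LDA pipeline (LDA is one of the classifiers listed). The feature matrix, labels, and subject groups below are random placeholders standing in for features built from the epoched data.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholders: in practice X would hold per-trial features (e.g. Welch PSD),
# y the flicker-frequency class (1-5), and groups the subject of each trial.
rng = np.random.default_rng(0)
X = rng.standard_normal((1104, 40))        # n_trials x n_features
y = rng.integers(1, 6, size=1104)          # 5 classes
groups = rng.integers(0, 11, size=1104)    # 11 subjects

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Leave-one-subject-out accuracy: {scores.mean():.2%}")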

References

Oikonomou, V. P., Liaros, G., Georgiadis, K., Chatzilari, E., Adam, K., Nikolopoulos, S., & Kompatsiaris, I. (2016). Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs. arXiv preprint arXiv:1602.00904.

MAMEM Steady State Visually Evoked Potential EEG Database. https://archive.physionet.org/physiobank/database/mssvepdb/

Nikolopoulos, S. (2016). DataAcquisitionDetails.pdf. https://figshare.com/articles/dataset/MAMEM_EEG_SSVEP_Dataset_II_256_channels_11_subjects_5_frequencies_presented_simultaneously_/3153409?file=4911931

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8

Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks). https://github.com/NeuroTechX/moabb

Dataset Information#

Dataset ID

NM000120

Title

Oikonomou2016 – SSVEP MAMEM 2 dataset

Author (year)

Oikonomou2016_MAMEM2

Canonical

MAMEM2, SSVEPMAMEM2, MAMEM2_SSVEP

Importable as

NM000120, Oikonomou2016_MAMEM2, MAMEM2, SSVEPMAMEM2, MAMEM2_SSVEP

Year

2016

Authors

Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris

License

ODC-By-1.0

Citation / DOI

Unknown

Source links

OpenNeuro | NeMAR | Source URL

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 11

  • Recordings: 55

  • Tasks: 1

Channels & sampling rate
  • Channels: 256

  • Sampling rate (Hz): 250.0

  • Duration (hours): 5.11

Tags
  • Pathology: Healthy

  • Modality: Visual

  • Type: Attention

Files & format
  • Size on disk: 4.4 GB

  • File count: 55

  • Format: BIDS

License & citation
  • License: ODC-By-1.0

  • DOI: —

Provenance

API Reference#

Use the NM000120 class to access this dataset programmatically.

class eegdash.dataset.NM000120(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

Oikonomou2016 – SSVEP MAMEM 2 dataset

Study:

nm000120 (NeMAR)

Author (year):

Oikonomou2016_MAMEM2

Canonical:

MAMEM2, SSVEPMAMEM2, MAMEM2_SSVEP

Also importable as: NM000120, Oikonomou2016_MAMEM2, MAMEM2, SSVEPMAMEM2, MAMEM2_SSVEP.

Modality: eeg; Experiment type: Attention; Subject type: Healthy. Subjects: 11; recordings: 55; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
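
For example, assuming dataset.description behaves like a tabular metadata container (a pandas-style DataFrame, as in braindecode-derived datasets), recording-level metadata can be inspected as follows; the exact columns available are an assumption.

>>> from eegdash.dataset import NM000120
>>> dataset = NM000120(cache_dir="./data")
>>> desc = dataset.description  # assumed pandas-style table, one row per recording
>>> print(desc.head())
>>> print(desc.columns.tolist())  # inspect which fields are available for filtering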

References

OpenNeuro dataset: https://openneuro.org/datasets/nm000120

NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=nm000120

Examples

>>> from eegdash.dataset import NM000120
>>> dataset = NM000120(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None
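
A minimal usage sketch (the destination path below is a placeholder):

>>> dataset.save("./nm000120_copy", overwrite=True)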

See Also#