DS002724#

A dataset recorded during development of an affective brain-computer music interface: training sessions

Access recordings and metadata through EEGDash.

Citation: Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto (2020). A dataset recorded during development of an affective brain-computer music interface: training sessions. DOI: 10.18112/openneuro.ds002724.v1.0.1

Modality: eeg · Subjects: 10 · Recordings: 700 · License: CC0 · Source: openneuro

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS002724

dataset = DS002724(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
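
If raw is an MNE Raw object, as the raw.info access above suggests, standard MNE operations apply. A minimal sketch (the 1-40 Hz band below is an arbitrary example, not a recommendation from the dataset authors):

raw.load_data()                      # MNE needs the data in memory before filtering
raw.pick("eeg")                      # keep EEG channels only (drops GSR/ECG if present)
raw.filter(l_freq=1.0, h_freq=40.0)  # example band-pass filter
print(raw.ch_names[:5], raw.n_times / raw.info["sfreq"])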

Filter by subject

dataset = DS002724(cache_dir="./data", subject="01")

Advanced query

dataset = DS002724(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
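
The same loop can drive a quick sanity check, for example counting recordings per subject. A minimal sketch using only the attributes shown above:

from collections import Counter

per_subject = Counter(rec.subject for rec in dataset)
print(per_subject)  # how many recordings each subject contributes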

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds002724,
  title = {A dataset recorded during development of an affective brain-computer music interface: training sessions},
  author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. Nasuto},
  doi = {10.18112/openneuro.ds002724.v1.0.1},
  url = {https://doi.org/10.18112/openneuro.ds002724.v1.0.1},
}

About This Dataset#

0. Sections

  1. Project

  2. Dataset

  3. Terms of Use

  4. Contents

  5. Method and Processing


1. PROJECT

Title: Brain-Computer Music Interface for Monitoring and Inducing Affective States (BCMI-MIdAS)

Dates: 2012-2017

Funding organisation: Engineering and Physical Sciences Research Council (EPSRC)

Grant no.: EP/J003077/1 and EP/J002135/1.

2. DATASET

EEG data from an affective Music Brain-Computer Interface: offline training to induce target emotional states.

Description: This dataset accompanies the publication by Daly et al. (2018) and has been analysed in Daly et al. (2015) (see Section 5 for full references). The purpose of the research activity in which the data were collected was to train a music brain-computer interface system to induce specific affective states for individual users. To this end, participants listened to 40 s music clips targeting two affective states, as defined by valence and arousal. Data were recorded over 3 sessions (on separate days), each containing 4 runs (same day) of 18 trials each. The music clips were generated using a synthetic music generator.

The dataset contains the electroencephalogram (EEG), galvanic skin response (GSR) and electrocardiogram (ECG) data from 16 healthy participants while listening to the music clips, together with the reported affective state (valence and arousal values) and auxiliary variables.
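
Given this 3-session structure, loading can plausibly be restricted to a single session through the query parameter. A sketch, assuming a BIDS-style "session" field is among the allowed query fields (verify against ALLOWED_QUERY_FIELDS):

from eegdash.dataset import DS002724

dataset = DS002724(
    cache_dir="./data",
    query={"subject": "01", "session": "01"},  # "session" key is an assumption
)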

This dataset is connected to 2 additional datasets:

  1. EEG data from an affective Music Brain-Computer Interface: system calibration. doi:

  2. EEG data from an affective Music Brain-Computer Interface: online real-time control. doi:

Please note that the number of participants varies between datasets; however, participant codes are the same across all three datasets.

Publication Year: 2018

Creators: Nicoletta Nicolaou, Ian Daly.

Contributors: Isil Poyraz Bilgin, James Weaver, Asad Malik, Alexis Kirke, Duncan Williams.

Principal Investigator: Slawomir Nasuto (EP/J003077/1).

Co-Investigator: Eduardo Miranda (EP/J002135/1).

Organisation: University of Reading

Rights-holders: University of Reading

Source: The synthetic generator used to generate the music clips was presented in Williams et al., “Affective Calibration of Musical Feature Sets in an Emotionally Intelligent Music Composition System”, ACM Trans. Appl. Percept. 14, 3, Article 17 (May 2017), 13 pages. DOI: https://doi.org/10.1145/3059005

3. TERMS OF USE

Copyright University of Reading, 2018. This dataset is licensed by the rights-holder(s) under a Creative Commons Attribution 4.0 International Licence: https://creativecommons.org/licenses/by/4.0/.

4. CONTENTS

The dataset comprises data from 17 subjects, stored in BIDS format. The sampling rate is 1 kHz and each music-listening trial is 40 s long (the clip duration). During the first 20 s the music clip targets emotional state A; for the remaining 20 s it targets emotional state B.
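
Because each 40 s trial splits into two 20 s halves targeting different states, a natural first step is to crop the halves separately. A minimal MNE sketch (the onset below is hypothetical; take real trial onsets from the BIDS events files):

from eegdash.dataset import DS002724

dataset = DS002724(cache_dir="./data", subject="01")
raw = dataset.datasets[0].raw

onset = 0.0  # hypothetical trial onset in seconds; read actual onsets from events.tsv
state_a = raw.copy().crop(tmin=onset, tmax=onset + 20.0)         # first 20 s: targets state A
state_b = raw.copy().crop(tmin=onset + 20.0, tmax=onset + 40.0)  # last 20 s: targets state B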

5. METHOD and PROCESSING

This information is available in the following publications:

[1] Daly, I., Nicolaou, N., Williams, D., Hwang, F., Kirke, A., Miranda, E., Nasuto, S.J., "Neural and physiological data from participants listening to affective music", Scientific Data, 2018.

[2] Daly, I., Williams, D., Hwang, F., Kirke, A., Malik, A., Roesch, E., Weaver, J., Miranda, E. R., Nasuto, S. J., "Identifying music-induced emotions from EEG for use in brain-computer music interfacing", in Proc. 4th Workshop on Affective Brain-Computer Interfaces at the 6th International Conference on Affective Computing and Intelligent Interaction (ACII2015), Xi'an, China, 21-25 September 2015.

If you use this dataset in your study, please cite these references, as well as the following reference:

[3] Williams, D., Kirke, A., Miranda, E.R., Daly, I., Hwang, F., Weaver, J., Nasuto, S.J., "Affective Calibration of Musical Feature Sets in an Emotionally Intelligent Music Composition System", ACM Trans. Appl. Percept. 14, 3, Article 17 (May 2017), 13 pages. DOI: https://doi.org/10.1145/3059005

Thank you for your interest in our work.

Dataset Information#

Dataset ID

DS002724

Title

A dataset recorded during development of an affective brain-computer music interface: training sessions

Year

2020

Authors

Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto

License

CC0

Citation / DOI

10.18112/openneuro.ds002724.v1.0.1

Source links

OpenNeuro | NeMAR | Source URL


Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 10

  • Recordings: 700

  • Tasks: 1

Channels & sampling rate
  • Channels: 32 (96), 37 (96)

  • Sampling rate (Hz): 1000.0

  • Duration (hours): 0.0

Tags
  • Pathology: Not specified

  • Modality: —

  • Type: —

Files & format
  • Size on disk: 8.5 GB

  • File count: 700

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: 10.18112/openneuro.ds002724.v1.0.1


API Reference#

Use the DS002724 class to access this dataset programmatically.

class eegdash.dataset.DS002724(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds002724. Modality: eeg; Experiment type: Affect; Subject type: Healthy. Subjects: 10; recordings: 96; tasks: 4.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
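
For example, a sketch that negates the quickstart filter using the standard MongoDB $nin operator (the subject field appears in the examples above; other field names must be checked against ALLOWED_QUERY_FIELDS):

dataset = DS002724(
    cache_dir="./data",
    query={"subject": {"$nin": ["01"]}},  # every subject except 01
)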

References

OpenNeuro dataset: https://openneuro.org/datasets/ds002724 NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds002724

Examples

>>> from eegdash.dataset import DS002724
>>> dataset = DS002724(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None
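
A usage sketch for save (the destination name is hypothetical; the on-disk format is not specified in this summary):

dataset = DS002724(cache_dir="./data")
dataset.save("./ds002724_saved", overwrite=True)  # hypothetical destination path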

See Also#