DS006554#

Social Observation EEG raw data

Access recordings and metadata through EEGDash.

Citation: Yaner Su (2025). Social Observation EEG raw data. doi:10.18112/openneuro.ds006554.v1.0.0

Modality: eeg · Subjects: 47 · Recordings: 241 · License: CC0 · Source: openneuro

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS006554

dataset = DS006554(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS006554(cache_dir="./data", subject="01")

Advanced query

dataset = DS006554(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
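Because the query argument is a plain MongoDB-style dict, filters can be composed programmatically before constructing the dataset. A minimal sketch (the `task` field and its value are hypothetical; only fields eegdash actually allows for querying will work):

```python
# Compose a MongoDB-style query dict before passing it to
# DS006554(cache_dir="./data", query=query).
subjects = ["01", "02", "03"]
query = {"subject": {"$in": subjects}}

# Hypothetical extra filter; the field name and value are placeholders,
# not taken from this dataset's metadata.
query["task"] = "observation"

print(query)
```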

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
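Each recording's duration follows from its sample count and sampling rate (n_times / sfreq), which is useful for sanity-checking downloads against the reported totals. A self-contained sketch with made-up sample counts (no data access required; in practice n_times would come from rec.raw.n_times):

```python
# Total duration in hours = sum of (n_times / sfreq) over recordings, / 3600.
sfreq = 500.0  # Hz, the sampling rate reported for this dataset
n_times_per_recording = [150_000, 180_000, 165_000]  # illustrative values only

total_hours = sum(n / sfreq for n in n_times_per_recording) / 3600
print(round(total_hours, 3))  # 0.275
```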

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds006554,
  title = {Social Observation EEG raw data},
  author = {Yaner Su},
  doi = {10.18112/openneuro.ds006554.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds006554.v1.0.0},
}

About This Dataset#


README

WARNING

Below is a template to write a README file for this BIDS dataset. If this message is still present, it means that the person exporting the file has decided not to update the template. If you are the researcher editing this README file, please remove this warning section. The README is usually the starting point for researchers using your data and serves as a guidepost for users of your data. A clear and informative README makes your data much more usable. In general, you can include information in the README that is not captured by some other files in the BIDS dataset (dataset_description.json, events.tsv, …). It can also be useful to include information that might already be present in another file of the dataset but might be important for users to be aware of before preprocessing or analysing the data. If the README gets too long, you can create a /doc folder and add it to the .bidsignore file to make sure it is ignored by the BIDS validator. More info here: https://neurostars.org/t/where-in-a-bids-dataset-should-i-put-notes-about-individual-mri-acqusitions/17315/3

Details related to access to the data

  • Data user agreement

If the dataset requires a data user agreement, link to the relevant information.

  • Contact person

Indicate the name and contact details (email and ORCID) of the person responsible for additional information.

  • Practical information to access the data

If there is any special information related to access rights or how to download the data, make sure to include it. For example, if the dataset was curated using datalad, make sure to include the relevant section from the datalad handbook: http://handbook.datalad.org/en/latest/basics/101-180-FAQ.html#how-can-i-help-others-get-started-with-a-shared-dataset

Overview

  • Project name (if relevant)

  • Year(s) that the project ran

If no scans.tsv is included, this could at least cover when the data acquisition started and ended. Local time of day is particularly relevant to subject state.

  • Brief overview of the tasks in the experiment

A paragraph giving an overview of the experiment. This should include the goals or purpose and a discussion about how the experiment tries to achieve these goals.

  • Description of the contents of the dataset

An easy thing to add is the output of the bids-validator that describes what type of data and the number of subjects one can expect to find in the dataset.

  • Independent variables

A brief discussion of condition variables (sometimes called contrasts or independent variables) that were varied across the experiment.

  • Dependent variables

A brief discussion of the response variables (sometimes called the dependent variables) that were measured and/or calculated to assess the effects of varying the condition variables. This might also include questionnaires administered to assess behavioral aspects of the experiment.

  • Control variables

A brief discussion of the control variables, that is, what aspects were explicitly controlled in this experiment. The control variables might include subject pool, environmental conditions, setup, or other things that were explicitly controlled.

  • Quality assessment of the data

Provide a short summary of the quality of the data, ideally with descriptive statistics if relevant, and with a link to a more comprehensive description (like with MRIQC) if possible.

Methods

Subjects

A brief sentence about the subject pool in this experiment. Remember that Control or Patient status should be defined in the ``participants.tsv`` using a group column.

  • Information about the recruitment procedure

  • Subject inclusion criteria (if relevant)

  • Subject exclusion criteria (if relevant)

Apparatus

A summary of the equipment and environment setup for the experiment. For example, was the experiment performed in a shielded room with the subject seated in a fixed position?

Initial setup

A summary of what setup was performed when a subject arrived.

Task organization

How the tasks were organized for a session. This is particularly important because BIDS datasets usually have task data separated into different files.

  • Was task order counter-balanced?

  • What other activities were interspersed between tasks?

  • In what order were the tasks and other activities performed?

Task details

As much detail as possible about the task and the events that were recorded.

Additional data acquired

A brief indication of data other than the imaging data that was acquired as part of this experiment. In addition to data from other modalities and behavioral data, this might include questionnaires and surveys, swabs, and clinical information. Indicate the availability of this data. This is especially relevant if the data are not included in a phenotype folder: https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html#phenotypic-and-assessment-data

Experimental location

This should include any additional information regarding the geographical location and facility that cannot be included in the relevant json files.

Missing data

Mention something if some participants are missing some aspects of the data. This can take the form of a processing log and/or abnormalities about the dataset. Some examples:

  • A brain lesion or defect only present in one participant

  • Some experimental conditions missing on a given run for a participant because of some technical issue

  • Any noticeable feature of the data for certain participants

  • Differences (even slight) in protocol for certain participants

Notes

Any additional information or pointers to information that might be helpful to users of the dataset. Include qualitative information related to how the data acquisition went.

Dataset Information#

Dataset ID

DS006554

Title

Social Observation EEG raw data

Year

2025

Authors

Yaner Su

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds006554.v1.0.0

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{ds006554,
  title = {Social Observation EEG raw data},
  author = {Yaner Su},
  doi = {10.18112/openneuro.ds006554.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds006554.v1.0.0},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 47

  • Recordings: 241

  • Tasks: 1

Channels & sampling rate
  • Channels: 62 (47), 64 (47)

  • Sampling rate (Hz): 500.0

  • Duration (hours): 0.0

Tags
  • Pathology: Not specified

  • Modality: —

  • Type: —

Files & format
  • Size on disk: 12.1 GB

  • File count: 241

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds006554.v1.0.0

Provenance

API Reference#

Use the DS006554 class to access this dataset programmatically.

class eegdash.dataset.DS006554(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds006554. Modality: eeg; Experiment type: Unknown; Subject type: Unknown. Subjects: 47; recordings: 241; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
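The merge described above can be pictured as AND-combining the user-supplied query with a fixed dataset filter; a rough illustration in plain dicts (the actual merge happens inside EEGDashDataset and may differ in detail):

```python
# Illustration only: the dataset filter is fixed, and the user query is
# merged with it. Because a user query must not contain the key "dataset"
# (see Parameters above), the fixed filter cannot be overwritten.
dataset_filter = {"dataset": "ds006554"}
user_query = {"subject": {"$in": ["01", "02"]}}
assert "dataset" not in user_query

merged = {**dataset_filter, **user_query}
print(merged)
```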

References

OpenNeuro dataset: https://openneuro.org/datasets/ds006554

NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds006554

Examples

>>> from eegdash.dataset import DS006554
>>> dataset = DS006554(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()

__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#