DS005522#

Spatial Navigation Memory of Object Locations

Access recordings and metadata through EEGDash.

Citation: Haydn G. Herrema, Michael J. Kahana (2024). Spatial Navigation Memory of Object Locations. doi:10.18112/openneuro.ds005522.v1.0.0

Modality: ieeg Subjects: 58 Recordings: 1297 License: CC0 Source: openneuro Citations: 0

Metadata: Complete (100%)

Quickstart#

Install

pip install eegdash

Access the data

from eegdash.dataset import DS005522

dataset = DS005522(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)

Filter by subject

dataset = DS005522(cache_dir="./data", subject="01")

Advanced query

dataset = DS005522(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)

Iterate recordings

for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])

If you use this dataset in your research, please cite the original authors.

BibTeX

@dataset{ds005522,
  title = {Spatial Navigation Memory of Object Locations},
  author = {Haydn G. Herrema and Michael J. Kahana},
  doi = {10.18112/openneuro.ds005522.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds005522.v1.0.0},
}

About This Dataset#

Spatial Navigation Memory of Object Locations

Description

This dataset contains behavioral events and intracranial electrophysiological recordings from a spatial navigation memory task. Participants encode object locations during a guided navigation learning phase and then recall those locations during a self-navigation test phase. The data were collected at clinical sites across the United States as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania.

Each session contains 50 trials (2 practice and 48 experimental), and each overall “trial” consists of 2 learning trials followed by 1 test trial with the same object at the same location. For learning trial 1, participants are placed at a random location at a given radius from the object. They are smoothly turned to face the object (1 s), automatically driven to the object location (3 s), and then paused at the object (1 s). Five seconds later, participants are placed at a new random location and the process repeats for learning trial 2. On test trials, participants are placed at a random location and orientation, with the object invisible. They navigate to where they believe the object was located and press a button to record their response. The environment for all sessions and trials is 64.8 × 36 units, with coordinates x = (-32.4, 32.4) and y = (-18.0, 18.0).
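As a concrete illustration of the coordinate system, the sketch below scores a single test trial as the Euclidean distance between the true object location and the participant's response. All values are made up for illustration, and the snippet is not part of the dataset or EEGDash.

import math

# Environment bounds from the README: x in (-32.4, 32.4), y in (-18.0, 18.0).
object_loc = (10.0, -5.0)      # true object location (X, Y); illustrative only
response_loc = (12.3, -3.1)    # where the participant pressed the button

# Placement error as the Euclidean distance in environment units.
error = math.dist(object_loc, response_loc)
print(f"placement error: {error:.2f} environment units")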

The trials are blocked by a counterbalanced scheme, so for every trial there is another trial with reflected object position, starting position, and orientation (a small sketch of this reflection follows the lists below). Each block contains 2 trials (i.e., 2 x (2 learning, 1 test)), with object (X, Y) and starting locations (x, y):

  • (X1, Y1)

    • (x1’, y1’)

    • (x1’’, y1’’)

    • (x1’’’, y1’’’)

  • (X2, Y2)

    • (x2’, y2’)

    • (x2’’, y2’’)

    • (x2’’’, y2’’’)

The paired block contains 2 trials in the opposite order with object and starting locations:

  • (-X2, -Y2)

    • (-x2’, -y2’)

    • (-x2’’, -y2’’)

    • (-x2’’’, -y2’’’)

  • (-X1, -Y1)

    • (-x1’, -y1’)

    • (-x1’’, -y1’’)

    • (-x1’’’, -y1’’’)
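To make the reflection concrete, here is a minimal sketch (illustrative values only, not part of EEGDash) of how a paired block mirrors every object and starting location through the origin:

# Each location (x, y) in a block maps to (-x, -y) in its paired block.
def reflect(point):
    x, y = point
    return (-x, -y)

object_loc = (12.5, -7.0)                            # (X1, Y1), illustrative
starts = [(-20.0, 3.0), (5.0, 15.0), (30.0, -10.0)]  # (x1’, y1’), ...

paired_object = reflect(object_loc)                  # (-12.5, 7.0)
paired_starts = [reflect(p) for p in starts]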

To Note

  • The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically to a mastoid) but should always be re-referenced before analysis; see the sketch after this list. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables.

  • Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available.

  • Recordings made with the Blackrock system are in units of 250 nV, while recordings made with the Medtronic system are estimated through testing to have units of 0.1 µV. This scaling has already been applied, so the released values are in volts (V).
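For the re-referencing step, a minimal sketch using MNE-Python on a Raw object loaded as in the Quickstart. An average reference is shown only as one common choice; the appropriate scheme depends on the analysis.

# raw: an mne.io.Raw object, e.g. dataset.datasets[0].raw from the Quickstart
raw.load_data()  # re-referencing requires the data in memory
# Replace the original (e.g., mastoid) reference with a common average reference.
raw.set_eeg_reference(ref_channels="average")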

Contact

For questions or inquiries, please contact sas-kahana-sysadmin@sas.upenn.edu.

Dataset Information#

Dataset ID

DS005522

Title

Spatial Navigation Memory of Object Locations

Year

2024

Authors

Haydn G. Herrema, Michael J. Kahana

License

CC0

Citation / DOI

doi:10.18112/openneuro.ds005522.v1.0.0

Source links

OpenNeuro | NeMAR | Source URL

Copy-paste BibTeX
@dataset{ds005522,
  title = {Spatial Navigation Memory of Object Locations},
  author = {Haydn G. Herrema and Michael J. Kahana},
  doi = {10.18112/openneuro.ds005522.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds005522.v1.0.0},
}

Found an issue with this dataset?

If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!

Report an Issue on GitHub

Technical Details#

Subjects & recordings
  • Subjects: 58

  • Recordings: 1297

  • Tasks: 1

Channels & sampling rate
  • Channels: 133 (16), 110 (16), 120 (14), 88 (14), 72 (12), 126 (12), 173 (12), 188 (12), 56 (10), 108 (10), 112 (8), 68 (8), 127 (8), 128 (8), 46 (8), 64 (8), 50 (6), 124 (6), 186 (6), 123 (6), 92 (6), 146 (6), 144 (6), 86 (6), 182 (6), 104 (6), 130 (4), 70 (4), 111 (4), 140 (4), 75 (4), 85 (4), 138 (4), 163 (4), 59 (4), 180 (4), 160 (4), 100 (4), 118 (4), 158 (4), 166 (4), 96 (4), 63 (4), 170 (4), 76 (2), 174 (2), 122 (2), 172 (2), 149 (2), 94 (2), 109 (2), 105 (2), 151 (2), 54 (2), 90 (2), 116 (2), 60 (2), 80 (2), 136 (2), 169 (2), 177 (2), 125 (2), 84 (2), 165 (2), 178 (2), 78 (2)

  • Sampling rate (Hz): 1000.0 (140), 500.0 (122), 1600.0 (52), 999.0 (26), 2000.0 (8), 1999.0 (4) — rates vary across recordings; see the resampling sketch after this list

  • Duration (hours): 0.0
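Because the sampling rate varies across recordings, analyses that pool recordings often resample to a common rate first. A minimal sketch with MNE-Python follows; the 500 Hz target is an arbitrary choice, not a recommendation from the dataset authors.

# raw: an mne.io.Raw object obtained as in the Quickstart
target_sfreq = 500.0
if raw.info["sfreq"] != target_sfreq:
    raw.load_data()
    raw.resample(target_sfreq)  # MNE applies an anti-aliasing filter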

Tags
  • Pathology: Not specified

  • Modality: Visual

  • Type: Memory

Files & format
  • Size on disk: 107.5 GB

  • File count: 1297

  • Format: BIDS

License & citation
  • License: CC0

  • DOI: doi:10.18112/openneuro.ds005522.v1.0.0

API Reference#

Use the DS005522 class to access this dataset programmatically.

class eegdash.dataset.DS005522(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

Bases: EEGDashDataset

OpenNeuro dataset ds005522. Modality: ieeg; Experiment type: Memory; Subject type: Unknown. Subjects: 58; recordings: 1297; tasks: 1.

Parameters:
  • cache_dir (str | Path) – Directory where data are cached locally.

  • query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.

  • s3_bucket (str | None) – Base S3 bucket used to locate the data.

  • **kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.

data_dir#

Local dataset cache directory (cache_dir / dataset_id).

Type:

Path

query#

Merged query with the dataset filter applied.

Type:

dict

records#

Metadata records used to build the dataset, if pre-fetched.

Type:

list[dict] | None

Notes

Each item is a recording; recording-level metadata are available via dataset.description. The query parameter supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined (AND) with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
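For example, a short sketch inspecting that recording-level metadata, assuming description behaves like a pandas DataFrame as in braindecode-style concat datasets:

from eegdash.dataset import DS005522

dataset = DS005522(cache_dir="./data")
# One row per recording; available columns depend on the indexed metadata.
print(dataset.description.head())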

References

OpenNeuro dataset: https://openneuro.org/datasets/ds005522

NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds005522

Examples

>>> from eegdash.dataset import DS005522
>>> dataset = DS005522(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()

__init__(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#

save(path, overwrite=False)[source]#

Save the dataset to disk.

Parameters:
  • path (str or Path) – Destination file path.

  • overwrite (bool, default False) – If True, overwrite existing file.

Return type:

None

See Also#