DS005059#
Paired Associates Learning: Memory for Word Pairs in Cued Recall
Access recordings and metadata through EEGDash.
Citation: Haydn G. Herrema, Michael J. Kahana (2024). Paired Associates Learning: Memory for Word Pairs in Cued Recall. 10.18112/openneuro.ds005059.v1.0.6
Modality: ieeg Subjects: 72 Recordings: 2003 License: CC0 Source: openneuro Citations: 0
Metadata: Complete (100%)
Quickstart#
Install
pip install eegdash
Access the data
from eegdash.dataset import DS005059
dataset = DS005059(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
Filter by subject
dataset = DS005059(cache_dir="./data", subject="01")
Advanced query
dataset = DS005059(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
Iterate recordings
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
If you use this dataset in your research, please cite the original authors.
BibTeX
@dataset{ds005059,
title = {Paired Associates Learning: Memory for Word Pairs in Cued Recall},
author = {Haydn G. Herrema and Michael J. Kahana},
doi = {10.18112/openneuro.ds005059.v1.0.6},
url = {https://doi.org/10.18112/openneuro.ds005059.v1.0.6},
}
About This Dataset#
Paired Associates Learning of Word Pairs
Description
This dataset contains behavioral events and intracranial electrophysiological recordings from a paired associates memory task. Participants studied pairs of visually presented words, solved simple arithmetic problems that served as a distractor, and then completed a cued recall test. The data were collected at clinical sites across the United States as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania.
Each session contains 25 lists, each following the structure encoding, distractor, cued recall. During encoding, 6 word pairs are presented one at a time; each pair remains on screen for 4000 ms and is followed by a 1000 ms interstimulus interval. During cued recall, one randomly chosen word from each pair is shown, and the participant is asked to vocally recall the other word of the pair. Participants have 5000 ms for each recall before the next cue (a word from another pair) is shown. All 6 pairs are tested on each list.
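As a quick sanity check, the per-list timing implied by these parameters can be computed directly (a sketch; the constants below are taken from the task description, not read from the event files):

```python
# Per-list timing implied by the task description (values in ms).
PAIRS_PER_LIST = 6
PAIR_ON_SCREEN_MS = 4000   # each word pair is shown for 4000 ms
ISI_MS = 1000              # interstimulus interval after each pair
RECALL_WINDOW_MS = 5000    # response window per recall cue
LISTS_PER_SESSION = 25

encoding_ms = PAIRS_PER_LIST * (PAIR_ON_SCREEN_MS + ISI_MS)
recall_ms = PAIRS_PER_LIST * RECALL_WINDOW_MS

print(encoding_ms / 1000, "s of encoding per list")  # 30.0 s
print(recall_ms / 1000, "s of recall per list")      # 30.0 s
```

Actual list durations will be slightly longer, since the distractor phase and inter-list gaps are not included here.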
To Note:
The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables.
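For illustration, a bipolar montage can be derived from a monopolar recording by differencing paired channels. A minimal NumPy sketch, assuming a `(n_channels, n_samples)` array and a hypothetical list of (anode, cathode) index pairs; with the real data, the pairing comes from the bipolar channels tables that accompany each recording:

```python
import numpy as np

rng = np.random.default_rng(0)
monopolar = rng.standard_normal((4, 1000))  # stand-in for raw.get_data()

# Hypothetical (anode, cathode) index pairs; in practice, read these
# from the recording's bipolar channels table.
pairs = [(0, 1), (1, 2), (2, 3)]

# Each bipolar channel is the difference of its two monopolar channels.
bipolar = np.stack([monopolar[a] - monopolar[c] for a, c in pairs])
print(bipolar.shape)  # (3, 1000)
```

Differencing neighboring contacts in this way removes the shared (e.g., mastoid) reference, which is why the monopolar recordings should always be re-referenced before analysis.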
Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available, along with brain region annotations.
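In BIDS, per-subject electrode coordinates are stored in `*_electrodes.tsv` files (tab-separated, with at least `name`, `x`, `y`, `z` columns). A pandas sketch, shown here on an inline stand-in table; with the real dataset you would point `read_csv` at a `*_electrodes.tsv` file under the cache directory instead:

```python
import io
import pandas as pd

# Stand-in for a BIDS *_electrodes.tsv file; names and coordinates
# here are invented for illustration.
tsv = io.StringIO(
    "name\tx\ty\tz\n"
    "LA1\t-30.1\t-8.2\t-12.5\n"
    "LA2\t-33.4\t-7.9\t-12.1\n"
)
electrodes = pd.read_csv(tsv, sep="\t")
print(electrodes[["name", "x", "y", "z"]])
```

The coordinate system (MNI, Talairach, etc.) for each file is declared in the accompanying `*_coordsystem.json` sidecar.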
Recordings were made on multiple different acquisition systems, so the data have been rescaled such that all voltage values are in volts (V).
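Since values are stored in volts, multiply by 1e6 when you want microvolts, the unit most iEEG plots use. A one-line NumPy sketch on a stand-in array:

```python
import numpy as np

data_v = np.array([[5e-5, -2e-5]])  # stand-in for raw.get_data(), in volts
data_uv = data_v * 1e6              # convert V -> microvolts
print(data_uv)  # [[ 50. -20.]]
```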
Contact
For questions or inquiries, please contact sas-kahana-sysadmin@sas.upenn.edu.
Dataset Information#
Dataset ID: ds005059
Title: Paired Associates Learning: Memory for Word Pairs in Cued Recall
Year: 2024
Authors: Haydn G. Herrema, Michael J. Kahana
License: CC0
Citation / DOI: 10.18112/openneuro.ds005059.v1.0.6
Source links: OpenNeuro | NeMAR | Source URL
Found an issue with this dataset?
If you encounter any problems with this dataset (missing files, incorrect metadata, loading errors, etc.), please let us know!
Technical Details#
Subjects: 72
Recordings: 2003
Tasks: 1
Channels: 112 (44), 126 (30), 85 (22), 110 (20), 128 (20), 88 (18), 104 (18), 100 (18), 72 (16), 186 (16), 64 (16), 102 (14), 121 (14), 116 (14), 92 (12), 142 (12), 119 (10), 94 (10), 95 (10), 97 (10), 123 (8), 96 (8), 124 (8), 140 (8), 106 (8), 68 (8), 130 (8), 86 (8), 139 (8), 120 (6), 84 (6), 188 (6), 107 (6), 87 (6), 173 (6), 117 (6), 80 (6), 55 (6), 83 (6), 108 (6), 114 (6), 74 (6), 58 (6), 115 (4), 138 (4), 141 (4), 118 (4), 73 (4), 149 (4), 111 (4), 122 (4), 90 (2), 177 (2), 99 (2), 14 (2), 53 (2), 46 (2), 76 (2), 93 (2), 67 (2), 77 (2), 60 (2), 146 (2), 16 (2), 133 (2), 52 (2), 98 (2)
Sampling rate (Hz): 1000.0 (386), 500.0 (142), 1024.0 (16), 499.7071 (12), 1600.0 (8)
Duration (hours): 0.0
Pathology: Not specified
Modality: Visual
Type: Memory
Size on disk: 167.3 GB
File count: 2003
Format: BIDS
License: CC0
DOI: doi:10.18112/openneuro.ds005059.v1.0.6
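Because the recordings mix several sampling rates (1000, 500, 1024, ~499.7, and 1600 Hz), analyses that pool recordings usually resample everything to a common rate first. A sketch using `scipy.signal.resample_poly` on a synthetic signal (assumption: SciPy is available; with real recordings you would instead call MNE's `raw.resample(500)` on each `raw` object):

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 1000, 500
x = np.sin(2 * np.pi * 10 * np.arange(fs_in) / fs_in)  # 1 s of a 10 Hz sine

# Polyphase resampling: up / down = fs_out / fs_in = 1 / 2.
y = resample_poly(x, up=1, down=2)
print(len(x), "->", len(y))  # 1000 -> 500
```

Non-integer rates such as 499.7071 Hz need a rational approximation of the resampling ratio; MNE handles this internally when you pass the target rate.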
API Reference#
Use the DS005059 class to access this dataset programmatically.
- class eegdash.dataset.DS005059(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, **kwargs)[source]#
Bases:
EEGDashDataset

OpenNeuro dataset ds005059. Modality: ieeg; Experiment type: Memory; Subject type: Unknown. Subjects: 69; recordings: 282; tasks: 1.

- Parameters:
cache_dir (str | Path) – Directory where data are cached locally.
query (dict | None) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key dataset.
s3_bucket (str | None) – Base S3 bucket used to locate the data.
**kwargs (dict) – Additional keyword arguments forwarded to EEGDashDataset.
- data_dir#
Local dataset cache directory (cache_dir / dataset_id).
- Type:
Path
- query#
Merged query with the dataset filter applied.
- Type:
dict
- records#
Metadata records used to build the dataset, if pre-fetched.
- Type:
list[dict] | None
Notes
Each item is a recording; recording-level metadata are available via dataset.description. query supports MongoDB-style filters on fields in ALLOWED_QUERY_FIELDS and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

References
OpenNeuro dataset: https://openneuro.org/datasets/ds005059
NeMAR dataset: https://nemar.org/dataexplorer/detail?dataset_id=ds005059
Examples
>>> from eegdash.dataset import DS005059
>>> dataset = DS005059(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
See Also#
eegdash.dataset.EEGDashDataset
eegdash.dataset