eegdash.features.extractors#
Core Feature Extraction Orchestration.
This module defines the fundamental building blocks for creating feature extraction pipelines.
The module provides the base class:
FeatureExtractor – The central pipeline for execution trees.
Classes
FeatureExtractor – Pipeline for multi-stage feature extraction.
- class eegdash.features.extractors.FeatureExtractor(feature_extractors: Dict[str, Callable], preprocessor: Callable | None = None)[source]
Bases: Trainable
Pipeline for multi-stage feature extraction.
This class manages a collection of feature extraction functions or nested extractors. It handles the application of shared preprocessing, validates the dependency graph between components, and aggregates results into a named dictionary compatible with
FeaturesDataset.
- Parameters:
feature_extractors (dict[str, callable]) – A dictionary where keys are the base names for the features and values are the extraction functions or other
FeatureExtractor instances.
preprocessor (callable, optional) – A shared preprocessing function applied to the input data before it is passed to child extractors.
- preprocessor
The shared preprocessing stage for this extractor.
- Type:
callable or None
- feature_extractors_dict
The validated dictionary of child extractors.
- Type:
dict
- features_kwargs
A collection of all keyword arguments used by the preprocessor and child functions, preserved for metadata tracking.
- Type:
dict
Notes
The extractor automatically detects if any child components are trainable and will require a
fit() phase before extraction can occur.
Examples
>>> # Create a simple extractor
>>> fe = FeatureExtractor(
...     feature_extractors={'mean': signal_mean, 'std': signal_std}
... )
>>> # Extract from a batch (2 windows, 3 channels, 100 samples)
>>> X = np.random.randn(2, 3, 100)
>>> results = fe(X, _batch_size=2, _ch_names=['O1', 'Oz', 'O2'])
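The aggregation contract described above (shared preprocessing applied once, then each named callable's output collected under its base name) can be illustrated with a minimal stand-in that does not depend on eegdash. The `extract` helper and `bandpass_stub` below are hypothetical and only mimic the documented behavior:

```python
import numpy as np

def signal_mean(x):
    return x.mean(axis=-1)

def signal_std(x):
    return x.std(axis=-1)

def bandpass_stub(x):
    # Placeholder for a real shared preprocessing stage (e.g. filtering);
    # here it only removes the per-window mean.
    return x - x.mean(axis=-1, keepdims=True)

def extract(feature_extractors, x, preprocessor=None):
    # Mirror of the aggregation contract: apply the shared preprocessor
    # once, then collect each child's output under its base name.
    if preprocessor is not None:
        x = preprocessor(x)
    return {name: fn(x) for name, fn in feature_extractors.items()}

X = np.random.randn(2, 3, 100)
results = extract({"mean": signal_mean, "std": signal_std}, X,
                  preprocessor=bandpass_stub)
print(sorted(results))          # ['mean', 'std']
print(results["mean"].shape)    # (2, 3)
```

Because the preprocessor runs once before all children, expensive shared work (filtering, re-referencing) is not repeated per feature.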
- preprocess(*x, _metadata: dict)[source]
Apply the shared preprocessor to the input data.
- Parameters:
*x (tuple of ndarray) – The input data batch.
_metadata (dict) – A dictionary of record and batch metadata.
- Returns:
tuple – The preprocessed data passed as a tuple to support multi-output preprocessors.
_metadata (dict) – The preprocessed metadata. Only relevant for metadata preprocessors.
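The tuple return value exists so that a preprocessor can emit several arrays that downstream extractors receive as separate positional inputs. A hypothetical multi-output preprocessor (the name `split_bands` and its behavior are illustrative, not part of the library):

```python
import numpy as np

def split_bands(x):
    # Hypothetical multi-output preprocessor: returns two arrays
    # (here simply the first and second halves of the samples) as a
    # tuple, so child extractors get them as separate inputs.
    half = x.shape[-1] // 2
    return x[..., :half], x[..., half:]

X = np.random.randn(2, 3, 100)
low, high = split_bands(X)
print(low.shape, high.shape)  # (2, 3, 50) (2, 3, 50)
```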
- clear()[source]
Clear the state of all trainable sub-features.
- partial_fit(*x, y=None, _metadata: dict)[source]
Propagate partial fitting to all trainable children.
- Parameters:
*x (tuple of ndarray) – The input data batch.
y (ndarray, optional) – Target labels for supervised training.
_metadata (dict) – A dictionary of record and batch metadata.
- fit()[source]
Fit all trainable sub-features.
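The clear / partial_fit / fit lifecycle can be sketched with a toy trainable feature. `RunningStandardizer` below is an illustrative stand-in, not eegdash's Trainable interface: it accumulates statistics batch by batch, finalizes them, and only then can be applied:

```python
import numpy as np

class RunningStandardizer:
    # Illustrative stand-in for a trainable sub-feature: accumulate
    # statistics over batches (partial_fit), finalize them (fit),
    # then use them at extraction time.
    def clear(self):
        self.n, self.total, self.total_sq = 0, 0.0, 0.0
        self.mean_, self.std_ = None, None

    def partial_fit(self, x):
        self.n += x.size
        self.total += x.sum()
        self.total_sq += (x ** 2).sum()

    def fit(self):
        self.mean_ = self.total / self.n
        self.std_ = np.sqrt(self.total_sq / self.n - self.mean_ ** 2)

    def __call__(self, x):
        return (x - self.mean_) / self.std_

feat = RunningStandardizer()
feat.clear()
for batch in np.split(np.random.randn(4, 3, 100), 2):  # two batches
    feat.partial_fit(batch)
feat.fit()  # extraction is only valid after this step
z = feat(np.random.randn(2, 3, 100))
print(z.shape)  # (2, 3, 100)
```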
- to_dict() → dict[source]
Dump the feature extractor to a dictionary.
- Returns:
A dictionary representing the feature extractor, with
"feature_extractors" and "preprocessor" fields (if applicable).
- Return type:
dict
See also
feature_extractor_from_dict
Notes
Feature extractors that include non-function callables are not supported.
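The restriction to plain functions makes sense if serialization stores callables by reference. The sketch below is a hypothetical illustration of that idea (not eegdash's actual `to_dict` implementation): named functions can be written down as module-qualified strings, while arbitrary callable objects have no such stable reference:

```python
import json

def signal_mean(x):
    return x.mean(axis=-1)

def fe_to_dict(feature_extractors, preprocessor=None):
    # Hypothetical sketch: serialize each function by its
    # module-qualified name. Non-function callables (e.g. class
    # instances with __call__) cannot be referenced this way.
    d = {"feature_extractors": {
        name: f"{fn.__module__}.{fn.__qualname__}"
        for name, fn in feature_extractors.items()}}
    if preprocessor is not None:
        d["preprocessor"] = f"{preprocessor.__module__}.{preprocessor.__qualname__}"
    return d

d = fe_to_dict({"mean": signal_mean})
print(json.dumps(d))
```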
- to_json(path: str | Path)[source]
Dump the feature extractor to a JSON file.
- Parameters:
path (str | pathlib.Path) – The path to the JSON file.
See also
load_feature_extractor_from_json, FeatureExtractor.to_dict
Notes
Feature extractors that include non-function callables are not supported.
- to_yaml(path: str | Path)[source]
Dump the feature extractor to a YAML file.
- Parameters:
path (str | pathlib.Path) – The path to the YAML file.
See also
load_feature_extractor_from_yaml, FeatureExtractor.to_dict
Notes
Feature extractors that include non-function callables are not supported.
Requires the pyyaml package.
- to_hocon(path: str | Path)[source]
Dump the feature extractor to a HOCON configuration (.conf) file.
- Parameters:
path (str | pathlib.Path) – The path to the conf file.
See also
load_feature_extractor_from_hocon, FeatureExtractor.to_dict
Notes
Feature extractors that include non-function callables are not supported.
Requires the pyhocon package.