Changes from all commits
99 commits
3dc5729
Test IBL extractors tests failing for PI update
alejoe91 Dec 29, 2025
d1a0532
Merge branch 'main' of github.com:SpikeInterface/spikeinterface
alejoe91 Jan 6, 2026
79ca022
original commit - good times
m-beau Jan 6, 2026
22501da
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jan 6, 2026
7ca3d35
good times - progress
m-beau Jan 7, 2026
e3f31bf
merge
m-beau Jan 7, 2026
ab0e8dc
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jan 7, 2026
fe0aaf1
Merge remote-tracking branch 'alessio/select_sorting_periods' into go…
m-beau Jan 7, 2026
7279b67
wip
alejoe91 Jan 7, 2026
13ebb8f
Merge remote-tracking branch 'alessio/select_sorting_periods' into go…
m-beau Jan 7, 2026
1962f21
Fix test for base sorting and propagate to basevector extension
alejoe91 Jan 7, 2026
7fbe160
wip
m-beau Jan 7, 2026
5645ee6
Merge branch 'select_sorting_periods' of https://github.com/alejoe91/…
m-beau Jan 7, 2026
528c82b
Fix tests in quailty metrics
alejoe91 Jan 8, 2026
fccdbe3
finished implementing good periods
m-beau Jan 8, 2026
7adab75
Merge branch 'select_sorting_periods' into goodtimes
alejoe91 Jan 8, 2026
f36c7fc
Some fixes
alejoe91 Jan 8, 2026
775dda7
Fix retrieval of spikevector features
alejoe91 Jan 8, 2026
6f02b7f
Merge branch 'select_sorting_periods' into goodtimes
alejoe91 Jan 8, 2026
15df754
Fix tests, saving and loading
alejoe91 Jan 8, 2026
40e3417
started working on get_data method for good periods
m-beau Jan 8, 2026
cdf7846
Solve conflicts, still wip
alejoe91 Jan 8, 2026
81d745e
done refactoring self.data serializable format and get_data method
m-beau Jan 8, 2026
93a53ca
credits
m-beau Jan 8, 2026
493d215
Make good_periods blazing fast!
alejoe91 Jan 9, 2026
a1fb167
Add credits
alejoe91 Jan 9, 2026
e8518b0
Solve conflicts
alejoe91 Jan 9, 2026
f6752ac
Fix tests
alejoe91 Jan 9, 2026
a251826
oups
alejoe91 Jan 9, 2026
983d255
Sam's review + implement select/merge/split data
alejoe91 Jan 9, 2026
c5dbb93
Rename to valid_unit_periods and wip widgets
alejoe91 Jan 12, 2026
f382f89
Fix imports
alejoe91 Jan 12, 2026
ad50845
Add widget and extend params
alejoe91 Jan 12, 2026
bb46f27
Update src/spikeinterface/core/sorting_tools.py
alejoe91 Jan 13, 2026
121a0b1
Apply suggestion from @chrishalcrow
alejoe91 Jan 13, 2026
cbf3213
refactor presence ratio and drift metrics to use periods properly
alejoe91 Jan 13, 2026
4409aa5
Fix rp_violations
alejoe91 Jan 13, 2026
71f8668
implement firing range and fix drift
alejoe91 Jan 13, 2026
1ea0d68
fix naming issue
alejoe91 Jan 13, 2026
a86c2d3
remove solved todos
alejoe91 Jan 13, 2026
d98ff66
sync with select_sorting_period PR
alejoe91 Jan 13, 2026
84da1a2
wip: test user defined
alejoe91 Jan 13, 2026
c539f58
wip: tests
alejoe91 Jan 13, 2026
d8e1f90
Merge branch 'main' of github.com:SpikeInterface/spikeinterface into …
alejoe91 Jan 13, 2026
3f93f97
Implement select_segment_periods in core
alejoe91 Jan 13, 2026
cd85456
remove utils
alejoe91 Jan 13, 2026
7a42fe3
rebase on #4316
alejoe91 Jan 13, 2026
4f754cb
Merge with main
alejoe91 Jan 14, 2026
cbc0986
Fix import
alejoe91 Jan 14, 2026
56b672e
Merge branch 'select_sorting_periods_core' into select_sorting_periods
alejoe91 Jan 14, 2026
cd2ba0b
wip
alejoe91 Jan 14, 2026
046430e
fix import
alejoe91 Jan 14, 2026
bb86253
Add misc_metric changes
alejoe91 Jan 14, 2026
accbc31
Merge branch 'select_sorting_periods' into goodtimes
alejoe91 Jan 14, 2026
807f5c6
Add tests for user defined and combined
alejoe91 Jan 14, 2026
89d563b
Add to built_in extensions
alejoe91 Jan 14, 2026
50f33f0
fix tests
alejoe91 Jan 14, 2026
f2d48ba
Remove debug print
alejoe91 Jan 14, 2026
6b48730
Merge branch 'select_sorting_periods' into goodtimes
alejoe91 Jan 15, 2026
6b1284b
Merge branch 'goodtimes' of github.com:m-beau/spikeinterface into goo…
alejoe91 Jan 15, 2026
e173a63
wip: fix intervals
alejoe91 Jan 15, 2026
80bc50f
Change base_period_dtype order and fix select_sorting_periods array i…
alejoe91 Jan 15, 2026
4c8fa23
fix conflicts
alejoe91 Jan 15, 2026
e1f5bab
Merge metrics implementations
alejoe91 Jan 15, 2026
96e6a53
fix tests
alejoe91 Jan 15, 2026
3198911
Fix generation of bins
alejoe91 Jan 15, 2026
bbc28c5
Refactor generation of subperiods
alejoe91 Jan 15, 2026
9d2ad09
fix conflicts
alejoe91 Jan 15, 2026
8312db2
fix conflicts2
alejoe91 Jan 15, 2026
87fbe9a
Merge branch 'main' of github.com:SpikeInterface/spikeinterface into …
alejoe91 Jan 16, 2026
7446a43
Use cached get_spike_vector_to_indices
alejoe91 Jan 16, 2026
873a687
Solve conflicts
alejoe91 Jan 16, 2026
bc91b81
fix conflicts3
alejoe91 Jan 16, 2026
51e906a
Fix error in merging
alejoe91 Jan 16, 2026
88da6fc
Merge branch 'select_sorting_periods' into goodtimes
alejoe91 Jan 16, 2026
ab5a771
fix conflicts
alejoe91 Jan 20, 2026
2209514
Add supports_periods in BaseMetric/Extension
alejoe91 Jan 20, 2026
b23c431
wip: test metrics with periods
alejoe91 Jan 20, 2026
6fb26a4
almost there?
alejoe91 Jan 20, 2026
0fe7f3e
Fix periods arg in MetricExtensions
alejoe91 Jan 20, 2026
f087e08
Make bin edges unique
alejoe91 Jan 20, 2026
e785b64
fix conflicts with selecto_sorting_periods
alejoe91 Jan 20, 2026
173e747
Add support_periods to spike train metrics and tests
alejoe91 Jan 21, 2026
066c378
Force NaN/-1 values for float/int metrics if num_spikes is 0
alejoe91 Jan 21, 2026
65e1848
Fix test_empty_units: -1 is a valid value for ints
alejoe91 Jan 21, 2026
f1c4682
Fix firing range if unit samples < bin samples
alejoe91 Jan 21, 2026
3291638
fix noise_cutoff if empty units
alejoe91 Jan 21, 2026
b5bf3c3
Move warnings at the end of the loop for firing range and drift
alejoe91 Jan 21, 2026
8aeedcc
clean up tests and add get_available_metric_names
alejoe91 Jan 22, 2026
d4db43c
simplify total samples
alejoe91 Jan 22, 2026
d0a1e66
Go back to Pierre's implementation for drifts
alejoe91 Jan 22, 2026
6926532
Merge branch 'select_sorting_periods' into goodtimes
alejoe91 Jan 22, 2026
a1e750d
solve conflicts and fix tests
alejoe91 Jan 22, 2026
4909bfb
Add in docs
alejoe91 Jan 22, 2026
2739df8
Add use_valid_periods param to quality metrics
alejoe91 Jan 22, 2026
2b27c40
Force int64 in tests
alejoe91 Jan 22, 2026
a0aad71
Add clip_amplitude_scalings arg in plot
alejoe91 Jan 22, 2026
4c203d5
Fix serialization/deserialization to Zarr
alejoe91 Jan 23, 2026
d42b6a2
Fix reloading extension data in zarr
alejoe91 Jan 23, 2026
2 changes: 1 addition & 1 deletion doc/index.rst
Original file line number Diff line number Diff line change
@@ -30,7 +30,7 @@ SpikeInterface is made of several modules to deal with different aspects of the
- visualize recordings and spike sorting outputs in several ways (matplotlib, sortingview, jupyter, ephyviewer)
- export a report and/or export to phy
- curate your sorting with several strategies (ml-based, metrics based, manual, ...)
- offer a powerful Qt-based or we-based viewer in a separate package `spikeinterface-gui <https://github.com/SpikeInterface/spikeinterface-gui>`_ for manual curation that replace phy.
- offer a powerful desktop or web viewer in a separate package `spikeinterface-gui <https://github.com/SpikeInterface/spikeinterface-gui>`_ for manual curation that replaces phy.
- have powerful sorting components to build your own sorter.
- have a full motion/drift correction framework (See :ref:`motion_correction`)

40 changes: 24 additions & 16 deletions doc/modules/core.rst
@@ -93,10 +93,11 @@ with 16 channels:
timestamps = np.arange(num_samples) / sampling_frequency + 300
recording.set_times(times=timestamps, segment_index=0)

**Note**:
Raw data formats often store data as integer values for memory efficiency. To give these integers meaningful physical units (uV), you can apply a gain and an offset.
Many devices have their own gains and offsets necessary to convert their data and these values are handled by SpikeInterface for its extractors. This
is triggered by the :code:`return_in_uV` parameter in :code:`get_traces()`, (see above example), which will return the traces in uV. Read more in our how to guide, :ref:`physical_units`.
.. note::

Raw data formats often store data as integer values for memory efficiency. To give these integers meaningful physical units (uV), you can apply a gain and an offset.
Many devices have their own gains and offsets, which are needed to convert their data; these values are handled by SpikeInterface for its extractors. The conversion
is triggered by the :code:`return_in_uV` parameter of :code:`get_traces()` (see the example above), which returns the traces in uV. Read more in our how-to guide, :ref:`physical_units`.
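The conversion itself is a simple per-channel linear scaling; here is a minimal numpy sketch of what happens under the hood (the gain value, offsets, and array shapes are hypothetical, not tied to any particular device):

```python
import numpy as np

# Hypothetical per-channel conversion applied when return_in_uV=True:
#   traces_uV = traces_raw * gain + offset
rng = np.random.default_rng(0)
traces_raw = rng.integers(-2048, 2048, size=(1000, 16)).astype("int16")  # raw ADC values
gains = np.full(16, 0.195, dtype="float32")   # assumed gain in uV per bit
offsets = np.zeros(16, dtype="float32")       # assumed zero offset

traces_uV = traces_raw.astype("float32") * gains + offsets
```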


Sorting
@@ -180,8 +181,9 @@ a numpy.array with dtype `[("sample_index", "int64"), ("unit_index", "int64"), (
For computations which are done unit-by-unit, like computing isi-violations per unit, it is better that
spikes from a single unit are concurrent in memory. For these other cases, we can re-order the
`spike_vector` in different ways:
* order by unit, then segment, then sample
* order by segment, then unit, then sample

* order by unit, then segment, then sample
* order by segment, then unit, then sample

This is done using `sorting.to_reordered_spike_vector()`. The first time a reordering is done, the
reordered spiketrain is cached in memory by default. Users should rarely have to worry about these
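The two orderings can be illustrated with a hand-built toy spike vector (this is only a sketch using `np.sort` on a structured array, not SpikeInterface's actual implementation):

```python
import numpy as np

# Toy spike vector with the dtype described above
spike_dtype = [("sample_index", "int64"), ("unit_index", "int64"), ("segment_index", "int64")]
spikes = np.array(
    [(10, 1, 0), (3, 0, 1), (5, 0, 0), (7, 1, 1)],
    dtype=spike_dtype,
)

# order by unit, then segment, then sample
by_unit = np.sort(spikes, order=["unit_index", "segment_index", "sample_index"])
# order by segment, then unit, then sample
by_segment = np.sort(spikes, order=["segment_index", "unit_index", "sample_index"])
```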
@@ -458,9 +460,11 @@ It represents unsorted waveform cutouts. Some acquisition systems, in fact, allo
threshold and only record the times at which a peak was detected and the waveform cut out around
the peak.

**NOTE**: while we support this class (mainly for legacy formats), this approach is a bad practice
and is highly discouraged! Most modern spike sorters, in fact, require the raw traces to perform
template matching to recover spikes!
.. note::

While we support this class (mainly for legacy formats), this approach is a bad practice
and is highly discouraged! Most modern spike sorters, in fact, require the raw traces to perform
template matching to recover spikes!

Here we assume :code:`snippets` is a :py:class:`~spikeinterface.core.BaseSnippets` object
with 16 channels:
@@ -548,9 +552,11 @@ Sparsity is defined as the subset of channels on which waveforms (and related in
sparsity is not global, but it is unit-specific. Importantly, saving sparse waveforms, especially for high-density probes,
dramatically reduces the size of the waveforms extension if computed.

**NOTE** As of :code:`0.101.0` all :code:`SortingAnalyzer`'s have a default of :code:`sparse=True`. This was first
introduced in :code:`0.99.0` for :code:`WaveformExtractor`'s and will be the default going forward. To obtain dense
waveforms you will need to set :code:`sparse=False` at the creation of the :code:`SortingAnalyzer`.
.. note::

As of :code:`0.101.0` all :code:`SortingAnalyzer` objects have a default of :code:`sparse=True`. This behavior was first
introduced in :code:`0.99.0` for :code:`WaveformExtractor` objects and will remain the default going forward. To obtain dense
waveforms you will need to set :code:`sparse=False` at the creation of the :code:`SortingAnalyzer`.
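As a toy sketch of what unit-specific sparsity means (a radius-based channel selection is only one possible strategy, and the peak channels here are made up):

```python
import numpy as np

# Boolean (num_units, num_channels) mask: each unit keeps only channels
# around its (assumed) extremum channel
num_units, num_channels = 3, 8
peak_channels = np.array([1, 4, 6])   # hypothetical extremum channel per unit
radius = 2                            # keep +/- 2 neighboring channels

mask = np.zeros((num_units, num_channels), dtype=bool)
for u, chan in enumerate(peak_channels):
    lo, hi = max(0, chan - radius), min(num_channels, chan + radius + 1)
    mask[u, lo:hi] = True
```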


Sparsity can be computed from a :py:class:`~spikeinterface.core.SortingAnalyzer` object with the
@@ -854,10 +860,12 @@ The same functions are also available for
:py:func:`~spikeinterface.core.select_segment_sorting`).


**Note** :py:func:`~spikeinterface.core.append_recordings` and:py:func:`~spikeinterface.core.concatenate_recordings`
have the same goal, aggregate recording pieces on the time axis but with 2 different strategies! One is keeping the
multi segments concept, the other one is breaking it!
See this example for more detail :ref:`example_segments`.
.. note::

:py:func:`~spikeinterface.core.append_recordings` and :py:func:`~spikeinterface.core.concatenate_recordings`
have the same goal, aggregating recording pieces along the time axis, but with two different strategies: one keeps the
multi-segment concept, while the other breaks it!
See :ref:`example_segments` for more details.
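A plain-numpy analogy of the two strategies (a sketch only; the real functions are lazy and operate on recording objects, not arrays):

```python
import numpy as np

# Two recording pieces: 100 and 250 samples, 4 channels each
seg_a = np.zeros((100, 4))
seg_b = np.zeros((250, 4))

# append_recordings-like: keep the multi-segment concept
appended = [seg_a, seg_b]  # one recording with 2 segments

# concatenate_recordings-like: break it into one long virtual segment
concatenated = np.concatenate([seg_a, seg_b], axis=0)  # one segment of 350 samples
```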



38 changes: 22 additions & 16 deletions doc/modules/exporters.rst
@@ -12,9 +12,11 @@ and behavioral data. It can be used to decode behavior, make tuning curves, comp
The :py:func:`~spikeinterface.exporters.to_pynapple_tsgroup` function allows you to convert a
SortingAnalyzer to Pynapple's ``TsGroup`` object on the fly.

**Note** : When creating the ``TsGroup``, we will use the underlying time support of the SortingAnalyzer.
How this works depends on your acquisition system. You can use the ``get_times`` method on a recording
(``my_recording.get_times()``) to find the time support of your recording.
.. note::

When creating the ``TsGroup``, we will use the underlying time support of the SortingAnalyzer.
How this works depends on your acquisition system. You can use the ``get_times`` method on a recording
(``my_recording.get_times()``) to find the time support of your recording.

When constructed, if ``attach_unit_metadata`` is set to ``True``, any relevant unit information
is propagated to the ``TsGroup``. The ``to_pynapple_tsgroup`` checks if unit locations, quality
@@ -54,13 +56,15 @@ The :py:func:`~spikeinterface.exporters.export_to_phy` function allows you to us
`Phy template GUI <https://github.com/cortex-lab/phy>`_ for visual inspection and manual curation of spike sorting
results.

**Note** : :py:func:`~spikeinterface.exporters.export_to_phy` speed and the size of the folder will highly depend
on the sparsity of the :code:`SortingAnalyzer` itself or the external specified sparsity.
The Phy viewer enables one to explore PCA projections, spike amplitudes, waveforms and quality of spike sorting results.
So if these pieces of information have already been computed as extensions (see :ref:`modules/postprocessing:Extensions as AnalyzerExtensions`),
then exporting to Phy should be fast (and the user has better control of the parameters for the extensions).
If not pre-computed, then the required extensions (e.g., :code:`spike_amplitudes`, :code:`principal_components`)
can be computed directly at export time.
.. note::

The speed of :py:func:`~spikeinterface.exporters.export_to_phy` and the size of the output folder will highly depend
on the sparsity of the :code:`SortingAnalyzer` itself or on the externally specified sparsity.
The Phy viewer enables one to explore PCA projections, spike amplitudes, waveforms and quality of spike sorting results.
So if these pieces of information have already been computed as extensions (see :ref:`modules/postprocessing:Extensions as AnalyzerExtensions`),
then exporting to Phy should be fast (and the user has better control of the parameters for the extensions).
If not pre-computed, then the required extensions (e.g., :code:`spike_amplitudes`, :code:`principal_components`)
can be computed directly at export time.

The input of the :py:func:`~spikeinterface.exporters.export_to_phy` is a :code:`SortingAnalyzer` object.

@@ -131,12 +135,14 @@ The report includes summary figures of the spike sorting output (e.g. amplitude
depth VS amplitude) as well as unit-specific reports, that include waveforms, templates, template maps,
ISI distributions, and more.

**Note** : similarly to :py:func:`~spikeinterface.exporters.export_to_phy` the
:py:func:`~spikeinterface.exporters.export_report` depends on the sparsity of the :code:`SortingAnalyzer` itself and
on which extensions have been computed. For example, :code:`spike_amplitudes` and :code:`correlograms` related plots
will be automatically included in the report if the associated extensions are computed in advance.
The function can perform these computations as well, but it is a better practice to compute everything that's needed
beforehand.
.. note::

Similarly to :py:func:`~spikeinterface.exporters.export_to_phy` the
:py:func:`~spikeinterface.exporters.export_report` depends on the sparsity of the :code:`SortingAnalyzer` itself and
on which extensions have been computed. For example, :code:`spike_amplitudes` and :code:`correlograms` related plots
will be automatically included in the report if the associated extensions are computed in advance.
The function can perform these computations as well, but it is a better practice to compute everything that's needed
beforehand.

Note that every unit will generate a summary unit figure, so the export process can be slow for spike sorting outputs
with many units!
52 changes: 48 additions & 4 deletions doc/modules/postprocessing.rst
@@ -163,8 +163,10 @@ Extensions are generally saved in two ways, suitable for two workflows:
:code:`sorting_analyzer.compute('waveforms', save=False)`).


**NOTE**: We recommend choosing a workflow and sticking with it. Either keep everything on disk or keep everything in memory until
you'd like to save. A mixture can lead to unexpected behavior. For example, consider the following code
.. note::

We recommend choosing a workflow and sticking with it. Either keep everything on disk or keep everything in memory until
you'd like to save. A mixture can lead to unexpected behavior. For example, consider the following code

.. code::

@@ -257,15 +259,35 @@ spike_amplitudes
This extension computes the amplitude of each spike as the value of the traces on the extremum channel at the times of
each spike. The extremum channel is computed from the templates.


.. note::

Computing spike amplitudes is highly recommended before calculating amplitude-based quality metrics, such as
:ref:`amp_cutoff` and :ref:`amp_median`.

.. code-block:: python

amplitudes = sorting_analyzer.compute(input="spike_amplitudes", peak_sign="neg")
amplitudes = sorting_analyzer.compute(input="spike_amplitudes")

For more information, see :py:func:`~spikeinterface.postprocessing.compute_spike_amplitudes`


.. _postprocessing_amplitude_scalings:

amplitude_scalings
^^^^^^^^^^^^^^^^^^

This extension computes the amplitude scaling of each spike as the scaling factor obtained from a linear fit between the template and the
spike waveform. In case of spatio-temporal collisions, a multi-linear fit is performed using the templates of all units
involved in the collision.

.. note::

Computing amplitude scalings is highly recommended before calculating amplitude-based quality metrics, such as
:ref:`amp_cutoff` and :ref:`amp_median`.

.. code-block:: python

amplitude_scalings = sorting_analyzer.compute(input="amplitude_scalings")

For more information, see :py:func:`~spikeinterface.postprocessing.compute_amplitude_scalings`

.. _postprocessing_spike_locations:

spike_locations
@@ -367,7 +389,29 @@ This extension computes the histograms of inter-spike-intervals. The computed ou
method="auto"
)

For more information, see :py:func:`~spikeinterface.postprocessing.compute_isi_histograms`

valid_unit_periods
^^^^^^^^^^^^^^^^^^

This extension computes the valid unit periods for each unit based on estimates of the false positive rate
(using refractory period violations - see :doc:`metrics/qualitymetrics/isi_violations`) and the false negative rate
(using the amplitude cutoff - see :doc:`metrics/qualitymetrics/amplitude_cutoff`), computed over chunks of the recording.
The valid unit periods are the periods where both the false positive and false negative rates are below the specified
thresholds. Periods can be either absolute (in seconds), the same for all units, or relative, where
chunks are unit-specific depending on the firing rate (with a target number of spikes per chunk).

.. code-block:: python

valid_periods = sorting_analyzer.compute(
input="valid_unit_periods",
period_mode="relative",
target_num_spikes=300,
fp_threshold=0.1,
fn_threshold=0.1,
)

For more information, see :py:func:`~spikeinterface.postprocessing.compute_valid_unit_periods`.
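The selection rule itself can be sketched in a few lines (the per-chunk rates below are made-up values, not outputs of the actual estimators):

```python
import numpy as np

# Made-up per-chunk rate estimates for one unit
fp_rate = np.array([0.05, 0.20, 0.08, 0.02])  # false positives (refractory period violations)
fn_rate = np.array([0.04, 0.03, 0.30, 0.06])  # false negatives (amplitude cutoff)
fp_threshold = fn_threshold = 0.1

# A chunk belongs to a valid period only if both rates are below threshold
valid_chunks = (fp_rate < fp_threshold) & (fn_rate < fn_threshold)
```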




Other postprocessing tools
6 changes: 4 additions & 2 deletions doc/modules/sortingcomponents.rst
@@ -81,7 +81,9 @@ Other variants are also implemented (but less tested or not so useful):
* **'by_channel_torch'** (requires :code:`torch`): pytorch implementation (GPU-compatible) that uses max pooling for time deduplication
* **'locally_exclusive_torch'** (requires :code:`torch`): pytorch implementation (GPU-compatible) that uses max pooling for space-time deduplication

**NOTE**: the torch implementations give slightly different results due to a different implementation.
.. note::

The torch implementations give slightly different results due to implementation differences.

Peak detection, as many of the other sorting components, can be run in parallel.

@@ -274,7 +276,7 @@ handle drift can benefit from drift estimation/correction.
Especially for acute Neuropixels-like probes, this is a crucial step.

The motion estimation step comes after peak detection and peak localization. Read more about
it in the :ref:`_motion_correction` modules doc, and a more practical guide in the
it in the :ref:`motion_correction` modules doc, and a more practical guide in the
:ref:`handle-drift-in-your-recording` How To.

Here is an example with non-rigid motion estimation:
21 changes: 19 additions & 2 deletions src/spikeinterface/core/sortinganalyzer.py
@@ -2471,6 +2471,18 @@ def get_any_dependencies(cls, **params):
all_dependencies = list(chain.from_iterable([dep.split("|") for dep in all_dependencies]))
return all_dependencies

@classmethod
def get_default_params(cls):
"""
Get the default params for the extension.

Returns
-------
default_params : dict
The default parameters for the extension.
"""
return get_default_analyzer_extension_params(cls.extension_name)

def load_run_info(self):
run_info = None
if self.format == "binary_folder":
@@ -2698,10 +2710,14 @@ def _save_data(self):
for ext_data_name, ext_data in self.data.items():
if ext_data_name in extension_group:
del extension_group[ext_data_name]
if isinstance(ext_data, dict):
if isinstance(ext_data, (dict, list)):
# These could be dicts or lists of dicts. The check_json makes sure
# that everything is json serializable
ext_data_ = check_json(ext_data)
extension_group.create_dataset(
name=ext_data_name, data=np.array([ext_data], dtype=object), object_codec=numcodecs.JSON()
name=ext_data_name, data=np.array([ext_data_], dtype=object), object_codec=numcodecs.JSON()
)
extension_group[ext_data_name].attrs["dict"] = True
elif isinstance(ext_data, np.ndarray):
extension_group.create_dataset(name=ext_data_name, data=ext_data, **saving_options)
elif HAS_PANDAS and isinstance(ext_data, pd.DataFrame):
@@ -2884,6 +2900,7 @@ def set_data(self, ext_data_name, ext_data):
"spike_locations": "spikeinterface.postprocessing",
"template_similarity": "spikeinterface.postprocessing",
"unit_locations": "spikeinterface.postprocessing",
"valid_unit_periods": "spikeinterface.postprocessing",
# from metrics
"quality_metrics": "spikeinterface.metrics",
"template_metrics": "spikeinterface.metrics",
1 change: 1 addition & 0 deletions src/spikeinterface/metrics/quality/__init__.py
@@ -20,4 +20,5 @@
compute_sliding_rp_violations,
compute_sd_ratio,
compute_synchrony_metrics,
compute_refrac_period_violations,
)
13 changes: 13 additions & 0 deletions src/spikeinterface/metrics/quality/quality_metrics.py
@@ -49,6 +49,13 @@ class ComputeQualityMetrics(BaseMetricExtension):
need_backward_compatibility_on_load = True
metric_list = misc_metrics_list + pca_metrics_list

@classmethod
def get_required_dependencies(cls, **params):
if params.get("use_valid_periods", False):
return ["valid_unit_periods"]
else:
return []

def _handle_backward_compatibility_on_load(self):
# For backwards compatibility - this renames qm_params as metric_params
if (qm_params := self.params.get("qm_params")) is not None:
@@ -70,6 +77,7 @@ def _set_params(
metric_params: dict | None = None,
delete_existing_metrics: bool = False,
metrics_to_compute: list[str] | None = None,
use_valid_periods: bool = False,
periods=None,
# common extension kwargs
peak_sign=None,
@@ -86,6 +94,11 @@ def _set_params(
pc_metric_names = [m.metric_name for m in pca_metrics_list]
metric_names = [m for m in metric_names if m not in pc_metric_names]

if use_valid_periods:
if periods is not None:
raise ValueError("If use_valid_periods is True, periods should not be provided.")
periods = self.sorting_analyzer.get_extension("valid_unit_periods").get_data(outputs="numpy")

return super()._set_params(
metric_names=metric_names,
metric_params=metric_params,