API Reference¶
Complete API documentation for all nonconform modules and classes.
Start Here¶
If you are looking for task-oriented call sequences, start with Common Workflows.
Detector¶
nonconform.detector ¶
Core conformal anomaly detector implementation.
This module provides the main ConformalDetector class that wraps any anomaly detector with conformal inference for valid p-values and FDR control.
Classes:
| Name | Description |
|---|---|
| `BaseConformalDetector` | Abstract base class for conformal detectors. |
| `ConformalDetector` | Main conformal anomaly detector with optional weighting. |
BaseConformalDetector ¶
Bases: ABC
Abstract base class for all conformal anomaly detectors.
Defines the core interface that all conformal anomaly detection implementations must provide. Conformal detectors support either an integrated or a detached calibration workflow:
- Integrated calibration: `fit()` trains the detector(s) and computes calibration scores.
- Detached calibration: train the detector externally, then call `calibrate()` on a separate calibration dataset.
- Inference: `compute_p_values()` converts new data scores to valid p-values, or `select()` runs the combined p-value + FDR-control workflow.
Subclasses must implement both abstract methods.
Note
This is an abstract class and cannot be instantiated directly.
Use ConformalDetector for the main implementation.
fit
abstractmethod
¶
Fit the detector model(s) and compute calibration scores.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| ndarray | The dataset used for fitting the model(s) and determining calibration scores. | required |
| `y` | ndarray \| None | Ignored. Present for sklearn API compatibility. | None |
| `n_jobs` | int \| None | Optional strategy-specific parallelism hint, used by strategies that expose an `n_jobs` argument. | None |
Returns:
| Type | Description |
|---|---|
| Self | The fitted detector instance. |
Source code in nonconform/detector.py
calibrate ¶
Calibrate a pre-fitted detector on separate calibration data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| ndarray | Dataset used only to compute calibration scores. | required |
| `y` | ndarray \| None | Ignored. Present for sklearn API compatibility. | None |
Returns:
| Type | Description |
|---|---|
| Self | The calibrated detector instance. |
Source code in nonconform/detector.py
compute_p_values
abstractmethod
¶
compute_p_values(
x: DataFrame | Series | ndarray,
*,
refit_weights: bool = True,
) -> np.ndarray | pd.Series
Return conformal p-values for new data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| Series \| ndarray | New data instances for anomaly estimation. | required |
| `refit_weights` | bool | Whether to refit the weight estimator for this batch in weighted mode. Ignored in standard mode. | True |
Returns:
| Type | Description |
|---|---|
| ndarray \| Series | P-values as ndarray for numpy input, or pandas Series for pandas input. |
Source code in nonconform/detector.py
score_samples
abstractmethod
¶
score_samples(
x: DataFrame | Series | ndarray,
*,
refit_weights: bool = True,
) -> np.ndarray | pd.Series
Return aggregated raw anomaly scores for new data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| Series \| ndarray | New data instances for anomaly estimation. | required |
| `refit_weights` | bool | Whether to refit the weight estimator for this batch in weighted mode. Ignored in standard mode. | True |
Returns:
| Type | Description |
|---|---|
| ndarray \| Series | Raw scores as ndarray for numpy input, or pandas Series for pandas input. |
Source code in nonconform/detector.py
ConformalDetector ¶
ConformalDetector(
detector: Any,
strategy: BaseStrategy,
estimation: BaseEstimation | None = None,
weight_estimator: BaseWeightEstimator | None = None,
aggregation: str = "median",
score_polarity: ScorePolarity
| Literal[
"auto", "higher_is_anomalous", "higher_is_normal"
]
| None = None,
seed: int | None = None,
verbose: bool = False,
verify_prepared_batch_content: bool = True,
)
Bases: BaseConformalDetector
Unified conformal anomaly detector with optional covariate shift handling.
Provides distribution-free anomaly detection with valid p-values and False Discovery Rate (FDR) control by wrapping any anomaly detector with conformal inference. Supports PyOD detectors, sklearn-compatible detectors, and custom detectors implementing the AnomalyDetector protocol.
When no weight estimator is provided (standard conformal prediction):
- Uses classical conformal inference for exchangeable data
- Provides optimal performance and memory usage
- Suitable when training and test data come from the same distribution
When a weight estimator is provided (weighted conformal prediction):
- Handles distribution shift between calibration and test data
- Estimates importance weights to maintain statistical validity
- Slightly higher computational cost but robust to covariate shift
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `detector` | Any | Anomaly detector (PyOD, sklearn-compatible, or custom). | required |
| `strategy` | BaseStrategy | The conformal strategy for fitting and calibration. | required |
| `estimation` | BaseEstimation \| None | P-value estimation strategy. Defaults to Empirical(). | None |
| `weight_estimator` | BaseWeightEstimator \| None | Weight estimator for covariate shift. Defaults to None. | None |
| `aggregation` | str | Method for aggregating scores from multiple models. Defaults to "median". | 'median' |
| `score_polarity` | ScorePolarity \| Literal['auto', 'higher_is_anomalous', 'higher_is_normal'] \| None | Score direction convention ("auto", "higher_is_anomalous", or "higher_is_normal"). | None |
| `seed` | int \| None | Random seed for reproducibility. Defaults to None. | None |
| `verbose` | bool | If True, displays progress bars during prediction. Defaults to False. | False |
| `verify_prepared_batch_content` | bool | If True (default), weighted reuse mode verifies the prepared batch content. | True |
Attributes:
| Name | Type | Description |
|---|---|---|
| `detector` | | The underlying anomaly detection model. |
| `strategy` | | The calibration strategy for computing p-values. |
| `weight_estimator` | | Optional weight estimator for handling covariate shift. |
| `aggregation` | | Method for combining scores from multiple models. |
| `score_polarity` | ScorePolarity | Resolved score polarity used internally. |
| `seed` | | Random seed for reproducible results. |
| `verbose` | | Whether to display progress bars. |
| `_detector_set` | | List of trained detector models (populated after fit). |
| `_calibration_set` | | Calibration scores (populated after fit). |
Examples:
Standard conformal prediction — FDR-controlled selection in one call:
from pyod.models.iforest import IForest
from nonconform import ConformalDetector, Split
detector = ConformalDetector(
detector=IForest(), strategy=Split(n_calib=0.2), seed=42
)
detector.fit(X_train)
mask = detector.select(X_test, alpha=0.05)
Access raw p-values when needed:
p_values = detector.compute_p_values(X_test)
Weighted conformal prediction:
from nonconform import logistic_weight_estimator
detector = ConformalDetector(
detector=IForest(),
strategy=Split(n_calib=0.2),
weight_estimator=logistic_weight_estimator(),
seed=42,
)
detector.fit(X_train)
mask = detector.select(X_test, alpha=0.05)
Detached calibration with a pre-trained model (Split strategy):
base_detector.fit(X_fit)
detector = ConformalDetector(
detector=base_detector, strategy=Split(n_calib=0.2)
)
detector.calibrate(X_calib)
p_values = detector.compute_p_values(X_test)
Note
Strict inductive conformal/FDR workflows require a fixed training-only score map at inference time. PyOD detectors known to violate this are: CD, COF, COPOD, ECOD, LMDD, LOCI, RGraph, SOD, SOS.
Source code in nonconform/detector.py
detector_set
property
¶
Returns a copy of the list of trained detector models.
calibration_samples
property
¶
Returns a copy of the calibration samples (weighted mode only).
last_result
property
¶
Return the most recent conformal result snapshot.
score_polarity
property
¶
Returns the resolved score polarity convention.
get_params ¶
Return estimator parameters following sklearn conventions.
Notes
`deep=False` returns constructor-facing parameters used for sklearn clone compatibility. `deep=True` also includes nested `component__param` entries read from the current runtime components (effective/internal state), which may differ from the originally passed constructor objects after adaptation/normalization.
Source code in nonconform/detector.py
set_params ¶
Set estimator parameters following sklearn conventions.
Source code in nonconform/detector.py
fit ¶
Fit detector model(s) and compute calibration scores.
Uses the specified strategy to train the base detector(s) and calculate non-conformity scores on the calibration set.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| ndarray | The dataset used for fitting and calibration. | required |
| `y` | ndarray \| None | Ignored. Present for sklearn API compatibility. | None |
| `n_jobs` | int \| None | Optional strategy-specific parallelism hint. Supported by strategies whose `fit_calibrate()` accepts `n_jobs`. | None |
Returns:
| Type | Description |
|---|---|
| Self | The fitted detector instance (for method chaining). |
Source code in nonconform/detector.py
calibrate ¶
Calibrate a pre-fitted detector on separate calibration data.
This detached workflow is currently supported only for Split strategy,
where a single pre-fitted model is calibrated on a dedicated dataset.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| ndarray | Calibration dataset used to compute calibration scores. | required |
| `y` | ndarray \| None | Ignored. Present for sklearn API compatibility. | None |
Returns:
| Type | Description |
|---|---|
| Self | The calibrated detector instance (for method chaining). |
Raises:
| Type | Description |
|---|---|
| ValueError | If strategy is not `Split`. |
| NotFittedError | If the base detector appears unfitted. |
Source code in nonconform/detector.py
select ¶
select(
x: DataFrame | Series | ndarray,
*,
alpha: float = 0.05,
pruning: Pruning = Pruning.DETERMINISTIC,
seed: int | None = None,
refit_weights: bool = True,
) -> np.ndarray | pd.Series
Compute p-values and apply FDR-controlled selection in one step.
This is the recommended single-call workflow for most use cases. It
combines compute_p_values() and the appropriate selection procedure
(BH-style FDR selection for standard mode, weighted conformalized
selection for weighted mode) into one method, eliminating the need to
access last_result manually.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| Series \| ndarray | New data instances for anomaly estimation. | required |
| `alpha` | float | Target FDR level in (0, 1). Defaults to 0.05. | 0.05 |
| `pruning` | Pruning | Pruning strategy for weighted FDR control. Ignored in standard (unweighted) mode. Defaults to `Pruning.DETERMINISTIC`. | DETERMINISTIC |
| `seed` | int \| None | Optional random seed for weighted randomized pruning modes. | None |
| `refit_weights` | bool | Whether to refit the weight estimator for this batch in weighted mode. Ignored in standard mode. Defaults to True. | True |
Returns:
| Type | Description |
|---|---|
| ndarray \| Series | Boolean selection mask of shape `(n_samples,)` marking the FDR-controlled anomaly discoveries. Returns a pandas Series when the input is a DataFrame or Series. |
Examples:
Standard workflow (no weight estimator):
detector.fit(X_train)
mask = detector.select(X_test, alpha=0.05)
print(f"Discoveries: {mask.sum()}")
Weighted workflow:
detector = ConformalDetector(
detector=IForest(),
strategy=Split(n_calib=0.2),
weight_estimator=logistic_weight_estimator(),
)
detector.fit(X_train)
mask = detector.select(
X_test,
alpha=0.1,
pruning=Pruning.HETEROGENEOUS,
seed=42,
)
Source code in nonconform/detector.py
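In standard (unweighted) mode, `select()` applies a BH-style step-up to the conformal p-values. The following numpy-only sketch of the classical Benjamini-Hochberg procedure illustrates the idea; it is not the library's implementation, and the function name `bh_mask` is hypothetical:

```python
import numpy as np

def bh_mask(p_values, alpha):
    # Benjamini-Hochberg step-up: keep the largest k with p_(k) <= alpha * k / m,
    # then select every hypothesis whose p-value ranks among those k.
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

p = np.array([0.001, 0.02, 0.04, 0.8])
mask = bh_mask(p, alpha=0.05)  # first two p-values are discoveries
```

The step-up shape is why a p-value above `alpha` can still be selected when smaller p-values pull the threshold past it.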
prepare_weights_for ¶
Prepare weighted conformal state for a specific test batch.
In weighted mode, this fits the weight estimator for the supplied batch without producing predictions. Use this for explicit state transitions in exploratory workflows.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| ndarray | Test batch for which weights should be prepared. | required |
Returns:
| Type | Description |
|---|---|
| Self | The fitted detector instance (for method chaining). |
Raises:
| Type | Description |
|---|---|
| NotFittedError | If fit() has not been called. |
| RuntimeError | If weighted mode is disabled. |
Source code in nonconform/detector.py
score_samples ¶
score_samples(
x: DataFrame | Series | ndarray,
*,
refit_weights: bool = True,
) -> np.ndarray | pd.Series
Return aggregated raw anomaly scores for new data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| Series \| ndarray | New data instances for anomaly estimation. | required |
| `refit_weights` | bool | Whether to refit the weight estimator for this batch in weighted mode. Defaults to True. | True |
Returns:
| Type | Description |
|---|---|
| ndarray \| Series | Aggregated raw anomaly scores. |
Source code in nonconform/detector.py
compute_p_values ¶
compute_p_values(
x: DataFrame | Series | ndarray,
*,
refit_weights: bool = True,
) -> np.ndarray | pd.Series
Return conformal p-values for new data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| Series \| ndarray | New data instances for anomaly estimation. | required |
| `refit_weights` | bool | Whether to refit the weight estimator for this batch in weighted mode. Defaults to True. | True |
Returns:
| Type | Description |
|---|---|
| ndarray \| Series | Conformal p-values. |
Source code in nonconform/detector.py
Resampling Strategies¶
nonconform.resampling ¶
Calibration strategies for conformal anomaly detection.
This module provides various calibration strategies that define how to split data for training and calibration in conformal prediction.
Classes:
| Name | Description |
|---|---|
| `BaseStrategy` | Abstract base class for calibration strategies. |
| `Split` | Simple train-test split strategy. |
| `CrossValidation` | K-fold cross-validation strategy (includes Jackknife factory). |
| `JackknifeBootstrap` | Jackknife+-after-Bootstrap (JaB+) strategy. |
BaseStrategy ¶
Bases: ABC
Abstract base class for anomaly detection calibration strategies.
This class provides a common interface for various calibration strategies applied to anomaly detectors. Subclasses must implement the core calibration logic and define how calibration data is identified and used.
Attributes:
| Name | Type | Description |
|---|---|---|
| `_mode` | ConformalMode | Model retention mode controlling calibration/inference behavior. |
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `mode` | ConformalModeInput | Model retention mode. Defaults to "plus". | 'plus' |
Source code in nonconform/resampling.py
calibration_ids
abstractmethod
property
¶
Indices of data points used for calibration.
fit_calibrate
abstractmethod
¶
fit_calibrate(
x: DataFrame | ndarray,
detector: AnomalyDetector,
seed: int | None = None,
weighted: bool = False,
) -> tuple[list[AnomalyDetector], np.ndarray]
Fits the detector and performs calibration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| ndarray | The input data for fitting and calibration. | required |
| `detector` | AnomalyDetector | The anomaly detection model to be fitted and calibrated. | required |
| `seed` | int \| None | Random seed for reproducibility. Defaults to None. | None |
| `weighted` | bool | Whether to use weighted approach. Defaults to False. | False |
Returns:
| Type | Description |
|---|---|
| tuple[list[AnomalyDetector], ndarray] | Tuple of (list of trained detectors, calibration scores array). |
Source code in nonconform/resampling.py
Split ¶
Bases: BaseStrategy
Split conformal strategy for fast anomaly detection.
Implements the classical split conformal approach by dividing training data into separate fitting and calibration sets.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `n_calib` | float \| int | Size or proportion of data used for calibration. If float, must be between 0.0 and 1.0 (proportion). If int, the absolute number of samples. Defaults to 0.1. | 0.1 |
Examples:
# Use 20% of data for calibration
strategy = Split(n_calib=0.2)
# Use exactly 1000 samples for calibration
strategy = Split(n_calib=1000)
Source code in nonconform/resampling.py
calibration_ids
property
¶
Indices of calibration samples (None if weighted=False).
fit_calibrate ¶
fit_calibrate(
x: DataFrame | ndarray,
detector: AnomalyDetector,
weighted: bool = False,
seed: int | None = None,
) -> tuple[list[AnomalyDetector], np.ndarray]
Fits detector and generates calibration scores using a data split.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| ndarray | The input data. | required |
| `detector` | AnomalyDetector | The detector instance to train. | required |
| `weighted` | bool | If True, stores calibration sample indices. Defaults to False. | False |
| `seed` | int \| None | Random seed for reproducibility. Defaults to None. | None |
Returns:
| Type | Description |
|---|---|
| tuple[list[AnomalyDetector], ndarray] | Tuple of (list with trained detector, calibration scores array). |
Source code in nonconform/resampling.py
CrossValidation ¶
Bases: BaseStrategy
K-fold cross-validation strategy for conformal anomaly detection.
Splits data into k folds and uses each fold as a calibration set while training on the remaining folds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `k` | int \| None | Number of folds. If None, uses leave-one-out (k=n at fit time). | 5 |
| `mode` | ConformalModeInput | Model retention mode. Defaults to "plus". | 'plus' |
| `shuffle` | bool | Whether to shuffle data before splitting. Defaults to True. Set to False for deterministic leave-one-out (Jackknife). | True |
Examples:
# 5-fold cross-validation
strategy = CrossValidation(k=5)
# Leave-one-out (Jackknife) via factory
strategy = CrossValidation.jackknife()
Source code in nonconform/resampling.py
jackknife
classmethod
¶
Create Leave-One-Out cross-validation (deterministic, no shuffle).
This factory method creates a Jackknife strategy, which is a special case of k-fold CV where k equals n (the dataset size). Each sample is left out exactly once for calibration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `mode` | ConformalModeInput | Model retention mode. Defaults to "plus". | 'plus' |
Returns:
| Type | Description |
|---|---|
| CrossValidation | CrossValidation configured for leave-one-out. |
Examples:
strategy = CrossValidation.jackknife()
detector_list, calib_scores = strategy.fit_calibrate(X, detector)
Source code in nonconform/resampling.py
fit_calibrate ¶
fit_calibrate(
x: DataFrame | ndarray,
detector: AnomalyDetector,
seed: int | None = None,
weighted: bool = False,
) -> tuple[list[AnomalyDetector], np.ndarray]
Fit and calibrate using k-fold cross-validation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| ndarray | Input data matrix. | required |
| `detector` | AnomalyDetector | The base anomaly detector. | required |
| `seed` | int \| None | Random seed for reproducibility. Defaults to None. | None |
| `weighted` | bool | Whether to use weighted calibration. Defaults to False. | False |
Returns:
| Type | Description |
|---|---|
| tuple[list[AnomalyDetector], ndarray] | Tuple of (list of trained detectors, calibration scores array). |
Raises:
| Type | Description |
|---|---|
| ValueError | If k < 2 or not enough samples for specified k. |
Source code in nonconform/resampling.py
JackknifeBootstrap ¶
JackknifeBootstrap(
n_bootstraps: int = 100,
aggregation_method: BootstrapAggregationMethod = "mean",
mode: ConformalModeInput = "plus",
)
Bases: BaseStrategy
Jackknife+-after-Bootstrap (JaB+) conformal anomaly detection.
Implements the JaB+ method which provides predictive inference for ensemble models trained on bootstrap samples. Uses out-of-bag samples for calibration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `n_bootstraps` | int | Number of bootstrap iterations. Defaults to 100. | 100 |
| `aggregation_method` | BootstrapAggregationMethod | How to aggregate OOB predictions ("mean" or "median"). Defaults to "mean". | 'mean' |
| `mode` | ConformalModeInput | Model retention mode. Defaults to "plus". | 'plus' |
References
Jin, Ying, and Emmanuel J. Candès. "Selection by Prediction with Conformal p-values." Journal of Machine Learning Research 24.244 (2023): 1-41.
Source code in nonconform/resampling.py
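The out-of-bag bookkeeping at the heart of JaB+ can be sketched with plain numpy. This is a schematic of the idea only (bootstrap each model, then calibrate each sample on the models that never saw it), not the library's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_bootstraps = 8, 3

oob_masks = []
for _ in range(n_bootstraps):
    # Draw a bootstrap sample with replacement; a model would be trained on it.
    idx = rng.integers(0, n_samples, size=n_samples)
    in_bag = np.zeros(n_samples, dtype=bool)
    in_bag[idx] = True
    # Points never drawn are out-of-bag: they can score this model
    # without having influenced its training.
    oob_masks.append(~in_bag)

oob_masks = np.array(oob_masks)  # shape (n_bootstraps, n_samples)
```

Each sample's calibration score is then an aggregate (mean or median, per `aggregation_method`) of its scores under the models for which it is out-of-bag.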
calibration_ids
property
¶
Indices used for calibration (all samples in JaB+).
aggregation_method
property
¶
Aggregation method for OOB predictions.
fit_calibrate ¶
fit_calibrate(
x: DataFrame | ndarray,
detector: AnomalyDetector,
seed: int | None = None,
weighted: bool = False,
n_jobs: int | None = None,
) -> tuple[list[AnomalyDetector], np.ndarray]
Fit and calibrate using JaB+ method.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | DataFrame \| ndarray | Input data matrix. | required |
| `detector` | AnomalyDetector | The base anomaly detector. | required |
| `seed` | int \| None | Random seed for reproducibility. Defaults to None. | None |
| `weighted` | bool | Not used in JaB+. Defaults to False. | False |
| `n_jobs` | int \| None | Number of parallel jobs. Use -1 for all available cores. Defaults to None (sequential). | None |
Returns:
| Type | Description |
|---|---|
| tuple[list[AnomalyDetector], ndarray] | Tuple of (list of trained detectors, calibration scores array). |
Source code in nonconform/resampling.py
P-Value Estimation¶
nonconform.scoring ¶
P-value estimation strategies for conformal prediction.
This module provides strategies for computing p-values from calibration scores.
Classes:
| Name | Description |
|---|---|
| `BaseEstimation` | Abstract base class for p-value estimation. |
| `Empirical` | Classical empirical p-value estimation using discrete CDF. |
| `ConditionalEmpirical` | Conditionally calibrated empirical p-values. |
| `Probabilistic` | KDE-based probabilistic p-value estimation. |
Kernel ¶
Bases: Enum
Kernel functions for KDE-based p-value computation.
Attributes:
| Name | Description |
|---|---|
| `GAUSSIAN` | Gaussian (normal) kernel. |
| `EXPONENTIAL` | Exponential kernel. |
| `BOX` | Box (uniform) kernel. |
| `TRIANGULAR` | Triangular kernel. |
| `EPANECHNIKOV` | Epanechnikov kernel. |
| `BIWEIGHT` | Biweight (quartic) kernel. |
| `TRIWEIGHT` | Triweight kernel. |
| `TRICUBE` | Tricube kernel. |
| `COSINE` | Cosine kernel. |
BaseEstimation ¶
Bases: ABC
Abstract base for p-value estimation strategies.
compute_p_values
abstractmethod
¶
compute_p_values(
scores: ndarray,
calibration_set: ndarray,
weights: tuple[ndarray, ndarray] | None = None,
) -> np.ndarray
Compute p-values for test scores.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `scores` | ndarray | Test instance anomaly scores (1D array). | required |
| `calibration_set` | ndarray | Calibration anomaly scores (1D array). | required |
| `weights` | tuple[ndarray, ndarray] \| None | Optional (w_calib, w_test) tuple for weighted conformal. | None |
Returns:
| Type | Description |
|---|---|
| ndarray | Array of p-values for each test instance. |
Source code in nonconform/scoring.py
get_metadata ¶
set_seed ¶
Set random seed for reproducibility.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `seed` | int \| None | Random seed value or None. | required |
Empirical ¶
Bases: BaseEstimation
Classical empirical p-value estimation using discrete CDF.
Computes p-values using deterministic tie handling by default. Optionally supports randomized smoothing to eliminate the resolution floor caused by discrete ties (Jin & Candes 2023).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `tie_break` | TieBreakModeInput | Tie-breaking strategy ("classical" or "randomized"). Defaults to "classical". | 'classical' |
Examples:
estimation = Empirical() # tie_break="classical" by default
p_values = estimation.compute_p_values(test_scores, calib_scores)
# For randomized smoothing:
estimation = Empirical(tie_break="randomized")
Source code in nonconform/scoring.py
set_seed ¶
compute_p_values ¶
compute_p_values(
scores: ndarray,
calibration_set: ndarray,
weights: tuple[ndarray, ndarray] | None = None,
) -> np.ndarray
Compute empirical p-values from calibration set.
Source code in nonconform/scoring.py
ConditionalEmpirical ¶
ConditionalEmpirical(
*,
delta: float = 0.05,
method: str | ConditionalCalibrationMethod = "mc",
tie_break: TieBreakModeInput = "classical",
simes_kden: int = 2,
mc_num_simulations: int = 10000,
)
Bases: Empirical
Conditionally calibrated empirical conformal p-values (CCCPV).
This estimator first computes classical empirical conformal p-values and then applies a finite-sample calibration map:
.. math:: p_j = \frac{1 + \sum_{i=1}^{n_{\text{cal}}}\mathbf{1}[s_i \ge s_j]} {n_{\text{cal}} + 1}, \qquad \tilde p_j = C_{n_{\text{cal}},\delta}(p_j).
Supported calibration maps are "mc", "simes", "dkwm", and
"asymptotic".
References
Bates et al. (2023), Testing for outliers with conformal p-values. Reference implementation: https://github.com/msesia/conditional-conformal-pvalues
Note
Weighted conformal p-values are intentionally not supported in this
first release of ConditionalEmpirical.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `delta` | float | Confidence level used by the conditional calibration map. Must be in (0, 1). Defaults to 0.05. | 0.05 |
| `method` | str \| ConditionalCalibrationMethod | Conditional calibration method. One of "mc", "simes", "dkwm", or "asymptotic". | 'mc' |
| `tie_break` | TieBreakModeInput | Tie-breaking strategy used for base empirical p-values ("classical" or "randomized"). | 'classical' |
| `simes_kden` | int | Denominator used to derive the Simes parameter from the calibration size. Defaults to 2. | 2 |
| `mc_num_simulations` | int | Monte Carlo sample size used to estimate the finite-sample correction for "mc". Defaults to 10000. | 10000 |
Source code in nonconform/scoring.py
set_seed ¶
Set random seed for reproducibility.
compute_p_values ¶
compute_p_values(
scores: ndarray,
calibration_set: ndarray,
weights: tuple[ndarray, ndarray] | None = None,
) -> np.ndarray
Compute conditionally calibrated conformal p-values.
Source code in nonconform/scoring.py
Probabilistic ¶
Probabilistic(
kernel: Kernel | Sequence[Kernel] = Kernel.GAUSSIAN,
n_trials: int = 100,
cv_folds: int = -1,
)
Bases: BaseEstimation
KDE-based probabilistic p-value estimation with continuous values.
Provides smooth p-values in [0,1] via kernel density estimation. Supports automatic hyperparameter tuning and weighted conformal prediction. In weighted mode, only calibration weights are applied to the KDE; test weights are intentionally not injected into the survival calculation so p-values can reach 0. This avoids the lower bound w_test / (sum_calib_weight + w_test) that the discrete weighted formula would impose.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `kernel` | Kernel \| Sequence[Kernel] | Kernel function or list (a list triggers kernel tuning). Bandwidth is always auto-tuned. Defaults to Kernel.GAUSSIAN. | GAUSSIAN |
| `n_trials` | int | Number of Optuna trials for tuning. Defaults to 100. | 100 |
| `cv_folds` | int | CV folds for tuning (-1 for leave-one-out). Defaults to -1. | -1 |
Examples:
# Basic usage
estimation = Probabilistic()
p_values = estimation.compute_p_values(test_scores, calib_scores)
# With custom kernel
estimation = Probabilistic(kernel=Kernel.EPANECHNIKOV)
Source code in nonconform/scoring.py
compute_p_values ¶
compute_p_values(
scores: ndarray,
calibration_set: ndarray,
weights: tuple[ndarray, ndarray] | None = None,
) -> np.ndarray
Compute continuous p-values using KDE.
Lazy fitting: tunes and fits KDE on first call or when calibration changes. Note: When weights are provided, this estimator uses only calibration weights to shape the KDE. Test weights are accepted for API parity but do not set a positive lower bound on p-values.
Source code in nonconform/scoring.py
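The survival-function view of KDE-based p-values can be sketched with plain numpy and a fixed bandwidth. This is an illustration of the principle only; `Probabilistic` auto-tunes bandwidth (and optionally the kernel) rather than taking a fixed value, and the function name `kde_p_values` is hypothetical:

```python
import math
import numpy as np

def kde_p_values(scores, calib, bandwidth=0.1):
    # P(calibration score >= test score) under a Gaussian KDE fitted to the
    # calibration scores: a continuous analogue of the empirical p-value.
    def sf(z):
        # Standard normal survival function.
        return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
    return np.array(
        [np.mean([sf((s - c) / bandwidth) for c in calib]) for s in scores]
    )

p = kde_p_values([0.0, 1.0], calib=[0.0], bandwidth=0.5)
# A score at the center of the calibration mass gets p = 0.5;
# a score far above it gets a much smaller, but still continuous, p-value.
```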
get_metadata ¶
Return KDE metadata after p-value computation.
Source code in nonconform/scoring.py
calculate_p_val ¶
calculate_p_val(
scores: ndarray,
calibration_set: ndarray,
tie_break: TieBreakModeInput = "classical",
rng: Generator | None = None,
) -> np.ndarray
Calculate empirical p-values (standalone function).
Uses classical deterministic tie handling by default. Optionally supports randomized smoothing to eliminate the resolution floor caused by discrete ties (Jin & Candes 2023).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `scores` | ndarray | Test instance anomaly scores (1D array). | required |
| `calibration_set` | ndarray | Calibration anomaly scores (1D array). | required |
| `tie_break` | TieBreakModeInput | Tie-breaking strategy for equal scores ("classical" or "randomized"). | 'classical' |
| `rng` | Generator \| None | Optional random number generator for reproducibility. | None |
Returns:
| Type | Description |
|---|---|
| ndarray | Array of p-values for each test instance. |
Source code in nonconform/scoring.py
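The classical formula behind this function is simple enough to verify by hand. A minimal numpy sketch (not the library's implementation) of the conservative empirical p-value `p = (1 + #{calib >= score}) / (n_calib + 1)`:

```python
import numpy as np

def empirical_p_values(scores, calibration_set):
    # Conservative empirical conformal p-value:
    # p = (1 + #{calibration scores >= test score}) / (n_calib + 1)
    scores = np.asarray(scores, dtype=float)
    calib = np.asarray(calibration_set, dtype=float)
    counts = (calib[None, :] >= scores[:, None]).sum(axis=1)
    return (1.0 + counts) / (calib.size + 1.0)

calib = np.array([0.1, 0.2, 0.3, 0.4])
p = empirical_p_values(np.array([0.35, 0.05]), calib)
# score 0.35: one calibration score (0.4) is >= it -> (1 + 1) / 5 = 0.4
# score 0.05: all four are >= it -> (1 + 4) / 5 = 1.0
```

The `+1` terms are what make the p-value valid in finite samples; they also impose the discrete resolution floor of `1 / (n_calib + 1)` that the randomized tie-break option removes.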
calculate_weighted_p_val ¶
calculate_weighted_p_val(
scores: ndarray,
calibration_set: ndarray,
test_weights: ndarray,
calib_weights: ndarray,
tie_break: TieBreakModeInput = "classical",
rng: Generator | None = None,
) -> np.ndarray
Calculate weighted empirical p-values (standalone function).
Uses classical deterministic tie handling by default. Optionally supports randomized smoothing to eliminate the resolution floor caused by discrete ties (Jin & Candes 2023).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `scores` | ndarray | Test instance anomaly scores (1D array). | required |
| `calibration_set` | ndarray | Calibration anomaly scores (1D array). | required |
| `test_weights` | ndarray | Test instance weights (1D array). | required |
| `calib_weights` | ndarray | Calibration weights (1D array). | required |
| `tie_break` | TieBreakModeInput | Tie-breaking strategy for equal scores ("classical" or "randomized"). | 'classical' |
| `rng` | Generator \| None | Optional random number generator for reproducibility. | None |
Returns:
| Type | Description |
|---|---|
| ndarray | Array of weighted p-values for each test instance. |
Note
Including test_weights in the numerator/denominator implies a positive lower bound of test_weights / (sum(calib_weights) + test_weights) when there is no calibration mass above the test score.
Source code in nonconform/scoring.py
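The weighted formula and its lower bound from the Note above can be checked with a small numpy sketch (an illustration, not the library's implementation; the helper name is hypothetical):

```python
import numpy as np

def weighted_p_value(score, calib_scores, calib_weights, test_weight):
    # Weighted empirical p-value: calibration weight mass at or above the
    # test score, plus the test point's own weight, over the total weight.
    mass_above = calib_weights[calib_scores >= score].sum()
    return (mass_above + test_weight) / (calib_weights.sum() + test_weight)

calib = np.array([0.1, 0.2, 0.3, 0.4])
w = np.ones(4)

p = weighted_p_value(0.35, calib, w, test_weight=1.0)       # (1 + 1) / 5 = 0.4
# With no calibration mass above the score, p hits the documented floor
# test_weight / (sum(calib_weights) + test_weight) = 1 / 5 = 0.2:
p_floor = weighted_p_value(0.9, calib, w, test_weight=1.0)
```

With uniform weights this reduces to the standard discrete formula, which is why the positive lower bound appears only when `test_weights` enter the numerator.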
Weight Estimation¶
nonconform.weighting ¶
Weight estimation for covariate shift correction in weighted conformal prediction.
This module provides weight estimators that compute importance weights to correct for covariate shift between calibration and test distributions. They estimate density ratios w(x) = p_test(x) / p_calib(x) which are used to reweight conformal scores for better coverage guarantees under distribution shift.
Classes:
| Name | Description |
|---|---|
| `BaseWeightEstimator` | Abstract base class for weight estimators. |
| `IdentityWeightEstimator` | Returns uniform weights (no covariate shift). |
| `SklearnWeightEstimator` | Universal wrapper for sklearn probabilistic classifiers. |
| `BootstrapBaggedWeightEstimator` | Bootstrap-bagged wrapper for robust estimation. |
Factory functions:

- `logistic_weight_estimator`: Create estimator using Logistic Regression.
- `forest_weight_estimator`: Create estimator using Random Forest.
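The probabilistic-classifier route to density ratios can be sketched as follows. This is an illustrative standalone recipe, not nonconform's exact implementation; the class-prior correction factor and the clipping bound are assumptions of this sketch:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classifier_density_ratio(x_calib, x_test):
    # Train a classifier to distinguish calibration (label 0) from
    # test (label 1) samples, then turn P(test | x) into a density ratio.
    X = np.vstack([x_calib, x_test])
    y = np.concatenate([np.zeros(len(x_calib)), np.ones(len(x_test))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p_test = np.clip(clf.predict_proba(X)[:, 1], 1e-6, 1 - 1e-6)
    # w(x) = p_test(x) / p_calib(x) = P(1|x) / P(0|x) * (n_calib / n_test)
    w = p_test / (1.0 - p_test) * (len(x_calib) / len(x_test))
    return w[: len(x_calib)], w[len(x_calib):]
```

When calibration and test data come from the same distribution, the classifier cannot separate them and the estimated weights concentrate around 1, recovering standard conformal behavior.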
ProbabilisticClassifier ¶
Bases: Protocol
Protocol for classifiers that support probability estimation.
This protocol defines the interface for sklearn-compatible classifiers that can produce probability estimates for weight computation.
fit ¶
Fit the classifier on training data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | `ndarray` | Feature matrix of shape (n_samples, n_features). | required |
| `y` | `ndarray` | Target labels of shape (n_samples,). | required |

Returns:

| Type | Description |
|---|---|
| `ProbabilisticClassifier` | The fitted classifier instance. |
Source code in nonconform/weighting.py
predict_proba ¶
Return probability estimates for samples.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | `ndarray` | Feature matrix of shape (n_samples, n_features). | required |

Returns:

| Type | Description |
|---|---|
| `ndarray` | Probability estimates of shape (n_samples, n_classes). |
BaseWeightEstimator ¶
Bases: ABC
Abstract base class for weight estimators in weighted conformal prediction.
Weight estimators compute importance weights to correct for covariate shift between calibration and test distributions. They estimate density ratios w(x) = p_test(x) / p_calib(x) which are used to reweight conformal scores for better coverage guarantees under distribution shift.
Subclasses must implement fit(), _get_stored_weights(), and _score_new_data() to provide specific weight estimation strategies.
fit
abstractmethod
¶
get_weights ¶
get_weights(
calibration_samples: ndarray | None = None,
test_samples: ndarray | None = None,
) -> tuple[np.ndarray, np.ndarray]
Return density ratio weights for calibration and test data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `calibration_samples` | `ndarray \| None` | Optional calibration data to score. If provided, computes weights for this data using the fitted model; if None, returns stored weights from `fit()`. Must provide both or neither. | `None` |
| `test_samples` | `ndarray \| None` | Optional test data to score, with the same semantics as `calibration_samples`. | `None` |

Returns:

| Type | Description |
|---|---|
| `tuple[ndarray, ndarray]` | Tuple of (calibration_weights, test_weights) as numpy arrays. |

Raises:

| Type | Description |
|---|---|
| `NotFittedError` | If `fit()` has not been called. |
| `ValueError` | If only one of `calibration_samples`/`test_samples` is provided. |
Source code in nonconform/weighting.py
set_seed ¶
Set random seed for reproducibility.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `seed` | `int \| None` | Random seed value or None. | required |
IdentityWeightEstimator ¶
Bases: BaseWeightEstimator
Identity weight estimator that returns uniform weights.
This estimator assumes no covariate shift and returns weights of 1.0 for all samples. Useful as a baseline or when covariate shift is known to be minimal.
This effectively makes weighted conformal prediction equivalent to standard conformal prediction.
Source code in nonconform/weighting.py
fit ¶
Fit the identity weight estimator.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `calibration_samples` | `ndarray` | Array of calibration data samples. | required |
| `test_samples` | `ndarray` | Array of test data samples. | required |
Source code in nonconform/weighting.py
SklearnWeightEstimator ¶
SklearnWeightEstimator(
base_estimator: ProbabilisticClassifier
| BaseEstimator
| None = None,
clip_quantile: float | None = 0.05,
)
Bases: BaseWeightEstimator
Universal wrapper for any sklearn-compatible probabilistic classifier.
Adheres to the standard sklearn 'Meta-Estimator' pattern. Accepts a configured estimator instance and clones it for cross-validation safety.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `base_estimator` | `ProbabilisticClassifier \| BaseEstimator \| None` | Configured sklearn classifier instance with `predict_proba` support. Defaults to LogisticRegression. | `None` |
| `clip_quantile` | `float \| None` | Quantile for weight clipping (e.g., 0.05 clips to the 5th-95th percentile). Use None to disable clipping. | `0.05` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `base_estimator` does not implement `predict_proba`. |
Examples:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Default (LogisticRegression)
estimator = SklearnWeightEstimator()

# Custom with pipeline
estimator = SklearnWeightEstimator(
    base_estimator=make_pipeline(
        StandardScaler(), LogisticRegression(C=1.0, class_weight="balanced")
    )
)

# Random Forest
estimator = SklearnWeightEstimator(
    base_estimator=RandomForestClassifier(n_estimators=100, max_depth=5)
)
```
Source code in nonconform/weighting.py
fit ¶
Fit the weight estimator on calibration and test samples.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `calibration_samples` | `ndarray` | Array of calibration data samples. | required |
| `test_samples` | `ndarray` | Array of test data samples. | required |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `calibration_samples` is empty. |
Source code in nonconform/weighting.py
BootstrapBaggedWeightEstimator ¶
BootstrapBaggedWeightEstimator(
base_estimator: BaseWeightEstimator,
n_bootstraps: int = 100,
clip_quantile: float | None = 0.05,
scoring_mode: Literal["frozen"] = "frozen",
)
Bases: BaseWeightEstimator
Bootstrap-bagged wrapper for weight estimators with instance-wise aggregation.
This estimator wraps any base weight estimator and applies bootstrap bagging to create more stable, robust weight estimates. It's most relevant when the calibration set is much larger than the test batch (or vice versa), where standalone weights can become spiky and unstable.
The algorithm:

1. For each bootstrap iteration:
    - Resample BOTH sets to a balanced sample size (the minimum of the calibration and test sizes).
    - Fit the base estimator on the balanced bootstrap sample.
    - Score ALL original instances using the fitted model (perfect coverage).
    - Store `log(weights)` for each instance.
2. After all iterations:
    - Aggregate instance-wise weights using the geometric mean (average in log-space).
    - Apply clipping to maintain boundedness for theoretical guarantees.
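The log-space aggregation in step 2 amounts to a per-instance geometric mean across bootstrap rounds. A minimal sketch of just that aggregation step (`aggregate_bootstrap_weights` is a hypothetical helper, assuming every round produced strictly positive weights for every original instance):

```python
import numpy as np

def aggregate_bootstrap_weights(weight_rounds):
    """Geometric mean across bootstrap rounds, computed in log-space.

    weight_rounds: array-like of shape (n_bootstraps, n_instances), all > 0.
    """
    log_w = np.log(np.asarray(weight_rounds, dtype=float))
    # Averaging logs, then exponentiating, is the geometric mean.
    return np.exp(log_w.mean(axis=0))
```

Averaging in log-space damps the spiky, heavy-tailed behavior of raw density-ratio estimates far more effectively than an arithmetic mean would.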
Seed inheritance

This class uses the `_seed` attribute pattern for automatic seed inheritance from `ConformalDetector`.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `base_estimator` | `BaseWeightEstimator` | Any BaseWeightEstimator instance. | required |
| `n_bootstraps` | `int` | Number of bootstrap iterations. | `100` |
| `clip_quantile` | `float \| None` | Quantile for adaptive clipping. Use None to disable clipping. | `0.05` |
| `scoring_mode` | `Literal['frozen']` | Weight scoring behavior after fit. Currently only `'frozen'` is supported. | `'frozen'` |
References
Jin, Ying, and Emmanuel J. Candès. "Selection by Prediction with Conformal p-values." Journal of Machine Learning Research 24.244 (2023): 1-41.
Source code in nonconform/weighting.py
supports_rescoring
property
¶
Whether this estimator can score arbitrary new batches after fit().
weight_counts
property
¶
Return diagnostic info about instance-wise weight coverage.
fit ¶
Fit the bagged weight estimator with perfect instance coverage.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `calibration_samples` | `ndarray` | Array of calibration data samples. | required |
| `test_samples` | `ndarray` | Array of test data samples. | required |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `calibration_samples` is empty. |
Source code in nonconform/weighting.py
logistic_weight_estimator ¶
logistic_weight_estimator(
regularization: str | float = "auto",
clip_quantile: float = 0.05,
class_weight: str | dict = "balanced",
max_iter: int = 1000,
) -> SklearnWeightEstimator
Create weight estimator using Logistic Regression.
This factory function provides behavioral equivalence with the old LogisticWeightEstimator class.
Note
When used with ConformalDetector, the detector's seed is automatically propagated to the weight estimator for reproducibility.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `regularization` | `str \| float` | Regularization parameter. If 'auto', uses C=1.0; if float, used as the C parameter. | `'auto'` |
| `clip_quantile` | `float` | Quantile for weight clipping. | `0.05` |
| `class_weight` | `str \| dict` | Class weights for LogisticRegression. | `'balanced'` |
| `max_iter` | `int` | Maximum iterations for solver convergence. | `1000` |

Returns:

| Type | Description |
|---|---|
| `SklearnWeightEstimator` | Configured SklearnWeightEstimator instance. |
Examples:

```python
estimator = logistic_weight_estimator(regularization=0.5)
estimator.fit(calib_samples, test_samples)
w_calib, w_test = estimator.get_weights()
```
Source code in nonconform/weighting.py
forest_weight_estimator ¶
forest_weight_estimator(
n_estimators: int = 100,
max_depth: int | None = 5,
min_samples_leaf: int = 10,
clip_quantile: float = 0.05,
) -> SklearnWeightEstimator
Create weight estimator using Random Forest.
This factory function provides behavioral equivalence with the old ForestWeightEstimator class.
Note
When used with ConformalDetector, the detector's seed is automatically propagated to the weight estimator for reproducibility.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `n_estimators` | `int` | Number of trees in the forest. | `100` |
| `max_depth` | `int \| None` | Maximum depth of trees. | `5` |
| `min_samples_leaf` | `int` | Minimum samples at a leaf node. | `10` |
| `clip_quantile` | `float` | Quantile for weight clipping. | `0.05` |

Returns:

| Type | Description |
|---|---|
| `SklearnWeightEstimator` | Configured SklearnWeightEstimator instance. |
Examples:

```python
estimator = forest_weight_estimator(n_estimators=200)
estimator.fit(calib_samples, test_samples)
w_calib, w_test = estimator.get_weights()
```
Source code in nonconform/weighting.py
FDR Control¶
Includes low-level expert APIs for weighted selection (`weighted_false_discovery_control`).
For standard workflows, prefer `ConformalDetector.select(...)`.
nonconform.fdr ¶
False Discovery Rate control utilities for conformal prediction.
This module provides explicit entry points for:

- Weighted Conformalized Selection (WCS) under covariate shift.
Pruning ¶
Bases: Enum
Pruning strategies for weighted FDR control.
Attributes:

| Name | Type | Description |
|---|---|---|
| `HETEROGENEOUS` | | Remove elements based on independent random checks per item. |
| `HOMOGENEOUS` | | Apply one shared random decision to all items. |
| `DETERMINISTIC` | | Remove items using a fixed rule with no randomness. |
weighted_false_discovery_control ¶
weighted_false_discovery_control(
result: ConformalResult | None,
*,
alpha: float = 0.05,
pruning: Pruning = Pruning.DETERMINISTIC,
seed: int | None = None,
) -> np.ndarray
Perform WCS from a strict ConformalResult bundle.
Source code in nonconform/fdr.py
weighted_false_discovery_control_from_arrays ¶
weighted_false_discovery_control_from_arrays(
*,
p_values: ndarray,
test_scores: ndarray,
calib_scores: ndarray,
test_weights: ndarray,
calib_weights: ndarray,
alpha: float = 0.05,
pruning: Pruning = Pruning.DETERMINISTIC,
seed: int | None = None,
) -> np.ndarray
Perform WCS from explicit weighted arrays and precomputed p-values.
Source code in nonconform/fdr.py
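For intuition: in the unweighted case with deterministic pruning, the selection step behaves like the Benjamini-Hochberg procedure applied to the conformal p-values. A plain BH sketch (not the library's WCS implementation, which additionally handles weights and pruning):

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    # Find the largest k with p_(k) <= alpha * k / m, then reject
    # the k smallest p-values; FDR is controlled at level alpha.
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = int(np.nonzero(below)[0].max() + 1) if below.any() else 0
    return np.sort(order[:k])  # indices of selected (rejected) hypotheses
```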
Martingales¶
nonconform.martingales ¶
Exchangeability martingales for sequential conformal evidence.
This module implements p-value-based martingales and alarm statistics for streaming or temporal monitoring workflows. In practice, you feed one conformal p-value at a time and read a running evidence state after each update.
Implemented martingales:

- PowerMartingale
- SimpleMixtureMartingale
- SimpleJumperMartingale

All classes consume conformal p-values in [0, 1]. Alarm statistics are computed from martingale ratio increments and exposed together with the current martingale value in `MartingaleState`.
AlarmConfig
dataclass
¶
AlarmConfig(
ville_threshold: float | None = None,
cusum_threshold: float | None = None,
shiryaev_roberts_threshold: float | None = None,
)
Optional alarm thresholds for martingale evidence statistics.

Thresholds are disabled when set to None. Each threshold compares against a running statistic in `MartingaleState`.
MartingaleState
dataclass
¶
MartingaleState(
step: int,
p_value: float,
log_martingale: float,
martingale: float,
log_cusum: float,
cusum: float,
log_shiryaev_roberts: float,
shiryaev_roberts: float,
triggered_alarms: tuple[str, ...],
)
Snapshot of martingale and alarm statistics after one update.
BaseMartingale ¶
Bases: ABC
Abstract base class for p-value-driven exchangeability martingales.
Source code in nonconform/martingales.py
reset ¶
Reset martingale and alarm statistics to initial values.
Source code in nonconform/martingales.py
update_many ¶
Update state for each p-value in order and return all snapshots.
update ¶
Ingest one p-value in [0, 1] and return the updated state.
Source code in nonconform/martingales.py
PowerMartingale ¶
Bases: BaseMartingale
Power martingale with fixed epsilon in (0, 1].
Source code in nonconform/martingales.py
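As a sketch of the underlying recurrence (illustrative, not the library class): the power martingale multiplies its value by the betting increment ε · p^(ε−1) at each step, tracked in log-space for numerical stability.

```python
import math

class PowerMartingaleSketch:
    def __init__(self, epsilon=0.5):
        assert 0.0 < epsilon <= 1.0
        self.epsilon = epsilon
        self.log_m = 0.0  # log M_0 = log 1

    def update(self, p_value):
        # Multiply by epsilon * p ** (epsilon - 1), in log-space.
        p = min(max(p_value, 1e-12), 1.0)  # guard against log(0)
        self.log_m += math.log(self.epsilon) + (self.epsilon - 1.0) * math.log(p)
        return math.exp(self.log_m)
```

Small p-values (evidence against exchangeability) grow the martingale, while under exchangeability its expectation stays at 1, which is what makes Ville-style alarm thresholds valid.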
SimpleMixtureMartingale ¶
SimpleMixtureMartingale(
epsilons: Sequence[float] | ndarray | None = None,
*,
n_grid: int = 100,
min_epsilon: float = 0.01,
alarm_config: AlarmConfig | None = None,
)
Bases: BaseMartingale
Simple mixture martingale over a fixed epsilon grid.
Source code in nonconform/martingales.py
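The mixture averages power martingales across an ε grid, which removes the need to commit to a single ε in advance. A vectorized sketch under that interpretation (the grid defaults mirror the `n_grid`/`min_epsilon` parameters above, but this is not the library implementation):

```python
import numpy as np

def simple_mixture_martingale(p_values, epsilons=None):
    # Running value of the average of power martingales over an epsilon grid.
    if epsilons is None:
        epsilons = np.linspace(0.01, 1.0, 100)
    eps = np.asarray(epsilons, dtype=float)[:, None]        # shape (k, 1)
    p = np.clip(np.asarray(p_values, dtype=float), 1e-12, 1.0)[None, :]
    log_inc = np.log(eps) + (eps - 1.0) * np.log(p)         # per-step increments
    per_eps = np.exp(np.cumsum(log_inc, axis=1))            # each power martingale
    return per_eps.mean(axis=0)                             # mixture after each step
```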
SimpleJumperMartingale ¶
Bases: BaseMartingale
Simple Jumper martingale (Algorithm 1 in Vovk et al.).
This method mixes three betting components and redistributes mass between them at each step according to a jump rate.
Source code in nonconform/martingales.py
Data Structures¶
nonconform.structures ¶
Core data structures and protocols for nonconform.
This module provides the fundamental types used throughout the package:
Classes:

| Name | Description |
|---|---|
| `AnomalyDetector` | Protocol defining the detector interface. |
| `ConformalResult` | Container for conformal prediction outputs. |
AnomalyDetector ¶
Bases: Protocol
Protocol defining the interface for anomaly detectors.
Any detector (PyOD, sklearn-compatible, or custom) can be used with nonconform by implementing this protocol.
Required methods:

- `fit`: Train the detector on data.
- `decision_function`: Compute anomaly scores.
- `get_params`: Retrieve detector parameters.
- `set_params`: Configure detector parameters.
The detector must be copyable (support copy.copy and copy.deepcopy).
Examples:

```python
# Most PyOD detectors work automatically (blocked strict-inductive
# exceptions are documented in the detector compatibility guide)
from pyod.models.iforest import IForest

detector: AnomalyDetector = IForest()

# Custom detector implementing the protocol
class MyDetector:
    def fit(self, X, y=None): ...
    def decision_function(self, X): ...
    def get_params(self, deep=True): ...
    def set_params(self, **params): ...

detector: AnomalyDetector = MyDetector()
```
fit ¶
Train the anomaly detector.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | `ndarray` | Training data of shape (n_samples, n_features). | required |
| `y` | `ndarray \| None` | Ignored. Present for API consistency. | `None` |

Returns:

| Type | Description |
|---|---|
| `Self` | The fitted detector instance. |
Source code in nonconform/structures.py
decision_function ¶
Compute anomaly scores for samples.
Higher scores indicate more anomalous samples.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | `ndarray` | Data of shape (n_samples, n_features). | required |

Returns:

| Type | Description |
|---|---|
| `ndarray` | Anomaly scores of shape (n_samples,). |
Source code in nonconform/structures.py
get_params ¶
Get parameters for this detector.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `deep` | `bool` | If True, return parameters for sub-objects. | `True` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | Parameter names mapped to their values. |
set_params ¶
Set parameters for this detector.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `**params` | `Any` | Detector parameters. | `{}` |

Returns:

| Type | Description |
|---|---|
| `Self` | The detector instance. |
ConformalResult
dataclass
¶
ConformalResult(
p_values: ndarray | None = None,
test_scores: ndarray | None = None,
calib_scores: ndarray | None = None,
test_weights: ndarray | None = None,
calib_weights: ndarray | None = None,
metadata: dict[str, Any] = dict(),
)
Snapshot of detector outputs for downstream procedures.
This dataclass holds all outputs from a conformal prediction, including p-values, raw scores, and optional weights for weighted conformal.
Attributes:

| Name | Type | Description |
|---|---|---|
| `p_values` | `ndarray \| None` | Conformal p-values for test instances (None when unavailable). |
| `test_scores` | `ndarray \| None` | Non-conformity scores for the test instances (raw predictions). |
| `calib_scores` | `ndarray \| None` | Non-conformity scores for the calibration set. |
| `test_weights` | `ndarray \| None` | Importance weights for test instances (weighted mode only). |
| `calib_weights` | `ndarray \| None` | Importance weights for calibration instances. |
| `metadata` | `dict[str, Any]` | Optional dictionary with extra data (debug info, timings, etc.). |
Examples:

```python
p_values = detector.compute_p_values(X_test)
result = detector.last_result
print(result.p_values)  # Access p-values
print(result.metadata)  # Access optional metadata
```
copy ¶
Return a copy with arrays and metadata fully duplicated.
Returns:

| Type | Description |
|---|---|
| `ConformalResult` | A new ConformalResult with copied arrays and deep-copied metadata. |
Source code in nonconform/structures.py
Adapters¶
nonconform.adapters ¶
External detector adapters for nonconform.
ScorePolarityAdapter ¶
Adapter that normalizes detector score direction conventions.
Source code in nonconform/adapters.py
PyODAdapter ¶
adapt ¶
Adapt a detector to the AnomalyDetector protocol.
Source code in nonconform/adapters.py
parse_score_polarity ¶
Parse score polarity input to canonical enum representation.
Source code in nonconform/adapters.py
resolve_implicit_score_polarity ¶
Resolve score polarity when users omit score_polarity.
This pre-release default favors low-friction custom detector onboarding while preserving safe behavior for known detector families:

- Known sklearn normality detectors -> `HIGHER_IS_NORMAL`
- PyOD detectors -> `HIGHER_IS_ANOMALOUS`
- Unknown custom detectors -> `HIGHER_IS_ANOMALOUS`
Source code in nonconform/adapters.py
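At its core, polarity normalization is a sign convention: nonconform's detectors report higher scores as more anomalous, while sklearn-style normality scorers use the opposite sign. A minimal sketch of that conversion (`normalize_polarity` is a hypothetical helper; nonconform's adapter additionally wraps the detector object itself):

```python
import numpy as np

def normalize_polarity(scores, higher_is_anomalous):
    # Convert any detector's score convention to "higher = more anomalous".
    # sklearn-style normality scores (higher = more normal) are negated.
    scores = np.asarray(scores, dtype=float)
    return scores if higher_is_anomalous else -scores
```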
resolve_score_polarity ¶
Resolve requested score polarity in strict AUTO mode.
Unlike `resolve_implicit_score_polarity`, this function is intentionally strict for explicit `score_polarity="auto"` and raises for unknown detectors.
Source code in nonconform/adapters.py
apply_score_polarity ¶
apply_score_polarity(
detector: AnomalyDetector,
score_polarity: ScorePolarityInput,
) -> AnomalyDetector
Return detector that follows requested score polarity convention.