Hypothesis tests of calibration.
This package implements various hypothesis tests for the calibration of probabilistic models in the Julia language.
CalibrationErrors.jl contains estimators of calibration errors for classification models. CalibrationErrorsDistributions.jl extends them to more general probabilistic predictive models that output arbitrary probability distributions.
pycalibration and rcalibration are Python and R interfaces, respectively, for CalibrationErrors.jl, CalibrationErrorsDistributions.jl, and CalibrationTests.jl.
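As a rough illustration of how such a calibration test might be used, the sketch below runs a kernel-based test on toy classification data. The names AsymptoticSKCETest and the kernel construction are assumptions drawn from the terminology of the cited papers and from KernelFunctions.jl; they may not match the released API exactly.

```julia
# Hedged usage sketch; AsymptoticSKCETest and the kernel setup are
# assumptions based on the cited papers, not a verified API reference.
using CalibrationTests
using KernelFunctions  # assumed source of the kernel types

# Toy data: predicted probability vectors over 3 classes and true labels.
predictions = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.3, 0.5]]
targets = [1, 2, 3]

# Kernel-based calibration test built on the squared kernel
# calibration error (SKCE) from the NeurIPS 2019 paper.
kernel = ExponentialKernel() ∘ ScaleTransform(1.0)
test = AsymptoticSKCETest(kernel, predictions, targets)

# The test types integrate with HypothesisTests.jl, so a p-value
# for the null hypothesis "the model is calibrated" can be extracted.
pvalue(test)
```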
If you use CalibrationTests.jl as part of your research, teaching, or other activities, please consider citing the following publications:
Widmann, D., Lindsten, F., & Zachariah, D. (2019). Calibration tests in multi-class classification: A unifying framework. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019) (pp. 12257–12267).
Widmann, D., Lindsten, F., & Zachariah, D. (2021). Calibration tests beyond classification. In International Conference on Learning Representations (ICLR 2021).