A survey on confidence calibration of deep learning under class imbalance data
  • Jinzong Dong,
  • Zhaohui Jiang (Corresponding Author),
  • Dong Pan,
  • Zhiwen Chen,
  • Qingyi Guan,
  • Hongbin Zhang,
  • Gui Gui,
  • Weihua Gui

Confidence calibration of classification models, i.e., producing accurate posterior probability estimates for classification results, is crucial for assessing the likelihood that a decision is correct in real-world applications. Class-imbalanced data makes confidence calibration more challenging: it biases model learning and thereby skews the model's posterior probabilities. Calibration is especially complex, and especially necessary, for minority classes, which are often the more important ones and carry high uncertainty. Unlike previous surveys that investigate confidence calibration or class imbalance separately, this paper comprehensively surveys confidence calibration methods for deep learning-based classification models under class imbalance. Firstly, the problem of confidence calibration under class-imbalanced data is outlined. Secondly, a novel exploratory analysis of the impact of class imbalance on confidence calibration is carried out, which can explain some experimental findings in existing studies. This paper then conducts a comprehensive review of 57 state-of-the-art confidence calibration methods under class-imbalanced data, divides these methods into six groups according to methodological differences, and systematically compares seven properties to evaluate their relative strengths. Subsequently, some commonly used and emerging evaluation methods in this field are summarized. Finally, we discuss several promising research directions that may serve as a guideline for future studies.
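To make the notion of confidence calibration concrete, the sketch below computes Expected Calibration Error (ECE), one of the commonly used evaluation metrics in this area: predictions are binned by confidence, and the gap between average confidence and accuracy is averaged over bins. The bin count and equal-width binning scheme here are illustrative choices of this sketch, not a prescription from the survey.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Minimal ECE sketch.

    confidences: predicted max-class probabilities in [0, 1]
    correct:     booleans, whether each prediction was right
    """
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # bin membership is (lo, hi], with 0.0 included in the first bin
        idx = [i for i, c in enumerate(confidences)
               if (c > lo or (b == 0 and c == lo)) and c <= hi]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        # weight each bin's |accuracy - confidence| gap by its size
        ece += (len(idx) / n) * abs(acc - avg_conf)
    return ece
```

For example, two predictions made with confidence 1.0 of which only one is correct give an ECE of 0.5: the model claims certainty but is right half the time. Under class imbalance, such gaps tend to concentrate in minority-class bins.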
15 May 2024: Submitted to TechRxiv
20 May 2024: Published in TechRxiv