
Bridging the Gap between Few-Shot and Many-Shot Learning via Distribution Calibration
  • Shuo Yang,
  • Songhua Wu,
  • Tongliang Liu,
  • Min Xu
Shuo Yang
University of Technology Sydney

Corresponding Author: [email protected]

Abstract

A major gap between few-shot and many-shot learning lies in the data distribution the model empirically observes during training. In few-shot learning, the learned model can easily overfit to the biased distribution formed by only a few training examples, whereas in many-shot learning the ground-truth data distribution is uncovered more accurately, so a well-generalized model can be learned. In this paper, we propose to calibrate the distribution of these few-sample classes to make it less biased and thereby alleviate the over-fitting problem. The calibration is achieved by transferring statistics from classes with sufficient examples to the few-sample classes. After calibration, an adequate number of examples can be sampled from the calibrated distribution to expand the inputs to the classifier. Extensive experiments on three datasets, miniImageNet, tieredImageNet, and CUB, show that a simple linear classifier trained on features sampled from our calibrated distribution outperforms the state-of-the-art accuracy by a large margin. We also establish a generalization error bound for the proposed distribution-calibration-based few-shot learning, which consists of the distribution assumption error, the distribution approximation error, and the estimation error. This bound theoretically justifies the effectiveness of the proposed method.
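The overall pipeline sketched in the abstract, calibrating a few-sample class's statistics with statistics transferred from data-rich base classes, sampling from the calibrated distribution, and training a simple linear classifier, can be illustrated with a short NumPy/scikit-learn snippet. The sketch below is only illustrative: the power transform, the choice of the k nearest base classes, and hyper-parameters such as `k`, `alpha`, `lam`, and the number of sampled features are assumptions made for this example, not the authors' exact implementation or settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_and_sample(support_feat, base_means, base_covs,
                         k=2, alpha=0.21, n_samples=750, lam=0.5):
    """Calibrate the distribution of one few-sample class around a single
    support feature and sample synthetic features from it.
    (All hyper-parameter values here are illustrative assumptions.)"""
    # A power transform reduces feature skew; the base-class statistics are
    # assumed to be computed on features transformed the same way.
    x = np.power(np.maximum(support_feat, 1e-12), lam)

    # Pick the k base classes whose means are closest to the support feature.
    dists = np.linalg.norm(base_means - x, axis=1)
    nearest = np.argsort(dists)[:k]

    # Transfer statistics: average the selected base means with the support
    # feature, and average the selected base covariances, adding a constant
    # alpha to every entry for extra spread.
    calib_mean = np.mean(np.vstack([base_means[nearest], x[None, :]]), axis=0)
    calib_cov = np.mean(base_covs[nearest], axis=0) + alpha

    # Draw an adequate number of synthetic examples from the calibrated
    # Gaussian to expand the classifier's training set.
    samples = np.random.multivariate_normal(calib_mean, calib_cov, size=n_samples)
    return samples, x

def few_shot_classifier(support_feats, support_labels, base_means, base_covs):
    """Train a simple linear classifier on sampled plus original support features."""
    feats, labels = [], []
    for x, y in zip(support_feats, support_labels):
        sampled, x_t = calibrate_and_sample(x, base_means, base_covs)
        feats.append(np.vstack([sampled, x_t[None, :]]))
        labels.append(np.full(len(sampled) + 1, y))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(np.vstack(feats), np.concatenate(labels))
    return clf
```

Here `base_means` (shape `[num_base_classes, dim]`) and `base_covs` (shape `[num_base_classes, dim, dim]`) are assumed to be per-class feature statistics precomputed on the many-shot base classes with a fixed feature extractor, and `support_feats` are the extracted features of the few-shot support examples.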
Published 01 Dec 2022 in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 12, pp. 9830-9843. DOI: 10.1109/TPAMI.2021.3132021