Unsupervised Color Retention Network and New Quantization Metric for
Blind Motion Deblurring
Abstract
Unsupervised blind motion deblurring remains a challenging problem due to
its inherently ill-posed nature and the lack of paired data and accurate
quality assessment methods. Moreover, virtually all current studies
suffer from large chromatic aberration between the latent and original
images, which directly causes the loss of image details. How to
appropriately model and quantify this chromatic aberration, however, is a
difficult and pressing open question. In this paper,
we propose a general unsupervised color retention network, termed CRNet,
for blind motion deblurring, which can be easily extended to other tasks
that suffer from chromatic aberration. We introduce the new concepts of
blur offset estimation and adaptive blur correction, so that more
detailed information is retained and deblurring quality improves.
Specifically, CRNet first learns a mapping from the blurry image to a
motion offset, rather than directly from the blurry image to the latent
image as in previous work. With the obtained motion offset, an adaptive
blur correction operation is then performed on the original blurry image
to produce the latent image, thereby retaining the color information of
the image to the greatest possible extent. A new pyramid global blur
feature perception module is also designed to further preserve color
information and extract richer blur information. To assess the color
retention ability of image deblurring methods, we present a new chromatic
aberration quantization metric, termed Color-Sensitive Error (CSE), which
is in line with human perception and applies both with and without paired
data. Extensive experiments demonstrate the effectiveness of CRNet for
color retention in unsupervised deblurring.
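The abstract's core idea (predict a per-pixel motion offset, then correct the original blurry image by resampling it along that offset, so that output colors are taken directly from the input rather than regenerated) can be illustrated with a minimal sketch. This is not the paper's actual correction operator, whose details the abstract does not specify; it is a simplified nearest-neighbour resampling in NumPy, with the function name `adaptive_blur_correction` chosen here for illustration.

```python
import numpy as np

def adaptive_blur_correction(blurry, offsets):
    """Illustrative offset-based correction: each output pixel is read
    from the original blurry image at (y + dy, x + dx), so every output
    value is an original pixel value and color is trivially preserved.
    `blurry` has shape (H, W[, C]); `offsets` has shape (H, W, 2)."""
    h, w = blurry.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Shift the sampling grid by the estimated motion offsets,
    # rounding to the nearest pixel and clipping to the image bounds.
    sy = np.clip(np.rint(ys + offsets[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + offsets[..., 1]).astype(int), 0, w - 1)
    return blurry[sy, sx]

# Toy usage: an all-zero offset field reproduces the input unchanged.
img = np.arange(12, dtype=float).reshape(3, 4)
out = adaptive_blur_correction(img, np.zeros((3, 4, 2)))
```

In a full model the offsets would come from a learned network and the sampling would be differentiable (e.g. bilinear), but the sketch shows why this formulation retains color: no pixel value is synthesized, only relocated.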