Abstract
Low-light image enhancement (LLIE) aims to refine illumination and
restore the details of low-light images. Current deep LLIE models still
face two issues: low-quality detail recovery due to information loss,
and overly complex model design. On one hand, current methods usually
adopt a U-Net with multiple feature-scaling operations as the main
structure, but feature scaling inevitably discards informative visual
primitives, resulting in blurred textures and inaccurate illumination.
On the other hand, current models are often complicated and even
redundant, which runs counter to the original goal of building a plain
model that handles the LLIE task effectively. To address these issues,
we propose a simple yet effective deep LLIE architecture, termed the
Full-Resolution Context Network (FRC-Net). Specifically, to avoid the
information loss caused by feature scaling, we propose a novel
full-resolution representation strategy that replaces all
feature-scaling operations. The structure of FRC-Net is very simple,
containing only 12 cascaded layers: 7 convolution layers and 5 newly
designed context attention (CA) modules. The CA module is designed to
overcome the limited receptive field of shallow structures by learning
global context while retaining local details. Extensive experiments
show that FRC-Net achieves better detail-recovery quality and performs
favorably against current state-of-the-art methods.
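The abstract's description of the architecture can be sketched in PyTorch. This is a hypothetical illustration, not the authors' implementation: the internal design of the CA module (here, global average pooling driving a channel gate, with a residual path to retain local detail), the channel width, and the exact interleaving of the 7 convolutions and 5 CA modules are all assumptions; only the full-resolution constraint (stride-1, no down/upsampling) and the 12-layer count come from the text.

```python
import torch
import torch.nn as nn


class ContextAttention(nn.Module):
    """Hypothetical CA module: a global-context branch (global average
    pooling -> 1x1 conv -> sigmoid gate) modulates the features, while
    the residual connection retains local details."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),           # global context vector
            nn.Conv2d(channels, channels, 1),  # per-channel gating weights
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Local details are preserved by the identity path.
        return x + x * self.gate(x)


class FRCNetSketch(nn.Module):
    """12 cascaded layers at full resolution: 7 stride-1 convolutions
    interleaved with 5 CA modules; no feature scaling anywhere, so the
    spatial size of the input is preserved end to end."""

    def __init__(self, channels: int = 32):
        super().__init__()
        layers = [nn.Conv2d(3, channels, 3, padding=1)]  # conv 1 of 7
        for _ in range(5):
            layers += [
                ContextAttention(channels),               # CA modules 1..5
                nn.Conv2d(channels, channels, 3, padding=1),  # convs 2..6
            ]
        layers.append(nn.Conv2d(channels, 3, 3, padding=1))   # conv 7 of 7
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)
```

Because every convolution uses stride 1 with matching padding, the output has the same resolution as the input, which is the point of the full-resolution representation strategy.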