
FRC-Net: A Simple Yet Effective Architecture for Low-Light Image Enhancement
  • Zhao Zhang ,
  • Huan Zheng ,
  • Richang Hong ,
  • Jicong Fan ,
  • Yi Yang ,
  • Shuicheng Yan
Zhao Zhang, Hefei University of Technology. Corresponding Author: [email protected]

Abstract

Low-light image enhancement (LLIE) aims to refine illumination and restore the details of low-light images. Current deep LLIE models still face two issues: low-quality detail recovery due to information loss, and overly complex model design. On one hand, current methods usually adopt a U-Net with multiple feature-scaling operations as the main structure, but feature scaling inevitably discards informative visual primitives, resulting in blurred textures and inaccurate illumination. On the other hand, current models are often complicated and even redundant, which runs counter to the original goal of building a plain model that effectively handles the LLIE task. To address these issues, we propose a simple yet effective deep LLIE architecture, termed Full-Resolution Context Network (FRC-Net). Specifically, to avoid the information loss caused by feature scaling, we propose a novel full-resolution representation strategy that replaces all feature-scaling operations. The structure of FRC-Net is very simple, containing only 12 cascaded layers: 7 convolution layers and 5 newly designed context attention (CA) modules. The CA module is designed to overcome the limited receptive field of shallow structures by learning global context while also retaining local details. Extensive experiments show that FRC-Net achieves better detail-recovery quality and performs favorably against current SOTA methods.
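The abstract pins down the overall shape of FRC-Net (12 cascaded layers, 7 convolutions plus 5 context attention modules, no feature scaling) but not the internals of the CA module. The following PyTorch sketch illustrates that full-resolution layout under stated assumptions: the CA internals here (a global channel gate from average pooling combined with a local 3x3 branch) are hypothetical, chosen only to match the stated goal of learning global context while retaining local details, and are not the authors' actual design.

```python
import torch
import torch.nn as nn


class ContextAttention(nn.Module):
    """Hypothetical CA module: a global-context branch (channel gate from
    global average pooling) modulating a local-detail branch (3x3 conv).
    The paper only states that CA learns global context while retaining
    local details; this decomposition is an assumption."""

    def __init__(self, channels):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)
        self.global_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),            # (B, C, 1, 1) global context
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Residual combination; spatial resolution is unchanged.
        return x + self.local(x) * self.global_gate(x)


class FRCNetSketch(nn.Module):
    """Full-resolution sketch: 7 stride-1 convolutions interleaved with
    5 CA modules (12 cascaded layers total). No pooling or strided conv,
    so features stay at input resolution throughout."""

    def __init__(self, channels=32):
        super().__init__()
        layers = [nn.Conv2d(3, channels, 3, padding=1)]     # head conv
        for _ in range(5):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       ContextAttention(channels)]
        layers += [nn.Conv2d(channels, 3, 3, padding=1)]    # tail conv
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)


net = FRCNetSketch()
out = net(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64]) — resolution preserved
```

Because every layer is stride-1 with padding, the output matches the input resolution, which is the point of replacing U-Net-style feature scaling; the CA modules then compensate for the limited receptive field of such a shallow, scale-free stack.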
Published in IEEE Transactions on Consumer Electronics, 2023, pp. 1-1. DOI: 10.1109/TCE.2023.3280467