TechRxiv

MR-InpaintNet: Toward Deep Multi-Resolution Learning for Progressive Image Inpainting

preprint
posted on 24.09.2021, 21:19 by Huan Zhang, Zhao Zhang, Haijun Zhang, Yi Yang, Shuicheng Yan, Meng Wang
Deep learning-based image inpainting methods have greatly improved performance due to the powerful representation ability of deep networks. However, current deep inpainting methods still tend to produce unreasonable structures and blurry textures, implying that image inpainting remains a challenging topic due to the ill-posed nature of the task. To address these issues, we propose a novel deep multi-resolution learning-based progressive image inpainting method, termed MR-InpaintNet, which takes damaged images at different resolutions as input and then fuses the multi-resolution features to repair the damaged images. The idea is motivated by the fact that images at different resolutions provide different levels of feature information. Specifically, the low-resolution image provides strong semantic information, while the high-resolution image offers detailed texture information. The middle-resolution image can be used to bridge the gap between the low-resolution and high-resolution images, which further refines the inpainting result. To fuse and improve the multi-resolution features, a novel multi-resolution feature learning (MRFL) process is designed, which consists of a multi-resolution feature fusion (MRFF) module, an adaptive feature enhancement (AFE) module and a memory enhanced mechanism (MEM) module for information preservation. The refined multi-resolution features then contain both rich semantic information and detailed texture information from multiple resolutions. We further pass the refined multi-resolution features through the decoder to obtain the recovered image. Extensive experiments on the Paris Street View, Places2 and CelebA-HQ datasets demonstrate that our proposed MR-InpaintNet can effectively recover textures and structures, and performs favorably against state-of-the-art methods.
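The core idea of combining coarse semantic context from low resolutions with fine detail from high resolutions can be illustrated with a toy sketch. Note that this is a hypothetical illustration of the multi-resolution fusion concept, not the authors' MRFF module: the pooling-based pyramid, the simple averaging fusion, and all function names below are assumptions for demonstration only.

```python
# Hypothetical sketch of multi-resolution fusion (NOT the authors' code):
# build a resolution pyramid from an image, then blend the coarse views
# back at full resolution, mimicking how low-resolution features supply
# semantic context while the full-resolution view supplies detail.
import numpy as np

def downsample(img, factor):
    """Average-pool a (H, W) image by an integer factor."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour upsampling by an integer factor."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def multi_resolution_fuse(img):
    """Toy stand-in for multi-resolution feature fusion: blend full-,
    half- and quarter-resolution views of the image at full resolution."""
    full = img                                   # detailed texture
    half = upsample(downsample(img, 2), 2)       # mid-resolution bridge
    quarter = upsample(downsample(img, 4), 4)    # coarse semantic context
    return (full + half + quarter) / 3.0

img = np.arange(64, dtype=float).reshape(8, 8)
fused = multi_resolution_fuse(img)
print(fused.shape)  # (8, 8)
```

In the actual network the three streams would be learned convolutional features rather than raw pixels, and the fusion would be learned rather than a fixed average; this sketch only shows why views at multiple resolutions carry complementary information.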

Funding

This work is partially supported by National Natural Science Foundation of China (62072151, 62020106007, 61972112 and 61832004), Anhui Provincial Natural Science Fund for Distinguished Young Scholars (2008085J30), Guangdong Basic and Applied Basic Research Foundation under Grant no. 2021B1515020088, and the Fundamental Research Funds for Central Universities of China (JZ2019HGPA0102).

History

Email Address of Submitting Author

cszzhang@gmail.com

Submitting Author's Institution

Hefei University of Technology

Submitting Author's Country

China