TechRxiv

MNCAA: Balanced Style Transfer Based on Multi-level Normalized Cross Attention Alignment

preprint
posted on 15.04.2022, 03:38 by Liu Heng, Mi Zhipeng, Chen Feng, Jungong Han

Given a content image and an artistic style image, style transfer refers to applying the patterns learned from the style image to the content image to generate a new stylized image. Despite the notable success of existing style transfer methods, most of them suffer from two limitations: 1) they cannot preserve the structure of the content image well; 2) they cannot generate sufficiently delicate style effects, or may produce significant artifacts. Maintaining a balance between content structure preservation and style pattern transformation remains a challenge. In this work, we observe that multi-level content-style cross attention can extract the content features matching the style characteristics at different feature levels. In addition, we find that through multi-level dynamic normalization and alignment, hierarchical content-style cross attention can effectively transform the content image with the style characteristics of different levels while preserving its local structure and semantics as much as possible. The perceptual loss and the contextual loss are introduced to ensure the generated stylized image is close to both the content image and the style image in the feature space. At the same time, the identity loss of the content image and the style image is deployed to encourage the proposed model to retain the global appearance and the feature semantics of the input images without overall statistical deviation. Extensive qualitative and quantitative experiments and evaluations on the benchmark MSCOCO and WikiArt datasets demonstrate that, compared with other state-of-the-art (SOTA) methods, the proposed approach obtains high-quality stylized images with structure-style balance. The project code is available at https://github.com/hengliusky/MNCAA.
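The content-style cross attention described above can be illustrated with a minimal, single-level sketch. This is not the authors' implementation: the actual MNCAA model operates on multiple feature levels with learned projections and dynamic normalization, and the shapes, normalization choice (channel-wise mean/std, as in instance normalization), and function names below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mean_std_norm(f, eps=1e-5):
    # Mean/variance normalization of flattened (N, C) features,
    # standing in for the normalization applied before attention.
    mu = f.mean(axis=0, keepdims=True)
    sigma = f.std(axis=0, keepdims=True)
    return (f - mu) / (sigma + eps)

def normalized_cross_attention(content, style):
    """One level of normalized content-style cross attention (sketch).

    content: (Nc, C) flattened content features at one layer
    style:   (Ns, C) flattened style features at the same layer

    Queries come from the normalized content map and keys from the
    normalized style map, so matching is done on structure rather than
    raw statistics; values are the raw style features, so the output
    carries style statistics arranged by content structure.
    """
    q = mean_std_norm(content)                         # queries
    k = mean_std_norm(style)                           # keys
    v = style                                          # values
    attn = softmax(q @ k.T / np.sqrt(content.shape[1]))  # (Nc, Ns)
    return attn @ v                                    # (Nc, C)
```

In the multi-level setting, this operation would be applied at several encoder layers and the results aligned and fused; here a single level suffices to show the query/key/value roles.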

Funding

This work is supported by the National Natural Science Foundation of China under Grant No. 61971004, by the Natural Science Foundation of Anhui Province under Grant No. 2008085MF190, and by the Natural Science Foundation of the Anhui Provincial Education Department under Grant No. KJ2021A0375.

History

Email Address of Submitting Author

hengliusky@aliyun.com

Submitting Author's Institution

Anhui University of Technology

Submitting Author's Country

China
