Abstract
In this paper, we consider the problem of removing clouds and recovering
ground-cover information from remote sensing images by proposing a novel
framework based on a deep unfolded and prior-aided robust principal
component analysis (DUPA-RPCA) network. Clouds, together with their
shadows, usually occlude ground-cover features in optical remote sensing
images. This hinders the use of these images in applications such as
Earth observation, land-cover classification, and urban planning. We
model these cloud-contaminated images as a sum of low-rank and sparse
components and then unfold an iterative RPCA algorithm designed for
reweighted l1-minimization. As a result, the activation function in
DUPA-RPCA adapts to each input at every layer of
the network. Our experimental results on both Landsat and Sentinel
images indicate that our method achieves higher accuracy and efficiency
than existing state-of-the-art methods.
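The low-rank plus sparse model mentioned above can be sketched in the standard reweighted-RPCA form; the exact objective, weighting scheme, and constraints used by DUPA-RPCA are assumptions here, not taken from the paper:

```latex
% Observed image matrix Y decomposed as cloud-free background L (low rank)
% plus clouds and shadows S (sparse); W is a per-entry reweighting matrix
% and \odot denotes the elementwise product.
\min_{L,\,S}\; \|L\|_{*} \;+\; \lambda\,\| W \odot S \|_{1}
\quad \text{s.t.} \quad Y = L + S
% In reweighted \ell_1 schemes the weights are typically refreshed each
% iteration, e.g. W_{ij} \leftarrow 1 / (|S_{ij}| + \epsilon).
```

Unfolding such an iterative scheme into network layers lets the per-layer thresholding (the activation) depend on the learned, input-dependent weights, which is consistent with the adaptivity claimed in the abstract.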