Generating Views Using Atmospheric Correction for Contrastive Self-supervised Learning of Multi-spectral Images
In remote sensing, a large number of multi-spectral images are publicly available from various land-cover satellite missions. Contrastive self-supervised learning is commonly applied to such unlabeled data, but it relies on domain-specific transformations to generate the views used for learning. When focusing on vegetation, standard image-processing transformations cannot be applied to the NIR channel, which carries valuable information about the vegetation state. We therefore use contrastive learning on different views of unlabeled multi-spectral images to obtain a pre-trained model that improves accuracy on small remote sensing datasets. This study presents the generation of additional views tailored to remote sensing images, using atmospheric correction as an alternative transformation to color jittering. The atmospheric transformation is designed to be physically consistent. The proposed transformation integrates easily with multiple channels to exploit the spectral signatures of objects, and our approach can be applied to other remote sensing tasks. Using this transformation improves classification accuracy by up to 6%.
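To make the described setup concrete, the following is a minimal sketch, not the authors' implementation: a SimCLR-style contrastive objective in PyTorch in which both views of an unlabeled multi-spectral batch come from a stand-in `atmospheric_view` transform that jitters per-band gain and offset. The real method would replace this stand-in with an atmospheric correction run under perturbed atmospheric parameters; the encoder architecture, the band count, the function names, and all numeric values below are illustrative assumptions.

```python
# Hedged sketch: SimCLR-style pretraining with an atmospheric-correction-like
# view transform instead of color jittering. `atmospheric_view` is a
# hypothetical placeholder, not the paper's correction procedure.
import torch
import torch.nn as nn
import torch.nn.functional as F


def atmospheric_view(x: torch.Tensor) -> torch.Tensor:
    """Stand-in for an atmospheric-correction-based view transform.

    x: batch of multi-spectral images, shape (B, C, H, W), reflectance in [0, 1].
    Applies a small per-band gain and offset, mimicking the effect of correcting
    the same scene with perturbed atmospheric parameters. A physically based
    correction (e.g. varying aerosol or water-vapour inputs) is not reproduced here.
    """
    b, c, _, _ = x.shape
    gain = 1.0 + 0.1 * (torch.rand(b, c, 1, 1, device=x.device) - 0.5)
    offset = 0.02 * (torch.rand(b, c, 1, 1, device=x.device) - 0.5)
    return (gain * x + offset).clamp(0.0, 1.0)


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Standard NT-Xent (SimCLR) contrastive loss over two batches of projections."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2B, D), unit norm
    sim = z @ z.t() / temperature                               # pairwise cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                  # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Toy encoder + projection for a 13-band (Sentinel-2-like) input; illustrative only.
encoder = nn.Sequential(
    nn.Conv2d(13, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 128),
)

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
batch = torch.rand(8, 13, 64, 64)                # unlabeled multi-spectral batch
view1, view2 = atmospheric_view(batch), atmospheric_view(batch)
loss = nt_xent(encoder(view1), encoder(view2))
opt.zero_grad()
loss.backward()
opt.step()
```

Because the perturbation acts on every spectral band, the NIR channel receives the same kind of transformation as the visible bands, which is the property that standard color jittering lacks according to the abstract.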
Funding
Verbund - KI: KI strategy for Earth system data
Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection
SFB 1502: Regional Climate Change: Disentangling the Role of Land Use and Water Management
Deutsche Forschungsgemeinschaft
Email Address of Submitting Author: a.patnala@fz-juelich.de
ORCID of Submitting Author: 0000-0002-8472-7463
Submitting Author's Institution: Forschungszentrum Juelich
Submitting Author's Country: Germany