
SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image Classification
Mohammed Q. Alkhatib, M. Sami Zitouni, Mina Al-Saad, Nour Aburaed, Hussain Al-Ahmad
Corresponding author: Mohammed Q. Alkhatib ([email protected])

Abstract

Polarimetric Synthetic Aperture Radar (PolSAR) images encompass valuable information that can facilitate extensive land cover interpretation and generate diverse output products. Extracting meaningful features from PolSAR data poses challenges distinct from those encountered in optical imagery. Deep Learning (DL) methods offer effective solutions for overcoming these challenges in PolSAR feature extraction. Convolutional Neural Networks (CNNs) play a crucial role in capturing PolSAR image characteristics by leveraging kernels to exploit local spatial information and the complex-valued nature of PolSAR data. In this study, a novel three-branch fusion of complex-valued CNNs, named the Shallow to Deep Feature Fusion Network (SDF2Net), is proposed for PolSAR image classification. To validate the performance of the proposed method, classification results are compared against multiple state-of-the-art approaches on the Airborne Synthetic Aperture Radar (AIRSAR) Flevoland and San Francisco datasets and the ESAR Oberpfaffenhofen dataset. The results indicate that the proposed approach improves overall accuracy (OA) by 1.3% and 0.8% on the two AIRSAR datasets and by 0.5% on the ESAR dataset. The analyses conducted on the Flevoland data underscore the effectiveness of the SDF2Net model, yielding a promising OA of 96.01% even with a training sampling ratio of only 1%. The source code is available at: https://github.com/mqalkhatib/SDF2Net
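The abstract describes the architecture only at a high level: three complex-valued CNN branches of increasing depth whose features are fused for classification. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the authors' implementation (the linked repository holds the real one); the branch depths, channel widths, six-channel complex input, CReLU activation, and magnitude-based fusion are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class ComplexConv2d(nn.Module):
    """Complex-valued 2-D convolution built from two real convolutions:
    (a + ib)(wr + i*wi) = (a*wr - b*wi) + i(a*wi + b*wr)."""

    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x_r, x_i):
        return (self.conv_r(x_r) - self.conv_i(x_i),
                self.conv_r(x_i) + self.conv_i(x_r))


class ThreeBranchFusionNet(nn.Module):
    """Hypothetical shallow-to-deep fusion: three complex-valued branches of
    depth 1, 2, and 3 whose magnitude features are concatenated and
    classified. in_ch=6 assumes the six unique complex elements of the
    PolSAR coherency matrix; all sizes here are illustrative guesses."""

    def __init__(self, in_ch=6, width=16, n_classes=15):
        super().__init__()

        def branch(depth):
            layers = [ComplexConv2d(in_ch, width, 3, padding=1)]
            layers += [ComplexConv2d(width, width, 3, padding=1)
                       for _ in range(depth - 1)]
            return nn.ModuleList(layers)

        self.branches = nn.ModuleList([branch(d) for d in (1, 2, 3)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(3 * width, n_classes))

    @staticmethod
    def _run(branch, x_r, x_i):
        for conv in branch:
            x_r, x_i = conv(x_r, x_i)
            x_r, x_i = torch.relu(x_r), torch.relu(x_i)  # CReLU activation
        return torch.sqrt(x_r ** 2 + x_i ** 2 + 1e-8)    # magnitude features

    def forward(self, x_r, x_i):
        feats = [self._run(b, x_r, x_i) for b in self.branches]
        return self.head(torch.cat(feats, dim=1))


# Toy usage: a batch of 4 patches, 12x12 pixels, real and imaginary parts.
net = ThreeBranchFusionNet()
logits = net(torch.randn(4, 6, 12, 12), torch.randn(4, 6, 12, 12))
print(logits.shape)  # torch.Size([4, 15])
```

Splitting each complex convolution into two real convolutions is a standard trick that keeps the sketch runnable on any PyTorch build; how SDF2Net actually realizes complex operations and fuses the branches should be taken from the repository, not from this example.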
Submitted to TechRxiv: 03 Mar 2024
Published in TechRxiv: 04 Mar 2024