Neural-network fusion processing and inverse mapping to combine multi-sensor satellite data and to analyze significant features
Preprint posted on 18 November 2021 by Gunjan Joshi, Ryo Natsuaki, Akira Hirose
In the last decade, the growing number of active and passive earth observation satellites has provided us with more remote sensing data. This has led to increased interest in fusing data from different satellites, since some satellites have properties complementary to one another. Fusion techniques can improve estimation in areas of interest (AOIs) by using complementary information and inferring unknown parameters. However, when the observation area is large, extensive human labor and domain expertise are required for processing and analysis. We therefore propose a neural network that combines and analyzes data obtained from synthetic aperture radar (SAR) and optical sensors. Unlike conventional networks, it employs a modified logarithmic activation function to realize inverse mapping for significant-feature analysis based on dynamics consistent with its forward processing. In this paper, we focus on earthquake damage detection using data from the 2018 Sulawesi earthquake in Indonesia. The fusion-based results show higher classification accuracy than the results of the independent sensors. We further use inverse mapping in the data-fusion neural network to identify which input features contribute most to which classification outputs. We observe that the inverse mapping yields reasonable explanations in a consistent manner. It also indicates that, for particular classes, features other than the straightforward pre- and post-seismic counterparts contribute to detection.
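The abstract states that the network's modified logarithmic activation function is what makes inverse mapping possible: a monotonic, invertible activation lets contributions be traced backward with dynamics consistent with the forward pass. The exact form used in the paper is not given here; the sketch below uses a hypothetical signed-logarithm activation purely to illustrate the forward/inverse pairing.

```python
import math

# Hypothetical illustration only: the paper's "modified logarithmic
# activation" is not specified in this abstract. A signed logarithm is
# one common invertible choice: odd, monotonic, and log-compressive,
# so each forward step admits an exact inverse for inverse mapping.

def log_activation(x: float) -> float:
    """Signed logarithmic activation: f(x) = sign(x) * log(1 + |x|)."""
    return math.copysign(math.log1p(abs(x)), x)

def log_activation_inverse(y: float) -> float:
    """Exact inverse: g(y) = sign(y) * (exp(|y|) - 1), so g(f(x)) = x."""
    return math.copysign(math.expm1(abs(y)), y)

# Round trip: the inverse recovers the pre-activation value exactly,
# which is the property inverse mapping relies on.
for x in (-5.0, -1.0, 0.0, 1.0, 5.0):
    assert abs(log_activation_inverse(log_activation(x)) - x) < 1e-9
```

Because the activation is a bijection on the reals, a feature-contribution analysis can propagate an output class activation back through each layer without the information loss that saturating activations (e.g. sigmoid tails) would introduce.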