Bridging the Gap Between Indoor Depth Completion and Masked Autoencoders
  • Zhou Yang ,
  • Kailai Sun ,
  • Qianchuan Zhao

Abstract

Depth images have a wide range of applications, such as 3D reconstruction, autonomous driving, augmented reality, robot navigation, and scene understanding. Commodity-grade depth cameras struggle to sense depth on bright, glossy, transparent, and distant surfaces. Although existing depth completion methods have achieved remarkable progress, their performance is limited when applied to complex indoor scenarios. Moreover, Transformer architectures are expected to benefit the depth completion task but remain underexplored in this setting. To address these problems, we propose a two-step Transformer-based network for indoor depth completion. Unlike existing depth completion approaches, we adopt an MAE-based self-supervised pre-training encoder to learn an effective latent representation for the missing depth values; we then propose a decoder based on a token fusion mechanism that completes (i.e., reconstructs) the full depth map jointly from the RGB image and the incomplete depth image. Compared to existing methods, our proposed network achieves state-of-the-art performance on the Matterport3D dataset. In addition, to validate the importance of the depth completion task, we apply our method to indoor 3D reconstruction. The code, dataset, and demo are available at https://github.com/kailaisun/Indoor-Depth-Completion.
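The core MAE idea the abstract relies on — split the input into patches, mask most of them, and supervise reconstruction only on the masked patches — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (which uses a Transformer encoder/decoder on RGB-D inputs); the function names and the tiny depth map are illustrative only.

```python
import numpy as np

def patchify(img, p):
    """Split an H x W image into non-overlapping p x p patches, each flattened."""
    H, W = img.shape
    patches = img.reshape(H // p, p, W // p, p).transpose(0, 2, 1, 3)
    return patches.reshape(-1, p * p)

def mae_masked_loss(pred, target, mask):
    """MAE-style objective: mean squared error averaged over masked patches only."""
    per_patch_mse = ((pred - target) ** 2).mean(axis=1)
    return (per_patch_mse * mask).sum() / mask.sum()

# Toy example: an 8x8 "depth map" cut into 2x2 patches, 75% of them masked.
rng = np.random.default_rng(0)
depth = rng.random((8, 8)).astype(np.float32)
patches = patchify(depth, 2)                       # 16 patches of 4 values each
mask = np.zeros(len(patches))
mask[rng.choice(len(patches), 12, replace=False)] = 1.0  # mask 12 of 16 patches

pred = np.zeros_like(patches)                      # stand-in for a decoder's output
loss = mae_masked_loss(pred, patches, mask)        # loss counts only masked patches
```

In the paper's setting, the encoder sees only the visible tokens during pre-training, which is what makes the learned representation useful for filling in depth values the camera failed to capture.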