LO-SLAM: Lunar Object-centric SLAM using Point Prompted SAM for Data Association
  • Yaolin Tian,
  • Xue Wan,
  • Shengyang Zhang,
  • Jianhong Zuo,
  • Yadong Shao,
  • Mengmeng Yang
Corresponding author: Yaolin Tian ([email protected])

Abstract

To support long-term space missions, lunar rovers require an autonomous visual navigation system capable of self-exploration. Although object-centric localization and mapping can be addressed by state-of-the-art methods, these methods rely heavily on human-in-the-loop remote operation, which poses serious challenges for a rover's visual system operating in a distant, unknown, and feature-sparse lunar environment. This paper presents LO-SLAM, a SAM-augmented object-centric SLAM framework that enables a rover to estimate its relative distance to a target on the lunar surface, thereby ensuring the safety of the exploration task. Building on a feature-matching baseline and an auto-label segmentation approach, we first propose a point-prompted object instance extraction pipeline that predicts object correspondences at the pixel level. These predictions are then integrated with front-end outputs in the middle end of the SLAM system to tightly associate reliable object-centric constraints across image frames. This data association allows LO-SLAM to maintain robust estimation of the relative position between the camera and the target object. Extensive experiments are conducted on our dataset, Stereo Planetary Tracks (SePT). Results show that LO-SLAM remains accurate in challenging lunar scenarios with dramatic viewpoint and object-scale changes: the average object centroid position error is below 0.37 m, and the average object-centric trajectory error is below 0.49%. The dataset is open-sourced at https://github.com/miaTian99/SePT_Stereo-Planetary-Tracks.
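The abstract describes a point-prompted segmentation step in which SAM is prompted at a pixel level to extract object instances. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual implementation: tracked front-end keypoints on a target object are fed to Meta's segment-anything `SamPredictor` as foreground point prompts, and the highest-scoring mask is kept as the object instance. The model variant, checkpoint filename, and the `extract_object_mask` helper are all assumptions for illustration.

```python
# Minimal sketch of point-prompted SAM instance extraction, assuming
# Meta's segment-anything package and a downloaded ViT-B checkpoint.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

def extract_object_mask(image_bgr: np.ndarray, keypoints_xy: np.ndarray) -> np.ndarray:
    """Segment the object instance indicated by tracked feature keypoints.

    keypoints_xy: (N, 2) array of pixel coordinates lying on the target
    object, e.g. matched features the SLAM front-end tracks on that object.
    Returns a boolean (H, W) mask.
    """
    # SamPredictor expects an RGB uint8 image.
    predictor.set_image(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    masks, scores, _ = predictor.predict(
        point_coords=keypoints_xy.astype(np.float32),
        point_labels=np.ones(len(keypoints_xy), dtype=np.int32),  # all foreground
        multimask_output=True,
    )
    # Keep the mask SAM scores highest; across frames, the same tracked
    # keypoints yield corresponding masks, giving pixel-level association.
    return masks[int(np.argmax(scores))]
```

Re-running this per frame with the same tracked keypoints is one plausible way to obtain the frame-to-frame object correspondences that the middle end then converts into object-centric constraints.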
Submitted to TechRxiv: 05 Mar 2024
Published in TechRxiv: 06 Mar 2024