Motion-based Lidar-camera Calibration via Cross-modality Structure Consistency
Lidar and cameras are essential sensors for automated vehicles and intelligent robots, and their measurements are frequently fused in complex tasks. Precise extrinsic calibration is a prerequisite for Lidar-camera fusion. Hand-eye calibration is among the most commonly used targetless calibration approaches. This paper identifies a degeneration problem of hand-eye calibration that arises when sensor motions lack rotation. Such motion is common for ground vehicles, especially those traveling on urban roads, and it leads to a significant deterioration in translational calibration accuracy. To address this problem, we propose a novel motion-based Lidar-camera calibration framework based on cross-modality structure consistency. It is globally convergent within the specified search range and achieves satisfactory translational calibration accuracy even in degenerate scenarios. To verify the effectiveness of our framework, we compare its performance against one motion-based method and two appearance-based methods on six Lidar-camera sequences from the KITTI dataset. An ablation study further demonstrates the contribution of each module in our framework.
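As a brief illustration of this degeneracy (a sketch in standard hand-eye notation, not the paper's own symbols): writing the camera motion as A = (R_A, t_A), the Lidar motion as B = (R_B, t_B), and the unknown extrinsic as X = (R_X, t_X), the hand-eye constraint AX = XB splits into rotational and translational components:

% Rotation and translation components of the hand-eye constraint AX = XB.
\begin{align}
  R_A R_X &= R_X R_B, \\
  R_A t_X + t_A &= R_X t_B + t_X
  \;\Longleftrightarrow\;
  (R_A - I_3)\, t_X = R_X t_B - t_A.
\end{align}

Since the rotation axis of R_A lies in the null space of (R_A - I_3), the coefficient matrix is rank-deficient for any single motion; when the motion is nearly rotation-free, R_A approaches I_3 and the matrix vanishes entirely, so t_X becomes unobservable and the stacked linear system over many motions is severely ill-conditioned. This is the failure mode on urban driving sequences described above.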
Our code will be made public on GitHub after acceptance.
Funding
National Key Research and Development Program of China under Grant 2019YFC1511401
National Natural Science Foundation of China under Grant 62173038
National Natural Science Foundation of China under Grant 61773060
Email Address of Submitting Author: 3120205431@bit.edu.cn
ORCID of Submitting Author: 0000-0002-2643-8989
Submitting Author's Institution: Beijing Institute of Technology
Submitting Author's Country: China