Loop closure detection in visual appearance-based SLAM using deep autoencoders

Document Type : Original Article

Authors

1 Department of Mathematics and Computer Science, Amirkabir University of Technology (Tehran Polytechnic), Iran

2 Staffordshire University, School of Digital, Technologies and Arts, College Rd, Stoke-on-Trent ST4 2DE, United Kingdom

Abstract

Loop closure detection (LCD) and trajectory generation are critical components of visual simultaneous localization and mapping (vSLAM). In this paper, we aim to solve the LCD and trajectory generation problem in vSLAM using a newly devised vector quantization (VQ) algorithm. The proposed VQ algorithm is built on a self-supervised deep convolutional autoencoder (AE). The new VQ step is then incorporated into two well-known SLAM algorithms, fast appearance-based mapping (FABMAP) and ORB-SLAM, yielding AE-FABMAP and AE-ORB-SLAM, respectively. Experiments show that using self-supervised autoencoders in the VQ step is far more efficient, in terms of speed and memory consumption, than other methods such as graph convolutional neural networks. Furthermore, the newly presented algorithms, AE-ORB-SLAM and AE-FABMAP, outperform the standard FABMAP2 and ORB-SLAM, and in large-scale SLAM the new approaches improve the accuracy and recall of LCD.
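To make the described pipeline concrete, the sketch below illustrates the general idea stated in the abstract: a convolutional autoencoder is trained self-supervised on image reconstruction, its latent codes are vector-quantized into a visual vocabulary, and loop closures are proposed by matching the latent descriptors of keyframes. The network sizes, the k-means codebook, and the cosine-similarity check are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Self-supervised convolutional AE; the encoder output serves as an image descriptor."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def train_autoencoder(model, loader, epochs=10, lr=1e-3, device="cpu"):
    """Self-supervised training objective: reconstruct the input image."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.to(device).train()
    for _ in range(epochs):
        for images in loader:              # loader yields batches of 3x64x64 image tensors
            images = images.to(device)
            recon, _ = model(images)
            loss = loss_fn(recon, images)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

def build_codebook(latents, num_words=256, iters=25):
    """Vector quantization of AE latents with plain k-means (assumed quantizer)."""
    idx = torch.randperm(latents.size(0))[:num_words]
    centers = latents[idx].clone()
    for _ in range(iters):
        assign = torch.cdist(latents, centers).argmin(dim=1)
        for k in range(num_words):
            members = latents[assign == k]
            if len(members) > 0:
                centers[k] = members.mean(dim=0)
    return centers

def quantize(z, codebook):
    """Map a latent descriptor to its nearest visual word."""
    return torch.cdist(z.unsqueeze(0), codebook).argmin(dim=1).item()

def detect_loop_closure(query_z, keyframe_zs, threshold=0.9):
    """Propose a loop closure when the query latent is highly similar
    (cosine similarity) to a previously stored keyframe latent."""
    sims = torch.nn.functional.cosine_similarity(query_z.unsqueeze(0), keyframe_zs)
    best = sims.argmax().item()
    return (best, sims[best].item()) if sims[best] >= threshold else None
```

In a FABMAP- or ORB-SLAM-style pipeline, the quantized words would replace the usual hand-crafted-feature vocabulary, while the similarity check above stands in for the candidate verification step.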
