Abstract—Mobile platforms are now computationally capable of running SLAM, extending the scope of SLAM applications, which has previously been limited by the need for elaborate sensor support. Visual SLAM methods help tackle the sensor limitations of such devices by exploiting information-rich data from cameras. The proposed method exploits such visual information from a monocular RGB input to derive a depth map of the scene using a DenseNet-169 based encoder-decoder architecture. The obtained depth map is combined with keypoints extracted from the monocular input and processed by ORB-SLAM. Further, an analysis was carried out to evaluate various feature extractors, namely Oriented FAST and Rotated BRIEF (ORB), SIFT, and BRISK. A map was generated from the input visual trajectory, and the developed pipeline was able to perform RGB-D SLAM from monocular input alone. The proposed system thus executes an efficient SLAM algorithm using only a monocular RGB input.
Index Terms—visual SLAM, depth reconstruction, encoder, decoder, ORB
Cite: Yatharth Ahuja, Tushar Nitharwal, Utkarsh Sundriyal, Sreedevi Indu, and Anup K. Mandpura, "Depth Reconstruction Based Visual SLAM Using ORB Feature Extraction," Journal of Image and Graphics, Vol. 10, No. 4, pp. 172-177, December 2022.
Copyright © 2022 by the authors. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.