Abstract—This study attempts to realize the autonomous movement of a robot using a camera-based navigation system instead of expensive external sensors such as light detection and ranging (LiDAR). Our current implementation consists of road following, intersection detection, and intersection recognition, based on the results of semantic segmentation. In this study, we focus on improving the accuracy of intersection detection and recognition. Classifiers for these tasks are constructed using deep neural networks. We evaluated the proposed classifiers using three-dimensional computer graphics generated by the CARLA simulator and the Ikuta dataset, which is composed of actual images that we captured. The experimental results demonstrate that the proposed system can detect and recognize intersections accurately: the F-measure exceeded 0.96 for detection, and the actual images were recognized and classified with perfect accuracy.
Index Terms—intersection detection, intersection recognition, semantic segmentation, robot navigation
Cite: Takuto Watanabe, Kouchi Matsutani, Miho Adachi, Takuro Oki, and Ryusuke Miyamoto, "Feasibility Study of Intersection Detection and Recognition Using a Single Shot Image for Robot Navigation," Journal of Image and Graphics, Vol. 9, No. 2, pp. 39-44, June 2021. doi: 10.18178/joig.9.2.39-44
Copyright © 2021 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.