Distance estimation with semantic segmentation and edge detection of surround view images

Junwoo Jung, Hyunjin Lee, Chibum Lee

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a method for obtaining 2D distance data from a robot's surround view camera system. By converting semantic segmentation images into a bird's eye view, the location of the traversable region can be determined. However, since this approach depends entirely on segmentation performance, noise may appear at the boundary between the traversable region and obstacles when objects or environments are outside the training data. Therefore, instead of assigning each pixel a single class label through semantic segmentation, the method retains the probability distribution over classes, which yields a probability distribution for the boundary between the traversable region and obstacles. Using this distribution, the boundary can be extracted from the edges detected in each image. Transforming these edges into accurate x, y coordinates through the bird's eye view gives the position of obstacles in each image. Comparison with LiDAR measurements showed an error of about 5%, and it was confirmed that a localization algorithm can obtain the global pose of the robot from this data.
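The pipeline described in the abstract, extracting a traversable/obstacle boundary from per-class probabilities and projecting it to ground-plane coordinates via a bird's eye view homography, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bottom-up threshold scan stands in for the probabilistic edge extraction, and `H_ipm` is a hypothetical, pre-calibrated inverse-perspective-mapping homography.

```python
import numpy as np

def boundary_rows(prob_traversable, thresh=0.5):
    """For each image column, scan from the bottom row (nearest the robot)
    upward and return the first row where the traversable-class probability
    falls below `thresh`. A simple stand-in for the paper's
    probability-distribution-based boundary extraction."""
    H, W = prob_traversable.shape
    rows = np.full(W, -1, dtype=int)
    for u in range(W):
        col = prob_traversable[::-1, u]            # bottom-to-top scan
        hit = col < thresh
        if hit.any():
            rows[u] = H - 1 - int(np.argmax(hit))  # back to image row index
    return rows

def pixels_to_ground(uv, H_ipm):
    """Map pixel coordinates (u, v) to ground-plane (x, y) with a 3x3
    bird's-eye-view homography (assumed known from camera calibration)."""
    pts = np.hstack([uv, np.ones((len(uv), 1))])   # homogeneous coordinates
    g = pts @ H_ipm.T
    return g[:, :2] / g[:, 2:3]                    # perspective divide

# Synthetic example: obstacle occupies the top 5 rows of a 10x4 map.
prob = np.full((10, 4), 0.9)
prob[:5, :] = 0.1
rows = boundary_rows(prob)                  # boundary at row 4 in every column
uv = np.stack([np.arange(4), rows], axis=1).astype(float)
xy = pixels_to_ground(uv, np.eye(3))        # identity homography for the demo
```

In practice each of the surround view cameras would have its own calibrated homography, and the per-camera boundary points would be merged into one 2D obstacle map around the robot.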

Original language: English
Pages (from-to): 633-641
Number of pages: 9
Journal: Intelligent Service Robotics
Volume: 16
Issue number: 5
DOIs
State: Published - Nov 2023

Keywords

  • Computer vision
  • Deep learning
  • Localization
  • Mobile robot
  • Semantic segmentation

