TY - JOUR
T1 - Distance estimation with semantic segmentation and edge detection of surround view images
AU - Jung, Junwoo
AU - Lee, Hyunjin
AU - Lee, Chibum
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
PY - 2023/11
Y1 - 2023/11
AB - This paper presents a method for obtaining 2D distance data from a robot’s surround view camera system. By converting semantic segmentation images into a bird’s eye view, the location of the traversable region can be determined. However, because this depends entirely on the performance of the segmentation, noise may appear at the boundary between the traversable region and obstacles for untrained objects and environments. Therefore, instead of assigning a class to each pixel through semantic segmentation, the probability distribution over classes is obtained, which yields a probability distribution for the boundary between the traversable region and obstacles. Using this distribution, the boundary can be extracted from the edges detected in each image. By transforming it into accurate x, y coordinates through the bird’s eye view, the position of obstacles can be obtained from each image. We compared the results with LiDAR measurements, observed an error of about 5%, and confirmed that the localization algorithm can obtain the global pose of the robot.
KW - Computer vision
KW - Deep learning
KW - Localization
KW - Mobile robot
KW - Semantic segmentation
UR - http://www.scopus.com/inward/record.url?scp=85172984139&partnerID=8YFLogxK
DO - 10.1007/s11370-023-00486-2
M3 - Article
AN - SCOPUS:85172984139
SN - 1861-2776
VL - 16
SP - 633
EP - 641
JO - Intelligent Service Robotics
JF - Intelligent Service Robotics
IS - 5
ER -