Abstract
In autonomous driving, robust detection of objects in the driving environment is essential. Existing object recognition algorithms for the driving environment analyze the class and location of objects from various data formats. In this study, we use nuScenes, a driving environment dataset, to evaluate and verify detection performance. We evaluate a method that uses only point cloud data and propose a method that compensates for the weaknesses of image-only and point-cloud-only approaches by adding image data to CenterFormer, a model that combines CenterPoint and a Transformer. The results show that the proposed method performs better than the method using only point cloud data.
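The abstract describes adding image data to a LiDAR-based detector (CenterFormer). As a rough illustration only, the sketch below shows one common camera-LiDAR fusion strategy, PointPainting-style point decoration, in which LiDAR points are projected into the camera image and per-pixel image features are appended to each visible point before the points are passed to the LiDAR detector. The function names, matrices, and the choice of projection-based early fusion are assumptions for illustration; the paper's actual fusion mechanism is not detailed in this abstract.

```python
import numpy as np

def project_points_to_image(points_xyz, lidar_to_cam, cam_intrinsic, image_hw):
    """Project LiDAR points (N, 3) into pixel coordinates.

    points_xyz   : (N, 3) points in the LiDAR frame.
    lidar_to_cam : (4, 4) extrinsic transform from LiDAR to camera frame.
    cam_intrinsic: (3, 3) camera intrinsic matrix.
    image_hw     : (height, width) of the image.
    Returns pixel coordinates (N, 2) and a boolean mask of points that
    land inside the image with positive depth.
    """
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])        # homogeneous (N, 4)
    pts_cam = (lidar_to_cam @ pts_h.T).T[:, :3]             # camera frame (N, 3)
    depth = pts_cam[:, 2]
    uv = (cam_intrinsic @ pts_cam.T).T                      # (N, 3)
    uv = uv[:, :2] / np.clip(depth[:, None], 1e-6, None)    # perspective divide
    h, w = image_hw
    in_view = (depth > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
              & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv, in_view

def decorate_points(points_xyz, image_features, lidar_to_cam, cam_intrinsic):
    """Append per-pixel image features to each visible LiDAR point
    (PointPainting-style decoration). image_features has shape (H, W, C)."""
    h, w, c = image_features.shape
    uv, in_view = project_points_to_image(points_xyz, lidar_to_cam,
                                          cam_intrinsic, (h, w))
    painted = np.zeros((points_xyz.shape[0], c), dtype=image_features.dtype)
    cols = np.clip(uv[in_view, 0].astype(int), 0, w - 1)
    rows = np.clip(uv[in_view, 1].astype(int), 0, h - 1)
    painted[in_view] = image_features[rows, cols]
    # The decorated (N, 3 + C) points would then be fed to the LiDAR
    # detector in place of the raw (N, 3) point cloud.
    return np.hstack([points_xyz, painted])
```

In such an early-fusion scheme, the detector consumes the decorated points directly, so the LiDAR backbone is largely unchanged; this is only one of several ways image and point cloud data can be combined.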
Original language | English |
---|---|
Pages (from-to) | 804-809 |
Number of pages | 6 |
Journal | Journal of Institute of Control, Robotics and Systems |
Volume | 30 |
Issue number | 8 |
DOIs | |
State | Published - 2024 |
Keywords
- Transformer
- deep learning
- driving environment
- object detection