TY - GEN
T1 - Deep Reinforcement Learning-Based Path-Tracking for Unmanned Vehicle Navigation Enhancement
AU - Yang, Seung Geon
AU - Cho, Eun Ho
AU - Kim, Jeongyun
AU - Lim, Seung Chan
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Despite the growing interest in autonomous vehicles, practical challenges remain, particularly in achieving high tracking performance. In this study, we address this issue by developing a path-tracking algorithm based on deep reinforcement learning. To this end, we elaborately design the Markov decision process and implement the deep Q-network (DQN) training algorithm. Simulation results validate the superior convergence speed, accuracy, and stability of the proposed DQN-based tracking algorithm in comparison with the conventional approach.
AB - Despite the growing interest in autonomous vehicles, practical challenges remain, particularly in achieving high tracking performance. In this study, we address this issue by developing a path-tracking algorithm based on deep reinforcement learning. To this end, we elaborately design the Markov decision process and implement the deep Q-network (DQN) training algorithm. Simulation results validate the superior convergence speed, accuracy, and stability of the proposed DQN-based tracking algorithm in comparison with the conventional approach.
KW - Autonomous vehicle
KW - deep reinforcement learning
KW - Markov decision process
KW - path tracking
UR - https://www.scopus.com/pages/publications/85189239532
U2 - 10.1109/ICEIC61013.2024.10457143
DO - 10.1109/ICEIC61013.2024.10457143
M3 - Conference contribution
AN - SCOPUS:85189239532
T3 - 2024 International Conference on Electronics, Information, and Communication, ICEIC 2024
BT - 2024 International Conference on Electronics, Information, and Communication, ICEIC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 International Conference on Electronics, Information, and Communication, ICEIC 2024
Y2 - 28 January 2024 through 31 January 2024
ER -