Reinforcement learning-based dynamic obstacle avoidance and integration of path planning

Jaewan Choi, Geonhee Lee, Chibum Lee

Research output: Contribution to journal › Article › peer-review

58 Scopus citations

Abstract

Deep reinforcement learning has the advantage of being able to encode fairly complex behaviors by collecting and learning from empirical information. In this study, we propose a framework for reinforcement learning-based decentralized collision avoidance in which each agent makes its decisions independently, without communicating with others. In environments containing various kinds of dynamic obstacles with irregular movements, mobile robot agents learn how to avoid obstacles and reach a target point efficiently. Moreover, a path planner is integrated with the reinforcement learning-based obstacle avoidance to address the problem of failing to find a path in certain situations, thereby improving path efficiency. The robots were trained on the obstacle avoidance policy with the soft actor-critic algorithm in environments that account for dynamic characteristics. The trained policy was implemented in the Robot Operating System (ROS) and tested in virtual and real environments on a differential-drive wheeled robot to demonstrate the effectiveness of the proposed method. Videos are available at https://youtu.be/xxzoh1XbAl0.
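The abstract describes coupling a global path planner with a locally acting, RL-trained avoidance policy on a differential-drive robot. The following is a minimal Python sketch of that coupling under stated assumptions: the names PretrainedPolicy, plan_path, and navigate are illustrative and not the authors' code, the SAC-trained policy is replaced by a goal-seeking stub, obstacles and the ROS interface are omitted, and the planner is a straight-line waypoint generator.

import numpy as np

# NOTE: All names below are hypothetical stand-ins for illustration only;
# they do not reproduce the paper's implementation.

class PretrainedPolicy:
    """Placeholder for a policy trained with soft actor-critic (SAC)."""
    def act(self, observation: np.ndarray) -> np.ndarray:
        # A real policy would map (lidar scan, goal direction, velocity)
        # to velocity commands. Here: simply steer toward the goal direction.
        goal_dir = observation[:2]
        heading_error = np.arctan2(goal_dir[1], goal_dir[0])
        return np.array([0.3, np.clip(heading_error, -1.0, 1.0)])  # (v, w)

def plan_path(start, goal, n_waypoints=5):
    """Stand-in global planner: straight-line waypoints from start to goal."""
    return [start + (goal - start) * t
            for t in np.linspace(0.0, 1.0, n_waypoints + 1)[1:]]

def navigate(start, goal, policy, waypoint_tol=0.2, max_steps=500, dt=0.1):
    """Follow planner waypoints; the RL policy outputs velocities each step."""
    pose = np.array([*start, 0.0])  # x, y, heading
    waypoints = plan_path(np.asarray(start, float), np.asarray(goal, float))
    for wp in waypoints:
        for _ in range(max_steps):
            # Observation: current waypoint expressed in the robot frame.
            dx, dy = wp - pose[:2]
            if np.hypot(dx, dy) < waypoint_tol:
                break
            c, s = np.cos(-pose[2]), np.sin(-pose[2])
            obs = np.array([c * dx - s * dy, s * dx + c * dy])
            v, w = policy.act(obs)
            # Differential-drive (unicycle) kinematics update.
            pose += dt * np.array([v * np.cos(pose[2]),
                                   v * np.sin(pose[2]), w])
    return pose

if __name__ == "__main__":
    final_pose = navigate(start=(0.0, 0.0), goal=(3.0, 2.0),
                          policy=PretrainedPolicy())
    print("final pose:", np.round(final_pose, 2))

In the paper's setting, the stub policy would be replaced by the SAC-trained network and the loop would run inside a ROS node publishing velocity commands; the sketch only illustrates how planner waypoints can serve as successive goals for a local avoidance policy.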

Original language: English
Pages (from-to): 663-677
Number of pages: 15
Journal: Intelligent Service Robotics
Volume: 14
Issue number: 5
DOIs
State: Published - Nov 2021

Keywords

  • Collision avoidance
  • Deep learning
  • Mobile robot
  • Navigation
  • Reinforcement learning
