Adaptive Discount Factor for Deep Reinforcement Learning in Continuing Tasks with Uncertainty

Myeong Seop Kim, Jung Su Kim, Myoung Su Choi, Jae Han Park

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

Reinforcement learning (RL) trains an agent by maximizing the sum of discounted rewards. Since the discount factor has a critical effect on the learning performance of the RL agent, choosing it properly is important. When uncertainties are involved in training, the learning performance achievable with a constant discount factor can be limited. To obtain acceptable learning performance consistently, this paper proposes an adaptive rule for the discount factor based on the advantage function. Additionally, how to use the advantage function in both on-policy and off-policy algorithms is presented. To demonstrate the performance of the proposed adaptive rule, it is applied to PPO (Proximal Policy Optimization) for Tetris to validate the on-policy case, and to SAC (Soft Actor-Critic) for the motion planning of a robot manipulator to validate the off-policy case. In both cases, the proposed method achieves performance better than or comparable to that obtained with the best constant discount factors found by exhaustive search. Hence, the proposed adaptive rule automatically finds a discount factor that leads to comparable training performance and can be applied to representative deep reinforcement learning problems.
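
The abstract does not state the paper's actual update rule, so the following is only an illustrative sketch of how a discount factor could be adapted from batch advantage estimates, as the abstract describes in general terms. The tanh mapping, the bounds gamma_min and gamma_max, the step size, and the function name adapt_discount_factor are assumptions made for illustration, not the authors' method.

    import numpy as np

    def adapt_discount_factor(gamma, advantages, gamma_min=0.90, gamma_max=0.999, step=0.01):
        """Update gamma from a batch of advantage estimates (hypothetical rule)."""
        mean_abs_adv = float(np.mean(np.abs(advantages)))
        # Large advantage magnitudes suggest the current value estimates are unreliable,
        # so pull gamma toward gamma_min; small magnitudes let gamma drift toward gamma_max.
        target = gamma_max - (gamma_max - gamma_min) * np.tanh(mean_abs_adv)
        # Move gamma a small step toward the target and keep it inside the allowed range.
        return float(np.clip(gamma + step * (target - gamma), gamma_min, gamma_max))

    # Example: update gamma once per training batch.
    gamma = 0.99
    batch_advantages = np.random.randn(256) * 0.5  # placeholder advantage estimates
    gamma = adapt_discount_factor(gamma, batch_advantages)
    print(f"adapted gamma: {gamma:.4f}")

In an on-policy setting such as PPO, the advantage estimates are already computed for the policy update and could be reused here; in an off-policy setting such as SAC, an advantage-like quantity would have to be derived from the critic, which is one of the points the paper addresses.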

Original language: English
Article number: 7266
Journal: Sensors
Volume: 22
Issue number: 19
DOIs
State: Published - Oct 2022

Keywords

  • Tetris
  • discount factor
  • path planning
  • reinforcement learning
  • uncertainty
