CNN-based Apprenticeship Learning for Inverse Reinforcement Learning

Ye Rin Kim, Hyun Duck Choi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper, we propose a CNN-based inverse reinforcement learning method that optimizes a reward function modeled as a linear combination of features. The proposed method efficiently extracts features from expert demonstrations using a CNN-based network and effectively estimates the reward function within a few iterations. We call the proposed method CNN-based apprenticeship learning for inverse reinforcement learning. The policy estimated by this method guarantees performance similar to or better than that of the expert. Through a Super Mario simulation, we demonstrate that the proposed CNN-based apprenticeship learning outperforms traditional imitation learning and reinforcement learning methods.
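The abstract describes the reward as a linear combination of CNN-extracted features, which matches the standard apprenticeship-learning-via-IRL setup. The sketch below is an illustrative reconstruction of that standard formulation (a CNN feature map φ(s), a linear reward R(s) = wᵀφ(s), and one step of the Abbeel-Ng projection method), not the authors' implementation; the network architecture, feature dimension, and all variable names are assumptions.

```python
# Minimal sketch (not the authors' code): linear reward over CNN features,
# plus one iteration of the apprenticeship-learning projection step.
# Architecture, feature dimension, and names are illustrative assumptions.
import torch
import torch.nn as nn

FEATURE_DIM = 16

class FeatureCNN(nn.Module):
    """Maps an image-like state (e.g. a game frame) to a feature vector phi(s)."""
    def __init__(self, in_channels=3, feature_dim=FEATURE_DIM):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feature_dim)

    def forward(self, state):
        z = self.conv(state).flatten(1)
        return torch.sigmoid(self.fc(z))  # bounded features, as in standard ALIRL

def reward(w, phi):
    """Linear reward model R(s) = w^T phi(s)."""
    return phi @ w

def projection_step(mu_expert, mu_policy, mu_bar_prev=None):
    """One iteration of the ALIRL projection method: returns the new reward
    weights w = mu_E - mu_bar and the margin t = ||w||."""
    if mu_bar_prev is None:
        mu_bar = mu_policy
    else:
        d = mu_policy - mu_bar_prev
        step = torch.dot(d, mu_expert - mu_bar_prev) / torch.dot(d, d)
        mu_bar = mu_bar_prev + step * d
    w = mu_expert - mu_bar
    return w, torch.norm(w), mu_bar

if __name__ == "__main__":
    cnn = FeatureCNN()
    frame = torch.rand(1, 3, 84, 84)        # one RGB game frame (placeholder)
    phi = cnn(frame).squeeze(0)             # phi(s), shape (FEATURE_DIM,)
    mu_expert = torch.rand(FEATURE_DIM)     # placeholder feature expectations
    mu_policy = torch.rand(FEATURE_DIM)
    w, t, _ = projection_step(mu_expert, mu_policy)
    print("reward at sampled state:", reward(w, phi).item(), "margin:", t.item())
```

In the full algorithm, the weights returned by the projection step define a reward used to train a new policy (e.g. a DQN, per the keywords below), whose feature expectations feed the next iteration until the margin falls below a threshold.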

Original language: English
Title of host publication: 2024 24th International Conference on Control, Automation and Systems, ICCAS 2024
Publisher: IEEE Computer Society
Pages: 73-78
Number of pages: 6
ISBN (Electronic): 9788993215380
DOIs
State: Published - 2024
Event: 24th International Conference on Control, Automation and Systems, ICCAS 2024 - Jeju, Korea, Republic of
Duration: 29 Oct 2024 - 1 Nov 2024

Publication series

Name: International Conference on Control, Automation and Systems
ISSN (Print): 1598-7833

Conference

Conference: 24th International Conference on Control, Automation and Systems, ICCAS 2024
Country/Territory: Korea, Republic of
City: Jeju
Period: 29/10/24 - 1/11/24

Keywords

  • Apprenticeship Learning via Inverse Reinforcement Learning (ALIRL)
  • Convolutional Neural Network (CNN)
  • Deep Q-Network (DQN)
