Implementation of a Game Agent Using Deep Q-Networks (심층 큐 신경망을 이용한 게임 에이전트 구현)

Translated title of the contribution: Deep Q-Network based Game Agents

Research output: Contribution to journal › Article › peer-review

Abstract

The video game Tetris is one of the most popular games, and it is well known that its rules can be modelled as an MDP (Markov Decision Process). This paper presents a DQN (Deep Q-Network) based game agent for Tetris. To this end, the state is defined as a captured image of the Tetris game board, and the reward is designed as a function of the lines cleared by the agent. The actions are left, right, rotate, drop, and a finite number of their combinations. In addition, PER (Prioritized Experience Replay) is employed to enhance learning performance. More than 500,000 episodes are used to train the network, and the game agent uses the trained network to make decisions. The performance of the developed algorithm is validated not only in simulation but also on a real Tetris robot agent built from a camera, two Arduinos, four servo motors, and artificial fingers produced by 3D printing.
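The PER component mentioned in the abstract can be illustrated with a short sketch. The class below is a minimal proportional prioritized replay buffer in the style of Schaul et al.; it is an assumption for illustration, not the authors' implementation, and the hyperparameter names (`alpha`, `beta`, `eps`) and the circular-buffer layout are choices made here, not taken from the paper.

```python
import random

class PrioritizedReplayBuffer:
    """Sketch of proportional prioritized experience replay (hypothetical; not the paper's code)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling (0 = uniform)
        self.data = []              # stored transitions (s, a, r, s_next, done)
        self.priorities = []        # one priority per stored transition
        self.pos = 0                # next write index (circular buffer)

    def add(self, transition):
        # New transitions get the current max priority so they are replayed at least once.
        max_p = max(self.priorities, default=1.0)
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(max_p)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        # Sampling probability is proportional to priority**alpha.
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        # Importance-sampling weights correct the bias from non-uniform sampling,
        # normalized by the max weight for stability.
        n = len(self.data)
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        return idxs, [self.data[i] for i in idxs], weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # After a learning step, priorities track the magnitude of the TD error.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + eps
```

In a DQN training loop, each learning step would sample a batch with `sample`, scale the loss by the returned weights, and then call `update_priorities` with the new TD errors for the sampled indices.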
Original language: Korean
Pages (from-to): 157-162
Number of pages: 6
Journal: 로봇학회 논문지 (The Journal of Korea Robotics Society)
Volume: 14
Issue number: 3
DOIs
State: Published - Aug 2019
