Abstract
The video game Tetris is one of the most popular games, and it is well known that its game rules can be modelled as an MDP (Markov Decision Process). This paper presents a DQN (Deep Q-Network) based game agent for Tetris. To this end, the state is defined as the captured image of the Tetris game board, and the reward is designed as a function of the number of lines cleared by the game agent. The action is defined as left, right, rotate, drop, and a finite number of their combinations. In addition, PER (Prioritized Experience Replay) is employed to enhance learning performance. More than 500,000 episodes are used to train the network. The game agent uses the trained network to make decisions. The performance of the developed algorithm is validated not only in simulation but also on a real Tetris robot agent, which consists of a camera, two Arduinos, four servo motors, and artificial fingers made by 3D printing.
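The abstract names PER (Prioritized Experience Replay) as the mechanism used to enhance learning. A minimal sketch of a proportional PER buffer is shown below, assuming the standard formulation (priority proportional to |TD error|^alpha, with importance-sampling correction weighted by beta); the transition format, class name, and the alpha/beta values are illustrative assumptions, not details from the paper.

```python
import random


class PrioritizedReplayBuffer:
    """Hedged sketch of proportional prioritized experience replay.

    Transitions are sampled with probability proportional to
    priority**alpha; importance-sampling weights correct the bias
    this introduces into the DQN update.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4):
        self.capacity = capacity
        self.alpha = alpha    # how strongly priorities skew sampling
        self.beta = beta      # strength of importance-sampling correction
        self.buffer = []      # stored transitions (s, a, r, s')
        self.priorities = []  # one priority per stored transition
        self.pos = 0          # next write position (circular buffer)

    def push(self, transition):
        # New transitions get the current maximum priority so they
        # are replayed at least once before being down-weighted.
        max_prio = max(self.priorities, default=1.0)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(max_prio)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sampling probability proportional to priority**alpha.
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.buffer)),
                              weights=probs, k=batch_size)
        # Importance-sampling weights, normalised by the max weight.
        n = len(self.buffer)
        weights = [(n * probs[i]) ** (-self.beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        batch = [self.buffer[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # Priority is the magnitude of the TD error plus a small
        # epsilon so no transition's probability collapses to zero.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + eps
```

In a DQN training loop, `sample` would supply minibatches (with the returned weights multiplying each sample's loss term) and `update_priorities` would be called with the fresh TD errors after each gradient step.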
| Translated title of the contribution | Deep Q-Network based Game Agents |
|---|---|
| Original language | Korean |
| Pages (from-to) | 157-162 |
| Number of pages | 6 |
| Journal | 로봇학회 논문지 (The Journal of Korea Robotics Society) |
| Volume | 14 |
| Issue number | 3 |
| DOIs | |
| State | Published - Aug 2019 |