Abstract
When people empathize, nonverbal cues such as facial expressions and gestures carry more weight than language. In human-robot interaction (HRI), it is therefore important for robots to recognize and understand human facial expressions, as this supports emotional communication and reduces the psychological distance between robots and humans. In this paper, we introduce a robot system that interacts with humans using facial expression recognition and motion generation models. It recognizes human emotions with POSTER++, which is based on the Vision Transformer (ViT), and generates motions with FLAME, a diffusion-based motion generation model. In our scenario, we first perceive the facial expression with POSTER++ and generate an appropriate reaction text with the text generation model Gemini AI. We then feed the generated reaction text into FLAME to obtain 3D motion data, from which arm joint angles are computed using forward kinematics (FK) and inverse kinematics (IK). Focusing on scenarios involving human interaction, we use only the head and arm motors of the humanoid robot HUMIC. Finally, we simulate the scenarios to verify that our system recognizes facial expressions and produces various reaction motions.
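The abstract describes a recognition-generation-actuation pipeline. The following is a minimal sketch of that flow, assuming hypothetical wrapper functions (`recognize_expression`, `generate_reaction_text`, `generate_motion`, `solve_arm_ik`) that stand in for POSTER++, Gemini AI, FLAME, and the HUMIC arm kinematics; none of these names or data structures come from the paper itself.

```python
# Sketch of the recognition -> text -> motion -> joint-angle pipeline.
# All four helpers are hypothetical placeholders for POSTER++, Gemini AI,
# FLAME, and the HUMIC arm FK/IK described in the abstract.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Motion3D:
    # Placeholder representation: a sequence of 3D end-effector targets
    # extracted from the generated motion.
    end_effector_path: List[Tuple[float, float, float]]


def recognize_expression(face_image) -> str:
    """Placeholder for the ViT-based POSTER++ classifier (e.g. returns 'happy')."""
    raise NotImplementedError


def generate_reaction_text(emotion: str) -> str:
    """Placeholder for the Gemini-based reaction-text generator."""
    raise NotImplementedError


def generate_motion(reaction_text: str) -> Motion3D:
    """Placeholder for the diffusion-based FLAME text-to-motion model."""
    raise NotImplementedError


def solve_arm_ik(target_xyz: Tuple[float, float, float]) -> List[float]:
    """Placeholder: map a 3D target to HUMIC arm joint angles via FK/IK."""
    raise NotImplementedError


def react_to_face(face_image) -> List[List[float]]:
    """Run one interaction cycle and return an arm joint-angle trajectory."""
    emotion = recognize_expression(face_image)    # 1. facial expression recognition
    reaction = generate_reaction_text(emotion)    # 2. reaction text generation
    motion = generate_motion(reaction)            # 3. 3D motion data
    # 4. convert each 3D target into joint angles for the robot's arm motors
    return [solve_arm_ik(p) for p in motion.end_effector_path]
```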
| Translated title of the contribution | A Robot System that Interacts with Human using Facial Expression Recognition and Motion Generation Model |
|---|---|
| Original language | Korean |
| Pages (from-to) | 335-346 |
| Number of pages | 12 |
| Journal | 로봇학회 논문지 (The Journal of Korea Robotics Society) |
| Volume | 19 |
| Issue number | 4 |
| DOIs | |
| State | Published - 2024 |