Abstract
As the distribution of 3D content such as augmented reality and virtual reality grows, the importance of real-time computer animation technology is increasing. However, the computer animation process still relies largely on manual work or marker-based motion capture, which requires experienced professionals and a great deal of time to obtain realistic results. To address these problems, animation production systems and algorithms based on deep learning models and sensors have recently emerged. In this paper, we therefore study four methods of implementing natural human movement in an animation production system based on a deep learning model and a Kinect camera, where each method is chosen according to its environmental characteristics and accuracy. The first method uses only a Kinect camera; the second uses a Kinect camera with a calibration algorithm; the third uses only a deep learning model; and the fourth uses a deep learning model together with the Kinect. Experiments showed that the fourth method, which uses the deep learning model and the Kinect simultaneously, produced the best results among the four methods.
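To make the fourth method concrete, the following is a minimal sketch, not the authors' published implementation, of one way to combine the two joint sources it relies on: blending per-joint 3D positions from a Kinect skeleton with positions predicted by a deep-learning pose estimator. The joint list, the 0.0–1.0 confidence scores, and the confidence-weighted blending rule are illustrative assumptions.

```python
import numpy as np

# Assumed subset of skeleton joints; the paper's actual joint set may differ.
JOINTS = ["head", "neck", "shoulder_l", "shoulder_r", "elbow_l", "elbow_r"]

def fuse_skeletons(kinect_xyz: np.ndarray,
                   model_xyz: np.ndarray,
                   kinect_conf: np.ndarray,
                   model_conf: np.ndarray) -> np.ndarray:
    """Confidence-weighted blend of two (J, 3) joint-position arrays."""
    w_k = kinect_conf[:, None]
    w_m = model_conf[:, None]
    total = np.clip(w_k + w_m, 1e-6, None)  # avoid division by zero
    return (w_k * kinect_xyz + w_m * model_xyz) / total

# Toy usage with random data standing in for real Kinect/model output.
rng = np.random.default_rng(0)
kinect_xyz = rng.normal(size=(len(JOINTS), 3))
model_xyz = kinect_xyz + rng.normal(scale=0.02, size=(len(JOINTS), 3))
fused = fuse_skeletons(kinect_xyz, model_xyz,
                       kinect_conf=rng.uniform(0.5, 1.0, len(JOINTS)),
                       model_conf=rng.uniform(0.5, 1.0, len(JOINTS)))
print(fused.shape)  # (6, 3)
```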
| Translated title of the contribution | Real-Time Joint Animation Production and Expression System using Deep Learning Model and Kinect Camera |
|---|---|
| Original language | Korean |
| Pages (from-to) | 269-282 |
| Number of pages | 13 |
| Journal | 방송공학회 논문지 |
| Volume | 26 |
| Issue number | 3 |
| DOIs | |
| State | Published - May 2021 |