Improving Avatar-Video Layout and Interaction Design for Video Meeting Metaverses

Seung Yeon Lee, Huhn Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Background: Due to the COVID-19 pandemic, demand for video meeting metaverse platforms such as Gather.Town, which support non-face-to-face communication using avatars and video screens, has increased rapidly. However, existing video meeting metaverses have a problem: it is difficult to visually map an avatar displayed on the screen to its corresponding video screen. Users therefore struggle to communicate with others while moving their avatar through the metaverse space, because they must mentally combine the avatar's motions and emoticons with other people's facial expressions and hand gestures on the video screens.

Methods: Three experiments were conducted to derive an avatar-video layout and interaction design that solves the mapping problem between avatars and video screens. Experiment I proposed various layout designs intended to facilitate avatar-video mapping and compared them in terms of ease of mapping, visual complexity, and subjective satisfaction. Building on the optimal layout derived from Experiment I, Experiment II verified the effect of an interaction that changes the position of the video according to the movement of the avatar. Experiment III investigated how adding effects to the video only while the avatar moves, such as reduced opacity or size, affects the user experience.

Results: In Experiment I, the floating layout, which places the video above the avatar's head, and the four-side layout, which places the videos on the four sides of the screen corresponding to the avatar's position, showed the highest satisfaction, with a 100% avatar-video matching answer rate. However, both layouts had higher visual complexity than the existing layout, in which participants' videos were placed in a line at the top of the screen. In Experiment II, the interaction in which the video stays above the avatar's head while the avatar is moving and moves to the top or four sides of the screen when the avatar stops performed better, in terms of matching ease and satisfaction, than the floating fixed layout in which the video is always fixed above the avatar's head. Experiment III showed that the user experience was further improved by reducing the size and opacity of the video above the avatar's head only while the avatar moved in the floating fixed layout.

Conclusions: A layout and interaction design in which the video screen is located above the avatar's head and its size and opacity are reduced only while the avatar moves slightly increases visual complexity but solves the avatar-video mapping problem and increases manipulation convenience and subjective satisfaction. If this layout and interaction are applied to actual metaverse platforms, they are expected to improve the user experience of communicating with others by improving the mapping between avatars and videos.
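To make the concluding design rule concrete, here is a minimal, hypothetical sketch (not from the paper) of the interaction logic: a video tile floating above the avatar's head that shrinks and fades only while the avatar is moving. The specific scale and opacity factors, names, and TypeScript form are illustrative assumptions; the paper does not prescribe an implementation.

```typescript
// Hypothetical illustration: map the avatar's movement state to the style of the
// video tile floating above its head. The tile is reduced in size and opacity
// only while the avatar moves, and restored when the avatar stops.

interface VideoTileStyle {
  scale: number;   // 1.0 = full size
  opacity: number; // 1.0 = fully opaque
}

// Assumed example values; the paper does not specify exact factors.
const MOVING_STYLE: VideoTileStyle = { scale: 0.6, opacity: 0.5 };
const IDLE_STYLE: VideoTileStyle = { scale: 1.0, opacity: 1.0 };

function floatingVideoStyle(avatarIsMoving: boolean): VideoTileStyle {
  return avatarIsMoving ? MOVING_STYLE : IDLE_STYLE;
}

// Example usage: update the tile style whenever the avatar's movement state changes.
console.log(floatingVideoStyle(true));  // { scale: 0.6, opacity: 0.5 }
console.log(floatingVideoStyle(false)); // { scale: 1, opacity: 1 }
```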

Original language: English
Pages (from-to): 91-109
Number of pages: 19
Journal: Archives of Design Research
Volume: 36
Issue number: 3
DOIs
State: Published - 2023

Keywords

  • Gather.Town
  • Location Compatibility
  • Metaverse
  • Video Conference
