Multitask Autoencoder-Based Two-Phase Framework Using Multilevel Feature Fusion for EEG Emotion Recognition

Chang Gyun Jin, Chan Woo Shin, Hanul Kim, Seong Eun Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Emotion recognition has emerged as an active research area, gaining relevance from advancements in deep learning. This study focuses on using electroencephalogram (EEG) data for emotion recognition and addresses the challenge of subject-dependent variability in EEG-based emotion recognition by proposing a novel architecture that employs multilevel feature fusion and a multitask autoencoder-based two-phase framework. The first phase generates class-specific data, while the second phase uses these data for model training. The proposed model was validated on the SEED dataset and demonstrated state-of-the-art performance with an accuracy of 99.4% in a subject-independent setting.
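The paper itself does not publish implementation details in this record, but the abstract's description of a multitask autoencoder with multilevel feature fusion can be illustrated with a minimal sketch. The code below is a hypothetical PyTorch example: the layer sizes, the 310-dimensional input (assuming SEED differential-entropy features, 62 channels × 5 bands), the loss weighting `alpha`, and the class count are all assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch of a multitask autoencoder with multilevel feature fusion.
# Dimensions, loss weights, and the Phase 1 training step are illustrative only.
import torch
import torch.nn as nn


class MultitaskAE(nn.Module):
    def __init__(self, in_dim=310, latent_dim=64, n_classes=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Linear(128, latent_dim), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))
        # Multilevel feature fusion: classify from the concatenation of
        # shallow and deep encoder features.
        self.cls = nn.Linear(128 + latent_dim, n_classes)

    def forward(self, x):
        h1 = self.enc1(x)                              # shallow features
        z = self.enc2(h1)                              # deep (latent) features
        recon = self.dec(z)                            # reconstruction task
        logits = self.cls(torch.cat([h1, z], dim=1))   # emotion classification task
        return recon, logits, z


def phase1_step(model, x, y, opt, alpha=1.0):
    """Phase 1 (as described in the abstract): jointly optimise reconstruction
    and emotion classification so the latent space becomes class-aware."""
    recon, logits, _ = model(x)
    loss = (nn.functional.mse_loss(recon, x)
            + alpha * nn.functional.cross_entropy(logits, y))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Usage with random stand-ins for EEG feature vectors and emotion labels.
model = MultitaskAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 310)
y = torch.randint(0, 3, (32,))
phase1_step(model, x, y, opt)
# Phase 2 (not shown): decode class-conditioned latents to build class-specific
# training data for the final emotion classifier, per the two-phase framework.
```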

Original language: English
Title of host publication: 2024 International Conference on Electronics, Information, and Communication, ICEIC 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350371888
DOIs
State: Published - 2024
Event: 2024 International Conference on Electronics, Information, and Communication, ICEIC 2024 - Taipei, Taiwan, Province of China
Duration: 28 Jan 2024 - 31 Jan 2024

Publication series

Name: 2024 International Conference on Electronics, Information, and Communication, ICEIC 2024

Conference

Conference: 2024 International Conference on Electronics, Information, and Communication, ICEIC 2024
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 28/01/24 - 31/01/24

Keywords

  • component
  • formatting
  • insert
  • style
  • styling
