EEG-based Mental Workload Estimation using Encoder-Decoder Networks with Multilevel Feature Fusion

Chang Gyun Jin, Seong Eun Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper, we propose a model that combines a multilevel feature fusion algorithm with an encoder-decoder structure to estimate mental workload from electroencephalogram (EEG) signals. The encoder-decoder structure reduces additive noise and inter-subject variation in the EEG data. The encoder combines a 3D convolutional neural network (3DCNN) with multilevel feature fusion, extracting unified key features by fusing low-level and high-level features. The decoder consists of simple 3DCNN layers that reconstruct the input EEG image from the latent vector. By mitigating feature variations, the proposed model achieves higher performance. We evaluate our network on EEG data recorded during the Sternberg task and obtain 91.6% accuracy in mental workload estimation, outperforming the conventional algorithm.
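To make the described architecture concrete, below is a minimal PyTorch sketch of an encoder-decoder with multilevel feature fusion and a workload classification head. It assumes EEG epochs have been converted to 3D spatio-temporal "images" (here 1 x 16 x 32 x 32); all channel counts, kernel sizes, the global-pool-and-concatenate fusion scheme, and the class names are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    """Encoder: stacked 3D convolutions whose low-, mid-, and high-level
    feature maps are pooled and concatenated into one latent vector
    (a sketch of the multilevel feature fusion idea)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.block1 = nn.Sequential(  # low-level features
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(  # mid-level features
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU())
        self.block3 = nn.Sequential(  # high-level features
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool3d(1)              # collapse each level to a vector
        self.fuse = nn.Linear(16 + 32 + 64, latent_dim)  # fuse all levels

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        # Concatenate pooled low-, mid-, and high-level features, then project.
        fused = torch.cat([self.pool(f).flatten(1) for f in (f1, f2, f3)], dim=1)
        return self.fuse(fused)

class Decoder(nn.Module):
    """Decoder: simple transposed 3D convolutions that reconstruct the
    input EEG image from the latent vector (denoising objective)."""
    def __init__(self, latent_dim=128, shape=(4, 8, 8)):
        super().__init__()
        self.shape = shape
        self.fc = nn.Linear(latent_dim, 64 * shape[0] * shape[1] * shape[2])
        self.deconv = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1))

    def forward(self, z):
        h = self.fc(z).view(-1, 64, *self.shape)
        return self.deconv(h)

class WorkloadNet(nn.Module):
    """Encoder-decoder plus a classification head on the latent vector."""
    def __init__(self, latent_dim=128, n_classes=2):
        super().__init__()
        self.encoder = FusionEncoder(latent_dim)
        self.decoder = Decoder(latent_dim)
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

# Example: a batch of 8 EEG "images" (1 channel, 16 frames, 32x32 maps).
x = torch.randn(8, 1, 16, 32, 32)
recon, logits = WorkloadNet()(x)
print(recon.shape, logits.shape)  # [8, 1, 16, 32, 32] and [8, 2]
```

In a setup like this, training would typically combine a reconstruction loss on `recon` (noise and subject-variation reduction) with a cross-entropy loss on `logits` (workload level); the exact loss weighting is not specified in the abstract.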

Original language: English
Title of host publication: 2022 IEEE International Conference on Consumer Electronics-Asia, ICCE-Asia 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665464345
DOIs
State: Published - 2022
Event: 2022 IEEE International Conference on Consumer Electronics-Asia, ICCE-Asia 2022 - Yeosu, Korea, Republic of
Duration: 26 Oct 2022 - 28 Oct 2022

Publication series

Name: 2022 IEEE International Conference on Consumer Electronics-Asia, ICCE-Asia 2022

Conference

Conference: 2022 IEEE International Conference on Consumer Electronics-Asia, ICCE-Asia 2022
Country/Territory: Korea, Republic of
City: Yeosu
Period: 26/10/22 - 28/10/22

Keywords

  • CNN
  • EEG
  • Encoder-Decoder
  • Multilevel feature fusion
  • Workload Estimation
