Changgyun Jin, Chanwoo Shin, Hanul Kim, Seong-Eun Kim
Title: Multitask Autoencoder-Based Two-Phase Framework Using Multilevel Feature Fusion for EEG Emotion Recognition
DOI: 10.1109/ICEIC61013.2024.10457197
Published in: 2024 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1-3
Publication date: 2024-01-28
Abstract
Emotion recognition has emerged as an active research area, gaining relevance from advancements in deep learning. This study focuses on using electroencephalogram (EEG) data for emotion recognition and addresses the challenge of subject-dependent variability in EEG-based emotion recognition by proposing a novel architecture that employs multilevel feature fusion and a multitask autoencoder-based two-phase framework. The first phase generates class-specific data, while the second phase uses these data for model training. The proposed model was validated on the SEED dataset and demonstrated state-of-the-art performance with an accuracy of 99.4% in a subject-independent setting.
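The two-phase idea described above can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration only: the paper's actual network (including the multilevel feature fusion module), layer sizes, and training procedure are not specified in the abstract. The sketch pairs a toy encoder with two task heads (reconstruction and emotion classification), then groups reconstructions by label to mimic phase-1 generation of class-specific data for phase-2 training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 62-channel EEG features (as in SEED) and
# 3 emotion classes (negative / neutral / positive).
n_channels, n_hidden, n_classes = 62, 16, 3

# Untrained toy weights for a one-hidden-layer multitask autoencoder:
# a shared encoder feeding a decoder head and a classification head.
W_enc = rng.normal(scale=0.1, size=(n_channels, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_channels))
W_cls = rng.normal(scale=0.1, size=(n_hidden, n_classes))

def encode(x):
    # Shared latent representation used by both task heads.
    return np.tanh(x @ W_enc)

def multitask_forward(x):
    z = encode(x)
    recon = z @ W_dec    # reconstruction task
    logits = z @ W_cls   # emotion-classification task
    return recon, logits

def generate_class_specific(x, labels):
    """Phase 1 (sketch): group reconstructions by emotion label,
    yielding class-specific data that phase 2 would train on."""
    recon, _ = multitask_forward(x)
    return {c: recon[labels == c] for c in np.unique(labels)}

# Ten fake EEG feature vectors with random emotion labels.
x = rng.normal(size=(10, n_channels))
labels = rng.integers(0, n_classes, size=10)
per_class = generate_class_specific(x, labels)
```

In a real implementation the encoder, decoder, and classifier would be trained jointly on a weighted sum of reconstruction and classification losses before the class-specific data are generated; this sketch omits training entirely and only shows the data flow.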