SeeNet: A Soft Emotion Expert and Data Augmentation Method to Enhance Speech Emotion Recognition
Qifei Li; Yingming Gao; Yuhua Wen; Ziping Zhao; Ya Li; Björn W. Schuller
IEEE Transactions on Affective Computing, vol. 16, no. 3, pp. 2142-2156
DOI: 10.1109/TAFFC.2025.3555406
Published: 2025-03-28
URL: https://ieeexplore.ieee.org/document/10943183/
Abstract
Speech emotion recognition (SER) systems are designed to enable machines to recognize emotional states in human speech during human-computer interaction, enhancing the interactive experience. While considerable progress has been achieved in this field recently, SER systems still face challenges in performance and robustness, stemming primarily from limited labeled data. To this end, we propose a novel multitask learning framework that learns a distinctive and robust emotional representation through our "Soft Emotion Expert Network (SeeNet)". SeeNet consists of three components: a pretrained model, an auxiliary-task soft emotion expert (SEE) module, and an energy-based mixup (EBM) data augmentation module. The pretrained model and the EBM module are employed to mitigate the challenges arising from limited labeled data, thereby improving model performance and bolstering robustness. The SEE module, as an auxiliary task, is designed to assist the main SER task by sharpening the distinction between samples that exhibit high similarity across categories, further improving the performance and robustness of the system. Comprehensive experiments across three different settings and multiple datasets evaluate both the performance and the robustness of the proposed method. The experimental results demonstrate that SeeNet surpasses state-of-the-art (SOTA) methods in both performance and robustness.
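To give a sense of what a mixup-style augmentation for speech might look like, the sketch below mixes two waveforms and re-weights the label interpolation by each source's relative signal energy. This is a minimal, hypothetical illustration assuming standard mixup as the base; it is not the paper's exact EBM formulation, and `energy_based_mixup` is an illustrative name, not an API from the paper.

```python
import numpy as np

def energy_based_mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup of two waveforms (x1, x2) with soft labels (y1, y2).

    The label weight is adjusted by each signal's energy contribution to
    the mixture -- an illustrative variant, not SeeNet's exact EBM module.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient ~ Beta(alpha, alpha)
    n = min(len(x1), len(x2))             # truncate to the shorter signal
    x = lam * x1[:n] + (1 - lam) * x2[:n]  # mix the raw signals
    # Re-weight the label interpolation by relative signal energy, so the
    # louder (higher-energy) source contributes more to the soft label.
    e1, e2 = np.sum(x1[:n] ** 2), np.sum(x2[:n] ** 2)
    w = (lam * e1) / (lam * e1 + (1 - lam) * e2 + 1e-8)
    y = w * y1 + (1 - w) * y2
    return x, y
```

Because the label weights `w` and `1 - w` always sum to one, the mixed soft label remains a valid probability distribution when `y1` and `y2` are one-hot emotion labels.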
Journal overview:
The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also covers how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes, and it explores the design, implementation, and evaluation of systems that make affect a central consideration in their usability. The journal also welcomes surveys of existing work that provide new perspectives on the historical and future directions of this field.