SeeNet: A Soft Emotion Expert and Data Augmentation Method to Enhance Speech Emotion Recognition

IF 9.8 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | IEEE Transactions on Affective Computing | Pub Date: 2025-03-28 | DOI: 10.1109/TAFFC.2025.3555406
Qifei Li;Yingming Gao;Yuhua Wen;Ziping Zhao;Ya Li;Björn W. Schuller
Volume: 16, Issue 3 | Pages: 2142-2156 | Publication type: Journal Article | URL: https://ieeexplore.ieee.org/document/10943183/
Citations: 0

Abstract

Speech emotion recognition (SER) systems are designed to enable machines to recognize emotional states in human speech during human-computer interaction, enhancing the interactive experience. While considerable progress has been achieved in this field recently, SER systems still face challenges in performance and robustness, primarily stemming from limited labeled data. To this end, we propose a novel multitask learning framework that learns a distinctive and robust emotional representation via our "Soft Emotion Expert Network (SeeNet)". SeeNet consists of three components: a pretrained model, an auxiliary-task soft emotion expert (SEE) module, and an energy-based mixup (EBM) data augmentation module. The pretrained model and EBM module are employed to mitigate the challenges arising from limited labeled data, thereby enhancing model performance and bolstering robustness. The SEE module, as an auxiliary task, is designed to assist the main SER task by sharpening the distinction between samples that exhibit high similarity across categories, further improving the performance and robustness of the system. Comprehensive experiments on three different settings and multiple datasets are conducted to evaluate the performance and robustness of our proposed method. The experimental results demonstrate that SeeNet surpasses state-of-the-art (SOTA) methods in both performance and robustness.
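The abstract does not detail how the energy-based mixup (EBM) module computes its mixing weights, so the sketch below shows only the standard mixup technique it builds on: two training utterances and their one-hot emotion labels are blended with a weight drawn from a Beta distribution, yielding a new synthetic sample with a soft label. This is a minimal illustration under that assumption, not the paper's actual EBM implementation; all names here (`mixup`, `x_happy`, `y_sad`, etc.) are hypothetical.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two waveforms and their one-hot labels with a Beta-sampled weight.

    Standard mixup: x = lam * x1 + (1 - lam) * x2, same for the labels.
    The paper's EBM variant presumably replaces or modulates `lam`
    using signal energy, which is not specified in the abstract.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2   # interpolated waveform
    y = lam * y1 + (1.0 - lam) * y2   # soft (non-one-hot) label
    return x, y

# Toy example: two 1-second "utterances" at 16 kHz with one-hot emotion labels.
sr = 16000
x_happy = np.random.default_rng(0).standard_normal(sr).astype(np.float32)
x_sad = np.random.default_rng(1).standard_normal(sr).astype(np.float32)
y_happy = np.array([1.0, 0.0])  # [happy, sad]
y_sad = np.array([0.0, 1.0])

x_mix, y_mix = mixup(x_happy, y_happy, x_sad, y_sad,
                     rng=np.random.default_rng(42))
```

Because the mixed label is a convex combination of the originals, it always sums to one, which is what lets the augmented sample be trained against with a standard cross-entropy loss.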
Source journal: IEEE Transactions on Affective Computing (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS)
CiteScore: 15.00
Self-citation rate: 6.20%
Articles published per year: 174
Journal introduction: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest articles in this journal:
- Graph-Based Representation Learning with Beta Uncertainty for Enhanced Multimodal Emotion Recognition
- SpotFormer: Multi-Scale Spatio-Temporal Transformer for Facial Expression Spotting
- Weakly Supervised Learning for Facial Affective Behavior Analysis: a Review
- CWEFS: Brain volume conduction effects inspired channel-wise EEG feature selection for multi-dimensional emotion recognition
- LES-Talker: Fine-Grained Emotion Editing for Talking Head Generation in Linear Emotion Space