Adaptive Alignment and Time Aggregation Network for Speech-Visual Emotion Recognition

Lile Wu, Lei Bai, Wenhao Cheng, Zutian Cheng, Guanghui Chen

IEEE Signal Processing Letters, vol. 32, pp. 1181–1185 · Published 2025-03-10 · DOI: 10.1109/LSP.2025.3550007
IF 3.9 · JCR Q2 (Engineering, Electrical & Electronic) · CAS Zone 2 (Engineering & Technology) · Citations: 0

Abstract

Video-based speech-visual emotion recognition plays a crucial role in human-computer interaction applications. However, it faces several challenges, including: 1) redundancy in the extracted speech-visual features caused by the heterogeneity between the speech and visual modalities, and 2) ineffective modeling of the time-varying characteristics of emotions. To this end, this paper proposes an adaptive alignment and time aggregation network (AataNet). Specifically, AataNet designs a low-redundancy speech-visual adaptive alignment (LRSVAA) module to acquire low-redundancy aligned features across the speech and visual modalities. Meanwhile, AataNet also designs a computationally efficient time-adaptive aggregation (CETAA) module to model the time-varying characteristics of emotions. Experiments on the RAVDESS, BAUM-1s and eNTERFACE05 datasets demonstrate that the proposed AataNet achieves better results than competing methods.
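The two ideas the abstract names — aligning heterogeneous speech and visual feature streams, then aggregating over time with adaptive weights — can be illustrated with a minimal numpy sketch. This is not the authors' LRSVAA/CETAA design (the paper's internals are not reproduced here); it assumes a generic cross-attention alignment and attention-based temporal pooling, with all names, shapes, and parameters being illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def align_modalities(speech, visual):
    """Cross-modal alignment sketch: each speech frame attends over the
    visual frames (scaled dot-product attention), putting both streams
    on a common time base before fusion."""
    d = speech.shape[-1]
    attn = softmax(speech @ visual.T / np.sqrt(d), axis=-1)   # (Ts, Tv)
    aligned_visual = attn @ visual                            # (Ts, d)
    return np.concatenate([speech, aligned_visual], axis=-1)  # (Ts, 2d)

def time_adaptive_aggregation(features, w):
    """Temporal aggregation sketch: a scoring vector w weights each
    frame, so emotionally salient moments dominate the clip vector."""
    scores = softmax(features @ w)   # (T,) adaptive per-frame weights
    return scores @ features         # (2d,) weighted temporal average

rng = np.random.default_rng(0)
speech = rng.standard_normal((20, 64))   # 20 speech frames, 64-dim
visual = rng.standard_normal((30, 64))   # 30 video frames, 64-dim
fused = align_modalities(speech, visual)
clip_vec = time_adaptive_aggregation(fused, rng.standard_normal(128))
print(fused.shape, clip_vec.shape)  # (20, 128) (128,)
```

In a trained network the attention projections and the scoring vector would be learned parameters; here they are random placeholders that only demonstrate the data flow and tensor shapes.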
Source Journal

IEEE Signal Processing Letters (Engineering & Technology – Engineering: Electrical & Electronic)
CiteScore: 7.40 · Self-citation rate: 12.80% · Articles per year: 339 · Review time: 2.8 months

Journal description: The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language and audio processing. Papers published in the Letters can be presented within one year of their appearance in signal processing conferences such as ICASSP, GlobalSIP and ICIP, and also in several workshops organized by the Signal Processing Society.