Deep Learning for Audio Visual Emotion Recognition

Tassadaq Hussain, Wenwu Wang, N. Bouaynaya, H. Fathallah-Shaykh, L. Mihaylova
DOI: 10.23919/fusion49751.2022.9841342
Published in: 2022 25th International Conference on Information Fusion (FUSION)
Publication date: 2022-07-04
Citations: 0

Abstract

Human emotions can be presented in data with multiple modalities, e.g., video, audio and text. An automated system for emotion recognition needs to consider a number of challenging issues, including feature extraction and dealing with variations and noise in the data. Deep learning has been used extensively in recent years, offering excellent performance in emotion recognition. This work presents a new method based on audio and visual modalities, where visual cues facilitate the detection of speech or non-speech frames and the emotional state of the speaker. Different from previous works, we propose the use of novel speech features, e.g. the Wavegram, which is extracted with a one-dimensional Convolutional Neural Network (CNN) learned directly from time-domain waveforms, and Wavegram-Logmel features, which combine the Wavegram with the log mel spectrogram. The system is then trained in an end-to-end fashion on the SAVEE database by also taking advantage of the correlations among each of the streams. It is shown that the proposed approach outperforms traditional and state-of-the-art deep learning based approaches, built separately on auditory and visual handcrafted features, for the prediction of spontaneous and natural emotions.
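The core idea behind the Wavegram is that a bank of one-dimensional convolution filters, applied with a stride to the raw time-domain waveform, yields a two-dimensional time-channel map that plays a role analogous to a spectrogram but is learned rather than fixed. The sketch below illustrates only this mechanism with NumPy; the filter bank is random (untrained) and the filter count, kernel length, and stride are illustrative assumptions, whereas in the paper the filters are learned end-to-end by the CNN.

```python
import numpy as np

def conv1d(x, kernels, stride):
    """Valid 1-D convolution of waveform x with a bank of kernels.

    x:       (n_samples,) raw waveform
    kernels: (n_filters, kernel_len) filter bank
    returns: (n_filters, n_frames) time-channel feature map
    """
    k = kernels.shape[1]
    n_frames = (len(x) - k) // stride + 1
    out = np.empty((kernels.shape[0], n_frames))
    for i in range(n_frames):
        segment = x[i * stride : i * stride + k]
        out[:, i] = kernels @ segment  # each filter's response at frame i
    return out

rng = np.random.default_rng(0)
waveform = rng.standard_normal(16000)            # 1 s of audio at 16 kHz (toy signal)
filters = rng.standard_normal((64, 512)) * 0.01  # 64 hypothetical 1-D filters

# Rectify and compress, giving a spectrogram-like "wavegram" map.
wavegram = np.log1p(np.abs(conv1d(waveform, filters, stride=160)))
print(wavegram.shape)  # (64, 97): 64 learned channels x ~100 frames per second
```

A Wavegram-Logmel feature, as described in the abstract, would then stack a map like this with the log mel spectrogram of the same waveform along the channel axis, so the downstream 2-D CNN sees both the learned and the handcrafted time-frequency views.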