Speech emotion recognition using energy based adaptive mode selection

Speech Communication · IF 3.0 · JCR Q2 (Acoustics) · CAS Tier 3 (Computer Science) · Pub Date: 2025-03-22 · DOI: 10.1016/j.specom.2025.103228
Ravi, Sachin Taran
{"title":"Speech emotion recognition using energy based adaptive mode selection","authors":"Ravi,&nbsp;Sachin Taran","doi":"10.1016/j.specom.2025.103228","DOIUrl":null,"url":null,"abstract":"<div><div>In this framework, a speech emotion recognition approach is presented, relying on Variational Mode Decomposition (VMD) and adaptive mode selection utilizing energy information. Instead of directly analyzing speech signals this work is focused on the preprocessing of raw speech signals. Initially, a given speech signal is decomposed using VMD and then the energy of each mode is calculated. Based on energy estimation, the dominant modes are selected for signal reconstruction. VMD combined with energy estimation improves the predictability of the reconstructed speech signal. The improvement in predictability is demonstrated using root mean square and spectral entropy measures. The reconstructed signal is divided into frames, and prosodic and spectral features are then calculated. Following feature extraction, ReliefF algorithm is utilized for the feature optimization. The resultant feature set is utilized to train the fine K- nearest neighbor classifier for emotion identification. The proposed framework was tested on publicly available acted and elicited datasets. For the acted datasets, the proposed framework achieved 93.8 %, 95.8 %, and 93.4 % accuracy on different language-based RAVDESS-speech, Emo-DB, and EMOVO datasets. Furthermore, the proposed method has also proven to be robust across three languages: English, German, and Italian, with language sensitivity as low as 2.4 % compared to existing methods. For the elicited dataset IEMOCAP, the proposed framework achieved the highest accuracy of 83.1 % compared to the existing state of the art.</div></div>","PeriodicalId":49485,"journal":{"name":"Speech Communication","volume":"171 ","pages":"Article 103228"},"PeriodicalIF":3.0000,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Speech Communication","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167639325000433","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ACOUSTICS","Score":null,"Total":0}
Cited by: 0

Abstract

In this framework, a speech emotion recognition approach is presented that relies on Variational Mode Decomposition (VMD) and adaptive mode selection using energy information. Rather than analyzing speech signals directly, this work focuses on the preprocessing of raw speech signals. A given speech signal is first decomposed using VMD, and the energy of each mode is then calculated. Based on this energy estimate, the dominant modes are selected for signal reconstruction. VMD combined with energy estimation improves the predictability of the reconstructed speech signal, as demonstrated using root mean square and spectral entropy measures. The reconstructed signal is divided into frames, from which prosodic and spectral features are calculated. Following feature extraction, the ReliefF algorithm is used for feature optimization, and the resulting feature set is used to train a fine K-nearest neighbor (KNN) classifier for emotion identification. The proposed framework was tested on publicly available acted and elicited datasets. On the acted datasets, it achieved 93.8%, 95.8%, and 93.4% accuracy on the RAVDESS-speech (English), Emo-DB (German), and EMOVO (Italian) datasets, respectively. The method also proved robust across these three languages, with language sensitivity as low as 2.4% compared with existing methods. On the elicited dataset IEMOCAP, the proposed framework achieved the highest accuracy, 83.1%, compared with the existing state of the art.
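As a rough illustration of the preprocessing stage the abstract describes, the sketch below decomposes a signal with VMD, ranks the modes by energy, reconstructs the signal from the dominant modes, and computes the two predictability measures mentioned (root mean square and spectral entropy). It is a minimal sketch, assuming the third-party `vmdpy` package; the VMD parameters (`K`, `alpha`) and the cumulative-energy threshold are illustrative assumptions, not the paper's reported settings.

```python
# Hedged sketch of energy-based adaptive mode selection (not the authors' code).
# Assumes: pip install vmdpy numpy. K, alpha, and energy_frac are illustrative.
import numpy as np
from vmdpy import VMD

def reconstruct_dominant_modes(x, K=8, alpha=2000, energy_frac=0.90):
    """Decompose x with VMD and rebuild it from its highest-energy modes."""
    # vmdpy returns the modes u (shape K x N), their spectra, and centre
    # frequencies; it trims odd-length inputs to an even number of samples.
    u, _, _ = VMD(x, alpha, 0.0, K, 0, 1, 1e-7)
    energies = np.sum(u ** 2, axis=1)               # per-mode energy
    order = np.argsort(energies)[::-1]              # modes by descending energy
    cum = np.cumsum(energies[order]) / energies.sum()
    n_keep = int(np.searchsorted(cum, energy_frac)) + 1
    return u[order[:n_keep]].sum(axis=0)            # sum of the dominant modes

def rms(x):
    """Root mean square of the signal."""
    return np.sqrt(np.mean(x ** 2))

def spectral_entropy(x):
    """Shannon entropy of the normalised power spectrum (lower = more predictable)."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

Downstream, the feature-selection and classification stage could look like the following, assuming the `skrebate` implementation of ReliefF and scikit-learn's KNN (a "fine" KNN in MATLAB's terminology corresponds to K = 1). The selector settings are placeholders, and the feature matrix `X` (frame-level prosodic/spectral features) is assumed to be computed separately.

```python
# Hedged sketch of the ReliefF + fine-KNN stage (settings are assumptions).
from skrebate import ReliefF
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# X: feature matrix (n_samples x n_features), y: emotion labels.
clf = make_pipeline(
    ReliefF(n_features_to_select=30, n_neighbors=10),  # assumed selector settings
    KNeighborsClassifier(n_neighbors=1),               # "fine" KNN, i.e. K = 1
)
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```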
Source journal: Speech Communication
Category: Engineering & Technology - Computer Science: Interdisciplinary Applications
CiteScore: 6.80
Self-citation rate: 6.20%
Articles per year: 94
Review time: 19.2 weeks
About the journal: Speech Communication is an interdisciplinary journal whose primary objective is to fulfil the need for the rapid dissemination and thorough discussion of basic and applied research results. Its primary objectives are:
• to present a forum for the advancement of human and human-machine speech communication science;
• to stimulate cross-fertilization between different fields of this domain;
• to contribute towards the rapid and wide diffusion of scientifically sound contributions in this domain.
Latest articles in this journal
Editorial Board
MS-VBRVQ: Multi-scale variable bitrate speech residual vector quantization
Hand gesture realisation of contrastive focus in real-time whisper-to-speech synthesis: Investigating the transfer from implicit to explicit control of intonation
Lateral channel dynamics and F3 modulation: Quantifying para-sagittal articulation in Australian English /l/
A review on speech emotion recognition for low-resource and Indigenous languages