A Residual Multi-Scale Convolutional Neural Network With Transformers for Speech Emotion Recognition
Tianhao Yan, Hao Meng, Emilia Parada-Cabaleiro, Jianhua Tao, Taihao Li, Björn W. Schuller
IEEE Transactions on Affective Computing, vol. 16, no. 2, pp. 915-932, published 2024-10-15.
DOI: 10.1109/TAFFC.2024.3481253 (https://ieeexplore.ieee.org/document/10716771/)
Citations: 0
Abstract
The great variety of human emotional expressions, as well as the differences in how people perceive and annotate them, make Speech Emotion Recognition (SER) an ambiguous and challenging task. With the development of deep learning, considerable progress has been made in SER systems. However, existing convolutional neural networks present certain limitations, such as their inability to effectively capture global features, which contain important emotional information. Moreover, the position encoding in the Transformer structure is relatively fixed and only encodes the time-domain dimension, so it cannot effectively capture the position information of discriminative features along the frequency-domain dimension. To overcome these limitations, we propose an end-to-end Residual Multi-Scale Convolutional Neural Network (RMSCNN) with Transformers. In addition, to further validate the effectiveness of RMSCNN in extracting multi-scale features and delivering pertinent emotion localization information, we developed the RMSC_down network in conjunction with the Wav2Vec 2.0 model. The results of predicting Arousal, Valence, and Dominance on popular corpora demonstrate the superiority and robustness of our approach for SER, showing an improvement of recognition accuracy on the public MSP-Podcast dataset (version 1.9).
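As a purely illustrative sketch (not the authors' RMSCNN implementation), the PyTorch snippet below shows one way to realize the two ideas the abstract highlights: a residual block with parallel multi-scale convolutions, and a sinusoidal positional encoding that spans both the frequency and time axes of a spectrogram rather than the time axis alone. All layer sizes, kernel choices, and the encoding split are assumptions made for the example.

```python
# Hypothetical sketch of a residual multi-scale block and a 2D (frequency + time)
# positional encoding for spectrogram features; not the paper's actual code.
import math
import torch
import torch.nn as nn


class ResidualMultiScaleBlock(nn.Module):
    """Parallel convolutions at several kernel sizes, fused and added residually."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, k, padding=k // 2),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        )
        # 1x1 convolution to fuse the concatenated multi-scale branches
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x):  # x: (batch, channels, freq, time)
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(multi)  # residual connection over fused features


def positional_encoding_2d(d_model, freq_bins, time_steps):
    """Sinusoidal encoding with half the channels devoted to frequency, half to time."""
    assert d_model % 4 == 0, "d_model must be divisible by 4"
    pe = torch.zeros(d_model, freq_bins, time_steps)
    d_half = d_model // 2
    div = torch.exp(torch.arange(0, d_half, 2) * (-math.log(10000.0) / d_half))
    pos_f = torch.arange(freq_bins).unsqueeze(1) * div   # (freq_bins, d_half/2)
    pos_t = torch.arange(time_steps).unsqueeze(1) * div  # (time_steps, d_half/2)
    # First half of the channels encodes the frequency position...
    pe[0:d_half:2] = torch.sin(pos_f).T.unsqueeze(2).expand(-1, -1, time_steps)
    pe[1:d_half:2] = torch.cos(pos_f).T.unsqueeze(2).expand(-1, -1, time_steps)
    # ...second half encodes the time position.
    pe[d_half::2] = torch.sin(pos_t).T.unsqueeze(1).expand(-1, freq_bins, -1)
    pe[d_half + 1::2] = torch.cos(pos_t).T.unsqueeze(1).expand(-1, freq_bins, -1)
    return pe  # (d_model, freq_bins, time_steps), added to features before a Transformer
```

In a pipeline of this kind, a spectrogram tensor of shape (batch, channels, freq, time) would pass through a stack of such residual multi-scale blocks, the 2D positional encoding would be added, and the result flattened into a sequence for a Transformer encoder.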
Journal Description:
The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.