{"title":"SVM based speaker emotion recognition in continuous scale","authors":"Martin Hric, M. Chmulik, Igor Guoth, R. Jarina","doi":"10.1109/RADIOELEK.2015.7129063","DOIUrl":null,"url":null,"abstract":"In this paper we propose a system of speaker emotion recognition based on the SVM regression. Recognized emotional state is expressed in continuous scale in three dimensions: valence, activation and dominance. Experiments have been performed on the IEMOCAP database that contains 6 basic emotions supplemented with 3 additional emotions. Audio recordings from the corpus were divided into voiced and unvoiced segments, and for both types, a vast collection of diverse audio features (830/710) were extracted. Then 40 features for each type of segment were selected by Particle Swarm Optimization. Classification accuracy is expressed by cross-correlation coefficients between the estimated (by the propose system) and real (assigned according to human judgements) emotional state labels. Experiments conducted over dataset show very promising results for the future experiments.","PeriodicalId":193275,"journal":{"name":"2015 25th International Conference Radioelektronika (RADIOELEKTRONIKA)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 25th International Conference Radioelektronika (RADIOELEKTRONIKA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RADIOELEK.2015.7129063","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
In this paper we propose a system for speaker emotion recognition based on SVM regression. The recognized emotional state is expressed on a continuous scale in three dimensions: valence, activation, and dominance. Experiments have been performed on the IEMOCAP database, which contains 6 basic emotions supplemented with 3 additional ones. Audio recordings from the corpus were divided into voiced and unvoiced segments, and for each segment type a vast collection of diverse audio features (830 and 710, respectively) was extracted. Then 40 features for each segment type were selected by Particle Swarm Optimization. Classification accuracy is expressed by cross-correlation coefficients between the estimated (by the proposed system) and real (assigned according to human judgments) emotional state labels. Experiments conducted over this dataset show very promising results and motivate future work.
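The abstract describes the pipeline only at a high level, so the following is a minimal illustrative sketch (not the authors' code) of the core idea: one SVM regressor per emotional dimension predicting continuous labels, scored with a correlation coefficient between estimated and reference values, as in the paper's accuracy measure. The synthetic data, the 40-feature input (standing in for the PSO-selected features, with the PSO step itself omitted), and the RBF kernel with default hyperparameters are all assumptions, not details taken from the paper.

```python
# Sketch: per-dimension SVR models for continuous emotion prediction,
# evaluated with Pearson correlation. All data here is synthetic.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Placeholder inputs: 500 utterances x 40 features. The paper selects 40
# features per segment type via Particle Swarm Optimization; here random
# values stand in for real acoustic features.
X = rng.normal(size=(500, 40))
# Continuous labels for the three dimensions: valence, activation, dominance
# (the 1-5 range is an assumption for illustration).
Y = rng.uniform(1.0, 5.0, size=(500, 3))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

for i, dim in enumerate(["valence", "activation", "dominance"]):
    # Standardize features, then fit an RBF-kernel SVR for this dimension;
    # kernel and C are illustrative choices, not reported in the abstract.
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
    model.fit(X_train, Y_train[:, i])
    pred = model.predict(X_test)
    # Correlation between estimated and reference labels, mirroring the
    # cross-correlation-based accuracy measure described in the abstract.
    r, _ = pearsonr(pred, Y_test[:, i])
    print(f"{dim}: correlation = {r:.3f}")
```

With real features the per-dimension correlations would be the quantities the paper reports; on this random placeholder data they are near zero, which is the expected baseline.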