Speech emotion recognition using derived features from speech segment and kernel principal component analysis
Matee Charoendee, A. Suchato, P. Punyabukkana
2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE), July 2017, pp. 1-6
DOI: 10.1109/JCSSE.2017.8025936
Citations: 2
Abstract
Speech emotion recognition is a challenging problem, and identifying efficient features is a particular concern. This paper has two components. First, it presents an empirical study that evaluated four feature reduction methods, chi-square, gain ratio, RELIEF-F, and kernel principal component analysis (KPCA), at the utterance level using a support vector machine (SVM) as the classifier. KPCA achieved the highest F-score among the methods compared. Classification with KPCA outperformed classification without any feature reduction by up to 5.73%. Second, the paper applies statistical functions to raw segment-level features to derive global features; the derived features were then reduced with KPCA and classified with an SVM, and a majority vote over the segment-level predictions determined the emotion of the entire utterance. The results demonstrate that this approach outperformed the baseline approaches, which used features from the utterance level, the utterance level with KPCA, the segment level, the segment level with KPCA, and the segment level with statistical functions but without KPCA, yielding F-score improvements of 13.16%, 7.03%, 5.13%, 4.92%, and 11.04%, respectively.
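The pipeline the abstract describes (statistical functions over segment-level raw features, KPCA reduction, per-segment classification, then a majority vote per utterance) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the RBF kernel and its width are assumptions (the abstract does not specify the kernel), the set of statistical functions (mean, std, min, max) is an assumed example, and the segment classifier itself (an SVM in the paper) is left out so the sketch stays self-contained in plain numpy.

```python
import numpy as np

def segment_functionals(frames):
    """Derive a fixed-length global feature vector for one segment by
    applying statistical functions to its frame-level raw features.
    frames: array of shape (n_frames, n_raw_features).
    The chosen functionals (mean, std, min, max) are an assumption."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0),
                           frames.min(axis=0), frames.max(axis=0)])

def kpca(X, n_components=2, gamma=1.0):
    """Kernel PCA projection of the rows of X (one row per segment).
    Uses an RBF kernel exp(-gamma * ||x - y||^2); the kernel choice
    is an assumption, not taken from the paper."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one  # double-center the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)             # eigendecomposition (ascending)
    idx = np.argsort(vals)[::-1][:n_components] # keep the top components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                          # projected training points

def majority_vote(segment_labels):
    """Utterance-level emotion = most frequent segment-level prediction."""
    labels, counts = np.unique(segment_labels, return_counts=True)
    return labels[np.argmax(counts)]
```

In use, `segment_functionals` would be applied to each segment of an utterance, the resulting vectors reduced with `kpca`, each reduced vector classified (with an SVM, in the paper's setup), and `majority_vote` applied to the per-segment labels to decide the utterance's emotion.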