{"title":"Emotional Speech Recognition Using Acoustic Models of Decomposed Component Words","authors":"Vivatchai Kaveeta, K. Patanukhom","doi":"10.1109/ACPR.2013.13","DOIUrl":null,"url":null,"abstract":"This paper presents a novel approach for emotional speech recognition. Instead of using a full length of speech for classification, the proposed method decomposes speech signals into component words, groups the words into segments and generates an acoustic model for each segment by using features such as audio power, MFCC, log attack time, spectrum spread and segment duration. Based on the proposed segment-based classification, unknown speech signals can be recognized into sequences of segment emotions. Emotion profiles (EPs) are extracted from the emotion sequences. Finally, speech emotion can be determined by using EP as features. Experiments are conducted by using 6,810 training samples and 722 test samples which are composed of eight emotional classes from IEMOCAP database. In comparison with a conventional method, the proposed method can improve recognition rate from 46.81% to 58.59% in eight emotion classification and from 60.18% to 71.25% in four emotion classification.","PeriodicalId":365633,"journal":{"name":"2013 2nd IAPR Asian Conference on Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 2nd IAPR Asian Conference on Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ACPR.2013.13","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper presents a novel approach for emotional speech recognition. Instead of using the full length of a speech signal for classification, the proposed method decomposes the signal into its component words, groups the words into segments, and builds an acoustic model for each segment from features such as audio power, MFCCs, log attack time, spectrum spread, and segment duration. With this segment-based classification, an unknown speech signal is recognized as a sequence of segment-level emotions. An emotion profile (EP) is then extracted from the emotion sequence, and the overall speech emotion is determined by using the EP as a feature vector. Experiments are conducted on 6,810 training samples and 722 test samples covering eight emotion classes from the IEMOCAP database. Compared with a conventional method, the proposed method improves the recognition rate from 46.81% to 58.59% for eight-class emotion classification and from 60.18% to 71.25% for four-class classification.
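The abstract only sketches the pipeline, so the following Python snippet is a minimal illustration under stated assumptions, not the authors' implementation: it assumes librosa for feature extraction, approximates log attack time by the time to the signal-envelope peak, and treats the emotion profile as a normalized histogram over per-segment emotion decisions (the paper's exact EP definition and classifiers are not specified here).

```python
import numpy as np
import librosa  # assumed toolchain; the paper does not name its feature extractor

def segment_features(y, sr):
    """Per-segment acoustic features named in the abstract (illustrative mix)."""
    power = float(np.mean(y ** 2))                                   # audio power
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)  # mean MFCCs
    spread = float(librosa.feature.spectral_bandwidth(y=y, sr=sr).mean())  # spectrum spread
    attack_s = (np.argmax(np.abs(y)) + 1) / sr          # time to envelope peak
    log_attack_time = float(np.log10(max(attack_s, 1e-4)))  # simplified log attack time
    duration = len(y) / sr                              # segment duration in seconds
    return np.concatenate(([power, spread, log_attack_time, duration], mfcc))

def emotion_profile(segment_labels, n_classes=8):
    """EP as a normalized histogram of per-segment emotion labels (an assumption)."""
    counts = np.bincount(np.asarray(segment_labels), minlength=n_classes)
    return counts / max(counts.sum(), 1)
```

In this reading, a segment-level classifier trained on `segment_features` output would produce the emotion sequence for an unknown utterance, and an utterance-level classifier would take the resulting EP vector as input to decide the final emotion.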