Analyzing the expression of annoyance during phone calls to complaint services
J. Irastorza, M. Inés Torres
2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), 2016-10-01
DOI: 10.1109/COGINFOCOM.2016.7804533
Citations: 10
Abstract
The identification of emotional hints from speech has a wide range of applications. Machine learning researchers have analyzed sets of acoustic parameters as potential cues for identifying discrete emotional categories or, alternatively, the dimensions of emotion. Experiments have typically been carried out on recordings of simulated or induced emotions, although more research has recently focused on spontaneous emotions. However, it is well known that emotion expression depends not only on cultural factors but also on the individual and on the specific situation. In this work we deal with the tracking of annoyance shifts during real phone calls to complaint services. The audio files analyzed show different ways of expressing annoyance, such as disappointment, helplessness or anger. However, variations of parameters derived from intensity, combined with some spectral information and suprasegmental features, have proven to be very robust for each speaker and annoyance rate. The work also discusses the annotation problem and proposes an extended rating scale that accounts for annotator disagreement. Our frame classification results validated the annotation procedure. Experimental results also showed that shifts in customer annoyance rates could potentially be tracked during phone calls.
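The abstract mentions frame-level intensity and spectral cues as the basis for tracking annoyance. The paper does not specify its exact feature pipeline, so the following is only a minimal sketch of how per-frame intensity (RMS energy in dB) and one spectral feature (the spectral centroid) could be computed from a mono signal; the frame length, hop size, and feature choices here are assumptions, not the authors' configuration.

```python
import numpy as np

def frame_features(signal, sr, frame_len=400, hop=160):
    """Compute per-frame intensity (RMS energy, dB) and spectral centroid (Hz).

    A simplified stand-in for the intensity and spectral parameters discussed
    in the paper; the windowing and frame sizes are illustrative assumptions
    (25 ms frames with a 10 ms hop at 16 kHz).
    """
    feats = []
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        # Intensity proxy: RMS energy in decibels (small offset avoids log(0)).
        rms = np.sqrt(np.mean(frame ** 2))
        intensity_db = 20 * np.log10(rms + 1e-10)
        # Spectral centroid: magnitude-weighted mean frequency of the frame.
        spectrum = np.abs(np.fft.rfft(frame))
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-10)
        feats.append((intensity_db, centroid))
    return np.array(feats)

# Usage: 1 s of a 440 Hz tone at 16 kHz; the centroid should sit near 440 Hz.
sr = 16000
t = np.arange(sr) / sr
feats = frame_features(np.sin(2 * np.pi * 440 * t), sr)
```

In a real system these per-frame features would be fed to the frame classifier together with suprasegmental (e.g. pitch-contour) information, which this sketch omits.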