T. Nishide, Kei Shimonishi, H. Kawashima, T. Matsuyama
Title: Voting-Based Backchannel Timing Prediction Using Audio-Visual Information
DOI: 10.1145/2974804.2980501
Published in: Proceedings of the Fourth International Conference on Human Agent Interaction
Publication date: 2016-10-04
Citations: 0
Abstract
While many spoken dialog systems have been developed recently, users of such systems must summarize what they want the system to do and convey it clearly. In human dialog, however, a speaker often formulates what to say incrementally, provided there is a good listener who responds to the speaker's utterances with appropriate timing. We consider that generating backchannel responses that, where appropriate, overlap with the user's utterances is crucial for an artificial listener system that encourages the user to keep speaking, since such overlaps are the norm in human dialog. Toward realizing such a listener system, in this paper we propose a voting-based algorithm that predicts the end of an utterance early (i.e., before the utterance ends) using audio-visual information. In the evaluation, we demonstrate the effectiveness of using audio-visual information and the applicability of the voting-based prediction algorithm with some early results.
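The abstract does not describe the paper's actual features or voting rule, but the general idea of voting-based early end-of-utterance prediction can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the per-modality detectors (`audio_detector`, `visual_detector`), the frame-level features, and the simple majority-vote rule are hypothetical stand-ins, not the authors' method.

```python
# Hypothetical sketch: combine per-frame predictions from an audio cue and a
# visual cue by majority vote to decide, before the utterance actually ends,
# whether it is about to end. The detectors and features are illustrative
# assumptions, not the paper's actual algorithm.

def majority_vote(votes):
    """Return True if strictly more than half of the boolean votes are True."""
    return sum(1 for v in votes if v) > len(votes) / 2

def predict_end_of_utterance(audio_frames, visual_frames,
                             audio_detector, visual_detector):
    """Collect one vote per modality per frame, then take a majority vote."""
    votes = []
    for a, v in zip(audio_frames, visual_frames):
        votes.append(audio_detector(a))   # audio says "ending soon"?
        votes.append(visual_detector(v))  # visual says "ending soon"?
    return majority_vote(votes)

# Toy cues: falling speech energy and gaze directed at the listener are
# often reported as turn-end signals; thresholds here are made up.
audio_detector = lambda energy: energy < 0.2
visual_detector = lambda gaze_on_listener: gaze_on_listener

energy_frames = [0.15, 0.10, 0.05]      # energy trailing off
gaze_frames = [True, True, False]       # mostly gazing at the listener
print(predict_end_of_utterance(energy_frames, gaze_frames,
                               audio_detector, visual_detector))  # True
```

Because each modality contributes independent votes per frame, a noisy cue in one frame or one modality can be outvoted by the others, which is the usual motivation for voting-style combination.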