Kiyonori Ohtake, Teruhisa Misu, Chiori Hori, H. Kashioka, Satoshi Nakamura
Title: Dialogue Act Annotation for Statistically Managed Spoken Dialogue Systems
DOI: 10.1109/ISUC.2008.52
Published in: 2008 Second International Symposium on Universal Communication
Publication date: 2008-12-15
Citations: 1
Abstract
This paper uses the Kyoto tour guide dialogue corpus and its annotations to construct a dialogue management system based on a statistical approach. We defined dialogue act (DA) tags to express a user's intention. Two kinds of tag sets are used to annotate the corpus: one denotes the communicative function (speech act) of an utterance, and the other denotes its semantic content. Several annotators labeled the corpus with speech act tags, and we evaluate the annotation results by measuring agreement ratios between the annotators.
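The evaluation step described in the abstract, measuring agreement ratios between annotators, can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the annotator names and speech-act tags are hypothetical, and the chance-corrected variant (Cohen's kappa, which the abstract does not mention) is included only for comparison with the raw agreement ratio.

```python
from itertools import combinations
from collections import Counter

def agreement_ratio(a, b):
    """Fraction of utterances on which two annotators assigned the same tag."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected pairwise agreement (Cohen's kappa)."""
    n = len(a)
    po = agreement_ratio(a, b)                      # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[t] * cb[t] for t in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical speech-act tags from three annotators over five utterances.
tags = {
    "A1": ["request", "inform", "confirm", "inform", "request"],
    "A2": ["request", "inform", "inform",  "inform", "request"],
    "A3": ["request", "offer",  "confirm", "inform", "request"],
}

# Report pairwise agreement for every annotator pair.
for (n1, s1), (n2, s2) in combinations(tags.items(), 2):
    print(n1, n2, round(agreement_ratio(s1, s2), 2), round(cohens_kappa(s1, s2), 2))
```

Raw agreement overstates reliability when a few tags dominate the corpus, which is why chance-corrected measures are commonly reported alongside it.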