{"title":"通过主题导向的多头注意,利用不同的上下文来产生反应","authors":"Weikang Zhang, Zhanzhe Li, Yupu Guo","doi":"10.1145/3446132.3446168","DOIUrl":null,"url":null,"abstract":"Multi-turn dialogue system plays an important role in intelligent interaction. In particular, the subtask response generation in a multi- turn conversation system is a challenging task, which aims to generate more diverse and contextually relevant responses. Most of the methods focus on the sequential connection between sentence levels by using hierarchical framework and attention mechanism, but lack reflection from the overall semantic level such as topical information. Previous work would lead to a lack of full understanding of the dialogue history. In this paper, we propose a context-augmented model, named TGMA-RG, which leverages the conversational context to promote interactivity and persistence of multi-turn dialogues through topic-guided multi-head attention mechanism. Especially, we extract the topics from conversational context and design a hierarchical encoder-decoder models with a multi-head attention mechanism. Among them, we utilize topics vectors as queries of attention mechanism to obtain the corresponding weights between each utterance and each topic. 
Our experimental results on two publicly available datasets show that TGMA-RG improves the performance than other baselines in terms of BLEU-1, BLEU-2, Distinct-1, Distinct-2 and PPL.","PeriodicalId":125388,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence","volume":"189 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Leveraging Different Context for Response Generation through Topic-guided Multi-head Attention\",\"authors\":\"Weikang Zhang, Zhanzhe Li, Yupu Guo\",\"doi\":\"10.1145/3446132.3446168\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multi-turn dialogue system plays an important role in intelligent interaction. In particular, the subtask response generation in a multi- turn conversation system is a challenging task, which aims to generate more diverse and contextually relevant responses. Most of the methods focus on the sequential connection between sentence levels by using hierarchical framework and attention mechanism, but lack reflection from the overall semantic level such as topical information. Previous work would lead to a lack of full understanding of the dialogue history. In this paper, we propose a context-augmented model, named TGMA-RG, which leverages the conversational context to promote interactivity and persistence of multi-turn dialogues through topic-guided multi-head attention mechanism. Especially, we extract the topics from conversational context and design a hierarchical encoder-decoder models with a multi-head attention mechanism. Among them, we utilize topics vectors as queries of attention mechanism to obtain the corresponding weights between each utterance and each topic. 
Our experimental results on two publicly available datasets show that TGMA-RG improves the performance than other baselines in terms of BLEU-1, BLEU-2, Distinct-1, Distinct-2 and PPL.\",\"PeriodicalId\":125388,\"journal\":{\"name\":\"Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence\",\"volume\":\"189 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3446132.3446168\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3446132.3446168","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Leveraging Different Context for Response Generation through Topic-guided Multi-head Attention
Multi-turn dialogue systems play an important role in intelligent interaction. In particular, response generation, a subtask of multi-turn conversation systems, is challenging: it aims to produce diverse, contextually relevant responses. Most existing methods model the sequential connections between sentences with hierarchical frameworks and attention mechanisms, but neglect semantic information at the overall level, such as topics, and thus fail to fully capture the dialogue history. In this paper, we propose a context-augmented model, TGMA-RG, which leverages the conversational context to promote the interactivity and persistence of multi-turn dialogues through a topic-guided multi-head attention mechanism. Specifically, we extract topics from the conversational context and design a hierarchical encoder-decoder model with multi-head attention, in which topic vectors serve as the queries of the attention mechanism to obtain the corresponding weights between each utterance and each topic. Experimental results on two publicly available datasets show that TGMA-RG outperforms other baselines in terms of BLEU-1, BLEU-2, Distinct-1, Distinct-2, and PPL.
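The core mechanism the abstract describes — topic vectors as attention queries over utterance encodings — can be sketched as follows. This is a minimal, single-head illustration, not the authors' implementation: the function name, the toy dimensions, and the use of plain scaled dot-product attention are all assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def topic_guided_attention(topics, utterances):
    """Scaled dot-product attention with topic vectors as queries.

    topics:     (n_topics, d) -- hypothetical topic embeddings
    utterances: (n_utts, d)   -- hypothetical utterance encodings
    Returns the per-topic attention weights over utterances,
    shape (n_topics, n_utts), and the topic-conditioned context
    vectors, shape (n_topics, d).
    """
    d = topics.shape[-1]
    # Each topic (query) scores every utterance (key).
    scores = topics @ utterances.T / np.sqrt(d)
    # Weight of each utterance with respect to each topic.
    weights = softmax(scores, axis=-1)
    # Topic-aware summaries of the dialogue history (values).
    context = weights @ utterances
    return weights, context

# Toy example: 2 topics, 3 utterances, embedding dimension 4.
rng = np.random.default_rng(0)
w, c = topic_guided_attention(rng.normal(size=(2, 4)),
                              rng.normal(size=(3, 4)))
print(w.shape, c.shape)
```

In the full model, such topic-conditioned context vectors would feed the decoder alongside the hierarchical utterance encodings; the paper's multi-head variant repeats this projection-and-attend step per head.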