{"title":"全局观与说话人感知的会话情感原因提取","authors":"Jiaming An;Zixiang Ding;Ke Li;Rui Xia","doi":"10.1109/TASLP.2023.3319990","DOIUrl":null,"url":null,"abstract":"Emotion cause extraction in conversations, the task of recognizing and extracting the causes behind the emotions in a conversation, is a new and under-explored task. It was previously treated as an utterance-level task, that can only extract cause of one emotion from one utterance at a time and is difficult to model the correlation between different emotions and causes in the conversation. The role of speakers was also not fully utilized in the previous methods. In this article, we introduce a global-view and speaker-aware conversational emotion cause extraction framework. It can fully model the interaction between utterances and emotions in the conversation and simultaneously extract all the causes corresponding to all emotions or one given emotion in a conversation, and can be applied to both real-time and non-real-time task settings. We further propose a Speaker-aware Couple-Decoder Module and a Speaker-Emotion Graph Attention Network, to better model the role of speakers in the conversation. The experimental results prove our approach's advantages in both emotion cause extraction performance and computational efficiency.","PeriodicalId":13332,"journal":{"name":"IEEE/ACM Transactions on Audio, Speech, and Language Processing","volume":"31 ","pages":"3814-3823"},"PeriodicalIF":4.1000,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Global-View and Speaker-Aware Emotion Cause Extraction in Conversations\",\"authors\":\"Jiaming An;Zixiang Ding;Ke Li;Rui Xia\",\"doi\":\"10.1109/TASLP.2023.3319990\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Emotion cause extraction in conversations, the task of recognizing and extracting the causes behind the emotions in a conversation, is a new and under-explored task. It was previously treated as an utterance-level task, that can only extract cause of one emotion from one utterance at a time and is difficult to model the correlation between different emotions and causes in the conversation. The role of speakers was also not fully utilized in the previous methods. In this article, we introduce a global-view and speaker-aware conversational emotion cause extraction framework. It can fully model the interaction between utterances and emotions in the conversation and simultaneously extract all the causes corresponding to all emotions or one given emotion in a conversation, and can be applied to both real-time and non-real-time task settings. We further propose a Speaker-aware Couple-Decoder Module and a Speaker-Emotion Graph Attention Network, to better model the role of speakers in the conversation. 
The experimental results prove our approach's advantages in both emotion cause extraction performance and computational efficiency.\",\"PeriodicalId\":13332,\"journal\":{\"name\":\"IEEE/ACM Transactions on Audio, Speech, and Language Processing\",\"volume\":\"31 \",\"pages\":\"3814-3823\"},\"PeriodicalIF\":4.1000,\"publicationDate\":\"2023-10-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE/ACM Transactions on Audio, Speech, and Language Processing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10274611/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ACOUSTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE/ACM Transactions on Audio, Speech, and Language Processing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10274611/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ACOUSTICS","Score":null,"Total":0}
Global-View and Speaker-Aware Emotion Cause Extraction in Conversations
Abstract:
Emotion cause extraction in conversations, the task of recognizing and extracting the causes behind the emotions expressed in a conversation, is a new and under-explored task. It was previously treated as an utterance-level task, which can extract the cause of only one emotion from one utterance at a time and makes it difficult to model the correlations between different emotions and their causes in a conversation. Previous methods also did not fully exploit the role of speakers. In this article, we introduce a global-view and speaker-aware conversational emotion cause extraction framework. It fully models the interaction between utterances and emotions in a conversation, simultaneously extracts all the causes corresponding to all emotions or to one given emotion, and can be applied to both real-time and non-real-time task settings. We further propose a Speaker-aware Couple-Decoder Module and a Speaker-Emotion Graph Attention Network to better model the role of speakers in the conversation. Experimental results demonstrate our approach's advantages in both emotion cause extraction performance and computational efficiency.
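The abstract names a Speaker-Emotion Graph Attention Network without detailing its structure. As a rough illustration only, the PyTorch sketch below shows one plausible form of a speaker-aware graph attention layer: utterance nodes attend to each other, with separate attention scorers for same-speaker and cross-speaker edges. Everything here (the class name SpeakerAwareGraphAttention, the two-scorer design, the dimensions) is a hypothetical reading of the abstract, not the paper's actual implementation.

```python
# Hypothetical sketch of a speaker-aware graph attention layer over utterance
# nodes, in the spirit of the Speaker-Emotion Graph Attention Network named in
# the abstract. All names and design choices below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpeakerAwareGraphAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        # Separate attention scorers for same-speaker and cross-speaker edges,
        # so the layer can weight intra- and inter-speaker influence differently.
        self.attn_same = nn.Linear(2 * dim, 1)
        self.attn_cross = nn.Linear(2 * dim, 1)

    def forward(self, h: torch.Tensor, speakers: torch.Tensor) -> torch.Tensor:
        # h: (n, dim) utterance representations; speakers: (n,) speaker ids.
        n, dim = h.shape
        z = self.proj(h)
        # All pairwise concatenations of node features: (n, n, 2*dim).
        pairs = torch.cat(
            [z.unsqueeze(1).expand(n, n, dim), z.unsqueeze(0).expand(n, n, dim)],
            dim=-1,
        )
        same = speakers.unsqueeze(0) == speakers.unsqueeze(1)  # (n, n) bool mask
        scores = torch.where(
            same,
            self.attn_same(pairs).squeeze(-1),
            self.attn_cross(pairs).squeeze(-1),
        )
        weights = F.softmax(F.leaky_relu(scores), dim=-1)
        return weights @ z  # (n, dim) speaker-aware utterance representations
```

For the real-time task setting mentioned in the abstract, one would presumably also mask attention to future utterances (e.g., setting upper-triangular scores to -inf before the softmax) so each utterance attends only to the conversation history.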
Journal Introduction:
The IEEE/ACM Transactions on Audio, Speech, and Language Processing covers audio, speech, and language processing and the sciences that support them. In audio processing: transducers, room acoustics, active sound control, human audition, analysis/synthesis/coding of music, and consumer audio. In speech processing: speech analysis, synthesis, coding, speech and speaker recognition, speech production and perception, and speech enhancement. In language processing: speech and text analysis, understanding, generation, dialog management, translation, summarization, question answering, and document indexing and retrieval, as well as general language modeling.