{"title":"Multiple Information Extraction and Interaction for Emotion Recognition in Multi-Party Conversation","authors":"Feifei Xu, Guangzhen Li","doi":"10.1109/ISCC58397.2023.10218135","DOIUrl":null,"url":null,"abstract":"Emotion recognition in multi-party conversation (ERMC) has garnered attention in the field of natural language processing (NLP) due to its wide range of applications. Its objective is to identify the emotion of each utterance. Existing models mainly focus on context modeling, while ignoring the emotional interaction and dependency between utterances. In this work, we put forward a Multiple Information Extraction and Interaction network (MIEI) for ERMC that captures emotions by integrating emotional interaction, speaker-aware context, and discourse dependency in a conversation. Emotional interaction is simulated by proposed commonsense emotion modeling. Speaker-aware context is obtained by proposed speaker-aware context modeling with muti-head attention. Discourse dependency is improved to better depict the discourse structures. We verify the superiority of our proposed method by comparing it with existing models and validate the effectiveness of each module through ablation experiments.","PeriodicalId":265337,"journal":{"name":"2023 IEEE Symposium on Computers and Communications (ISCC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Symposium on Computers and Communications (ISCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCC58397.2023.10218135","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Emotion recognition in multi-party conversation (ERMC) has garnered attention in natural language processing (NLP) due to its wide range of applications. Its objective is to identify the emotion of each utterance. Existing models mainly focus on context modeling while ignoring the emotional interaction and dependency between utterances. In this work, we put forward a Multiple Information Extraction and Interaction network (MIEI) for ERMC that captures emotions by integrating emotional interaction, speaker-aware context, and discourse dependency in a conversation. Emotional interaction is simulated by the proposed commonsense emotion modeling. Speaker-aware context is obtained by the proposed speaker-aware context modeling with multi-head attention. Discourse dependency modeling is improved to better depict discourse structures. We verify the superiority of our proposed method by comparing it with existing models and validate the effectiveness of each module through ablation experiments.
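
The abstract does not spell out how the speaker-aware context modeling is implemented. Below is a minimal sketch, not the authors' implementation, of one plausible reading: multi-head attention over encoded utterances in which each utterance may only attend to utterances from the same speaker. The class name, dimensions, and masking scheme are illustrative assumptions.

```python
# Hypothetical sketch of speaker-aware context modeling with multi-head attention.
# Assumes utterances are already encoded into fixed-size vectors; "speaker-aware"
# is interpreted here as restricting attention to same-speaker utterances.
import torch
import torch.nn as nn


class SpeakerAwareContext(nn.Module):
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, utt: torch.Tensor, speakers: torch.Tensor) -> torch.Tensor:
        # utt:      (batch, seq_len, dim)  utterance representations
        # speakers: (batch, seq_len)       integer speaker id of each utterance
        # Build a mask that blocks attention between different speakers.
        same_speaker = speakers.unsqueeze(2) == speakers.unsqueeze(1)  # (B, L, L)
        mask = ~same_speaker  # True = position is NOT allowed to attend
        # nn.MultiheadAttention expects a (B * num_heads, L, L) mask when batched.
        mask = mask.repeat_interleave(self.attn.num_heads, dim=0)
        ctx, _ = self.attn(utt, utt, utt, attn_mask=mask)
        return ctx


# Toy usage: one dialogue of four utterances, speakers alternating A, B, A, B.
x = torch.randn(1, 4, 768)
spk = torch.tensor([[0, 1, 0, 1]])
out = SpeakerAwareContext()(x, spk)
print(out.shape)  # torch.Size([1, 4, 768])
```

Because every utterance shares a speaker with itself, no attention row is fully masked, so the sketch is numerically safe; a cross-speaker variant would instead invert or drop the mask.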