AHRNN: Attention-Based Hybrid Robust Neural Network for emotion recognition
Ke Xu, Bin Liu, Jianhua Tao, Zhao Lv, Cunhang Fan, Leichao Song
Cognitive Computation and Systems, published 2022-02-22
DOI: 10.1049/ccs2.12038 (https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12038)
Abstract
Existing methods struggle to capture the semantic emotion of a sentence when a cross-language corpus is lacking, which makes effective cross-language sentiment analysis difficult. To address this, we propose a neural network architecture called the Attention-Based Hybrid Robust Neural Network (AHRNN). The proposed architecture comprises pre-trained word embeddings with fine-tuning to obtain prior semantic information, two sub-networks and an attention mechanism to capture the global semantic emotional information in the text, and a fully connected layer with a softmax function to perform the final emotion classification. The Convolutional Neural Network (CNN) sub-network captures the local semantic emotional information of the text, the BiLSTM sub-network captures the contextual semantic emotional information, and the attention mechanism dynamically integrates the two to extract the key emotional information. We conduct experiments on a Chinese dataset (from the International Conference on Natural Language Processing and Chinese Computing) and an English dataset (SST). The evaluation is divided into three subtasks. In the single-language emotion recognition task, our method improves the accuracy of single-sentence positive/negative classification from 79% to 86%; recognition performance on fine-grained emotion tags improves by 9.6%; and accuracy on the cross-language emotion recognition task improves by 1.5%. Even with faulty data, the model's performance does not degrade significantly when the error rate is below 20%. These experimental results demonstrate the superiority of our method.
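To make the described architecture concrete, the following is a minimal PyTorch sketch of an AHRNN-style model: fine-tunable pre-trained embeddings feed a CNN branch (local features) and a BiLSTM branch (contextual features), an attention layer weights the fused per-token features, and a fully connected layer produces the classification logits. All layer sizes, kernel widths, and the additive attention formulation are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative AHRNN-style sketch (assumed hyperparameters, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AHRNNSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, num_classes=2,
                 cnn_channels=100, kernel_sizes=(3, 4, 5), lstm_hidden=128,
                 pretrained_embeddings=None):
        super().__init__()
        # Pre-trained word embeddings, fine-tuned during training.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        if pretrained_embeddings is not None:
            self.embedding.weight.data.copy_(pretrained_embeddings)

        # CNN sub-network: local semantic features from several kernel widths.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, cnn_channels, k, padding=k // 2)
             for k in kernel_sizes]
        )

        # BiLSTM sub-network: contextual semantic features per token.
        self.bilstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True,
                              bidirectional=True)

        # Attention over the fused per-token features to pick out the tokens
        # carrying the key emotional information.
        feature_dim = cnn_channels * len(kernel_sizes) + 2 * lstm_hidden
        self.attn = nn.Linear(feature_dim, 1)

        # Fully connected layer; softmax is applied via the loss at train time.
        self.fc = nn.Linear(feature_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        emb = self.embedding(token_ids)                       # (B, T, E)

        # CNN branch on (B, E, T); trim each map to T tokens so widths align.
        conv_in = emb.transpose(1, 2)
        conv_out = [F.relu(conv(conv_in))[:, :, :emb.size(1)]
                    for conv in self.convs]                   # each (B, C, T)
        cnn_feats = torch.cat(conv_out, dim=1).transpose(1, 2)  # (B, T, 3C)

        # BiLSTM branch.
        lstm_feats, _ = self.bilstm(emb)                      # (B, T, 2H)

        # Fuse branches token-wise, then additive attention pooling.
        fused = torch.cat([cnn_feats, lstm_feats], dim=-1)    # (B, T, D)
        scores = self.attn(torch.tanh(fused)).squeeze(-1)     # (B, T)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        summary = (weights * fused).sum(dim=1)                # (B, D)

        return self.fc(summary)                               # (B, num_classes)


if __name__ == "__main__":
    model = AHRNNSketch(vocab_size=10000, num_classes=2)
    dummy = torch.randint(0, 10000, (4, 32))  # 4 sentences, 32 tokens each
    print(model(dummy).shape)                 # torch.Size([4, 2])
```

In this sketch the two branches are run in parallel and fused per token before attention, which is one plausible reading of "dynamically integrates the semantic emotional information"; the paper's exact fusion and attention details may differ.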