{"title":"利用关联词引导的情感增强和混合学习增强基于方面的情感分析","authors":"","doi":"10.1016/j.neucom.2024.128705","DOIUrl":null,"url":null,"abstract":"<div><div>Aspect-based sentiment analysis (ABSA) is a sophisticated task in the field of natural language processing that aims to identify emotional tendencies related to specific aspects of text. However, ABSA often faces significant data shortages, which limit the availability of annotated data for training and affects the robustness of models. Moreover, when a text contains multiple emotional dimensions, these dimensions can interact, complicating the judgments of emotional polarity. In response to these challenges, this study proposes an innovative training framework: Linking words-guided multidimensional emotional data augmentation and adversarial contrastive training (LWEDA-ACT). Specifically, this method alleviates the issue of data scarcity by synthesizing additional training samples using four different text generators. To obtain the most representative samples, we selected them by calculating sentence entropy. Meanwhile, to reduce potential noise, we introduced linking words to ensure text coherence. Additionally, by applying adversarial training, the model is able to learn generalized feature representations to handle minor input perturbations, thereby enhancing its robustness and accuracy in complex emotional dimension interactions. Through contrastive learning, we constructed positive and negative sample pairs, enabling the model to more accurately identify and distinguish the sentiment polarity of different aspect terms. We conducted comprehensive experiments on three popular ABSA datasets, namely Restaurant, Laptop, and Twitter, and compared our method against the current state-of-the-art techniques. The experimental results demonstrate that our approach achieved an accuracy improvement of +0.98% and a macro F1 score increase of +0.52% on the Restaurant dataset. Additionally, on the challenging Twitter dataset, our method improved accuracy by +0.77% and the macro F1 score by +1.14%.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":null,"pages":null},"PeriodicalIF":5.5000,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing aspect-based sentiment analysis with linking words-guided emotional augmentation and hybrid learning\",\"authors\":\"\",\"doi\":\"10.1016/j.neucom.2024.128705\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Aspect-based sentiment analysis (ABSA) is a sophisticated task in the field of natural language processing that aims to identify emotional tendencies related to specific aspects of text. However, ABSA often faces significant data shortages, which limit the availability of annotated data for training and affects the robustness of models. Moreover, when a text contains multiple emotional dimensions, these dimensions can interact, complicating the judgments of emotional polarity. In response to these challenges, this study proposes an innovative training framework: Linking words-guided multidimensional emotional data augmentation and adversarial contrastive training (LWEDA-ACT). Specifically, this method alleviates the issue of data scarcity by synthesizing additional training samples using four different text generators. To obtain the most representative samples, we selected them by calculating sentence entropy. 
Meanwhile, to reduce potential noise, we introduced linking words to ensure text coherence. Additionally, by applying adversarial training, the model is able to learn generalized feature representations to handle minor input perturbations, thereby enhancing its robustness and accuracy in complex emotional dimension interactions. Through contrastive learning, we constructed positive and negative sample pairs, enabling the model to more accurately identify and distinguish the sentiment polarity of different aspect terms. We conducted comprehensive experiments on three popular ABSA datasets, namely Restaurant, Laptop, and Twitter, and compared our method against the current state-of-the-art techniques. The experimental results demonstrate that our approach achieved an accuracy improvement of +0.98% and a macro F1 score increase of +0.52% on the Restaurant dataset. Additionally, on the challenging Twitter dataset, our method improved accuracy by +0.77% and the macro F1 score by +1.14%.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2024-10-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231224014760\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224014760","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Enhancing aspect-based sentiment analysis with linking words-guided emotional augmentation and hybrid learning
Aspect-based sentiment analysis (ABSA) is a sophisticated task in the field of natural language processing that aims to identify emotional tendencies related to specific aspects of a text. However, ABSA often faces significant data shortages, which limit the availability of annotated data for training and affect the robustness of models. Moreover, when a text contains multiple emotional dimensions, these dimensions can interact, complicating judgments of emotional polarity. In response to these challenges, this study proposes an innovative training framework: Linking words-guided multidimensional emotional data augmentation and adversarial contrastive training (LWEDA-ACT). Specifically, this method alleviates the issue of data scarcity by synthesizing additional training samples using four different text generators, and the most representative samples are selected by calculating sentence entropy. Meanwhile, to reduce potential noise, we introduced linking words to ensure text coherence. Additionally, by applying adversarial training, the model learns generalized feature representations that withstand minor input perturbations, thereby enhancing its robustness and accuracy in complex emotional dimension interactions. Through contrastive learning, we constructed positive and negative sample pairs, enabling the model to more accurately identify and distinguish the sentiment polarity of different aspect terms. We conducted comprehensive experiments on three popular ABSA datasets, namely Restaurant, Laptop, and Twitter, and compared our method against current state-of-the-art techniques. The experimental results demonstrate that our approach achieved an accuracy improvement of +0.98% and a macro F1 score increase of +0.52% on the Restaurant dataset. Additionally, on the challenging Twitter dataset, our method improved accuracy by +0.77% and the macro F1 score by +1.14%.
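The abstract does not give the exact formula behind "sentence entropy", so the snippet below is only a minimal sketch of entropy-based sample selection, assuming a simple unigram Shannon entropy computed from each generated sentence itself; the function names, ranking direction, and cutoff are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): rank augmented
# sentences by a unigram Shannon entropy and keep the most "representative"
# ones. A token-level entropy under a pretrained language model could be
# substituted at the marked line if desired.
import math
from collections import Counter
from typing import List

def sentence_entropy(sentence: str) -> float:
    """Shannon entropy (bits) of the sentence's own unigram distribution."""
    tokens = sentence.lower().split()          # <-- swap in an LM-based score here
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_augmented(candidates: List[str], k: int) -> List[str]:
    """Keep the k candidates with the highest entropy (hypothetical criterion:
    higher entropy ~ more lexical diversity ~ more informative sample)."""
    return sorted(candidates, key=sentence_entropy, reverse=True)[:k]

if __name__ == "__main__":
    generated = [
        "The food was great , but the service was slow .",
        "Great food , however the waiters were slow and the room was noisy .",
        "Food good .",
    ]
    for s in select_augmented(generated, k=2):
        print(f"{sentence_entropy(s):.2f}  {s}")
```

Whether the authors prefer high-, low-, or mid-entropy sentences is not stated in the abstract, so the ranking direction above is purely illustrative.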
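Similarly, the abstract names adversarial training and contrastive learning over positive and negative aspect pairs without specifying the objectives. The PyTorch sketch below shows one common way to realize these two ingredients, an FGM-style embedding perturbation plus a supervised contrastive loss over aspect representations that share a sentiment label; the pairing rule and hyperparameters are assumptions, not the authors' implementation.

```python
# Illustrative only: FGM-style adversarial perturbation plus a supervised
# contrastive loss. The pairing rule (same sentiment label => positive pair)
# and all hyperparameters here are assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """features: (N, d) aspect representations; labels: (N,) sentiment ids."""
    z = F.normalize(features, dim=-1)
    sim = z @ z.t() / temperature                      # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))    # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # average log-probability of the positives for each anchor
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts)
    return loss.mean()

def fgm_perturb(embedding: torch.nn.Embedding, epsilon: float = 1.0):
    """Add an L2-normalized gradient step to the embedding matrix.

    Call after loss.backward(); subtract the returned delta once the
    adversarial forward/backward pass is done, to restore the weights.
    """
    grad = embedding.weight.grad
    if grad is None:
        return None
    norm = grad.norm()
    if norm == 0 or torch.isnan(norm):
        return None
    delta = epsilon * grad / norm
    embedding.weight.data.add_(delta)
    return delta

if __name__ == "__main__":
    feats = torch.randn(8, 64)                            # toy aspect features
    sentiments = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])   # toy polarity labels
    print(supervised_contrastive_loss(feats, sentiments).item())
```

In a full training loop one would typically combine this contrastive term with the standard cross-entropy classification loss and run a second forward pass on the perturbed embeddings before restoring them.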
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.