Title: Enhancing aspect-based sentiment analysis with linking words-guided emotional augmentation and hybrid learning
Journal: Neurocomputing (JCR Q1, Computer Science, Artificial Intelligence; impact factor 5.5)
DOI: 10.1016/j.neucom.2024.128705
Publication date: 2024-10-15
Article type: Journal Article
URL: https://www.sciencedirect.com/science/article/pii/S0925231224014760
Citations: 0
Abstract
Aspect-based sentiment analysis (ABSA) is a sophisticated natural language processing task that aims to identify emotional tendencies toward specific aspects of a text. However, ABSA often faces significant data shortages, which limit the availability of annotated training data and weaken model robustness. Moreover, when a text contains multiple emotional dimensions, these dimensions can interact, complicating judgments of emotional polarity. In response to these challenges, this study proposes an innovative training framework: linking words-guided multidimensional emotional data augmentation and adversarial contrastive training (LWEDA-ACT). Specifically, the method alleviates data scarcity by synthesizing additional training samples with four different text generators. To obtain the most representative samples, we select them by calculating sentence entropy; meanwhile, to reduce potential noise, we introduce linking words that preserve text coherence. Additionally, adversarial training enables the model to learn generalized feature representations that withstand minor input perturbations, enhancing its robustness and accuracy when emotional dimensions interact in complex ways. Through contrastive learning, we construct positive and negative sample pairs, enabling the model to more accurately identify and distinguish the sentiment polarity of different aspect terms. We conducted comprehensive experiments on three popular ABSA datasets, namely Restaurant, Laptop, and Twitter, and compared our method against current state-of-the-art techniques. The experimental results demonstrate that our approach achieves an accuracy improvement of +0.98% and a macro F1 increase of +0.52% on the Restaurant dataset; on the challenging Twitter dataset, it improves accuracy by +0.77% and macro F1 by +1.14%.
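The abstract states that the most representative augmented samples are selected by calculating sentence entropy, but gives no formula. A minimal sketch of one plausible reading, assuming a simple unigram Shannon entropy over word frequencies (the function names `sentence_entropy` and `select_representative` are illustrative, not from the paper):

```python
import math
from collections import Counter

def sentence_entropy(sentence: str) -> float:
    """Shannon entropy of the sentence's word-frequency distribution.

    Assumption: the paper's exact entropy definition is not given in the
    abstract; a unigram distribution over whitespace tokens is used here.
    """
    tokens = sentence.lower().split()
    counts = Counter(tokens)
    total = len(tokens)
    # H = -sum(p * log2 p) over the empirical token probabilities
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_representative(candidates: list[str], k: int) -> list[str]:
    """Keep the k generated sentences with the highest entropy,
    treating higher lexical entropy as a proxy for representativeness."""
    return sorted(candidates, key=sentence_entropy, reverse=True)[:k]
```

Under this reading, repetitive generator outputs (low entropy) are filtered out before training, while lexically diverse augmentations are retained.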
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing, covering neurocomputing theory, practice, and applications.