Exploring ChatGPT-based Augmentation Strategies for Contrastive Aspect-based Sentiment Analysis

Lingling Xu, Haoran Xie, S. Joe Qin, Fu Lee Wang, Xiaohui Tao
{"title":"为基于对比方面的情感分析探索基于 ChatGPT 的增强策略","authors":"Lingling Xu, Haoran Xie, S. Joe Qin, Fu Lee Wang, Xiaohui Tao","doi":"arxiv-2409.11218","DOIUrl":null,"url":null,"abstract":"Aspect-based sentiment analysis (ABSA) involves identifying sentiment towards\nspecific aspect terms in a sentence and allows us to uncover nuanced\nperspectives and attitudes on particular aspects of a product, service, or\ntopic. However, the scarcity of labeled data poses a significant challenge to\ntraining high-quality models. To address this issue, we explore the potential\nof data augmentation using ChatGPT, a well-performing large language model\n(LLM), to enhance the sentiment classification performance towards aspect\nterms. Specifically, we explore three data augmentation strategies based on\nChatGPT: context-focused, aspect-focused, and context-aspect data augmentation\ntechniques. Context-focused data augmentation focuses on changing the word\nexpression of context words in the sentence while keeping aspect terms\nunchanged. In contrast, aspect-focused data augmentation aims to change aspect\nterms but keep context words unchanged. Context-Aspect data augmentation\nintegrates the above two data augmentations to generate augmented samples.\nFurthermore, we incorporate contrastive learning into the ABSA tasks to improve\nperformance. 
Extensive experiments show that all three data augmentation\ntechniques lead to performance improvements, with the context-aspect data\naugmentation strategy performing best and surpassing the performance of the\nbaseline models.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring ChatGPT-based Augmentation Strategies for Contrastive Aspect-based Sentiment Analysis\",\"authors\":\"Lingling Xu, Haoran Xie, S. Joe Qin, Fu Lee Wang, Xiaohui Tao\",\"doi\":\"arxiv-2409.11218\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Aspect-based sentiment analysis (ABSA) involves identifying sentiment towards\\nspecific aspect terms in a sentence and allows us to uncover nuanced\\nperspectives and attitudes on particular aspects of a product, service, or\\ntopic. However, the scarcity of labeled data poses a significant challenge to\\ntraining high-quality models. To address this issue, we explore the potential\\nof data augmentation using ChatGPT, a well-performing large language model\\n(LLM), to enhance the sentiment classification performance towards aspect\\nterms. Specifically, we explore three data augmentation strategies based on\\nChatGPT: context-focused, aspect-focused, and context-aspect data augmentation\\ntechniques. Context-focused data augmentation focuses on changing the word\\nexpression of context words in the sentence while keeping aspect terms\\nunchanged. In contrast, aspect-focused data augmentation aims to change aspect\\nterms but keep context words unchanged. Context-Aspect data augmentation\\nintegrates the above two data augmentations to generate augmented samples.\\nFurthermore, we incorporate contrastive learning into the ABSA tasks to improve\\nperformance. 
Extensive experiments show that all three data augmentation\\ntechniques lead to performance improvements, with the context-aspect data\\naugmentation strategy performing best and surpassing the performance of the\\nbaseline models.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11218\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11218","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Aspect-based sentiment analysis (ABSA) involves identifying sentiment towards specific aspect terms in a sentence and allows us to uncover nuanced perspectives and attitudes on particular aspects of a product, service, or topic. However, the scarcity of labeled data poses a significant challenge to training high-quality models. To address this issue, we explore the potential of data augmentation using ChatGPT, a well-performing large language model (LLM), to enhance the sentiment classification performance towards aspect terms. Specifically, we explore three data augmentation strategies based on ChatGPT: context-focused, aspect-focused, and context-aspect data augmentation techniques. Context-focused data augmentation focuses on changing the word expression of context words in the sentence while keeping aspect terms unchanged. In contrast, aspect-focused data augmentation aims to change aspect terms but keep context words unchanged. Context-aspect data augmentation integrates the above two data augmentations to generate augmented samples. Furthermore, we incorporate contrastive learning into the ABSA tasks to improve performance. Extensive experiments show that all three data augmentation techniques lead to performance improvements, with the context-aspect data augmentation strategy performing best and surpassing the performance of the baseline models.
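The three augmentation strategies can be illustrated with prompt templates. The exact prompt wording used in the paper is not given in the abstract, so the templates below are an assumption; only the constraint each strategy imposes (aspect fixed, context fixed, or both changed) follows from the text.

```python
# Hypothetical prompt templates for the three ChatGPT augmentation
# strategies described in the abstract. The wording is an assumption;
# the paper's actual prompts may differ.

def context_focused_prompt(sentence: str, aspect: str) -> str:
    """Paraphrase context words; the aspect term must stay verbatim."""
    return (
        f"Rewrite the following sentence with different wording, but keep "
        f"the aspect term '{aspect}' exactly unchanged.\nSentence: {sentence}"
    )

def aspect_focused_prompt(sentence: str, aspect: str) -> str:
    """Swap the aspect term for a comparable one; context stays fixed."""
    return (
        f"Replace only the aspect term '{aspect}' with a similar aspect term "
        f"of the same sentiment, keeping all other words unchanged.\n"
        f"Sentence: {sentence}"
    )

def context_aspect_prompt(sentence: str, aspect: str) -> str:
    """Combine both: new context wording and a new aspect term."""
    return (
        f"Rewrite the following sentence with different wording and also "
        f"replace the aspect term '{aspect}' with a similar one, preserving "
        f"the original sentiment toward it.\nSentence: {sentence}"
    )

example = "The battery life is great but the screen is dim."
print(context_focused_prompt(example, "battery life"))
```

Each prompt would be sent to ChatGPT and the response used as an additional labeled training sample carrying the original sentiment label.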
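The contrastive-learning component can be sketched as an in-batch supervised contrastive loss that pulls samples with the same sentiment label together and pushes others apart. The abstract does not specify the exact formulation, so treating sentiment labels as the positive-pair criterion is an assumption; a minimal dependency-free sketch:

```python
import math

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """In-batch supervised contrastive loss over sentence embeddings.
    Using shared sentiment labels to define positive pairs is an
    assumption about how the paper applies contrastive learning."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def norm(u):
        return math.sqrt(dot(u, u)) or 1.0

    # L2-normalize so dot products are cosine similarities.
    z = [[x / norm(e) for x in e] for e in embeddings]
    n = len(z)
    loss, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # Softmax denominator over all other samples in the batch.
        denom = sum(math.exp(dot(z[i], z[k]) / temperature)
                    for k in range(n) if k != i)
        for j in positives:
            loss -= math.log(math.exp(dot(z[i], z[j]) / temperature) / denom)
            count += 1
    return loss / count if count else 0.0
```

When same-sentiment embeddings cluster and different-sentiment ones separate, the loss approaches zero, which is the training signal the augmented samples would reinforce.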