Neural multi-task learning for end-to-end Arabic aspect-based sentiment analysis

Computer Speech and Language | IF 3.1 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-23 | DOI: 10.1016/j.csl.2024.101683
Rajae Bensoltane, Taher Zaki
{"title":"Neural multi-task learning for end-to-end Arabic aspect-based sentiment analysis","authors":"Rajae Bensoltane,&nbsp;Taher Zaki","doi":"10.1016/j.csl.2024.101683","DOIUrl":null,"url":null,"abstract":"<div><p>Most existing aspect-based sentiment analysis (ABSA) methods perform the tasks of aspect extraction and sentiment classification independently, assuming that the aspect terms are already determined when handling the aspect sentiment classification task. However, such settings are neither practical nor appropriate in real-life applications, as aspects must be extracted prior to sentiment classification. This study aims to overcome this shortcoming by jointly identifying aspect terms and the corresponding sentiments using a multi-task learning approach based on a unified tagging scheme. The proposed model uses the Bidirectional Encoder Representations from Transformers (BERT) model to produce the input representations, followed by a Bidirectional Gated Recurrent Unit (BiGRU) layer for further contextual and semantic coding. An attention layer is added on top of BiGRU to force the model to focus on the important parts of the sentence. Finally, a Conditional Random Fields (CRF) layer is used to handle inter-label dependencies. Experiments conducted on a reference Arabic hotel dataset show that the proposed model significantly outperforms the baseline and related work models.</p></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":"89 ","pages":"Article 101683"},"PeriodicalIF":3.1000,"publicationDate":"2024-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885230824000664/pdfft?md5=5af89b8ac3b7169819a4f2bf2d9a12ff&pid=1-s2.0-S0885230824000664-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Speech and Language","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885230824000664","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Most existing aspect-based sentiment analysis (ABSA) methods perform the tasks of aspect extraction and sentiment classification independently, assuming that the aspect terms are already determined when handling the aspect sentiment classification task. However, such settings are neither practical nor appropriate in real-life applications, as aspects must be extracted prior to sentiment classification. This study aims to overcome this shortcoming by jointly identifying aspect terms and the corresponding sentiments using a multi-task learning approach based on a unified tagging scheme. The proposed model uses the Bidirectional Encoder Representations from Transformers (BERT) model to produce the input representations, followed by a Bidirectional Gated Recurrent Unit (BiGRU) layer for further contextual and semantic coding. An attention layer is added on top of BiGRU to force the model to focus on the important parts of the sentence. Finally, a Conditional Random Fields (CRF) layer is used to handle inter-label dependencies. Experiments conducted on a reference Arabic hotel dataset show that the proposed model significantly outperforms the baseline and related work models.
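The abstract specifies the pipeline only at a high level. The sketch below is an illustrative PyTorch reconstruction, not the authors' code: a unified tag set fuses BIO span labels with polarity so that a single sequence-labeling pass handles both aspect extraction and sentiment classification. The tag inventory, hidden sizes, attention variant (multi-head self-attention), the pytorch-crf dependency, and the AraBERT checkpoint name are all assumptions not stated in the abstract.

```python
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # pip install pytorch-crf

# Hypothetical unified tagging scheme: BIO spans fused with polarity,
# e.g. "the wifi was terrible" -> O B-NEG O O, marking "wifi" as a
# negative aspect term in one pass. (Subword/label alignment omitted.)
TAGS = ["O", "B-POS", "I-POS", "B-NEG", "I-NEG", "B-NEU", "I-NEU"]

class BertBiGRUAttnCRF(nn.Module):
    def __init__(self, bert_name="aubmindlab/bert-base-arabertv2",
                 gru_hidden=256, num_tags=len(TAGS)):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)  # input representations
        self.bigru = nn.GRU(self.bert.config.hidden_size, gru_hidden,
                            batch_first=True, bidirectional=True)
        # Attention over BiGRU states; the paper's exact attention
        # formulation is not given in the abstract.
        self.attn = nn.MultiheadAttention(2 * gru_hidden, num_heads=4,
                                          batch_first=True)
        self.emit = nn.Linear(2 * gru_hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)  # inter-label dependencies

    def forward(self, input_ids, attention_mask, tags=None):
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        h, _ = self.bigru(h)
        h, _ = self.attn(h, h, h, key_padding_mask=(attention_mask == 0))
        emissions = self.emit(h)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi decoding of the best tag sequence per sentence.
        return self.crf.decode(emissions, mask=mask)
```

In use, a tokenizer from the same checkpoint would supply input_ids and attention_mask, and tags would hold the per-token unified labels; the single CRF output replaces the usual two-stage extract-then-classify pipeline the abstract argues against.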

Source Journal
Computer Speech and Language
Category: Engineering & Technology - Computer Science: Artificial Intelligence
CiteScore: 11.30
Self-citation rate: 4.70%
Annual publications: 80
Average review time: 22.9 weeks
Journal Introduction: Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language. The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of, and experimentation with, complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.
Latest Articles in This Journal
Modeling correlated causal-effect structure with a hypergraph for document-level event causality identification
You Are What You Write: Author re-identification privacy attacks in the era of pre-trained language models
End-to-End Speech-to-Text Translation: A Survey
Corpus and unsupervised benchmark: Towards Tagalog grammatical error correction
TR-Net: Token Relation Inspired Table Filling Network for Joint Entity and Relation Extraction