Reinforcement learning in sentiment analysis: a review and future directions
Jer Min Eyu, Kok-Lim Alvin Yau, Lei Liu, Yung-Wey Chong
Artificial Intelligence Review 58(1), published 2024-11-07. DOI: 10.1007/s10462-024-10967-0
Sentiment analysis in natural language processing (NLP) is used to understand the polarity of human emotions (e.g., positive and negative) and preferences (e.g., price and quality). Reinforcement learning (RL) enables a decision maker (or agent) to observe the operating environment (or the current state), select the optimal action, and receive feedback signals (or a reward) from the operating environment. Deep reinforcement learning (DRL) extends RL with deep neural networks (i.e., main and target networks) to capture the state information of inputs and mitigate the curse of dimensionality in RL. In sentiment analysis, RL and DRL reduce the need for large labeled datasets and linguistic resources, increasing scalability while preserving the context and order of logical partitions. Through these enhancements, RL and DRL algorithms identify negations, improve the quality of generated responses, predict logical partitions, remove irrelevant aspects, and ultimately capture the correct sentiment polarity. This paper reviews RL and DRL models and algorithms, covering their objectives, applications, datasets, performance, and open issues in sentiment analysis.
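To make the agent-state-action-reward loop concrete, the following is a minimal, hypothetical Python/NumPy sketch of sentiment classification framed as RL: the state is a bag-of-words vector for a sentence, the agent's action is a polarity label, and the reward is +1 for a correct prediction and -1 otherwise. The toy corpus, hyperparameters, and linear Q-function are illustrative assumptions, not the paper's method; the main/target weight pair only mirrors the two-network DRL structure described in the abstract.

```python
# Hypothetical sketch: sentiment polarity as a one-step RL problem.
# State = bag-of-words vector; actions = {0: negative, 1: positive};
# reward = +1 if the predicted label is correct, -1 otherwise.
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled corpus (made up for illustration): 1 = positive, 0 = negative.
corpus = [
    ("great quality fair price", 1),
    ("poor quality high price", 0),
    ("not good", 0),
    ("not bad", 1),
]
vocab = sorted({w for text, _ in corpus for w in text.split()})

def encode(text):
    """Bag-of-words state vector for a sentence."""
    vec = np.zeros(len(vocab))
    for w in text.split():
        vec[vocab.index(w)] += 1.0
    return vec

n_actions = 2
main = np.zeros((n_actions, len(vocab)))  # "main network": linear Q-function
target = main.copy()                      # "target network": periodic copy of main
alpha, epsilon, sync_every = 0.1, 0.2, 20

for step in range(200):
    text, label = corpus[step % len(corpus)]
    state = encode(text)
    # Epsilon-greedy action selection from the main weights.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(main @ state))
    reward = 1.0 if action == label else -1.0
    # One-step (bandit-style) update toward the observed reward. In a full
    # multi-step DQN, the target copy would supply the bootstrapped value
    # reward + gamma * max_a target(next_state); here it is kept only to
    # mirror the main/target structure of DRL.
    td_error = reward - main[action] @ state
    main[action] += alpha * td_error * state
    if (step + 1) % sync_every == 0:
        target = main.copy()  # periodic hard update of the target copy

for text, _ in corpus:
    q = main @ encode(text)
    print(f"{text!r} -> {'positive' if int(np.argmax(q)) == 1 else 'negative'}")
```

A full DRL setup would replace the linear weights with deep main and target networks, encode richer state information (e.g., word embeddings or logical partitions of the sentence), and bootstrap multi-step returns through the target network.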
Journal introduction:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.