Title: Suppression of negative tweets using reinforcement learning systems
Authors: Kazuteru Miyazaki, Hitomi Miyazaki
DOI: 10.1016/j.cogsys.2023.101207
URL: https://www.sciencedirect.com/science/article/pii/S1389041723001419
Publication date: 2023-12-29
Citations: 0
Abstract
In recent years, harm caused by negative tweets has become a social problem. In this paper, we propose a method for suppressing negative tweets using reinforcement learning, modeling tweet writing as a multi-agent environment. Numerical experiments verify the suppression effect of several reinforcement learning methods, as well as their robustness to environmental changes. We compare Profit Sharing (PS) and Q-learning (QL) as the reinforcement learning methods, confirming the effectiveness of PS and the behavior predicted by the rationality theorem in a multi-agent environment. In further experiments on the ability to follow environmental changes, PS proved more robust than QL. If machines can appropriately intervene in and interact with posts made by humans, negative tweets and even blow-ups may be suppressed automatically, without the need for costly human monitoring.
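The two tabular methods compared in the abstract differ mainly in how they assign credit. A minimal sketch of both update rules follows; the state/action names, learning rate, and geometric decay ratio are illustrative assumptions, not the paper's actual experimental setup. (Profit Sharing commonly uses a geometrically decaying credit function; the rationality theorem constrains how quickly that credit must decay relative to the number of available actions.)

```python
from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One-step Q-learning: bootstrap from the best next-state value."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def profit_sharing_update(Q, episode, reward, ratio=0.5):
    """Profit Sharing: when a reward arrives at the end of an episode,
    distribute geometrically decaying credit backwards along the
    sequence of fired (state, action) rules."""
    credit = reward
    for (s, a) in reversed(episode):
        Q[(s, a)] += credit
        credit *= ratio

# Illustrative single updates on an empty table.
Q_ql = defaultdict(float)
q_learning_update(Q_ql, "s0", 0, 1.0, "s1", actions=[0, 1])

Q_ps = defaultdict(float)
profit_sharing_update(Q_ps, [("s0", 0), ("s1", 1)], reward=1.0)
```

QL updates after every step by bootstrapping, which makes it sensitive to the non-stationarity other learning agents introduce; PS only propagates actually received rewards backwards per episode, which is one intuition for its robustness in the multi-agent setting studied here.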