Social satisficing: Multi-agent reinforcement learning with satisficing agents

Pub Date: 2024-07-19 | DOI: 10.1016/j.biosystems.2024.105276
Daisuke Uragami, Noriaki Sonota, Tatsuji Takahashi
{"title":"社会满意:满足型代理的多代理强化学习","authors":"Daisuke Uragami ,&nbsp;Noriaki Sonota ,&nbsp;Tatsuji Takahashi","doi":"10.1016/j.biosystems.2024.105276","DOIUrl":null,"url":null,"abstract":"<div><p>For a reinforcement learning agent to finish trial-and-error in a realistic time duration, it is necessary to limit the scope of exploration during the learning process. However, limiting the exploration scope means limitation in optimality: the agent could fall into a suboptimal solution. This is the nature of local, bottom-up way of learning. An alternative way is to set a goal to be achieved, which is a more global, top-down way. The risk-sensitive satisficing (RS) value function incorporate, as a method of the latter way, the satisficing principle into reinforcement learning and enables agents to quickly converge to exploiting the optimal solution without falling into a suboptimal one, when an appropriate goal (aspiration level) is given. However, how best to determine the aspiration level is still an open problem. This study proposes social satisficing, a framework for multi-agent reinforcement learning which determines the aspiration level through information sharing among multiple agents. In order to verify the effectiveness of this novel method, we conducted simulations in a learning environment with many suboptimal goals (SuboptimaWorld). The results show that the proposed method, which converts the aspiration level at the episodic level into local (state-wise) aspiration levels, possesses a higher learning efficiency than any of the compared methods, and that the novel method has the ability to autonomously adjust exploration scope, while keeping the shared information minimal. This study provides a glimpse into an aspect of human and biological sociality which has been mentioned little in the context of artificial intelligence and machine learning.</p></div>","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0303264724001618/pdfft?md5=1013f746e0723d63b95dde32bc8a58b3&pid=1-s2.0-S0303264724001618-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Social satisficing: Multi-agent reinforcement learning with satisficing agents\",\"authors\":\"Daisuke Uragami ,&nbsp;Noriaki Sonota ,&nbsp;Tatsuji Takahashi\",\"doi\":\"10.1016/j.biosystems.2024.105276\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>For a reinforcement learning agent to finish trial-and-error in a realistic time duration, it is necessary to limit the scope of exploration during the learning process. However, limiting the exploration scope means limitation in optimality: the agent could fall into a suboptimal solution. This is the nature of local, bottom-up way of learning. An alternative way is to set a goal to be achieved, which is a more global, top-down way. The risk-sensitive satisficing (RS) value function incorporate, as a method of the latter way, the satisficing principle into reinforcement learning and enables agents to quickly converge to exploiting the optimal solution without falling into a suboptimal one, when an appropriate goal (aspiration level) is given. However, how best to determine the aspiration level is still an open problem. 
This study proposes social satisficing, a framework for multi-agent reinforcement learning which determines the aspiration level through information sharing among multiple agents. In order to verify the effectiveness of this novel method, we conducted simulations in a learning environment with many suboptimal goals (SuboptimaWorld). The results show that the proposed method, which converts the aspiration level at the episodic level into local (state-wise) aspiration levels, possesses a higher learning efficiency than any of the compared methods, and that the novel method has the ability to autonomously adjust exploration scope, while keeping the shared information minimal. This study provides a glimpse into an aspect of human and biological sociality which has been mentioned little in the context of artificial intelligence and machine learning.</p></div>\",\"PeriodicalId\":2,\"journal\":{\"name\":\"ACS Applied Bio Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S0303264724001618/pdfft?md5=1013f746e0723d63b95dde32bc8a58b3&pid=1-s2.0-S0303264724001618-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Bio Materials\",\"FirstCategoryId\":\"99\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0303264724001618\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, BIOMATERIALS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio Materials","FirstCategoryId":"99","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0303264724001618","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}
Citations: 0

Abstract

For a reinforcement learning agent to finish trial-and-error within a realistic time, it is necessary to limit the scope of exploration during learning. However, limiting the exploration scope also limits optimality: the agent can fall into a suboptimal solution. This is the nature of the local, bottom-up way of learning. An alternative is to set a goal to be achieved, a more global, top-down approach. The risk-sensitive satisficing (RS) value function, as a method of the latter kind, incorporates the satisficing principle into reinforcement learning and, when an appropriate goal (aspiration level) is given, enables agents to quickly converge on exploiting the optimal solution without falling into a suboptimal one. How best to determine the aspiration level, however, remains an open problem. This study proposes social satisficing, a framework for multi-agent reinforcement learning that determines the aspiration level through information sharing among multiple agents. To verify the effectiveness of this method, we conducted simulations in a learning environment with many suboptimal goals (SuboptimaWorld). The results show that the proposed method, which converts the episode-level aspiration level into local (state-wise) aspiration levels, achieves higher learning efficiency than any of the compared methods, and that it can autonomously adjust its exploration scope while keeping the shared information minimal. This study offers a glimpse into an aspect of human and biological sociality that has been little discussed in the context of artificial intelligence and machine learning.
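
As a rough illustration of the satisficing idea summarized above, the sketch below shows a tabular agent whose action selection is driven by an RS-style value: the gap between the estimated action value and an aspiration level, weighted by how often the action has been tried. This is a minimal sketch assuming the commonly cited form RS(s, a) = n(s, a)(Q(s, a) − aspiration); the class name RSAgent and all hyperparameters are illustrative and not taken from the paper, whose exact formulation (and its social, multi-agent extension) may differ.

import numpy as np

class RSAgent:
    def __init__(self, n_states, n_actions, aspiration, alpha=0.1, gamma=0.9):
        self.Q = np.zeros((n_states, n_actions))   # action-value estimates
        self.n = np.zeros((n_states, n_actions))   # visit counts per (state, action)
        self.aspiration = aspiration               # aspiration level to satisfice against (assumed fixed scalar)
        self.alpha = alpha                         # learning rate
        self.gamma = gamma                         # discount factor

    def act(self, state):
        # RS-style value: (Q - aspiration) weighted by visit count. While every action
        # falls short of the aspiration, less-tried actions score closer to zero and are
        # preferred (exploration); once some action exceeds it, well-tried satisfying
        # actions dominate (exploitation).
        rs = self.n[state] * (self.Q[state] - self.aspiration)
        return int(np.argmax(rs))

    def update(self, state, action, reward, next_state):
        # Ordinary tabular Q-learning update; the satisficing principle only alters
        # how actions are selected, not how values are learned.
        td_target = reward + self.gamma * self.Q[next_state].max()
        self.Q[state, action] += self.alpha * (td_target - self.Q[state, action])
        self.n[state, action] += 1

In the paper's social satisficing setting, the aspiration level would additionally be derived from information shared among agents and broken down from an episode-level target into state-wise targets; the fixed scalar aspiration above is only a stand-in for that mechanism.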
