Cooperative Behavior of Agents That Model the Other and the Self in Noisy Iterated Prisoners' Dilemma Simulation

Takaki Makino, Kazuyuki Aihara
{"title":"噪声迭代囚徒困境模拟中自我与他者的合作行为","authors":"Takaki Makino, Kazuyuki Aihara","doi":"10.1109/DEVLRN.2005.1490943","DOIUrl":null,"url":null,"abstract":"We developed self learning for simulation study of mutual understanding between peer agents. We designed them to use various types of coplayer models and a reinforcement learning algorithm to learn to play a noisy iterated prisoners' dilemma game so that the pay-off for the agent itself is maximized. We measured the mutual-modeling ability of each type of agent in terms of cooperative behavior when playing with another equivalent agent. We observed that agents with a complex coplayer model, which includes a model of the agent itself, showed higher cooperation than agents with a simpler coplayer model only. Moreover, in low-noise environments, Level-M agent, which develops equivalent models of the self and the other, showed higher cooperation than other types of agents. These results suggest the importance of \"self-observation\" in the design of communicative agents","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"111 3S 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cooperative Behavior of Agents That Model the Other and the Self in Noisy Iterated Prisoners' Dilemma Simulation\",\"authors\":\"Takaki Makino, Kazuyuki Aihara\",\"doi\":\"10.1109/DEVLRN.2005.1490943\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We developed self learning for simulation study of mutual understanding between peer agents. We designed them to use various types of coplayer models and a reinforcement learning algorithm to learn to play a noisy iterated prisoners' dilemma game so that the pay-off for the agent itself is maximized. We measured the mutual-modeling ability of each type of agent in terms of cooperative behavior when playing with another equivalent agent. We observed that agents with a complex coplayer model, which includes a model of the agent itself, showed higher cooperation than agents with a simpler coplayer model only. Moreover, in low-noise environments, Level-M agent, which develops equivalent models of the self and the other, showed higher cooperation than other types of agents. These results suggest the importance of \\\"self-observation\\\" in the design of communicative agents\",\"PeriodicalId\":297121,\"journal\":{\"name\":\"Proceedings. The 4nd International Conference on Development and Learning, 2005.\",\"volume\":\"111 3S 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. The 4nd International Conference on Development and Learning, 2005.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DEVLRN.2005.1490943\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2005.1490943","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

We developed self-learning agents for a simulation study of mutual understanding between peer agents. The agents were designed to use various types of coplayer models and a reinforcement learning algorithm to learn to play a noisy iterated prisoners' dilemma game so that the payoff for the agent itself is maximized. We measured the mutual-modeling ability of each type of agent in terms of its cooperative behavior when playing with another equivalent agent. We observed that agents with a complex coplayer model, which includes a model of the agent itself, showed higher cooperation than agents with only a simpler coplayer model. Moreover, in low-noise environments, the Level-M agent, which develops equivalent models of the self and the other, showed higher cooperation than the other types of agents. These results suggest the importance of "self-observation" in the design of communicative agents.
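The abstract describes agents that learn a noisy iterated prisoners' dilemma through reinforcement learning while maintaining models of their coplayer. As a rough illustration of that kind of setup (not the paper's actual method), the sketch below pairs two memory-one tabular Q-learning agents in a noisy iterated prisoners' dilemma; the payoff values, the action-flip noise model, the state encoding, and the learning parameters are all illustrative assumptions.

```python
# Minimal sketch of a noisy iterated prisoners' dilemma with two
# tabular Q-learning agents. Payoff values (T=5, R=3, P=1, S=0), the
# noise model (each intended action is flipped with probability
# EPSILON_NOISE), and the memory-one state encoding are assumptions
# for illustration, not details taken from the paper.
import random
from collections import defaultdict

ACTIONS = ["C", "D"]
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
EPSILON_NOISE = 0.05   # probability that an intended action is flipped
ALPHA, GAMMA = 0.1, 0.95
EXPLORE = 0.1          # epsilon-greedy exploration rate


class QAgent:
    """Memory-one learner: the state is the pair of realized actions
    (own move, coplayer's move) observed in the previous round."""

    def __init__(self):
        self.q = defaultdict(float)
        self.state = ("C", "C")  # assumed initial state

    def act(self):
        if random.random() < EXPLORE:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(self.state, a)])

    def learn(self, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        key = (self.state, action)
        self.q[key] += ALPHA * (reward + GAMMA * best_next - self.q[key])
        self.state = next_state


def noisy(action):
    """Flip the intended action with probability EPSILON_NOISE."""
    if random.random() < EPSILON_NOISE:
        return "D" if action == "C" else "C"
    return action


agents = (QAgent(), QAgent())
for _ in range(10_000):
    intended = [ag.act() for ag in agents]
    realized = [noisy(a) for a in intended]
    rewards = PAYOFF[(realized[0], realized[1])]
    for i, ag in enumerate(agents):
        next_state = (realized[i], realized[1 - i])
        ag.learn(intended[i], rewards[i], next_state)
```

In such a sketch, the level of cooperation that emerges between the two learners depends on the noise rate and on how much of the coplayer each agent can model, which is the axis the paper varies with its different coplayer-model types.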