Combining imitation and deep reinforcement learning to human-level performance on a virtual foraging task

IF 1.2 | CAS Tier 4 (Computer Science) | JCR Q4 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Adaptive Behavior | Pub Date: 2023-09-15 | DOI: 10.1177/10597123231201655
Vittorio Giammarino, Matthew F Dunne, Kylie N Moore, Michael E Hasselmo, Chantal E Stern, Ioannis Ch Paschalidis
{"title":"Combining imitation and deep reinforcement learning to human-level performance on a virtual foraging task","authors":"Vittorio Giammarino, Matthew F Dunne, Kylie N Moore, Michael E Hasselmo, Chantal E Stern, Ioannis Ch Paschalidis","doi":"10.1177/10597123231201655","DOIUrl":null,"url":null,"abstract":"We develop a framework to learn bio-inspired foraging policies using human data. We conduct an experiment where humans are virtually immersed in an open field foraging environment and are trained to collect the highest amount of rewards. A Markov Decision Process (MDP) framework is introduced to model the human decision dynamics. Then, Imitation Learning (IL) based on maximum likelihood estimation is used to train Neural Networks (NN) that map human decisions to observed states. The results show that passive imitation substantially underperforms humans. We further refine the human-inspired policies via Reinforcement Learning (RL) using the on-policy Proximal Policy Optimization (PPO) algorithm which shows better stability than other algorithms and can steadily improve the policies pre-trained with IL. We show that the combination of IL and RL match human performance and that the artificial agents trained with our approach can quickly adapt to reward distribution shift. We finally show that good performance and robustness to reward distribution shift strongly depend on combining allocentric information with an egocentric representation of the environment.","PeriodicalId":55552,"journal":{"name":"Adaptive Behavior","volume":"7 1","pages":"0"},"PeriodicalIF":1.2000,"publicationDate":"2023-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Adaptive Behavior","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/10597123231201655","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 1

Abstract

We develop a framework to learn bio-inspired foraging policies from human data. We conduct an experiment in which humans are virtually immersed in an open-field foraging environment and are trained to collect as much reward as possible. A Markov Decision Process (MDP) framework is introduced to model the human decision dynamics. Then, Imitation Learning (IL) based on maximum likelihood estimation is used to train neural networks (NNs) that map observed states to human decisions. The results show that passive imitation substantially underperforms humans. We further refine the human-inspired policies via Reinforcement Learning (RL) using the on-policy Proximal Policy Optimization (PPO) algorithm, which shows better stability than other algorithms and steadily improves the policies pre-trained with IL. We show that the combination of IL and RL matches human performance and that artificial agents trained with our approach can quickly adapt to reward distribution shift. Finally, we show that good performance and robustness to reward distribution shift strongly depend on combining allocentric information with an egocentric representation of the environment.
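The two-stage pipeline described in the abstract, IL pre-training by maximum likelihood followed by on-policy PPO refinement, can be summarized with a short sketch. The code below is a minimal illustration under stated assumptions, not the authors' implementation: the state dimension, the discrete action set, the network architecture, and the placeholder data are all hypothetical, and only PPO's clipped surrogate loss is shown rather than a full training loop.

```python
# Minimal sketch (PyTorch) of the two-stage pipeline from the abstract:
# (1) imitation learning by maximum likelihood on human state-action pairs,
# (2) the clipped surrogate loss PPO uses to refine the pre-trained policy.
# All dimensions, names, and data below are hypothetical placeholders.
import torch
import torch.nn as nn

STATE_DIM = 16  # assumed: allocentric + egocentric features, concatenated
N_ACTIONS = 4   # assumed: discrete movement decisions

class PolicyNet(nn.Module):
    """Maps an observed state to logits over the discrete action set."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

def imitation_pretrain(policy, states, actions, epochs=100, lr=1e-3):
    """Stage 1: maximize the likelihood of the demonstrated human actions.
    Cross-entropy on logits is exactly the negative log-likelihood here."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    nll = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = nll(policy(states), actions)
        loss.backward()
        opt.step()
    return policy

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    """Stage 2: PPO's clipped surrogate objective (to be minimized).
    new_logp/old_logp are log-probabilities of the taken actions under the
    current and pre-update policies; advantages are their estimated values."""
    ratio = torch.exp(new_logp - old_logp)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()

# Toy usage with random stand-in "demonstrations" (placeholder data):
states = torch.randn(256, STATE_DIM)
actions = torch.randint(0, N_ACTIONS, (256,))
policy = imitation_pretrain(PolicyNet(), states, actions)
```

Pre-training on human demonstrations gives PPO a human-like starting point; the clipped probability ratio then keeps each RL update close to the current policy, which is consistent with the stability the abstract attributes to PPO.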
Source journal: Adaptive Behavior (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 4.30
Self-citation rate: 18.80%
Articles per year: 34
Review time: >12 weeks
Journal description: _Adaptive Behavior_ publishes articles on adaptive behaviour in living organisms and autonomous artificial systems. The official journal of the _International Society of Adaptive Behavior_, it addresses topics such as perception and motor control, embodied cognition, learning and evolution, neural mechanisms, artificial intelligence, behavioral sequences, motivation and emotion, characterization of environments, decision making, collective and social behavior, navigation, foraging, communication and signalling. Print ISSN: 1059-7123
Latest articles from this journal:
- Environmental complexity, cognition, and plant stress physiology
- A model of how hierarchical representations constructed in the hippocampus are used to navigate through space
- Mechanical Problem Solving in Goffin’s Cockatoos—Towards Modeling Complex Behavior
- Coupling First-Person Cognitive Research With Neurophilosophy and Enactivism: An Outline of Arguments
- The origin and function of external representations