Emergence of chemotactic strategies with multi-agent reinforcement learning

Machine Learning Science and Technology · IF 6.3 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Zone 2 (Physics and Astronomy) · Pub Date: 2024-08-21 · DOI: 10.1088/2632-2153/ad5f73
Samuel Tovey, Christoph Lohrmann, Christian Holm
{"title":"Emergence of chemotactic strategies with multi-agent reinforcement learning","authors":"Samuel Tovey, Christoph Lohrmann, Christian Holm","doi":"10.1088/2632-2153/ad5f73","DOIUrl":null,"url":null,"abstract":"Reinforcement learning (RL) is a flexible and efficient method for programming micro-robots in complex environments. Here we investigate whether RL can provide insights into biological systems when trained to perform chemotaxis. Namely, whether we can learn about how intelligent agents process given information in order to swim towards a target. We run simulations covering a range of agent shapes, sizes, and swim speeds to determine if the physical constraints on biological swimmers, namely Brownian motion, lead to regions where reinforcement learners’ training fails. We find that the RL agents can perform chemotaxis as soon as it is physically possible and, in some cases, even before the active swimming overpowers the stochastic environment. We study the efficiency of the emergent policy and identify convergence in agent size and swim speeds. Finally, we study the strategy adopted by the RL algorithm to explain how the agents perform their tasks. To this end, we identify three emerging dominant strategies and several rare approaches taken. These strategies, whilst producing almost identical trajectories in simulation, are distinct and give insight into the possible mechanisms behind which biological agents explore their environment and respond to changing conditions.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"97 1","pages":""},"PeriodicalIF":6.3000,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine Learning Science and Technology","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.1088/2632-2153/ad5f73","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Reinforcement learning (RL) is a flexible and efficient method for programming micro-robots in complex environments. Here we investigate whether RL can provide insights into biological systems when trained to perform chemotaxis, i.e. whether we can learn how intelligent agents process the available information in order to swim towards a target. We run simulations covering a range of agent shapes, sizes, and swim speeds to determine whether the physical constraints on biological swimmers, namely Brownian motion, lead to regimes in which the training of reinforcement learners fails. We find that the RL agents can perform chemotaxis as soon as it is physically possible and, in some cases, even before the active swimming overpowers the stochastic environment. We study the efficiency of the emergent policy and identify convergence with respect to agent size and swim speed. Finally, we study the strategy adopted by the RL algorithm to explain how the agents perform their tasks. To this end, we identify three dominant emergent strategies and several rarer approaches. These strategies, whilst producing almost identical trajectories in simulation, are distinct and give insight into the possible mechanisms by which biological agents explore their environment and respond to changing conditions.
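A note on scope: the abstract does not state the authors' algorithm, observables, or reward, so the following is only a minimal sketch of the kind of problem described. It simulates a 2D active Brownian particle in a toy exponential attractant field and trains a two-action (run or tumble) policy with tabular Q-learning; every parameter and modeling choice here is an illustrative assumption, not the paper's setup.

```python
import numpy as np

# Illustrative sketch only: a 2D active Brownian particle that learns a
# run-and-tumble chemotaxis policy with tabular Q-learning. The field,
# parameters, state/action spaces, and reward are assumptions made for
# this example and are not taken from the paper.

rng = np.random.default_rng(0)

dt, v0 = 0.01, 1.0              # time step and swim speed (arbitrary units)
D_t, D_r = 0.01, 0.5            # translational / rotational diffusion constants
target = np.array([5.0, 5.0])   # location of the chemoattractant source

def concentration(pos):
    """Toy attractant field that decays with distance to the source."""
    return np.exp(-np.linalg.norm(pos - target))

# State: 1 if the sensed concentration increased since the last step, else 0.
# Actions: 0 = keep swimming straight, 1 = tumble to a random new direction.
Q = np.zeros((2, 2))
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(200):
    pos = rng.uniform(-1.0, 1.0, size=2)     # random start near the origin
    theta = rng.uniform(0.0, 2.0 * np.pi)    # initial swim direction
    c_prev, state = concentration(pos), 0
    for step in range(2000):
        # epsilon-greedy action selection
        action = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[state]))
        if action == 1:
            theta = rng.uniform(0.0, 2.0 * np.pi)   # tumble

        # overdamped Langevin step: self-propulsion plus thermal noise
        theta += np.sqrt(2.0 * D_r * dt) * rng.normal()
        pos = pos + v0 * dt * np.array([np.cos(theta), np.sin(theta)])
        pos = pos + np.sqrt(2.0 * D_t * dt) * rng.normal(size=2)

        c_new = concentration(pos)
        reward = c_new - c_prev             # reward climbing the gradient
        next_state = int(c_new > c_prev)

        # standard tabular Q-learning update
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                     - Q[state, action])
        state, c_prev = next_state, c_new
```

Under these assumptions, the learned policy typically converges to keep swimming while the sensed concentration rises and to tumble when it falls, a run-and-tumble behavior reminiscent of bacterial chemotaxis and a plausible instance of the kind of emergent strategy the paper sets out to classify.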
Source journal
Machine Learning Science and Technology (Computer Science - Artificial Intelligence)
CiteScore: 9.10
Self-citation rate: 4.40%
Articles published: 86
Review time: 5 weeks
Journal description: Machine Learning Science and Technology is a multidisciplinary open access journal that bridges the application of machine learning across the sciences with advances in machine learning methods and theory as motivated by physical insights. Specifically, articles must fall into one of two categories: they either advance the state of machine-learning-driven applications in the sciences, or make conceptual, methodological, or theoretical advances in machine learning that are applied to, inspired by, or motivated by scientific problems.
Latest articles in this journal
Quality assurance for online adaptive radiotherapy: a secondary dose verification model with geometry-encoded U-Net.
Optimizing ZX-diagrams with deep reinforcement learning
DiffLense: a conditional diffusion model for super-resolution of gravitational lensing data
Equivariant tensor network potentials
Masked particle modeling on sets: towards self-supervised high energy physics foundation models