Realizing Synthetic Active Inference Agents, Part II: Variational Message Updates.

IF 2.7 | CAS Q4 (Computer Science) | JCR Q3 (Computer Science, Artificial Intelligence) | Neural Computation | Pub Date: 2024-09-23 | DOI: 10.1162/neco_a_01713
Thijs van de Laar, Magnus Koudahl, Bert de Vries
{"title":"Realizing Synthetic Active Inference Agents, Part II: Variational Message Updates.","authors":"Thijs van de Laar, Magnus Koudahl, Bert de Vries","doi":"10.1162/neco_a_01713","DOIUrl":null,"url":null,"abstract":"<p><p>The free energy principle (FEP) describes (biological) agents as minimizing a variational free energy (FE) with respect to a generative model of their environment. Active inference (AIF) is a corollary of the FEP that describes how agents explore and exploit their environment by minimizing an expected FE objective. In two related papers, we describe a scalable, epistemic approach to synthetic AIF by message passing on free-form Forney-style factor graphs (FFGs). A companion paper (part I of this article; Koudahl et al., 2023) introduces a constrained FFG (CFFG) notation that visually represents (generalized) FE objectives for AIF. This article (part II) derives message-passing algorithms that minimize (generalized) FE objectives on a CFFG by variational calculus. A comparison between simulated Bethe and generalized FE agents illustrates how the message-passing approach to synthetic AIF induces epistemic behavior on a T-maze navigation task. Extension of the T-maze simulation to learning goal statistics and a multiagent bargaining setting illustrate how this approach encourages reuse of nodes and updates in alternative settings. With a full message-passing account of synthetic AIF agents, it becomes possible to derive and reuse message updates across models and move closer to industrial applications of synthetic AIF.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":null,"pages":null},"PeriodicalIF":2.7000,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Computation","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1162/neco_a_01713","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The free energy principle (FEP) describes (biological) agents as minimizing a variational free energy (FE) with respect to a generative model of their environment. Active inference (AIF) is a corollary of the FEP that describes how agents explore and exploit their environment by minimizing an expected FE objective. In two related papers, we describe a scalable, epistemic approach to synthetic AIF by message passing on free-form Forney-style factor graphs (FFGs). A companion paper (part I of this article; Koudahl et al., 2023) introduces a constrained FFG (CFFG) notation that visually represents (generalized) FE objectives for AIF. This article (part II) derives message-passing algorithms that minimize (generalized) FE objectives on a CFFG by variational calculus. A comparison between simulated Bethe and generalized FE agents illustrates how the message-passing approach to synthetic AIF induces epistemic behavior on a T-maze navigation task. Extension of the T-maze simulation to learning goal statistics and a multiagent bargaining setting illustrates how this approach encourages reuse of nodes and updates in alternative settings. With a full message-passing account of synthetic AIF agents, it becomes possible to derive and reuse message updates across models and move closer to industrial applications of synthetic AIF.
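
For readers unfamiliar with the objectives named above, the variational free energy and expected free energy are commonly written as follows in the active inference literature. This is a standard formulation given for context only; the paper's generalized FE objective and CFFG notation may differ.

% Variational free energy for a generative model p(o, s) and variational posterior q(s),
% given an observation o; minimizing F[q] drives q(s) toward the exact posterior p(s | o).
F[q] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
     = D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] - \ln p(o)

% Expected free energy of a policy \pi; its minimization yields the epistemic (exploratory)
% and pragmatic (goal-directed) behavior referred to in the abstract.
G(\pi) = \mathbb{E}_{q(o, s \mid \pi)}\big[\ln q(s \mid \pi) - \ln p(o, s \mid \pi)\big]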

Source journal: Neural Computation
Category: Engineering & Technology - Computer Science: Artificial Intelligence
CiteScore: 6.30
Self-citation rate: 3.40%
Articles published: 83
Review time: 3.0 months
Journal description: Neural Computation is uniquely positioned at the crossroads between neuroscience and TMCS and welcomes the submission of original papers from all areas of TMCS, including: Advanced experimental design; Analysis of chemical sensor data; Connectomic reconstructions; Analysis of multielectrode and optical recordings; Genetic data for cell identity; Analysis of behavioral data; Multiscale models; Analysis of molecular mechanisms; Neuroinformatics; Analysis of brain imaging data; Neuromorphic engineering; Principles of neural coding, computation, circuit dynamics, and plasticity; Theories of brain function.