From pixels to planning: scale-free active inference

Karl Friston, Conor Heins, Tim Verbelen, Lancelot Da Costa, Tommaso Salvatori, Dimitrije Markovic, Alexander Tschantz, Magnus Koudahl, Christopher Buckley, Thomas Parr
{"title":"From pixels to planning: scale-free active inference","authors":"Karl Friston, Conor Heins, Tim Verbelen, Lancelot Da Costa, Tommaso Salvatori, Dimitrije Markovic, Alexander Tschantz, Magnus Koudahl, Christopher Buckley, Thomas Parr","doi":"arxiv-2407.20292","DOIUrl":null,"url":null,"abstract":"This paper describes a discrete state-space model -- and accompanying methods\n-- for generative modelling. This model generalises partially observed Markov\ndecision processes to include paths as latent variables, rendering it suitable\nfor active inference and learning in a dynamic setting. Specifically, we\nconsider deep or hierarchical forms using the renormalisation group. The\nensuing renormalising generative models (RGM) can be regarded as discrete\nhomologues of deep convolutional neural networks or continuous state-space\nmodels in generalised coordinates of motion. By construction, these\nscale-invariant models can be used to learn compositionality over space and\ntime, furnishing models of paths or orbits; i.e., events of increasing temporal\ndepth and itinerancy. This technical note illustrates the automatic discovery,\nlearning and deployment of RGMs using a series of applications. We start with\nimage classification and then consider the compression and generation of movies\nand music. Finally, we apply the same variational principles to the learning of\nAtari-like games.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuanBio - Neurons and Cognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.20292","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper describes a discrete state-space model -- and accompanying methods -- for generative modelling. This model generalises partially observed Markov decision processes to include paths as latent variables, rendering it suitable for active inference and learning in a dynamic setting. Specifically, we consider deep or hierarchical forms using the renormalisation group. The ensuing renormalising generative models (RGM) can be regarded as discrete homologues of deep convolutional neural networks or continuous state-space models in generalised coordinates of motion. By construction, these scale-invariant models can be used to learn compositionality over space and time, furnishing models of paths or orbits; i.e., events of increasing temporal depth and itinerancy. This technical note illustrates the automatic discovery, learning and deployment of RGMs using a series of applications. We start with image classification and then consider the compression and generation of movies and music. Finally, we apply the same variational principles to the learning of Atari-like games.
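
To make the key move concrete: in the standard discrete active-inference form, a model with paths as latent variables factorises as

P(o_{1:T}, s_{1:T}, u) = P(u)\, P(s_1) \prod_{t=1}^{T} P(o_t \mid s_t) \prod_{t=2}^{T} P(s_t \mid s_{t-1}, u)

so the path u is inferred alongside the hidden states rather than fixed in advance. The sketch below is a minimal toy illustration of this idea in plain NumPy, not the authors' implementation: the tensor names follow the usual active-inference convention (A: likelihood, B: path-conditioned transitions, D: initial state, E: prior over paths), but all shapes and values, and the coarse_grain helper, are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def normalise(x, axis=0):
    return x / x.sum(axis=axis, keepdims=True)

# Illustrative sizes: 8 outcomes, 4 hidden states, 2 latent paths.
n_obs, n_states, n_paths = 8, 4, 2

A = normalise(rng.random((n_obs, n_states)))              # P(o_t | s_t)
B = normalise(rng.random((n_states, n_states, n_paths)))  # P(s_t | s_{t-1}, u)
D = normalise(rng.random(n_states))                       # P(s_1)
E = normalise(rng.random(n_paths))                        # P(u): prior over paths

def path_posterior(obs, A, B, D, E):
    # Posterior over the latent path, P(u | o_{1:T}), computed by an
    # exact forward (filtering) pass run separately under each path.
    log_post = np.log(E)
    for u in range(len(E)):
        belief = D.copy()
        for t, o in enumerate(obs):
            if t > 0:
                belief = B[:, :, u] @ belief       # predict under path u
            log_post[u] += np.log(A[o] @ belief)   # accumulate log evidence
            belief = normalise(A[o] * belief)      # Bayesian update on o_t
    return normalise(np.exp(log_post - log_post.max()))

def coarse_grain(seq):
    # Hypothetical temporal coarse-graining: each pair of successive
    # states becomes one state at the level above, halving the
    # temporal resolution at each level.
    blocks = [tuple(seq[i:i + 2]) for i in range(0, len(seq) - 1, 2)]
    vocab = {b: k for k, b in enumerate(dict.fromkeys(blocks))}
    return [vocab[b] for b in blocks]

obs = [0, 3, 1, 2, 0, 3, 1, 2]
print(path_posterior(obs, A, B, D, E))  # distribution over the 2 paths
print(coarse_grain(obs))                # [0, 1, 0, 1]: a repeating "event"

The coarse_grain step stands in for the renormalisation-group construction described in the abstract: blocks of states at one level become single states at the level above, so stacking the same model structure yields events of increasing temporal depth.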