From pixels to planning: scale-free active inference

Karl Friston, Conor Heins, Tim Verbelen, Lancelot Da Costa, Tommaso Salvatori, Dimitrije Markovic, Alexander Tschantz, Magnus Koudahl, Christopher Buckley, Thomas Parr

arXiv - QuanBio - Neurons and Cognition, published 2024-07-27. DOI: arxiv-2407.20292 (https://doi.org/arxiv-2407.20292)

Citations: 0
Abstract
This paper describes a discrete state-space model, and accompanying methods, for generative modelling. This model generalises partially observed Markov decision processes to include paths as latent variables, rendering it suitable for active inference and learning in a dynamic setting. Specifically, we consider deep or hierarchical forms using the renormalisation group. The ensuing renormalising generative models (RGMs) can be regarded as discrete homologues of deep convolutional neural networks or continuous state-space models in generalised coordinates of motion. By construction, these scale-invariant models can be used to learn compositionality over space and time, furnishing models of paths or orbits; i.e., events of increasing temporal depth and itinerancy. This technical note illustrates the automatic discovery, learning and deployment of RGMs using a series of applications. We start with image classification and then consider the compression and generation of movies and music. Finally, we apply the same variational principles to the learning of Atari-like games.
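The abstract's central move, treating entire paths (sequences of transitions) as latent variables in a discrete POMDP, can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the paper's RGM scheme: it uses the likelihood matrix `A` and path-conditioned transition matrices `B` familiar from discrete active-inference models, rolls out observations under one path, and computes an exact posterior over which path generated them by marginalising the hidden states. The renormalisation-group machinery (hierarchical coarse-graining over space and time) is not shown.

```python
import numpy as np

# Minimal discrete generative model with a path as a latent variable.
# Shapes and matrix names (A, B) follow common active-inference
# conventions; the sizes and inference scheme are illustrative.
n_states, n_obs, n_paths, T = 3, 3, 2, 6

# Likelihood: P(o_t | s_t), columns indexed by hidden state
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

# Path-conditioned transitions: P(s_{t+1} | s_t, u)
B = np.stack([
    np.roll(np.eye(n_states), 1, axis=0),   # path 0: cycle forward
    np.roll(np.eye(n_states), -1, axis=0),  # path 1: cycle backward
])

def simulate(u, s0=0):
    """Roll out hidden states under path u, emitting the most
    likely observation at each step (kept deterministic for clarity)."""
    s, states, obs = s0, [], []
    for _ in range(T):
        states.append(s)
        obs.append(int(np.argmax(A[:, s])))
        s = int(np.argmax(B[u][:, s]))
    return states, obs

def path_posterior(obs):
    """Exact posterior over the latent path, marginalising hidden
    states with a forward (filtering) recursion; flat prior over paths."""
    log_post = np.zeros(n_paths)
    for u in range(n_paths):
        q = np.ones(n_states) / n_states      # prior over initial state
        ll = 0.0
        for o in obs:
            joint = A[o] * q                  # P(o_t, s_t | o_{<t}, u)
            ll += np.log(joint.sum())         # accumulate log evidence
            q = B[u] @ (joint / joint.sum())  # predict the next state
        log_post[u] = ll
    p = np.exp(log_post - log_post.max())
    return p / p.sum()

_, observations = simulate(u=1)
posterior = path_posterior(observations)      # concentrates on path 1
```

Because the path `u` is itself a random variable, planning and recognition become the same operation: inferring which path best explains (or, under active inference, is expected to yield) the observations.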