Associative Learning and Active Inference.

IF 2.7 | CAS Tier 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Neural Computation | Pub Date: 2024-11-19 | DOI: 10.1162/neco_a_01711
Petr Anokhin, Artyom Sorokin, Mikhail Burtsev, Karl Friston
{"title":"Associative Learning and Active Inference.","authors":"Petr Anokhin, Artyom Sorokin, Mikhail Burtsev, Karl Friston","doi":"10.1162/neco_a_01711","DOIUrl":null,"url":null,"abstract":"<p><p>Associative learning is a behavioral phenomenon in which individuals develop connections between stimuli or events based on their co-occurrence. Initially studied by Pavlov in his conditioning experiments, the fundamental principles of learning have been expanded on through the discovery of a wide range of learning phenomena. Computational models have been developed based on the concept of minimizing reward prediction errors. The Rescorla-Wagner model, in particular, is a well-known model that has greatly influenced the field of reinforcement learning. However, the simplicity of these models restricts their ability to fully explain the diverse range of behavioral phenomena associated with learning. In this study, we adopt the free energy principle, which suggests that living systems strive to minimize surprise or uncertainty under their internal models of the world. We consider the learning process as the minimization of free energy and investigate its relationship with the Rescorla-Wagner model, focusing on the informational aspects of learning, different types of surprise, and prediction errors based on beliefs and values. Furthermore, we explore how well-known behavioral phenomena such as blocking, overshadowing, and latent inhibition can be modeled within the active inference framework. We accomplish this by using the informational and novelty aspects of attention, which share similar ideas proposed by seemingly contradictory models such as Mackintosh and Pearce-Hall models. Thus, we demonstrate that the free energy principle, as a theoretical framework derived from first principles, can integrate the ideas and models of associative learning proposed based on empirical experiments and serve as a framework for a better understanding of the computational processes behind associative learning in the brain.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"2602-2635"},"PeriodicalIF":2.7000,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1162/neco_a_01711","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Associative learning is a behavioral phenomenon in which individuals develop connections between stimuli or events based on their co-occurrence. First studied by Pavlov in his conditioning experiments, the fundamental principles of learning have since been extended through the discovery of a wide range of learning phenomena. Computational models have been developed based on the concept of minimizing reward prediction errors. The Rescorla-Wagner model, in particular, is a well-known model that has greatly influenced the field of reinforcement learning. However, the simplicity of these models restricts their ability to fully explain the diverse range of behavioral phenomena associated with learning. In this study, we adopt the free energy principle, which holds that living systems strive to minimize surprise, or uncertainty, under their internal models of the world. We treat the learning process as the minimization of free energy and investigate its relationship with the Rescorla-Wagner model, focusing on the informational aspects of learning, different types of surprise, and prediction errors based on beliefs and values. Furthermore, we explore how well-known behavioral phenomena such as blocking, overshadowing, and latent inhibition can be modeled within the active inference framework. We accomplish this by using the informational and novelty aspects of attention, which capture ideas proposed by seemingly contradictory models such as the Mackintosh and Pearce-Hall models. Thus, we demonstrate that the free energy principle, as a theoretical framework derived from first principles, can integrate the ideas and models of associative learning proposed on the basis of empirical experiments and serve as a framework for a better understanding of the computational processes behind associative learning in the brain.
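For context on the model named above, the standard Rescorla-Wagner rule (stated here in its textbook form, not quoted from the paper itself) updates the associative strength $V_i$ of every conditioned stimulus presented on a trial in proportion to a single pooled reward prediction error:

$$\Delta V_i = \alpha_i \, \beta \left( \lambda - \sum_{j \in \text{trial}} V_j \right)$$

where $\alpha_i$ is the salience of stimulus $i$, $\beta$ is a learning rate determined by the unconditioned stimulus, and $\lambda$ is the asymptotic associative strength that the unconditioned stimulus supports. Because the error term pools over all cues present on a trial, blocking and overshadowing fall out directly: once the summed prediction reaches $\lambda$, nothing remains to be learned about an added cue.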
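Similarly, the quantity minimized under the free energy principle can be written in its generic variational form (again a standard expression, not the paper's own notation):

$$F = \mathbb{E}_{q(s)}\left[ \ln q(s) - \ln p(o, s) \right] = D_{\mathrm{KL}}\left[ q(s) \,\middle\|\, p(s \mid o) \right] - \ln p(o)$$

Minimizing $F$ with respect to the approximate posterior $q(s)$ over hidden states $s$ both fits beliefs to observations $o$ and upper-bounds surprise, $-\ln p(o)$, which is the sense in which learning-as-free-energy-minimization generalizes prediction-error accounts.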
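A minimal sketch of how blocking emerges from the pooled prediction error, assuming illustrative parameter values (the paper's own simulations use active inference, not this bare Rescorla-Wagner loop):

```python
# Minimal Rescorla-Wagner sketch reproducing blocking.
# Parameter values (alpha, beta, lam, trial counts) are illustrative,
# not taken from the paper.

def rw_update(V, present, lam, alpha=0.3, beta=1.0):
    """One trial: update the strengths of the presented cues."""
    error = lam - sum(V[c] for c in present)  # pooled prediction error
    for c in present:
        V[c] += alpha * beta * error          # shared error drives learning
    return V

V = {"A": 0.0, "B": 0.0}

# Phase 1: cue A alone is paired with the US; V["A"] approaches lam.
for _ in range(50):
    rw_update(V, ["A"], lam=1.0)

# Phase 2: compound A+B is paired with the same US. A already predicts
# the outcome, so the error is ~0 and B acquires almost no strength.
for _ in range(50):
    rw_update(V, ["A", "B"], lam=1.0)

print(f"V(A) = {V['A']:.3f}, V(B) = {V['B']:.3f}")  # V(B) stays near 0: blocking
```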

Source Journal
Neural Computation (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 6.30
Self-citation rate: 3.40%
Annual articles: 83
Review time: 3.0 months
About the journal: Neural Computation is uniquely positioned at the crossroads between neuroscience and TMCS and welcomes the submission of original papers from all areas of TMCS, including: Advanced experimental design; Analysis of chemical sensor data; Connectomic reconstructions; Analysis of multielectrode and optical recordings; Genetic data for cell identity; Analysis of behavioral data; Multiscale models; Analysis of molecular mechanisms; Neuroinformatics; Analysis of brain imaging data; Neuromorphic engineering; Principles of neural coding, computation, circuit dynamics, and plasticity; Theories of brain function.
Latest Articles in This Journal
Associative Learning and Active Inference.
KLIF: An Optimized Spiking Neuron Unit for Tuning Surrogate Gradient Function.
Orthogonal Gated Recurrent Unit With Neumann-Cayley Transformation.
A Fast Algorithm for All-Pairs-Shortest-Paths Suitable for Neural Networks.
Fine Granularity Is Critical for Intelligent Neural Network Pruning.