A particle filtering account of selective attention during learning

Angela Radulescu, Y. Niv, N. Daw
{"title":"A particle filtering account of selective attention during learning","authors":"Angela Radulescu, Y. Niv, N. Daw","doi":"10.32470/ccn.2019.1338-0","DOIUrl":null,"url":null,"abstract":"A growing literature has highlighted a role for selective attention in shaping representation learning of relevant task features, yet little is known about how humans learn what to attend to. Here we model the dynamics of selective attention as a memory-augmented particle filter. In a task where participants had to learn from trial and error which of nine features is more predictive of reward, we show that trial-by-trial attention to features measured with eye-tracking is better fit by the particle filter, compared to a reinforcement learning mechanism that had been proposed in the past. This is because inference based on a single particle captures the sparse allocation and rapid switching of attention better than incremental error-driven updates. However, because a single particle maintains insufficient information about past events to switch hypotheses as efficiently as do participants, we show that the data are best fit by the filter augmented with a memory buffer for recent observations. This proposal suggests a new role for memory in enabling tractable, resource-efficient approximations to normative inference.","PeriodicalId":281121,"journal":{"name":"2019 Conference on Cognitive Computational Neuroscience","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Conference on Cognitive Computational Neuroscience","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32470/ccn.2019.1338-0","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

A growing literature has highlighted a role for selective attention in shaping representation learning of relevant task features, yet little is known about how humans learn what to attend to. Here we model the dynamics of selective attention as a memory-augmented particle filter. In a task where participants had to learn from trial and error which of nine features was most predictive of reward, we show that trial-by-trial attention to features, measured with eye-tracking, is better fit by the particle filter than by a previously proposed reinforcement learning mechanism. This is because inference based on a single particle captures the sparse allocation and rapid switching of attention better than incremental error-driven updates do. However, because a single particle maintains insufficient information about past events to switch hypotheses as efficiently as participants do, we show that the data are best fit by the filter augmented with a memory buffer for recent observations. This proposal suggests a new role for memory in enabling tractable, resource-efficient approximations to normative inference.
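To make the proposal concrete, here is a minimal sketch in Python of a single-particle filter augmented with a memory buffer. This is an illustration of the general idea, not the authors' implementation: the reward probabilities, buffer size, and the rejection-then-resample rule are all assumptions chosen for readability, and the fitted model in the paper may differ in its details.

```python
import random

N_FEATURES = 9          # e.g., 3 dimensions x 3 features, as in the task
P_REWARD_TARGET = 0.75  # assumed reward probability when the target feature is chosen
P_REWARD_OTHER = 0.25   # assumed reward probability otherwise
BUFFER_SIZE = 5         # assumed capacity of the memory buffer

def likelihood(hypothesis, chosen_features, reward):
    """P(reward outcome | hypothesis) for one trial; the hypothesis is the
    index of the feature believed to drive reward."""
    p = P_REWARD_TARGET if hypothesis in chosen_features else P_REWARD_OTHER
    return p if reward else 1.0 - p

class SingleParticleFilter:
    """Minimal sketch: one particle (a single feature hypothesis) plus a
    buffer of recent trials used to propose a replacement when the
    current hypothesis becomes implausible."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.hypothesis = self.rng.randrange(N_FEATURES)
        self.buffer = []  # most recent (chosen_features, reward) tuples

    def update(self, chosen_features, reward):
        # Keep only the last BUFFER_SIZE observations in memory.
        self.buffer.append((chosen_features, reward))
        self.buffer = self.buffer[-BUFFER_SIZE:]

        # Rejection step: resample if the trial is unlikely under the
        # current hypothesis (stochastic, in proportion to likelihood).
        if self.rng.random() > likelihood(self.hypothesis, chosen_features, reward):
            self._resample()
        return self.hypothesis

    def _resample(self):
        # Score every candidate hypothesis against the buffered trials,
        # then draw a new particle in proportion to that score.
        weights = []
        for h in range(N_FEATURES):
            w = 1.0
            for feats, r in self.buffer:
                w *= likelihood(h, feats, r)
            weights.append(w)
        probs = [w / sum(weights) for w in weights]
        self.hypothesis = self.rng.choices(range(N_FEATURES), weights=probs)[0]

# Hypothetical usage: chosen_features is the set of feature indices of the
# stimulus chosen on a trial, reward is a boolean outcome.
pf = SingleParticleFilter()
print(pf.update(chosen_features={0, 3, 7}, reward=True))
```

On this reading, trial-by-trial attention would be read out from the current hypothesis (e.g., gaze directed toward the feature dimension that contains it), and the memory buffer is what lets the model jump to a well-supported alternative hypothesis rather than restarting from scratch, mirroring the sparse allocation and rapid switching described in the abstract.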