Modeling the dynamics of contextual cueing effect by reinforcement learning.

Journal of Vision · Impact Factor 2.0 · JCR Q2 (Ophthalmology) · CAS Tier 4 (Psychology) · Pub Date: 2024-11-04 · DOI: 10.1167/jov.24.12.11
Yasuhiro Hatori, Zheng-Xiong Yuan, Chia-Huei Tseng, Ichiro Kuriki, Satoshi Shioiri
Citations: 0

Abstract


Humans use environmental context for facilitating object searches. The benefit of context for visual search requires learning. Modeling the learning process of context for efficient processing is vital to understanding visual function in everyday environments. We proposed a model that accounts for the contextual cueing effect, which refers to the learning effect of scene context to identify the location of a target item. The model extracted the global feature of a scene and gradually strengthened the relationship between the global feature and its target location with repeated observations. We compared the model and human performance with two visual search experiments (letter arrangements on a gray background or a natural scene). The proposed model successfully simulated the faster reduction of the number of saccades required before target detection for the natural scene background compared with the uniform gray background. We further tested whether the model replicated the known characteristics of the contextual cueing effect in terms of local learning around the target, the effect of the ratio of repeated and novel stimuli, and the superiority of natural scenes.
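The learning mechanism described above, strengthening the association between a scene's global feature and its target location over repeated exposures, can be sketched as a toy reinforcement update. The class, parameter names, and update rule below are illustrative assumptions for a minimal simulation, not the authors' actual model:

```python
import random

class ContextualCueingModel:
    """Toy sketch of contextual cueing: a scene's global feature
    (here reduced to a context id) is gradually associated with the
    target location via a simple reinforcement update."""

    def __init__(self, n_locations=8, lr=0.5, seed=0):
        self.n = n_locations
        self.lr = lr            # learning rate of the reinforcement update
        self.assoc = {}         # context id -> per-location weights
        self.rng = random.Random(seed)

    def search(self, context, target):
        # A novel context starts with uniform weights over locations.
        w = self.assoc.setdefault(context, [1.0 / self.n] * self.n)
        # Weights bias the fixation order; ties are broken randomly.
        order = sorted(range(self.n),
                       key=lambda i: (-w[i], self.rng.random()))
        saccades = order.index(target) + 1  # fixations until target found
        # Reinforcement: strengthen the context -> target-location link.
        w[target] += self.lr * (1.0 - w[target])
        return saccades
```

Repeated presentations of the same context pull the first fixation toward the learned target location, so the saccade count drops across trials, mirroring the reduction the abstract reports; a novel context starts again from uniform weights.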

Source journal: Journal of Vision (Medicine, Ophthalmology)
CiteScore: 2.90
Self-citation rate: 5.60%
Articles per year: 218
Review time: 3-6 weeks
About the journal: Exploring all aspects of biological visual function, including spatial vision, perception, low vision, color vision and more, spanning the fields of neuroscience, psychology and psychophysics.