M Pilar Aivar, Chia-Ling Li, Matthew H Tong, Dmitry M Kit, Mary M Hayhoe
{"title":"Knowing where to go: Spatial memory guides eye and body movements in a naturalistic visual search task.","authors":"M Pilar Aivar, Chia-Ling Li, Matthew H Tong, Dmitry M Kit, Mary M Hayhoe","doi":"10.1167/jov.24.9.1","DOIUrl":null,"url":null,"abstract":"<p><p>Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search tasks might explain some of the discrepancies in our understanding concerning the use of memory resources and the role of contextual objects during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, we found that repeated searches led to a decrease in search time and number of fixations and to a reduction of errors. Searching for those objects that had been visible in previous trials but were only tested at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, we found that body movements showed changes that reflected memory for target location: trajectories were shorter and movement velocities were higher, but only for those objects that had been searched for multiple times. We conclude that memory of 3D space and target location is a critical component of visual search and also modifies movement kinematics. 
In natural search, memory is used to optimize movement control and reduce energetic costs.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0000,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373708/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Vision","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1167/jov.24.9.1","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0
Abstract
Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search tasks might explain some of the discrepancies in our understanding concerning the use of memory resources and the role of contextual objects during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, we found that repeated searches led to a decrease in search time and number of fixations and to a reduction of errors. Searching for those objects that had been visible in previous trials but were only tested at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, we found that body movements showed changes that reflected memory for target location: trajectories were shorter and movement velocities were higher, but only for those objects that had been searched for multiple times. We conclude that memory of 3D space and target location is a critical component of visual search and also modifies movement kinematics. In natural search, memory is used to optimize movement control and reduce energetic costs.
About the journal:
Exploring all aspects of biological visual function, including spatial vision, perception, low vision, color vision, and more, spanning the fields of neuroscience, psychology, and psychophysics.