Catriona L Scrivener, Elisa Zamboni, Antony B Morland, Edward H Silson
The occipital place area (OPA) is a scene-selective region on the lateral surface of human occipitotemporal cortex that spatially overlaps multiple visual field maps, as well as portions of cortex that are not currently defined as retinotopic. Here we combined population receptive field modeling and responses to scenes in a representational similarity analysis (RSA) framework to test the prediction that the OPA's visual field map divisions contribute uniquely to the overall pattern of scene selectivity within the OPA. Consistent with this prediction, the patterns of response to a set of complex scenes were heterogeneous between maps. To explain this heterogeneity, we tested the explanatory power of seven candidate models using RSA. These models spanned different scene dimensions (Content, Expanse, Distance), low- and high-level visual features, and navigational affordances. None of the tested models could account for the variation in scene response observed between the OPA's visual field maps. However, the heterogeneity in scene response was correlated with the differences in retinotopic profiles across maps. These data highlight the need to carefully examine the relationship between regions defined as category-selective and the underlying retinotopy, and they suggest that, in the case of the OPA, it may not be appropriate to conceptualize it as a single scene-selective region.
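The RSA framework used here can be illustrated with a minimal, self-contained sketch. The data below are random stand-ins, not the study's fMRI patterns: build a representational dissimilarity matrix (RDM) per region, then correlate RDM upper triangles to compare regions or test candidate models.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: response patterns to 8 scenes in two map divisions
# (8 conditions x 100 voxels each); real analyses would use measured betas.
patterns_map1 = rng.normal(size=(8, 100))
patterns_map2 = rng.normal(size=(8, 100))

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs,
    the usual RSA statistic for comparing representations."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

print(compare_rdms(rdm(patterns_map1), rdm(patterns_map2)))
```

The same `compare_rdms` call serves both purposes in the abstract: region-to-region comparisons (heterogeneity between maps) and region-to-model comparisons (testing the seven candidate models).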
{"title":"Retinotopy drives the variation in scene responses across visual field map divisions of the occipital place area.","authors":"Catriona L Scrivener, Elisa Zamboni, Antony B Morland, Edward H Silson","doi":"10.1167/jov.24.8.10","DOIUrl":"10.1167/jov.24.8.10","url":null,"abstract":"<p><p>The occipital place area (OPA) is a scene-selective region on the lateral surface of human occipitotemporal cortex that spatially overlaps multiple visual field maps, as well as portions of cortex that are not currently defined as retinotopic. Here we combined population receptive field modeling and responses to scenes in a representational similarity analysis (RSA) framework to test the prediction that the OPA's visual field map divisions contribute uniquely to the overall pattern of scene selectivity within the OPA. Consistent with this prediction, the patterns of response to a set of complex scenes were heterogeneous between maps. To explain this heterogeneity, we tested the explanatory power of seven candidate models using RSA. These models spanned different scene dimensions (Content, Expanse, Distance), low- and high-level visual features, and navigational affordances. None of the tested models could account for the variation in scene response observed between the OPA's visual field maps. However, the heterogeneity in scene response was correlated with the differences in retinotopic profiles across maps. 
These data highlight the need to carefully examine the relationship between regions defined as category-selective and the underlying retinotopy, and they suggest that, in the case of the OPA, it may not be appropriate to conceptualize it as a single scene-selective region.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11343012/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142019360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Across the visual periphery, perceptual and metacognitive abilities differ depending on the locus of visual attention, the location of peripheral stimulus presentation, the task design, and many other factors. In this investigation, we aimed to illuminate the relationship between attention and eccentricity in the visual periphery by estimating perceptual sensitivity, metacognitive sensitivity, and response biases across the visual field. In a 2AFC detection task, participants were asked to determine whether a signal was present or absent at one of eight peripheral locations (±10°, 20°, 30°, and 40°), using either a valid or invalid attentional cue. As expected, results revealed that perceptual sensitivity declined with eccentricity and was modulated by attention, with higher sensitivity on validly cued trials. Furthermore, a significant main effect of eccentricity on response bias emerged, with variable (but relatively unbiased) c'a values from 10° to 30°, and conservative c'a values at 40°. Regarding metacognitive sensitivity, significant main effects of attention and eccentricity were found, with metacognitive sensitivity decreasing with eccentricity, and decreasing in the invalid cue condition. Interestingly, metacognitive efficiency, as measured by the ratio of meta-d'a/d'a, was not modulated by attention or eccentricity. Overall, these findings demonstrate (1) that in some circumstances, observers have surprisingly robust metacognitive insights into how performance changes across the visual field and (2) that the periphery may be subject to variable detection biases that are contingent on the exact location in peripheral space.
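The sensitivity and bias measures above come from signal detection theory. A minimal sketch of the simpler equal-variance computation follows (the paper uses the unequal-variance variants c'a and meta-d'a); the hit and false-alarm rates are made up, chosen only to mimic the reported pattern of unbiased responding near 10 degrees and conservative responding at 40 degrees.

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance SDT: sensitivity d' = z(H) - z(F) and
    criterion c = -0.5 * (z(H) + z(F)); positive c is conservative."""
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return zh - zf, -0.5 * (zh + zf)

# Hypothetical rates: good, unbiased detection at 10 deg;
# poorer, conservative detection at 40 deg.
for ecc, (h, f) in {10: (0.90, 0.10), 40: (0.60, 0.05)}.items():
    d, c = sdt_measures(h, f)
    print(f"{ecc} deg: d' = {d:.2f}, c = {c:.2f}")
```

Metacognitive efficiency is then the ratio meta-d'/d', where meta-d' is estimated from confidence ratings rather than from the raw hit and false-alarm rates.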
{"title":"Consistent metacognitive efficiency and variable response biases in peripheral vision.","authors":"Joseph Pruitt, J D Knotts, Brian Odegaard","doi":"10.1167/jov.24.8.4","DOIUrl":"10.1167/jov.24.8.4","url":null,"abstract":"<p><p>Across the visual periphery, perceptual and metacognitive abilities differ depending on the locus of visual attention, the location of peripheral stimulus presentation, the task design, and many other factors. In this investigation, we aimed to illuminate the relationship between attention and eccentricity in the visual periphery by estimating perceptual sensitivity, metacognitive sensitivity, and response biases across the visual field. In a 2AFC detection task, participants were asked to determine whether a signal was present or absent at one of eight peripheral locations (±10°, 20°, 30°, and 40°), using either a valid or invalid attentional cue. As expected, results revealed that perceptual sensitivity declined with eccentricity and was modulated by attention, with higher sensitivity on validly cued trials. Furthermore, a significant main effect of eccentricity on response bias emerged, with variable (but relatively unbiased) c'a values from 10° to 30°, and conservative c'a values at 40°. Regarding metacognitive sensitivity, significant main effects of attention and eccentricity were found, with metacognitive sensitivity decreasing with eccentricity, and decreasing in the invalid cue condition. Interestingly, metacognitive efficiency, as measured by the ratio of meta-d'a/d'a, was not modulated by attention or eccentricity. 
Overall, these findings demonstrate (1) that in some circumstances, observers have surprisingly robust metacognitive insights into how performance changes across the visual field and (2) that the periphery may be subject to variable detection biases that are contingent on the exact location in peripheral space.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11314628/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141903375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we show that the model we proposed earlier to account for disparity vergence eye movements (disparity vergence responses, or DVRs) in response to horizontal and vertical disparity steps of white noise visual stimuli also provides an excellent description of the short-latency ocular following responses (OFRs) to broadband stimuli in the visual motion domain. In addition, we reanalyzed the data of several earlier studies that used sine-wave gratings (single or a combination of two or three gratings) and white noise stimuli and applied the model to them. The model provides a very good account of all of these data. The model postulates that the short-latency eye movements, OFRs and DVRs, can be accounted for by the operation of two factors: an excitatory drive, determined by a weighted sum of the contributions of the stimulus Fourier components, and a global contrast normalization mechanism that scales this drive. The output of these two factors is then nonlinearly scaled by the total contrast of the stimulus. Despite the different roles of disparity (horizontal and vertical) and motion signals in visual scene analysis, the earliest processing stages of these signals appear to be very similar.
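The two factors in the model can be sketched generically as weighted power summation divided by a contrast normalization pool. The exponents, weights, and semi-saturation constant below are placeholder values for illustration, not the fitted parameters from the paper, and the final nonlinear scaling by total contrast is omitted.

```python
import numpy as np

def response(contrasts, weights, p=2.0, q=2.0, sigma=0.05):
    """Illustrative weighted power summation with global divisive
    contrast normalization: drive = sum_i w_i * c_i**p, divided by
    (sigma**q + sum_i c_i**q). Placeholder parameters, not fitted ones."""
    contrasts = np.asarray(contrasts, dtype=float)
    weights = np.asarray(weights, dtype=float)
    drive = np.sum(weights * contrasts**p)
    norm_pool = sigma**q + np.sum(contrasts**q)
    return drive / norm_pool

# Two-grating stimulus: the high-contrast component dominates the drive,
# while the normalization pool grows with the contrast of every component.
print(response([0.32, 0.08], weights=[1.0, 1.0]))
```

The divisive pool captures the model's key qualitative behavior: adding a component that contributes little drive can still suppress the overall response by enlarging the normalization denominator.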
{"title":"Weighted power summation and contrast normalization mechanisms account for short-latency eye movements to motion and disparity of sine-wave gratings and broadband visual stimuli in humans.","authors":"Boris M Sheliga, Edmond J FitzGibbon","doi":"10.1167/jov.24.8.14","DOIUrl":"10.1167/jov.24.8.14","url":null,"abstract":"<p><p>In this paper, we show that the model we proposed earlier to account for the disparity vergence eye movements (disparity vergence responses, or DVRs) in response to horizontal and vertical disparity steps of white noise visual stimuli also provides an excellent description of the short-latency ocular following responses (OFRs) to broadband stimuli in the visual motion domain. In addition, we reanalyzed the data and applied the model to several earlier studies that used sine-wave gratings (single or a combination of two or three gratings) and white noise stimuli. The model provides a very good account of all of these data. The model postulates that the short-latency eye movements-OFRs and DVRs-can be accounted for by the operation of two factors: an excitatory drive, determined by a weighted sum of contributions of stimulus Fourier components, scaled by a global contrast normalization mechanism. The output of the operation of these two factors is then nonlinearly scaled by the total contrast of the stimulus. 
Despite different roles of disparity (horizontal and vertical) and motion signals in visual scene analyses, the earliest processing stages of these different signals appear to be very similar.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11363211/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142057109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Corsi (block-tapping) paradigm is a classic and well-established visuospatial working memory task in humans involving internal computations (memorizing item sequences, organizing and updating the memorandum, and recall processes), as well as both overt and covert shifts of attention that facilitate rehearsal and thereby maintain the Corsi sequences during the retention phase. Here, we introduce a novel digital version of the Corsi task in which i) the difficulty of the memorandum was controlled (using sequence lengths ranging from 3 to 8), ii) the execution of overt and/or covert attention as well as the visuospatial working memory load during the retention phase was manipulated, and iii) shifts of attention were quantified in all experimental phases. With this, we present behavioral data that demonstrate, characterize, and classify the individual effects of overt and covert strategies used as a means of encoding and rehearsal. In a full within-subject design, we tested 28 participants who had to solve three different Corsi conditions. In condition A, neither of the two strategies was restricted; in condition B, the overt strategy was suppressed; and in condition C, both the overt and the covert strategies were suppressed. Analyzing Corsi span, (eye) exploration index, and pupil size (change), the data clearly show a continuum between overt and covert strategies across all participants (indicating inter-individual variability).
{"title":"Inter-individual variability (but intra-individual stability) of overt versus covert rehearsal strategies in a digital Corsi task.","authors":"Lílian de Sardenberg Schmid, Gregor Hardiess","doi":"10.1167/jov.24.8.2","DOIUrl":"10.1167/jov.24.8.2","url":null,"abstract":"<p><p>The Corsi (block-tapping) paradigm is a classic and well-established visuospatial working memory task in humans involving internal computations (memorizing of item sequences, organizing and updating the memorandum, and recall processes), as well as both overt and covert shifts of attention to facilitate rehearsal, serving to maintain the Corsi sequences during the retention phase. Here, we introduce a novel digital version of a Corsi task in which i) the difficulty of the memorandum (using sequence lengths ranging from 3 to 8) was controlled, ii) the execution of overt and/or covert attention as well as the visuospatial working memory load during the retention phase was manipulated, and iii) shifts of attention were quantified in all experimental phases. With this, we present behavioral data that demonstrate, characterize, and classify the individual effects of overt and covert strategies used as a means of encoding and rehearsal. In a full within-subject design, we tested 28 participants who had to solve three different Corsi conditions. While in condition A neither of the two strategies were restricted, in condition B the overt and in condition C the overt as well as the covert strategies were suppressed. Analyzing Corsi span, (eye) exploration index, and pupil size (change), data clearly show a continuum between overt and covert strategies over all participants (indicating inter-individual variability). 
Further, all participants showed stable strategy choice (indicating intra-individual stability), meaning that the preferred strategy was maintained in all three conditions, phases, and sequence lengths of the experiment.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11305427/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141861396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Donald I A MacLeod, Patrick Cavanagh, Stuart Anstis
Motion can produce large changes in the apparent locations of briefly flashed tests presented on or near the motion. These motion-induced position shifts may have a variety of sources. They may be due to a frame effect where the moving pattern provides a frame of reference for the locations of events within it. The motion of the background may act through high-level mechanisms that track its explicit contours or the motion may act on position through the signals from low-level motion detectors. Here we isolate the contribution of low-level motion by eliminating explicit contours and trackable features. In this case, motion still supports a robust shift in probe locations with the shift being in the direction of the motion that follows the probe. Although robust, the magnitude of the shift in our first experiment is about 20% of the shift seen in a previous study with explicit frames and, in the second, about 45% of that found with explicit frames. Clearly, low-level motion alone can produce position shifts although the magnitude is much reduced compared to that seen when high-level mechanisms can contribute.
{"title":"Contribution of low-level motion to position shifts.","authors":"Donald I A MacLeod, Patrick Cavanagh, Stuart Anstis","doi":"10.1167/jov.24.8.13","DOIUrl":"10.1167/jov.24.8.13","url":null,"abstract":"<p><p>Motion can produce large changes in the apparent locations of briefly flashed tests presented on or near the motion. These motion-induced position shifts may have a variety of sources. They may be due to a frame effect where the moving pattern provides a frame of reference for the locations of events within it. The motion of the background may act through high-level mechanisms that track its explicit contours or the motion may act on position through the signals from low-level motion detectors. Here we isolate the contribution of low-level motion by eliminating explicit contours and trackable features. In this case, motion still supports a robust shift in probe locations with the shift being in the direction of the motion that follows the probe. Although robust, the magnitude of the shift in our first experiment is about 20% of the shift seen in a previous study with explicit frames and, in the second, about 45% of that found with explicit frames. Clearly, low-level motion alone can produce position shifts although the magnitude is much reduced compared to that seen when high-level mechanisms can contribute.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11346155/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142037572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moving frames produce large displacements in the perceived location of flashed and continuously moving probes. In a series of experiments, we test the contributions of the probe's displacement and the frame's displacement on the strength of the frame's effect. In the first experiment, we find a dramatic position shift of flashed probes whereas the effect on a continuously moving probe is only one-third as strong. In Experiment 2, we show that the absence of an effect for the static probe is a consequence of its perceptual grouping with the static background. As long as the continuously present probe has some motion, it appears to group to some extent with the frame and show an illusory shift of intermediate magnitude. Finally, we informally explored the illusory shifts seen for a continuously moving probe when the frame itself has a more complex path. In this case, the probe appears to group more strongly with the frame. Overall, the effects of the frame on the probe demonstrate the outcome of a competition between the frame and the static background in determining the frame of reference for the probe's perceived position.
{"title":"Influence of frame and probe paths on the frame effect.","authors":"Stuart Anstis, Patrick Cavanagh","doi":"10.1167/jov.24.7.11","DOIUrl":"10.1167/jov.24.7.11","url":null,"abstract":"<p><p>Moving frames produce large displacements in the perceived location of flashed and continuously moving probes. In a series of experiments, we test the contributions of the probe's displacement and the frame's displacement on the strength of the frame's effect. In the first experiment, we find a dramatic position shift of flashed probes whereas the effect on a continuously moving probe is only one-third as strong. In Experiment 2, we show that the absence of an effect for the static probe is a consequence of its perceptual grouping with the static background. As long as the continuously present probe has some motion, it appears to group to some extent with the frame and show an illusory shift of intermediate magnitude. Finally, we informally explored the illusory shifts seen for a continuously moving probe when the frame itself has a more complex path. In this case, the probe appears to group more strongly with the frame. Overall, the effects of the frame on the probe demonstrate the outcome of a competition between the frame and the static background in determining the frame of reference for the probe's perceived position.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11257013/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141621597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Super recognizers (SRs) are people who exhibit a naturally occurring superiority in processing facial identity. Despite the growth of SR research, the mechanisms underlying their exceptional abilities remain unclear. Here, we investigated whether the enhanced facial identity processing of SRs could be attributed to a lack of sequential effects, such as serial dependence. In serial dependence, perception of stimulus features is assimilated toward stimuli presented in previous trials. This constant error in visual perception has been proposed as a mechanism that promotes perceptual stability in everyday life. We hypothesized that an absence of this constant source of error in SRs could account for their superior processing, potentially in a domain-general fashion. We tested SRs (n = 17) identified via a recently proposed diagnostic framework (Ramon, 2021) and age-matched controls (n = 20) in two experiments probing serial dependence in the face and shape domains. In each experiment, observers were presented with randomly morphed face identities or shapes and were asked to adjust a face's identity or a shape to match the stimulus they saw. We found serial dependence in controls and SRs alike, with no difference in its magnitude across groups. Interestingly, we found that serial dependence impacted the performance of SRs more than that of controls. Taken together, our results show that the enhanced face identity processing of SRs cannot be attributed to a lack of serial dependence. Rather, serial dependence, a beneficial nested error in our visual system, may in fact further stabilize the perception of SRs and thus enhance their visual processing proficiency.
{"title":"Super recognizers: Increased sensitivity or reduced biases? Insights from serial dependence.","authors":"Fiammetta Marini, Mauro Manassi, Meike Ramon","doi":"10.1167/jov.24.7.13","DOIUrl":"10.1167/jov.24.7.13","url":null,"abstract":"<p><p>Super recognizers (SRs) are people that exhibit a naturally occurring superiority for processing facial identity. Despite the increase of SR research, the mechanisms underlying their exceptional abilities remain unclear. Here, we investigated whether the enhanced facial identity processing of SRs could be attributed to the lack of sequential effects, such as serial dependence. In serial dependence, perception of stimulus features is assimilated toward stimuli presented in previous trials. This constant error in visual perception has been proposed as a mechanism that promotes perceptual stability in everyday life. We hypothesized that an absence of this constant source of error in SRs could account for their superior processing-potentially in a domain-general fashion. We tested SRs (n = 17) identified via a recently proposed diagnostic framework (Ramon, 2021) and age-matched controls (n = 20) with two experiments probing serial dependence in the face and shape domains. In each experiment, observers were presented with randomly morphed face identities or shapes and were asked to adjust a face's identity or a shape to match the stimulus they saw. We found serial dependence in controls and SRs alike, with no difference in its magnitude across groups. Interestingly, we found that serial dependence impacted the performance of SRs more than that of controls. Taken together, our results show that enhanced face identity processing skills in SRs cannot be attributed to the lack of serial dependence. 
Rather, serial dependence, a beneficial nested error in our visual system, may in fact further stabilize the perception of SRs and thus enhance their visual processing proficiency.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11271810/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141753232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Andrea Ghiani, Daan Amelink, Eli Brenner, Ignace T C Hooge, Roy S Hessels
It is reasonable to assume that where people look in the world is largely determined by what they are doing. The reasoning is that the activity determines where it is useful to look at each moment in time. Assuming that it is vital to accurately judge the positions of the steps when navigating a staircase, it is surprising that people differ a lot in the extent to which they look at the steps. Apparently, some people consider the accuracy of peripheral vision, predictability of the step size, and feeling the edges of the steps with their feet to be good enough. If so, occluding part of the view of the staircase and making it more important to place one's feet gently might make it more beneficial to look directly at the steps before stepping onto them, so that people will more consistently look at many steps. We tested this idea by asking people to walk on staircases, either with or without a tray with two cups of water on it. When carrying the tray, people walked more slowly, but they shifted their gaze across steps in much the same way as they did when walking without the tray. They did not look at more steps. There was a clear positive correlation between the fraction of steps that people looked at when walking with and without the tray. Thus, the variability in the extent to which people look at the steps persists when one makes walking on the staircase more challenging.
{"title":"When knowing the activity is not enough to predict gaze.","authors":"Andrea Ghiani, Daan Amelink, Eli Brenner, Ignace T C Hooge, Roy S Hessels","doi":"10.1167/jov.24.7.6","DOIUrl":"10.1167/jov.24.7.6","url":null,"abstract":"<p><p>It is reasonable to assume that where people look in the world is largely determined by what they are doing. The reasoning is that the activity determines where it is useful to look at each moment in time. Assuming that it is vital to accurately judge the positions of the steps when navigating a staircase, it is surprising that people differ a lot in the extent to which they look at the steps. Apparently, some people consider the accuracy of peripheral vision, predictability of the step size, and feeling the edges of the steps with their feet to be good enough. If so, occluding part of the view of the staircase and making it more important to place one's feet gently might make it more beneficial to look directly at the steps before stepping onto them, so that people will more consistently look at many steps. We tested this idea by asking people to walk on staircases, either with or without a tray with two cups of water on it. When carrying the tray, people walked more slowly, but they shifted their gaze across steps in much the same way as they did when walking without the tray. They did not look at more steps. There was a clear positive correlation between the fraction of steps that people looked at when walking with and without the tray. 
Thus, the variability in the extent to which people look at the steps persists when one makes walking on the staircase more challenging.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11238878/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141564951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jean-Baptiste Durand, Sarah Marchand, Ilyas Nasres, Bruno Laeng, Vanessa De Castro
In humans, the eye pupils respond both to physical light sensed by the retina and to mental representations of light produced by the brain. Notably, our pupils constrict when a visual stimulus is illusorily perceived as brighter, even if retinal illumination is constant. However, it remains unclear whether such perceptual penetrability of pupil responses is an epiphenomenon unique to humans or whether it represents an adaptive mechanism, shared with other animals, to anticipate variations in retinal illumination between successive eye fixations. To address this issue, we measured the pupil responses of both humans and macaque monkeys exposed to three chromatic versions (cyan, magenta, and yellow) of the Asahi brightness illusion. We found that stimuli illusorily perceived as brighter or darker trigger differential pupil responses that are very similar in macaques and human participants. Additionally, we show that this phenomenon exhibits an analogous cyan bias in both primate species. Beyond establishing the macaque monkey as a relevant model for studying the perceptual penetrability of pupil responses, our results suggest that this phenomenon is tuned to ecological conditions, because exposure to a "bright cyan-bluish sky" may be associated with increased risk of dazzle and retinal damage.
{"title":"Illusory light drives pupil responses in primates.","authors":"Jean-Baptiste Durand, Sarah Marchand, Ilyas Nasres, Bruno Laeng, Vanessa De Castro","doi":"10.1167/jov.24.7.14","DOIUrl":"10.1167/jov.24.7.14","url":null,"abstract":"<p><p>In humans, the eye pupils respond to both physical light sensed by the retina and mental representations of light produced by the brain. Notably, our pupils constrict when a visual stimulus is illusorily perceived brighter, even if retinal illumination is constant. However, it remains unclear whether such perceptual penetrability of pupil responses is an epiphenomenon unique to humans or whether it represents an adaptive mechanism shared with other animals to anticipate variations in retinal illumination between successive eye fixations. To address this issue, we measured the pupil responses of both humans and macaque monkeys exposed to three chromatic versions (cyan, magenta, and yellow) of the Asahi brightness illusion. We found that the stimuli illusorily perceived brighter or darker trigger differential pupil responses that are very similar in macaques and human participants. Additionally, we show that this phenomenon exhibits an analogous cyan bias in both primate species. 
Beyond evincing the macaque monkey as a relevant model to study the perceptual penetrability of pupil responses, our results suggest that this phenomenon is tuned to ecological conditions because the exposure to a \"bright cyan-bluish sky\" may be associated with increased risks of dazzle and retinal damages.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11271809/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141753231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
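The core pupillometric comparison described above (stronger constriction for a stimulus perceived as brighter, at matched physical luminance) can be illustrated with a baseline-corrected response measure. This is a minimal sketch on synthetic traces, not the authors' analysis pipeline; the function name, time windows, and data are all assumptions for illustration.

```python
import numpy as np

def pupil_constriction(trace, t, baseline_end=0.0, window=(0.5, 2.0)):
    """Baseline-corrected pupil response for one trial-averaged trace.

    trace: pupil diameter samples (arbitrary units)
    t:     time stamps in seconds (0 = stimulus onset)
    Returns the mean diameter change in the response window relative to
    the pre-stimulus baseline (negative values = constriction).
    """
    baseline = trace[t < baseline_end].mean()
    response = trace[(t >= window[0]) & (t <= window[1])].mean()
    return response - baseline

# Synthetic traces: a stimulus illusorily perceived as brighter should
# produce a deeper constriction than one perceived as darker, even when
# physical luminance is identical.
t = np.linspace(-0.5, 3.0, 700)
bright = 5.0 - 0.8 * (t > 0.3) * (1 - np.exp(-(t - 0.3) / 0.4))
dark = 5.0 - 0.2 * (t > 0.3) * (1 - np.exp(-(t - 0.3) / 0.4))

# Negative delta: the illusorily brighter stimulus constricts the pupil more.
delta = pupil_constriction(bright, t) - pupil_constriction(dark, t)
```

The same per-condition measure could then be averaged across trials and compared between species or between the chromatic versions of the illusion.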
Allocentric landmarks have an implicit influence on aiming movements, but it is not clear how an explicit instruction (to aim relative to a landmark) influences reach accuracy and precision. Here, 12 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. Participants fixated near the center of a display aligned with their right shoulder while a target stimulus briefly appeared alongside a visual landmark in one visual field. After a brief mask/memory delay, the landmark reappeared at a different location (same or opposite visual field), creating an ego/allocentric conflict. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. In the allocentric condition, participants were instructed to remember the initial target location relative to the landmark and then reach relative to the shifted landmark (same or opposite visual field). To equalize motor execution between tasks, participants were instructed to anti-point (point to the visual field opposite to the remembered target) on 50% of the egocentric trials. Participants were more accurate, more precise, and quicker to react in the allocentric condition, especially when pointing to the opposite field. We also observed a visual field effect, where performance was worse overall in the right visual field. These results suggest that, when egocentric and allocentric cues conflict, explicit use of the visual landmark provides better reach performance than reliance on noisy egocentric signals.
{"title":"Instruction alters the influence of allocentric landmarks in a reach task.","authors":"Lina Musa, Xiaogang Yan, J Douglas Crawford","doi":"10.1167/jov.24.7.17","DOIUrl":"10.1167/jov.24.7.17","url":null,"abstract":"<p><p>Allocentric landmarks have an implicit influence on aiming movements, but it is not clear how an explicit instruction (to aim relative to a landmark) influences reach accuracy and precision. Here, 12 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. Participants fixated gaze near the center of a display aligned with their right shoulder while a target stimulus briefly appeared alongside a visual landmark in one visual field. After a brief mask/memory delay the landmark then reappeared at a different location (same or opposite visual field), creating an ego/allocentric conflict. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. In the allocentric condition, participants were instructed to remember the initial target location relative to the landmark and then reach relative to the shifted landmark (same or opposite visual field). To equalize motor execution between tasks, participants were instructed to anti-point (point to the visual field opposite to the remembered target) on 50% of the egocentric trials. Participants were more accurate and precise and quicker to react in the allocentric condition, especially when pointing to the opposite field. We also observed a visual field effect, where performance was worse overall in the right visual field. These results suggest that, when egocentric and allocentric cues conflict, explicit use of the visual landmark provides better reach performance than reliance on noisy egocentric signals. 
Such instructions might aid rehabilitation when the egocentric system is compromised by disease or injury.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11290568/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141789580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
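The accuracy and precision measures this kind of reach study relies on are conventionally the constant error (distance from the mean endpoint to the target) and the variable error (endpoint scatter around the mean). A minimal sketch of computing both per condition, using synthetic 2-D pointing endpoints rather than the study's data:

```python
import numpy as np

def accuracy_and_precision(endpoints, target):
    """Summarize 2-D pointing endpoints for one condition.

    accuracy  -> constant error: distance from the endpoint centroid to the target
    precision -> variable error: mean distance of endpoints from their centroid
    Lower values mean better accuracy / precision.
    """
    endpoints = np.asarray(endpoints, dtype=float)
    centroid = endpoints.mean(axis=0)
    constant_error = np.linalg.norm(centroid - np.asarray(target, dtype=float))
    variable_error = np.linalg.norm(endpoints - centroid, axis=1).mean()
    return constant_error, variable_error

target = (10.0, 0.0)
# Illustrative data only: egocentric endpoints drawn more scattered,
# allocentric endpoints tighter, mirroring the reported pattern.
ego = [(8.5, 1.2), (11.8, -1.5), (7.9, 0.8), (12.4, 1.9)]
allo = [(9.8, 0.1), (10.3, -0.2), (9.9, 0.3), (10.1, -0.1)]

ego_acc, ego_prec = accuracy_and_precision(ego, target)
allo_acc, allo_prec = accuracy_and_precision(allo, target)
```

With endpoints like these, both error measures come out smaller in the allocentric condition, matching the direction of the reported effect.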