
Latest Publications in Journal of Vision

Predicting what and when across saccades: Evidence from the extrafoveal preview effect.
IF 2.3 | CAS Tier 4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2026-02-02 | DOI: 10.1167/jov.26.2.3
David Melcher, Michele Deodato

To fully process items of interest, we use information from outside the fovea to plan the target of the next saccadic eye movement. There is growing evidence that our initial preview of the identity and features of the saccade target, obtained before the saccade brings it to the fovea, makes subsequent post-saccadic processing more efficient. However, the mechanisms underlying trans-saccadic previews remain unknown. We investigated this in a gaze-contingent preview paradigm in which a face stimulus either remained the same ("valid preview") or changed ("invalid preview") during the saccadic eye movement. On some trials, a brief blank gap was added at the beginning of the new fixation, before the face was presented at the fovea. Although the expected preview benefit was found when the face stimulus was present after the saccade, the addition of the blank period eliminated the preview effect. Our results suggest that the preview effect relies on a sensorimotor prediction about both "what" will be present at the fovea after the saccade and "when" the new fixation will begin. These findings provide further evidence for an active, predictive mechanism for trans-saccadic perception.

{"title":"Predicting what and when across saccades: Evidence from the extrafoveal preview effect.","authors":"David Melcher, Michele Deodato","doi":"10.1167/jov.26.2.3","DOIUrl":"https://doi.org/10.1167/jov.26.2.3","url":null,"abstract":"<p><p>In order to fully process items of interest, we use information from outside the fovea to plan the target of the next saccadic eye movement. There is growing evidence that our initial preview of the identity and features of the saccade target, prior to bringing it to the fovea using the saccade, is used to make our subsequent post-saccadic processing more efficient. However, the mechanisms underlying trans-saccadic previews remain unknown. We investigated this in a gaze-contingent preview paradigm in which a face stimulus either remained the same (\"valid preview\") or changed (\"invalid preview\") during the saccadic eye movement. On some trials, a brief blank gap was added at the beginning of the new fixation, before the face was presented at the fovea. Although the expected preview benefit was found when the face stimulus was present after the saccade, the addition of the blank period eliminated the preview effect. Our results suggest that the preview effect relies on a sensorimotor prediction about both \"what\" will be present at the fovea after the saccade and \"when\" the new fixation will begin. 
These findings provided further evidence for an active, predictive mechanism for trans-saccadic perception.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 2","pages":"3"},"PeriodicalIF":2.3,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146126967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The visual perception of relative mass from object collisions.
IF 2.3 | CAS Tier 4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2026-02-02 | DOI: 10.1167/jov.26.2.1
James T Todd, J Farley Norman

Previous research has shown that observers can make reliable judgments about the relative mass of moving objects that collide in animated displays. One popular explanation of this is that observers' judgments are based on an internal model of Newtonian dynamics. An alternative explanation is that these judgments are based on measurable optical properties that are correlated with relative mass. To better understand this issue, the present investigation reanalyzed the data from three previous studies by Mitko and Fischer (2023), Sanborn et al. (2013), and Todd and Warren (1982), and it replicated an additional study by Hamrick et al. (2016). These new analyses demonstrate that observers' judgments of relative mass are most likely based on the post-collision optical velocities of objects without having to invoke an implausible mental representation of Newtonian dynamics as has been argued by several previous investigators.
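The Newtonian relation at issue can be stated in a few lines: under conservation of momentum, the mass ratio of two colliding objects is fixed by their velocity changes, and the debate summarized above is whether observers exploit this full relation or only the post-collision velocities. A minimal sketch of the physics, with hypothetical velocities (not the paper's stimuli or analysis):

```python
def relative_mass(v1_pre, v1_post, v2_pre, v2_post):
    """Mass ratio m1/m2 implied by momentum conservation:
    m1 * (v1_pre - v1_post) = m2 * (v2_post - v2_pre)."""
    return (v2_post - v2_pre) / (v1_pre - v1_post)

# Example: object 1 moving at 2.0 units/s strikes a stationary object 2;
# afterward object 1 moves at 0.5 and object 2 at 1.5 units/s.
# Implied ratio: (1.5 - 0.0) / (2.0 - 0.5) = 1.0, i.e., equal masses.
ratio = relative_mass(2.0, 0.5, 0.0, 1.5)
```

A velocity-heuristic observer, by contrast, would judge relative mass from `v1_post` and `v2_post` alone, without referencing the pre-collision state.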

{"title":"The visual perception of relative mass from object collisions.","authors":"James T Todd, J Farley Norman","doi":"10.1167/jov.26.2.1","DOIUrl":"10.1167/jov.26.2.1","url":null,"abstract":"<p><p>Previous research has shown that observers can make reliable judgments about the relative mass of moving objects that collide in animated displays. One popular explanation of this is that observers' judgments are based on an internal model of Newtonian dynamics. An alternative explanation is that these judgments are based on measurable optical properties that are correlated with relative mass. To better understand this issue, the present investigation reanalyzed the data from three previous studies by Mitko and Fischer (2023), Sanborn et al. (2013), and Todd and Warren (1982), and it replicated an additional study by Hamrick et al. (2016). These new analyses demonstrate that observers' judgments of relative mass are most likely based on the post-collision optical velocities of objects without having to invoke an implausible mental representation of Newtonian dynamics as has been argued by several previous investigators.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 2","pages":"1"},"PeriodicalIF":2.3,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12875344/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146107624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Texture density discrimination is more precise than number discrimination.
IF 2.3 | CAS Tier 4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2026-02-02 | DOI: 10.1167/jov.26.2.2
Frank H Durgin, Nichole Suero Gonzalez, Ping Wen, Alexander C Huk

Density information is a possible primitive for the perception of numerosity. It has been argued, however, that the perception of numerosity is more precise than density perception at low numbers, whereas density is more precise for high numbers. An interpretive problem with the stimuli used to make those claims is that actual stimulus density was often mis-specified owing to an ambiguity regarding the idealized versus actual filled area. This ambiguity had the effect of underestimating density precision at low numerosities. Here we used a novel method of stimulus generation that allows us to accurately specify stimulus density independent of patch size and number, while varying patch size from trial to trial to dissociate numerosity and density. For both numerosity discrimination and density discrimination, we presented single stimuli in central vision for comparison with an internal standard. Feedback was given after each judgment. Using well-defined densities, density discrimination was more precise than numerosity perception at all densities and showed no evidence of varying as a function of density, as previously hypothesized. This was found with 8 practiced observers, and then replicated in a pre-registered study with 32 observers. As expected, feedback nullified size biases on number judgments, showing that observers were adaptively combining density and size. Reanalysis of data from a recent investigation of downward sloping Weber fractions for numerosity showed that the square root-like effects in those sorts of studies were most likely owing to reductions in patch size variance that were correlated with increases in density.
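Discrimination precision in studies like this is conventionally summarized by a Weber fraction: the just-noticeable increment divided by the base magnitude, with smaller fractions meaning more precise discrimination. A minimal sketch with made-up threshold values (illustrative only, not the paper's data):

```python
def weber_fraction(base, just_noticeable_increment):
    """Weber fraction w = delta_I / I at the discrimination threshold."""
    return just_noticeable_increment / base

# Hypothetical example: at a base of 20 dots (or 20 dots/deg^2),
# density is discriminated at a 2-unit increment, number at 3 units.
density_w = weber_fraction(20.0, 2.0)  # 0.10 -> more precise
number_w = weber_fraction(20.0, 3.0)   # 0.15 -> less precise
```

The abstract's claim of flat precision across densities corresponds to a Weber fraction that stays constant as the base magnitude grows.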

{"title":"Texture density discrimination is more precise than number discrimination.","authors":"Frank H Durgin, Nichole Suero Gonzalez, Ping Wen, Alexander C Huk","doi":"10.1167/jov.26.2.2","DOIUrl":"10.1167/jov.26.2.2","url":null,"abstract":"<p><p>Density information is a possible primitive for the perception of numerosity. It has been argued, however, that the perception of numerosity is more precise than density perception at low numbers, whereas density is more precise for high numbers. An interpretive problem with the stimuli used to make those claims is that actual stimulus density was often mis-specified owing to an ambiguity regarding the idealized versus actual filled area. This ambiguity had the effect of underestimating density precision at low numerosities. Here we used a novel method of stimulus generation that allows us to accurately specify stimulus density independent of patch size and number, while varying patch size from trial to trial to dissociate numerosity and density. For both numerosity discrimination and density discrimination, we presented single stimuli in central vision for comparison with an internal standard. Feedback was given after each judgment. Using well-defined densities, density discrimination was more precise than numerosity perception at all densities and showed no evidence of varying as a function of density, as previously hypothesized. This was found with 8 practiced observers, and then replicated in a pre-registered study with 32 observers. As expected, feedback nullified size biases on number judgments, showing that observers were adaptively combining density and size. 
Reanalysis of data from a recent investigation of downward sloping Weber fractions for numerosity showed that the square root-like effects in those sorts of studies were most likely owing to reductions in patch size variance that were correlated with increases in density.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 2","pages":"2"},"PeriodicalIF":2.3,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12875346/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146107555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Rapid ensemble encoding of average scene features.
IF 2.3 | CAS Tier 4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2026-01-05 | DOI: 10.1167/jov.26.1.3
Vignash Tharmaratnam, Jason Haberman, Jonathan S Cant

Visual ensemble perception involves the rapid global extraction of summary statistics (e.g., average features) from groups of items, without requiring single-item recognition and working memory resources. One theory that helps explain global visual perception is the principle of feature diagnosticity. This is when informative bottom-up visual features are preferentially processed to complete the task at hand by being consistent with one's top-down expectations. Past literature has studied ensemble perception using groups of objects and faces and has shown that both low-level (e.g., average color, orientation) and high-level visual statistics (e.g., average crowd animacy, object economic value) can be efficiently extracted. However, no study has explored whether summary statistics can be extracted from stimuli higher in visual complexity, necessitating global, gist-based processing for perception. To investigate this, across five experiments we had participants extract various summary statistical features from ensembles of real-world scenes. We found that average scene content (i.e., perceived naturalness or manufacturedness of scene ensembles) and average spatial boundary (i.e., perceived openness or closedness of scene ensembles) could be rapidly extracted within 125 ms, without reliance on working memory. Interestingly, when we rotated the scenes, average scene orientation could not be extracted, likely because the perception of diagnostic edge information (i.e., cardinal edges for typically encountered upright scenes) was disrupted when rotating the scenes. These results suggest that ensemble perception is a flexible resource that can be used to extract summary statistical information across multiple stimulus types but also has limitations based on the principle of feature diagnosticity in global visual perception.

{"title":"Rapid ensemble encoding of average scene features.","authors":"Vignash Tharmaratnam, Jason Haberman, Jonathan S Cant","doi":"10.1167/jov.26.1.3","DOIUrl":"10.1167/jov.26.1.3","url":null,"abstract":"<p><p>Visual ensemble perception involves the rapid global extraction of summary statistics (e.g., average features) from groups of items, without requiring single-item recognition and working memory resources. One theory that helps explain global visual perception is the principle of feature diagnosticity. This is when informative bottom-up visual features are preferentially processed to complete the task at hand by being consistent with one's top-down expectations. Past literature has studied ensemble perception using groups of objects and faces and has shown that both low-level (e.g., average color, orientation) and high-level visual statistics (e.g., average crowd animacy, object economic value) can be efficiently extracted. However, no study has explored whether summary statistics can be extracted from stimuli higher in visual complexity, necessitating global, gist-based processing for perception. To investigate this, across five experiments we had participants extract various summary statistical features from ensembles of real-world scenes. We found that average scene content (i.e., perceived naturalness or manufacturedness of scene ensembles) and average spatial boundary (i.e., perceived openness or closedness of scene ensembles) could be rapidly extracted within 125 ms, without reliance on working memory. Interestingly, when we rotated the scenes, average scene orientation could not be extracted, likely because the perception of diagnostic edge information (i.e., cardinal edges for typically encountered upright scenes) was disrupted when rotating the scenes. 
These results suggest that ensemble perception is a flexible resource that can be used to extract summary statistical information across multiple stimulus types but also has limitations based on the principle of feature diagnosticity in global visual perception.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"3"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12782198/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Perceptual resolution of ambiguity: A divisive normalization account for both interocular color grouping and difference enhancement.
IF 2.3 | CAS Tier 4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2026-01-05 | DOI: 10.1167/jov.26.1.8
Jaelyn R Peiso, Stephanie E Palmer, Steven K Shevell

Our visual system usually provides a unique and functional representation of the external world. At times, however, there is more than one compelling interpretation of the same retinal stimulus; in this case, neural populations compete for perceptual dominance to resolve ambiguity. Spatial and temporal context can guide this perceptual experience. Recent evidence shows that ambiguous retinal stimuli are sometimes resolved by enhancing either similarities or differences among multiple ambiguous stimuli. Although rivalry has traditionally been attributed to differences in stimulus strength, color vision introduces nonlinearities that are difficult to reconcile with luminance-based models. Here, it is shown that a tuned, divisive normalization framework can explain how perceptual selection can flexibly yield either similarity-based "grouped" percepts or difference-enhanced percepts during binocular rivalry. Empirical and simulated results show that divisive normalization can account for perceptual representations of either similarity enhancement (so-called grouping) or difference enhancement, offering a unified framework for opposite perceptual outcomes.
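In the generic textbook form of divisive normalization (not the specific tuned model fitted in the paper), each unit's driven response is divided by a weighted pool of the whole population's activity; changing the pooling weights changes whether similar inputs suppress or reinforce each other. A minimal sketch, with hypothetical parameter values:

```python
import numpy as np

def divisive_normalization(drives, sigma=0.1, n=2.0, weights=None):
    """Normalized response r_i = d_i^n / (sigma^n + sum_j w_ij * d_j^n).

    `drives` are the raw stimulus drives; `sigma` is the semi-saturation
    constant, `n` the exponent, and `weights` the normalization pool
    (uniform pooling by default). All values here are illustrative.
    """
    drives = np.asarray(drives, dtype=float)
    if weights is None:
        weights = np.ones((len(drives), len(drives)))
    pooled = weights @ drives**n
    return drives**n / (sigma**n + pooled)
```

With uniform pooling, a stronger drive always wins (difference enhancement); tuned, similarity-weighted pools are one way a model like this can instead favor grouping of like inputs.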

{"title":"Perceptual resolution of ambiguity: A divisive normalization account for both interocular color grouping and difference enhancement.","authors":"Jaelyn R Peiso, Stephanie E Palmer, Steven K Shevell","doi":"10.1167/jov.26.1.8","DOIUrl":"10.1167/jov.26.1.8","url":null,"abstract":"<p><p>Our visual system usually provides a unique and functional representation of the external world. At times, however, there is more than one compelling interpretation of the same retinal stimulus; in this case, neural populations compete for perceptual dominance to resolve ambiguity. Spatial and temporal context can guide this perceptual experience. Recent evidence shows that ambiguous retinal stimuli are sometimes resolved by enhancing either similarities or differences among multiple ambiguous stimuli. Although rivalry has traditionally been attributed to differences in stimulus strength, color vision introduces nonlinearities that are difficult to reconcile with luminance-based models. Here, it is shown that a tuned, divisive normalization framework can explain how perceptual selection can flexibly yield either similarity-based \"grouped\" percepts or difference-enhanced percepts during binocular rivalry. 
Empirical and simulated results show that divisive normalization can account for perceptual representations of either similarity enhancement (so-called grouping) or difference enhancement, offering a unified framework for opposite perceptual outcomes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"8"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12811879/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145960620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Differential effects of attention and contrast on transition appearance during binocular rivalry.
IF 2.3 | CAS Tier 4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2026-01-05 | DOI: 10.1167/jov.26.1.14
Cemre Yilmaz, Kerstin Maitz, Maximilian Gerschütz, Wilfried Grassegger, Anja Ischebeck, Andreas Bartels, Natalia Zaretskaya

Binocular rivalry occurs when the two eyes are presented with two conflicting stimuli. Although the physical stimulation stays the same, the conscious percept changes over time. This property makes it a unique paradigm in both vision science and consciousness research. Two key parameters, contrast and attention, have repeatedly been shown to affect binocular rivalry dynamics in a similar manner. This has been taken as evidence that attention acts by enhancing effective stimulus contrast. The brief transition periods between the two clear percepts have so far received much less investigation. In a previous study we demonstrated that transition periods can appear in different forms depending on the stimulus type and the observer. In the current study, we investigated how attention and contrast affect transition appearance. Observers viewed binocular rivalry and reported their perception of the four most common transition types by button press while either the stimulus contrast or the locus of exogenous attention was manipulated. We show that contrast and attention similarly affect the overall binocular rivalry dynamics, but their effects on the appearance of transitions differ. These results suggest that the effect of attention differs from a simple enhancement of stimulus strength, which becomes evident only when different transition types are considered.

{"title":"Differential effects of attention and contrast on transition appearance during binocular rivalry.","authors":"Cemre Yilmaz, Kerstin Maitz, Maximilian Gerschütz, Wilfried Grassegger, Anja Ischebeck, Andreas Bartels, Natalia Zaretskaya","doi":"10.1167/jov.26.1.14","DOIUrl":"10.1167/jov.26.1.14","url":null,"abstract":"<p><p>Binocular rivalry occurs when two eyes are presented with two conflicting stimuli. Although the physical stimulation stays the same, the conscious percept changes over time. This property makes it a unique paradigm in both vision science and consciousness research. Two key parameters, contrast and attention, were repeatedly shown to affect binocular rivalry dynamics in a similar manner. This was taken as evidence that attention acts by enhancing effective stimulus contrast. Brief transition periods between the two clear percepts have so far been much less investigated. In a previous study we demonstrated that transition periods can appear in different forms depending on the stimulus type and the observer. In the current study, we investigated how attention and contrast affect transition appearance. Observers viewed binocular rivalry and reported their perception of the four most common transition types by a button press while either the stimulus contrast or the locus of exogenous attention was manipulated. We show that contrast and attention similarly affect the overall binocular rivalry dynamics, but their effects on the appearance of transitions differ. 
These results suggest that the effect of attention is different from a simple enhancement of stimulus strength, which becomes evident only when different transition types are considered.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"14"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12854236/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146031281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EasyEyes: Crowded dynamic fixation for online psychophysics.
IF 2.3 | CAS Tier 4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2026-01-05 | DOI: 10.1167/jov.26.1.18
Fengping Hu, Joyce Y Chen, Denis G Pelli, Jonathan Winawer

Online vision testing enables efficient data collection from diverse participants, but often requires accurate fixation. When needed, fixation accuracy is traditionally ensured by using a camera to track gaze. That works well in the laboratory, but tracking during online testing with a built-in webcam is not yet sufficiently precise. Kurzawski, Pombo, et al. (2023) introduced a fixation task that improves fixation through hand-eye coordination, requiring participants to track a moving crosshair with a mouse-controlled cursor. This dynamic fixation task greatly reduces peeking at peripheral targets relative to a stationary fixation task, but does not eliminate it. Here, we introduce a crowded dynamic fixation task that further enhances fixation by adding clutter around the fixation mark. We assessed fixation accuracy during peripheral threshold measurement. Relative to the root mean square gaze error during the stationary fixation task, the dynamic fixation error was 55%, whereas the crowded dynamic fixation error was only 40%. With a 1.5° tolerance, peeking occurred on 7% of trials with stationary fixation, 1.5% with dynamic fixation, and 0% with crowded dynamic fixation. This improvement eliminated implausibly low peripheral thresholds, likely by preventing peeking. We conclude that crowded dynamic fixation provides accurate gaze control for online testing.
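The two accuracy measures reported above, root mean square gaze error and the rate of peeking beyond a 1.5° tolerance, reduce to simple computations over gaze distances. A minimal sketch (not the EasyEyes implementation; the array shapes and variable names are assumptions):

```python
import numpy as np

def rms_gaze_error(gaze, fixation):
    """Root mean square Euclidean distance (in deg) between gaze
    samples, shape (n_samples, 2), and the fixation mark (x, y)."""
    d = np.linalg.norm(np.asarray(gaze, dtype=float)
                       - np.asarray(fixation, dtype=float), axis=1)
    return float(np.sqrt(np.mean(d**2)))

def peeking_rate(trial_errors, tolerance=1.5):
    """Fraction of trials whose gaze error exceeds the tolerance (deg)."""
    return float(np.mean(np.asarray(trial_errors) > tolerance))
```

On this definition, the abstract's result is that crowded dynamic fixation cut `rms_gaze_error` to 40% of the stationary baseline and drove `peeking_rate` at the 1.5° tolerance to zero.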

{"title":"EasyEyes: Crowded dynamic fixation for online psychophysics.","authors":"Fengping Hu, Joyce Y Chen, Denis G Pelli, Jonathan Winawer","doi":"10.1167/jov.26.1.18","DOIUrl":"10.1167/jov.26.1.18","url":null,"abstract":"<p><p>Online vision testing enables efficient data collection from diverse participants, but often requires accurate fixation. When needed, fixation accuracy is traditionally ensured by using a camera to track gaze. That works well in the laboratory, but tracking during online testing with a built-in webcam is not yet sufficiently precise. Kurzawski, Pombo, et al. (2023) introduced a fixation task that improves fixation through hand-eye coordination, requiring participants to track a moving crosshair with a mouse-controlled cursor. This dynamic fixation task greatly reduces peeking at peripheral targets relative to a stationary fixation task, but does not eliminate it. Here, we introduce a crowded dynamic fixation task that further enhances fixation by adding clutter around the fixation mark. We assessed fixation accuracy during peripheral threshold measurement. Relative to the root mean square gaze error during the stationary fixation task, the dynamic fixation error was 55%, whereas the crowded dynamic fixation error was only 40%. With a 1.5° tolerance, peeking occurred on 7% of trials with stationary fixation, 1.5% with dynamic fixation, and 0% with crowded dynamic fixation. This improvement eliminated implausibly low peripheral thresholds, likely by preventing peeking. 
We conclude that crowded dynamic fixation provides accurate gaze control for online testing.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"18"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12859709/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dissociated temporal and spatial impairments of microsaccade dynamics in homonymous hemianopia following ischemic stroke.
IF 2.3 | CAS Tier 4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2026-01-05 | DOI: 10.1167/jov.26.1.17
Ying Gao, Huiguang He, Bernhard A Sabel

This study examines the temporal and spatial components of microsaccade dynamics in homonymous hemianopia (HH) after ischemic stroke, and their association with patients' visual impairments. Eye position data were recorded during visual field testing in 15 patients with HH and 15 controls. Microsaccade rate (temporal) and direction (spatial) dynamics in HH were analyzed across visual field sectors with varying defect depth and compared with controls. Support vector machines were trained to characterize the visual field defects in HH based on microsaccade dynamics. Compared with controls, patients exhibited stronger microsaccadic inhibition in the sighted areas, and delayed, stronger microsaccadic inhibition in areas of residual vision (ARVs). Meanwhile, a rebound was evident in the sighted areas but absent in the ARVs and blind areas. In controls, microsaccades surviving the inhibition were more attracted toward the stimulus, whereas microsaccades after the inhibition were directed away from the stimulus; no such pattern was observed in HH. These dissociated temporal and spatial impairments of microsaccade dynamics suggest multiple impairments of the visual and oculomotor networks in HH. Based on the microsaccadic phase signature underlying microsaccade rate dynamics, we characterized patients' visual field defects and discovered regions with residual function inside both the blind and sighted hemifields. These findings suggest that monitoring microsaccade dynamics may provide valuable supplementary information beyond that captured by behavioral responses.

{"title":"Dissociated temporal and spatial impairments of microsaccade dynamics in homonymous hemianopia following ischemic stroke.","authors":"Ying Gao, Huiguang He, Bernhard A Sabel","doi":"10.1167/jov.26.1.17","DOIUrl":"10.1167/jov.26.1.17","url":null,"abstract":"<p><p>This study examines the temporal and spatial components of microsaccade dynamics in homonymous hemianopia (HH) after ischemic stroke, and their association with patients' visual impairments. The eye position data were recorded during visual field testing in 15 patients with HH and 15 controls. Microsaccade rate (temporal) and direction (spatial) dynamics in HH were analyzed across visual field sectors with varying defect depth and compared with controls. Support vector machines were trained to characterize the visual field defects in HH based on microsaccade dynamics. Patients exhibited stronger microsaccadic inhibition in the sighted areas, postponed and stronger microsaccadic inhibition in areas of residual vision (ARVs) compared to controls. Meanwhile, a rebound was evident in the sighted areas but absent in the ARVs and blind areas. Microsaccades surviving the inhibition were more attracted toward the stimulus, whereas microsaccades after the inhibition were directed away from the stimulus in controls. Such pattern was not observed in HH. Dissociated temporal and spatial impairments of microsaccade dynamics suggest multi-fold impairments of the visual and oculomotor networks in HH. Based on the microsaccadic phase signature underlying microsaccade rate dynamics, we characterized patients' visual field defects and discovered regions with residual function inside both the blind and sighted hemifields. 
These findings suggest that monitoring microsaccade dynamics may provide valuable supplementary information beyond that captured by behavioral responses.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"17"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12859727/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146068421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
What affects the movement can be seen from the movement: Effects of optical information and dynamical constraints on movement production and perception.
IF 2.3, CAS Tier 4 (Psychology), Q2 OPHTHALMOLOGY, Pub Date: 2026-01-05, DOI: 10.1167/jov.26.1.6
Huiyuan Zhang, Feifei Jiang, Yijing Mao, Xian Yang, Jing Samantha Pan

This study investigates how optical information and dynamical constraints influence movement production and perception. In Experiment 1, 16 volunteers walked or performed a Y-balance movement, with and without sight, on sturdy or foam-padded floors. The optical information and force environment affected the participants' kinematics, such as stride duration, stride length, stride width, gait speed, and joint ranges of motion for walking, as well as total movement duration and joint ranges of motion for the Y-balance. Naïve observers then watched these movements on a point-light display and distinguished movements executed under different optical information (Experiment 2) and force environment (Experiment 3) conditions. They were able to pick out movements performed without sight, especially those performed on a padded floor; they were also able to discriminate movements performed on different supporting surfaces, especially when the actors were blindfolded. Thus, discriminating movement conditions from point-light displays was possible, and performance improved with greater kinematic variability. Logistic regressions showed that discrimination relied on the kinematic variables that varied most between conditions. This information was valid and useful regardless of viewing perspective; whether walking and Y-balance movements were displayed in frontal or side view, perceptual performance was equivalent. Thus, both optical information and dynamical constraints shape movement patterns in ways that are perceptible through kinematic variations.
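The logistic-regression step can be sketched like this. The data are synthetic and the feature choices (stride length and gait speed) and all values are assumptions standing in for the study's measurements; the point is only that the fitted weights indicate which kinematic variable is most diagnostic of the condition.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Illustrative kinematics: [stride length (m), gait speed (m/s)].
# Blindfolded walking is assumed shorter-strided and slower here.
n = 150
sighted = rng.normal([1.40, 1.30], [0.10, 0.15], size=(n, 2))
blind = rng.normal([1.10, 0.90], [0.10, 0.15], size=(n, 2))
X = np.vstack([sighted, blind])
y = np.repeat([0, 1], n)  # 0 = with sight, 1 = blindfolded

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_[0])   # larger |weight| -> more diagnostic feature
print("training accuracy:", model.score(X, y))
```

A negative weight on a feature means larger values of that feature push the prediction toward the "with sight" class, matching the assumed direction of the synthetic group difference.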

{"title":"What affects the movement can be seen from the movement: Effects of optical information and dynamical constraints on movement production and perception.","authors":"Huiyuan Zhang, Feifei Jiang, Yijing Mao, Xian Yang, Jing Samantha Pan","doi":"10.1167/jov.26.1.6","DOIUrl":"10.1167/jov.26.1.6","url":null,"abstract":"<p><p>This study investigates how optical information and dynamical constraints influence movement production and perception. In Experiment 1, 16 volunteers walked or performed a Y-balance movement with and without sight on sturdy or foam-padded floors. The optical information and force environment affected the participants' kinematics, such as stride duration, stride length, stride width, gait speed, joint ranges of motion for walking, total movement duration, and joint ranges of motion for Y-balance. Naïve observers then watched these movements on a point-light display and distinguished movements executed under different optical information (Experiment 2) and force environment (Experiment 3) conditions. They were able to pick out movements performed without sight, especially for those performed on a padded floor; they were also able to discriminate movements performed on different supporting surfaces, especially when the actors were blindfolded. Thus, discriminating movement conditions from point-light displays was possible, and better with higher kinematic variability. Logistic regressions showed discriminating movements relied on the movement kinematics that varied the most between conditions. This information was valid and useful regardless of viewing perspective; that is, whether the walking and Y-balance were displayed in the frontal or side view, the perceptual performance was equivalent. 
Thus, both optical information and dynamical constraints shape movement patterns in ways that are perceptible through the kinematic variations.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"6"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12786393/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Mapping the visual cortex with Zebra noise and wavelets.
IF 2.3, CAS Tier 4 (Psychology), Q2 OPHTHALMOLOGY, Pub Date: 2026-01-05, DOI: 10.1167/jov.26.1.1
Sophie Skriabine, Maxwell Shinn, Samuel Picard, Kenneth D Harris, Matteo Carandini

Studies of the early visual system often require characterizing the visual preferences of large populations of neurons. This task typically requires multiple stimuli such as sparse noise and drifting gratings, each of which probes only a limited set of visual features. Here, we introduce a new dynamic stimulus with sharp-edged stripes that we term Zebra noise and a new analysis model based on wavelets, and we show that in combination they are highly efficient for mapping multiple aspects of the visual preferences of thousands of neurons. We used two-photon calcium imaging to record the activity of neurons in the mouse visual cortex. Zebra noise elicited strong responses that were more repeatable than those evoked by traditional stimuli. The wavelet-based model captured the repeatable aspects of the resulting responses, providing measures of neuronal tuning for multiple stimulus features: position, orientation, size, spatial frequency, drift rate, and direction. The method proved efficient, requiring only 5 minutes of stimulus (repeated three times) to characterize the tuning of thousands of neurons across visual areas. In combination, the Zebra noise stimulus and the wavelet-based model provide a broadly applicable toolkit for the rapid characterization of visual representations, promising to accelerate future studies of visual function.
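The wavelet-based tuning idea can be illustrated with a hand-rolled Gabor wavelet, an assumed stand-in for the paper's wavelet family: a model response is the inner product of a stimulus frame with wavelets of varying orientation, and the preferred orientation is read off the peak of the response curve.

```python
import numpy as np

def gabor(size, theta, sf=0.1, sigma=4.0):
    """2-D Gabor wavelet: an oriented grating under a Gaussian envelope."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)  # rotate the coordinate frame
    env = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * sf * xr)

size = 32
true_theta = np.pi / 3               # model neuron tuned to 60 degrees
stimulus = gabor(size, true_theta)   # a frame matching that preference

# Score the frame against a bank of orientations; the peak is the estimate.
thetas = np.linspace(0, np.pi, 36, endpoint=False)
responses = [np.sum(stimulus * gabor(size, t)) for t in thetas]
est = thetas[int(np.argmax(responses))]
print(f"estimated preferred orientation: {np.degrees(est):.0f} deg")
```

Extending the bank across positions, spatial frequencies, and sizes turns the same inner-product idea into a full tuning characterization; the 5° resolution here comes from using 36 orientations over 180°.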

{"title":"Mapping the visual cortex with Zebra noise and wavelets.","authors":"Sophie Skriabine, Maxwell Shinn, Samuel Picard, Kenneth D Harris, Matteo Carandini","doi":"10.1167/jov.26.1.1","DOIUrl":"10.1167/jov.26.1.1","url":null,"abstract":"<p><p>Studies of the early visual system often require characterizing the visual preferences of large populations of neurons. This task typically requires multiple stimuli such as sparse noise and drifting gratings, each of which probes only a limited set of visual features. Here, we introduce a new dynamic stimulus with sharp-edged stripes that we term Zebra noise and a new analysis model based on wavelets, and we show that in combination they are highly efficient for mapping multiple aspects of the visual preferences of thousands of neurons. We used two-photon calcium imaging to record the activity of neurons in the mouse visual cortex. Zebra noise elicited strong responses that were more repeatable than those evoked by traditional stimuli. The wavelet-based model captured the repeatable aspects of the resulting responses, providing measures of neuronal tuning for multiple stimulus features: position, orientation, size, spatial frequency, drift rate, and direction. The method proved efficient, requiring only 5 minutes of stimulus (repeated three times) to characterize the tuning of thousands of neurons across visual areas. 
In combination, the Zebra noise stimulus and the wavelet-based model provide a broadly applicable toolkit for the rapid characterization of visual representations, promising to accelerate future studies of visual function.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"1"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12782197/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0