Oculomotor challenges in macular degeneration impact motion extrapolation.
Jason F Rubinstein, Noelia Gabriela Alcalde, Adrien Chopin, Preeti Verghese
Macular degeneration (MD), which affects the central visual field including the fovea, has a profound impact on acuity and oculomotor control. We used a motion extrapolation task to investigate the contribution of various factors that potentially impact motion estimation, including the transient disappearance of the target into the scotoma, increased position uncertainty associated with eccentric target positions, and increased oculomotor noise due to the use of a non-foveal locus for fixation and for eye movements. Observers performed a perceptual baseball task in which they judged whether the target would intersect or miss a rectangular region (the plate). The target was extinguished before reaching the plate, and participants were instructed either to fixate a marker or to smoothly track the target before making the judgment. We tested nine eyes of six participants with MD and four control observers with simulated scotomata that matched those of individual participants with MD. Both groups used their habitual oculomotor locus: an eccentric preferred retinal locus (PRL) for participants with MD and the fovea for controls. In the fixation condition, motion extrapolation was less accurate for controls with simulated scotomata than without, indicating that occlusion by the scotoma impacted the task. In both the fixation and pursuit conditions, participants with MD, who used eccentric preferred retinal loci, typically had worse motion extrapolation than controls with a matched artificial scotoma and a foveal locus. Statistical analysis revealed that occlusion and target eccentricity significantly impacted motion extrapolation in the pursuit condition, indicating that these factors make it challenging to estimate and track the path of a moving target in MD.
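To make the factors named above concrete, the following minimal simulation sketches how occlusion by a scotoma and eccentricity-dependent position noise could jointly degrade extrapolation in a perceptual-baseball-style trial: a line is fit to the noisy, still-visible position samples and extrapolated to the plate. This is an illustration under assumed parameters, not the authors' model; every name and value below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def extrapolated_landing(speed=8.0, view_time=0.5, occluded_time=0.2,
                         sigma0=0.1, ecc_slope=0.05, n_samples=20,
                         t_plate=1.0):
    """Fit a line to the noisy, visible part of a 1-D trajectory and
    extrapolate to the time the target would reach the plate."""
    t = np.linspace(0.0, view_time, n_samples)
    y_true = speed * t
    ecc = np.abs(y_true - y_true[-1])          # distance from the fixation locus
    sigma = sigma0 + ecc_slope * ecc           # position noise grows with eccentricity
    visible = t < (view_time - occluded_time)  # samples lost inside the scotoma
    y_obs = y_true + rng.normal(0.0, sigma)
    # least-squares line fit on the visible samples only
    A = np.column_stack([t[visible], np.ones(visible.sum())])
    slope, intercept = np.linalg.lstsq(A, y_obs[visible], rcond=None)[0]
    return slope * t_plate + intercept

# longer occlusion discards the most recent (and least eccentric) samples,
# so the extrapolated landing position becomes noisier
print(extrapolated_landing(occluded_time=0.1))
print(extrapolated_landing(occluded_time=0.3))
```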
{"title":"Oculomotor challenges in macular degeneration impact motion extrapolation.","authors":"Jason F Rubinstein, Noelia Gabriela Alcalde, Adrien Chopin, Preeti Verghese","doi":"10.1167/jov.25.1.17","DOIUrl":"10.1167/jov.25.1.17","url":null,"abstract":"<p><p>Macular degeneration (MD), which affects the central visual field including the fovea, has a profound impact on acuity and oculomotor control. We used a motion extrapolation task to investigate the contribution of various factors that potentially impact motion estimation, including the transient disappearance of the target into the scotoma, increased position uncertainty associated with eccentric target positions, and increased oculomotor noise due to the use of a non-foveal locus for fixation and for eye movements. Observers performed a perceptual baseball task where they judged whether the target would intersect or miss a rectangular region (the plate). The target was extinguished before reaching the plate and participants were instructed either to fixate a marker or smoothly track the target before making the judgment. We tested nine eyes of six participants with MD and four control observers with simulated scotomata that matched those of individual participants with MD. Both groups used their habitual oculomotor locus-eccentric preferred retinal locus (PRL) for MD and fovea for controls. In the fixation condition, motion extrapolation was less accurate for controls with simulated scotomata than without, indicating that occlusion by the scotoma impacted the task. In both the fixation and pursuit conditions, MD participants with eccentric preferred retinal loci typically had worse motion extrapolation than controls with a matched artificial scotoma and foveal preferred retinal loci. Statistical analysis revealed occlusion and target eccentricity significantly impacted motion extrapolation in the pursuit condition, indicating that these factors make it challenging to estimate and track the path of a moving target in MD.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"17"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11781323/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143061059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating the contribution of early and late noise in vision from psychophysical data.
Jesús Malo, José Juan Esteve-Taboada, Guillermo Aguilar, Marianne Maertens, Felix A Wichmann
Human performance in psychophysical detection and discrimination tasks is limited by inner noise. It is unclear to what extent this inner noise arises from early noise (e.g., in the photoreceptors) or from late noise (at or immediately prior to the decision stage, presumably in cortex). Very likely, the behaviorally limiting inner noise is a nontrivial combination of both early and late noise. Here we propose a method to quantify the contributions of early and late noise purely from psychophysical data. Our approach generalizes classical results for linear systems by combining the theory of noise propagation through a nonlinear network with expressions for obtaining a perceptual metric from a nonlinear network. We show that from threshold-only data, the relative contributions of early and late noise can be disentangled only when the experiments include substantial external noise. When full psychometric functions are available, early and late noise sources can be quantified even in the absence of external noise. Our psychophysical estimate of the magnitude of early noise (assuming a standard cascade of linear and nonlinear model stages) is substantially lower than the noise in cone photocurrents computed via ISETBio, an accurate model of retinal physiology. This is consistent with the idea that one of the fundamental tasks of early vision is to reduce the comparatively large retinal noise.
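As a rough illustration of the noise-propagation idea, the sketch below pushes early (input-stage) noise through an assumed static nonlinearity via first-order (delta-method) propagation and adds late noise at the output; the Naka-Rushton form and all parameter values are stand-ins, not the paper's model. It also shows why external noise helps: when sigma_ext is large, the early term dominates the total, making the early/late split identifiable from thresholds alone.

```python
import numpy as np

def decision_sd(s, sigma_early, sigma_late, sigma_ext=0.0,
                gain=2.0, p=2.0, c50=0.5):
    """Delta-method propagation of early noise through an assumed
    Naka-Rushton nonlinearity r(x) = x**p / (x**p + c50**p), with late
    noise added after the nonlinearity. Returns the s.d. that would
    limit performance at stimulus level s."""
    x = gain * s
    # local slope of the nonlinearity at the operating point
    dr_dx = p * x**(p - 1) * c50**p / (x**p + c50**p)**2
    # early (and external) noise is scaled by the squared local gain
    early_var = (dr_dx * gain)**2 * (sigma_early**2 + sigma_ext**2)
    return np.sqrt(early_var + sigma_late**2)

# with no external noise, early and late terms trade off invisibly;
# with strong external noise, the propagated (early-like) term dominates
print(decision_sd(0.3, sigma_early=0.02, sigma_late=0.05))
print(decision_sd(0.3, sigma_early=0.02, sigma_late=0.05, sigma_ext=0.2))
```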
{"title":"Estimating the contribution of early and late noise in vision from psychophysical data.","authors":"Jesús Malo, José Juan Esteve-Taboada, Guillermo Aguilar, Marianne Maertens, Felix A Wichmann","doi":"10.1167/jov.25.1.12","DOIUrl":"10.1167/jov.25.1.12","url":null,"abstract":"<p><p>Human performance in psychophysical detection and discrimination tasks is limited by inner noise. It is unclear to what extent this inner noise arises from early noise (e.g., in the photoreceptors) or from late noise (at or immediately prior to the decision stage, presumably in cortex). Very likely, the behaviorally limiting inner noise is a nontrivial combination of both early and late noise. Here we propose a method to quantify the contributions of early and late noise purely from psychophysical data. Our approach generalizes classical results for linear systems by combining the theory of noise propagation through a nonlinear network with expressions to obtain a perceptual metric through a nonlinear network. We show that from threshold-only data, the relative contributions of early and late noise can only be disentangled when the experiments include substantial external noise. When full psychometric functions are available, early and late noise sources can be quantified even in the absence of external noise. Our psychophysical estimate of the magnitude of early noise-assuming a standard cascade of linear and nonlinear model stages-is substantially lower than the noise in cone photocurrents computed via an accurate model of retinal physiology, the ISETBio. This is consistent with the idea that one of the fundamental tasks of early vision is to reduce the comparatively large retinal noise.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"12"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11758886/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142973014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparative analysis of perceptual noise in lateral and depth motion: Evidence from eye tracking.
Joan López-Moliner
The characterization of how precisely we perceive visual speed has traditionally relied on psychophysical judgments in discrimination tasks. Such tasks are often considered laborious and susceptible to biases, particularly without highly trained participants. Additionally, thresholds for motion-in-depth perception are frequently reported to be higher than those for lateral motion, a discrepancy that contrasts with everyday visuomotor tasks. In this research, we rely on a smooth pursuit model, based on a Kalman filter, to quantify observational uncertainties about speed. This model allows us to distinguish between additive and multiplicative noise across three conditions of motion dynamics within a virtual reality setting: random walk, linear motion, and nonlinear motion, incorporating both lateral and depth motion components. We aim to assess tracking performance and perceptual uncertainties for lateral motion versus motion-in-depth. In alignment with prior research, our results indicate diminished performance for depth motion in the random walk condition, characterized by unpredictable positioning. However, when velocity information is available and facilitates predictions of future positions, perceptual uncertainties become more consistent between lateral and in-depth motion. This consistency is particularly noticeable within ranges where retinal speeds overlap between the two dimensions. Significantly, additive noise emerges as the primary source of uncertainty, largely exceeding multiplicative noise. This predominance of additive noise is consistent with computational accounts of visual motion. Our study challenges earlier beliefs of marked differences in processing lateral versus in-depth motions, suggesting similar levels of perceptual uncertainty and underscoring the significant role of additive noise.
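The paper's approach is a Kalman-filter pursuit model; below is a minimal scalar sketch of that model class, with the observation-noise variance split into additive and multiplicative (speed-scaled) components. The random-walk state model, the 1-D simplification, and all parameter values are illustrative assumptions, not the fitted model.

```python
import numpy as np

def kalman_speed_track(v_obs, sigma_add=0.5, k_mult=0.1, q=0.01,
                       v0=0.0, p0=1.0):
    """Scalar Kalman filter over target speed. Observation noise
    variance R = sigma_add**2 + (k_mult * v)**2 combines an additive
    floor with a multiplicative, speed-scaled component."""
    v_hat, p = v0, p0
    estimates = []
    for z in v_obs:
        p = p + q                               # predict: random-walk speed
        R = sigma_add**2 + (k_mult * v_hat)**2  # state-dependent obs. noise
        K = p / (p + R)                         # Kalman gain
        v_hat = v_hat + K * (z - v_hat)         # update with innovation
        p = (1.0 - K) * p
        estimates.append(v_hat)
    return np.array(estimates)

# e.g., noisy observations of a target moving at 4 deg/s
obs = 4.0 + np.random.default_rng(1).normal(0.0, 0.8, size=100)
print(kalman_speed_track(obs)[-1])  # converges near 4
```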
{"title":"A comparative analysis of perceptual noise in lateral and depth motion: Evidence from eye tracking.","authors":"Joan López-Moliner","doi":"10.1167/jov.25.1.15","DOIUrl":"10.1167/jov.25.1.15","url":null,"abstract":"<p><p>The characterization of how precisely we perceive visual speed has traditionally relied on psychophysical judgments in discrimination tasks. Such tasks are often considered laborious and susceptible to biases, particularly without the involvement of highly trained participants. Additionally, thresholds for motion-in-depth perception are frequently reported as higher compared to lateral motion, a discrepancy that contrasts with everyday visuomotor tasks. In this research, we rely on a smooth pursuit model, based on a Kalman filter, to quantify speed observational uncertainties. This model allows us to distinguish between additive and multiplicative noise across three conditions of motion dynamics within a virtual reality setting: random walk, linear motion, and nonlinear motion, incorporating both lateral and depth motion components. We aim to assess tracking performance and perceptual uncertainties for lateral versus motion-in-depth. In alignment with prior research, our results indicate diminished performance for depth motion in the random walk condition, characterized by unpredictable positioning. However, when velocity information is available and facilitates predictions of future positions, perceptual uncertainties become more consistent between lateral and in-depth motion. This consistency is particularly noticeable within ranges where retinal speeds overlap between these two dimensions. Significantly, additive noise emerges as the primary source of uncertainty, largely exceeding multiplicative noise. This predominance of additive noise is consistent with computational accounts of visual motion. Our study challenges earlier beliefs of marked differences in processing lateral versus in-depth motions, suggesting similar levels of perceptual uncertainty and underscoring the significant role of additive noise.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"15"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11761139/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143034744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Serial dependencies in motor targeting as a function of target appearance.
Sandra Tyralla, Eckart Zimmermann
To bring stimuli of interest into our central field of vision, we perform saccadic eye movements. After every saccade, the error between the predicted and actual landing position is monitored. In the laboratory, artificial post-saccadic errors are created by displacing the target during saccade execution. Previous research found that even a single post-saccadic error induces immediate amplitude changes to minimize that error. The saccadic amplitude adjustment could result from a recalibration of the saccade target representation. We asked whether recalibration follows an integration scheme in which the magnitude of the impact of the previous post-saccadic target location depends on the certainty of the current target. Subjects performed saccades to Gaussian blobs, whose visuospatial certainty we manipulated by changing their spatial constant. In separate sessions, either the pre-saccadic or the post-saccadic target was uncertain. Additionally, we manipulated the contrast to further decrease certainty, changing the spatial constant mid-saccade. We found saccade-by-saccade amplitude reductions only when the current target was uncertain, the previous target was certain, and target contrast was constant. We conclude that the features of the pre-saccadic target (i.e., size and contrast) determine the extent to which post-saccadic error shapes upcoming saccade amplitudes.
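The integration scheme being tested can be written as a certainty-weighted update. The sketch below is one standard reliability-weighting formulation; the learning rate and all names are hypothetical, not the authors' fitted model.

```python
def next_amplitude(a_prev, error_prev, sigma_prev, sigma_curr,
                   learning_rate=0.5):
    """Reliability-weighted recalibration: the previous post-saccadic
    error corrects the next amplitude in proportion to how reliable the
    previous target was relative to the current one."""
    w = sigma_prev**-2 / (sigma_prev**-2 + sigma_curr**-2)
    return a_prev - learning_rate * w * error_prev

# certain previous target, uncertain current target -> large correction
print(next_amplitude(10.0, 1.0, sigma_prev=0.2, sigma_curr=1.0))  # ~9.52
# uncertain previous target, certain current target -> little correction
print(next_amplitude(10.0, 1.0, sigma_prev=1.0, sigma_curr=0.2))  # ~9.98
```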
{"title":"Serial dependencies in motor targeting as a function of target appearance.","authors":"Sandra Tyralla, Eckart Zimmermann","doi":"10.1167/jov.24.13.6","DOIUrl":"10.1167/jov.24.13.6","url":null,"abstract":"<p><p>In order to bring stimuli of interest into our central field of vision, we perform saccadic eye movements. After every saccade, the error between the predicted and actual landing position is monitored. In the laboratory, artificial post-saccadic errors are created by displacing the target during saccade execution. Previous research found that even a single post-saccadic error induces immediate amplitude changes to minimize that error. The saccadic amplitude adjustment could result from a recalibration of the saccade target representation. We asked if recalibration follows an integration scheme in which the impact magnitude of the previous post-saccadic target location depends on the certainty of the current target. We asked subjects to perform saccades to Gaussian blobs as targets, the visuospatial certainty of which we manipulated by changing its spatial constant. In separate sessions, either the pre-saccadic or post-saccadic target was uncertain. Additionally, we manipulated the contrast to further decrease certainty, changing the spatial constant mid-saccade. We found saccade-by-saccade amplitude reductions only with a currently uncertain target, a previously certain one, and a constant target contrast. We conclude that the features of the pre-saccadic target (i.e., size and contrast) determine the extent to which post-saccadic error shapes upcoming saccade amplitudes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"6"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11629911/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Preferred fixation position and gaze location: Two factors modulating the composite face effect.
Puneeth N Chakravarthula, Ansh K Soni, Miguel P Eckstein
Humans consistently land their first saccade to a face at a preferred fixation location (PFL). Humans also typically process faces as wholes, as evidenced by perceptual effects such as the composite face effect (CFE). However, it is not known whether an individual's tendency to process faces as wholes varies with their gaze patterns on the face. Here, we investigated how the CFE varies with the PFL. We compared the strength of the CFE for two groups of observers who were screened to have their PFLs either higher up on the face, closer to the eyes, or lower, closer to the tip of the nose. During the task, observers maintained their gaze at either their own group's mean PFL or at the other group's mean PFL. We found that the top half of the face elicits a stronger CFE than the bottom half. Further, the strength of the CFE was modulated by the distance of the PFL from the eyes, such that individuals with a PFL closer to the eyes had a stronger CFE than those with a PFL closer to the mouth. Finally, the top-half CFE for both upper-lookers and lower-lookers was abolished when they fixated at a non-preferred location on the face. Our findings show that the CFE relies on internal face representations shaped by the long-term use of a consistent oculomotor strategy for viewing faces.
{"title":"Preferred fixation position and gaze location: Two factors modulating the composite face effect.","authors":"Puneeth N Chakravarthula, Ansh K Soni, Miguel P Eckstein","doi":"10.1167/jov.24.13.15","DOIUrl":"10.1167/jov.24.13.15","url":null,"abstract":"<p><p>Humans consistently land their first saccade to a face at a preferred fixation location (PFL). Humans also typically process faces as wholes, as evidenced by perceptual effects such as the composite face effect (CFE). However, not known is whether an individual's tendency to process faces as wholes varies with their gaze patterns on the face. Here, we investigated variation of the CFE with the PFL. We compared the strength of the CFE for two groups of observers who were screened to have their PFLs either higher up, closer to the eyes, or lower on the face, closer to the tip of the nose. During the task, observers maintained their gaze at either their own group's mean PFL or at the other group's mean PFL. We found that the top half of the face elicits a stronger CFE than the bottom half. Further, the strength of the CFE was modulated by the distance of the PFL from the eyes, such that individuals with a PFL closer to the eyes had a stronger CFE than those with a PFL closer to the mouth. Finally, the top-half CFE for both upper-lookers and lower-lookers was abolished when they fixated at a non-preferred location on the face. Our findings show that the CFE relies on internal face representations shaped by the long-term use of a consistent oculomotor strategy to view faces.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"15"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11681917/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142899810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reviewers.","authors":"","doi":"10.1167/jov.24.13.16","DOIUrl":"10.1167/jov.24.13.16","url":null,"abstract":"","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"16"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11684487/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transsaccadic perception of changes in object regularity.
Nino Sharvashidze, Matteo Valsecchi, Alexander C Schütz
The visual system compensates for differences between peripheral and foveal vision using different mechanisms. Although peripheral vision is characterized by higher spatial uncertainty and lower resolution than foveal vision, observers reported objects to be less distorted and less blurry in the periphery than in the fovea in a visual matching task during fixation (Valsecchi et al., 2018). Here, we asked whether a similar overcompensation could be found across saccadic eye movements and whether it would bias the detection of transsaccadic changes in object regularity. The blur and distortion levels of simple geometric shapes were manipulated with the Eidolons algorithm (Koenderink et al., 2017). In an appearance discrimination task, participants had to judge the appearance of blur (experiment 1) and distortion (experiment 2) separately before and after a saccade. Objects appeared less blurry before a saccade (in the periphery) than after a saccade (in the fovea). No differences were found in the appearance of distortion. In a change discrimination task, participants had to judge whether blur (experiment 1) and distortion (experiment 2) increased or decreased during a saccade. Overall, they showed a tendency to report an increase in both blur and distortion across saccades. The precision of the responses was improved by a 200-ms postsaccadic blank. Results from the change discrimination task of both experiments suggest that a transsaccadic decrease in regularity is more visible than an increase in regularity. In line with the previous study that reported a peripheral overcompensation in the visual matching task, we found a similar mechanism, exhibiting a phenomenological sharpening of blurry edges before a saccade. These results generalize peripheral-foveal differences observed during fixation to the dynamic, transsaccadic conditions tested here, where they contribute to biases in transsaccadic change detection.
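For a concrete handle on the stimulus manipulation, here is a simplified sketch in the spirit of the Eidolons approach: blur plus a spatially smooth random displacement field whose amplitude and spatial scale control the distortion level. This is an assumption-laden stand-in, not the published Eidolons implementation (Koenderink et al., 2017); the parameter names reach and grain are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def blur_and_distort(img, blur_sigma=2.0, reach=4.0, grain=8.0, seed=0):
    """Apply Gaussian blur and a coherent spatial distortion to a 2-D
    grayscale image: white noise low-passed at scale 'grain' yields a
    smooth displacement field, scaled to peak amplitude 'reach'."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(img, blur_sigma)
    dx = gaussian_filter(rng.standard_normal(img.shape), grain)
    dy = gaussian_filter(rng.standard_normal(img.shape), grain)
    dx *= reach / (np.abs(dx).max() + 1e-9)
    dy *= reach / (np.abs(dy).max() + 1e-9)
    rows, cols = np.indices(img.shape)
    # resample the blurred image at the displaced coordinates
    return map_coordinates(blurred, [rows + dy, cols + dx],
                           order=1, mode="nearest")
```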
{"title":"Transsaccadic perception of changes in object regularity.","authors":"Nino Sharvashidze, Matteo Valsecchi, Alexander C Schütz","doi":"10.1167/jov.24.13.3","DOIUrl":"10.1167/jov.24.13.3","url":null,"abstract":"<p><p>The visual system compensates for differences between peripheral and foveal vision using different mechanisms. Although peripheral vision is characterized by higher spatial uncertainty and lower resolution than foveal vision, observers reported objects to be less distorted and less blurry in the periphery than the fovea in a visual matching task during fixation (Valsecchi et al., 2018). Here, we asked whether a similar overcompensation could be found across saccadic eye movements and whether it would bias the detection of transsaccadic changes in object regularity. The blur and distortion levels of simple geometric shapes were manipulated in the Eidolons algorithm (Koenderink et al., 2017). In an appearance discrimination task, participants had to judge the appearance of blur (experiment 1) and distortion (experiment 2) separately before and after a saccade. Objects appeared less blurry before a saccade (in the periphery) than after a saccade (in the fovea). No differences were found in the appearance of distortion. In a change discrimination task, participants had to judge if blur (experiment 1) and distortion (experiment 2) either increased or decreased during a saccade. Overall, they showed a tendency to report an increase in both blur and distortion across saccades. The precision of the responses was improved by a 200-ms postsaccadic blank. Results from the change discrimination task of both experiments suggest that a transsaccadic decrease in regularity is more visible, compared to an increase in regularity. In line with the previous study that reported a peripheral overcompensation in the visual matching task, we found a similar mechanism, exhibiting a phenomenological sharpening of blurry edges before a saccade. These results generalize peripheral-foveal differences observed during fixation to the here tested dynamic, transsaccadic conditions where they contribute to biases in transsaccadic change detection.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"3"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11627247/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Color crowding considered as adaptive spatial integration.
Guido Marco Cicchini, Giovanni D'Errico, David Charles Burr
Crowding is the inability to recognize an object in clutter, classically considered a fundamental low-level bottleneck to object recognition. Recently, however, it has been suggested that crowding, like predictive phenomena such as serial dependence, may result from optimizing strategies that exploit redundancies in natural scenes. This notion leads to several testable predictions, such as crowding being greater for nonsalient targets and, counterintuitively, that flanker interference should be associated with higher precision in judgments, leading to a lower overall error rate. Here we measured color discrimination for targets flanked by stimuli of variable color. The results verified both predictions, showing that although crowding can affect object recognition, it may be better understood not as a processing bottleneck but rather as a consequence of mechanisms evolved to efficiently exploit the spatial redundancies of the natural world. Analyses of reaction times show that the integration occurs at a sensory rather than a decisional level.
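The two predictions follow directly from reliability-weighted pooling: averaging the target estimate with a more reliable flanker estimate biases the report toward the flanker but shrinks its variance, and the variance reduction can outweigh the bias. A minimal simulation with illustrative noise levels (not the paper's fitted parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

def pooled_reports(target_hue, flanker_hue, sigma_t=15.0, sigma_f=5.0,
                   n=10_000):
    """Reliability-weighted pooling of noisy target and flanker hue
    estimates: the report is biased toward the flanker, but its
    variance falls below that of the unpooled target estimate."""
    w = sigma_t**-2 / (sigma_t**-2 + sigma_f**-2)   # weight on the target
    est_t = target_hue + rng.normal(0.0, sigma_t, n)
    est_f = flanker_hue + rng.normal(0.0, sigma_f, n)
    return w * est_t + (1.0 - w) * est_f

reports = pooled_reports(0.0, 10.0)
print(reports.mean(), reports.std())   # ~9.0 (biased), ~4.7 (< 15)
```

With these values the pooled root-mean-square error about the true target (~10.2) is below the unpooled target noise (15): a biased but overall more accurate report, as the second prediction requires.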
{"title":"Color crowding considered as adaptive spatial integration.","authors":"Guido Marco Cicchini, Giovanni D'Errico, David Charles Burr","doi":"10.1167/jov.24.13.9","DOIUrl":"10.1167/jov.24.13.9","url":null,"abstract":"<p><p>Crowding is the inability to recognize an object in clutter, classically considered a fundamental low-level bottleneck to object recognition. Recently, however, it has been suggested that crowding, like predictive phenomena such as serial dependence, may result from optimizing strategies that exploit redundancies in natural scenes. This notion leads to several testable predictions, such as crowding being greater for nonsalient targets and, counterintuitively, that flanker interference should be associated with higher precision in judgements, leading to a lower overall error rate. Here we measured color discrimination for targets flanked by stimuli of variable color. The results verified both predictions, showing that although crowding can affect object recognition, it may be better understood not as a processing bottleneck, but rather as a consequence of mechanisms evolved to efficiently exploit the spatial redundancies of the natural world. Analyses of reaction times of judgments shows that the integration occurs at sensory rather than decisional levels.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"9"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11636666/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142803021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influence of Fresnel effects on the glossiness and perceived depth of depth-scaled glossy objects.
Franz Faul, Christian Robbes
Fresnel effects, that is, shape-dependent changes in the strength of specular reflection from glossy objects, can lead to large changes in reflection strength when objects are scaled along the viewing axis. In an experiment, we scaled sphere-like bumpy objects with fixed material parameters in the depth direction and then measured, with and without Fresnel effects, how this influences the gloss impression, gloss constancy, and perceived depth. The results show that Fresnel effects in this case lead to a strong increase in gloss with depth, indicating lower gloss constancy than without them, but that they improve depth perception. In addition, we used inverse rendering to investigate the extent to which Fresnel effects in a rendered image constrain the possible object shapes in the underlying scene. We found that, for a static monocular view of an unknown object, Fresnel effects by themselves provide only a weak constraint on the overall shape of the object.
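The geometric intuition can be captured with Schlick's approximation to the Fresnel reflectance of a dielectric, a standard graphics approximation used here purely for illustration (the paper's renderings need not use it): reflectance rises steeply toward grazing incidence, and depth-scaling an object changes its surface normals and hence the distribution of incidence angles presented to the viewer.

```python
import numpy as np

def schlick_fresnel(cos_theta, n=1.5):
    """Schlick's approximation to Fresnel reflectance for a dielectric
    of refractive index n: F = F0 + (1 - F0) * (1 - cos(theta))**5,
    where F0 is the reflectance at normal incidence."""
    f0 = ((n - 1.0) / (n + 1.0))**2
    return f0 + (1.0 - f0) * (1.0 - cos_theta)**5

# grazing angles reflect far more strongly than normal incidence:
angles_deg = np.array([0.0, 45.0, 80.0])
print(schlick_fresnel(np.cos(np.deg2rad(angles_deg))))  # ~0.040, 0.042, 0.41
```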
{"title":"Influence of Fresnel effects on the glossiness and perceived depth of depth-scaled glossy objects.","authors":"Franz Faul, Christian Robbes","doi":"10.1167/jov.24.13.1","DOIUrl":"10.1167/jov.24.13.1","url":null,"abstract":"<p><p>Fresnel effects, that is, shape-dependent changes in the strength of specular reflection from glossy objects, can lead to large changes in reflection strength when objects are scaled along the viewing axis. In an experiment, we scaled sphere-like bumpy objects with fixed material parameters in the depth direction and then measured with and without Fresnel effects how this influences the gloss impression, gloss constancy, and perceived depth. The results show that Fresnel effects in this case lead to a strong increase in gloss with depth, indicating lower gloss constancy than without them, but that they improve depth perception. In addition, we used inverse rendering to investigate the extent to which Fresnel effects in a rendered image limit the possible object shapes in the underlying scene. We found that, for a static monocular view of an unknown object, Fresnel effects by themselves provide only a weak constraint on the overall shape of the object.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"1"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11614003/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attention moderates the motion silencing effect for dynamic orientation changes in a discrimination task.
Tabea-Maria Haase, Anina N Rich, Iain D Gilchrist, Christopher Kent
Being able to detect changes in our visual environment reliably and quickly is important for many daily tasks. The motion silencing effect describes a decrease in the ability to detect feature changes for faster-moving objects compared with stationary or slowly moving objects. One theory is that spatiotemporal receptive field properties in early vision might account for the silencing effect, suggesting that it originates in low-level visual processing. Here, we explore whether spatial attention can modulate motion silencing of orientation changes, to gain greater understanding of the underlying mechanisms. In Experiment 1, we confirm that the motion silencing effect occurs for the discrimination of orientation changes. In Experiment 2, we use a Posner-style cueing paradigm to investigate whether manipulating covert attention modulates motion silencing for orientation. The results show a clear spatial cueing effect: Participants were able to discriminate orientation changes successfully at higher velocities when the cue was valid than when it was neutral, and performance was worst when the cue was invalid. These results show that motion silencing can be modulated by directing spatial attention toward a moving target, and they support a role for higher-level processes, such as attention, in motion silencing of orientation changes.
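One simple way to quantify this kind of modulation (an illustrative analysis, not necessarily the paper's exact method) is the highest velocity at which change discrimination still meets an accuracy criterion, compared across cue conditions; the data values below are hypothetical.

```python
import numpy as np

def silencing_threshold(velocity, accuracy, criterion=0.75):
    """Highest tested velocity at which change-discrimination accuracy
    still meets the criterion; comparing this across valid, neutral,
    and invalid cue conditions indexes the attentional modulation."""
    v = np.asarray(velocity, dtype=float)
    ok = np.asarray(accuracy) >= criterion
    return v[ok].max() if ok.any() else np.nan

# hypothetical accuracies by cue condition over rotation speeds (deg/s)
v = [0.2, 0.4, 0.8, 1.6]
print(silencing_threshold(v, [0.95, 0.90, 0.80, 0.60]))  # valid cue   -> 0.8
print(silencing_threshold(v, [0.92, 0.82, 0.70, 0.55]))  # invalid cue -> 0.4
```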
{"title":"Attention moderates the motion silencing effect for dynamic orientation changes in a discrimination task.","authors":"Tabea-Maria Haase, Anina N Rich, Iain D Gilchrist, Christopher Kent","doi":"10.1167/jov.24.13.13","DOIUrl":"10.1167/jov.24.13.13","url":null,"abstract":"<p><p>Being able to detect changes in our visual environment reliably and quickly is important for many daily tasks. The motion silencing effect describes a decrease in the ability to detect feature changes for faster moving objects compared with stationary or slowly moving objects. One theory is that spatiotemporal receptive field properties in early vision might account for the silencing effect, suggesting that its origins are low-level visual processing. Here, we explore whether spatial attention can modulate motion silencing of orientation changes to gain greater understanding of the underlying mechanisms. In Experiment 1, we confirm that the motion silencing effect occurs for the discrimination of orientation changes. In Experiment 2, we use a Posner-style cueing paradigm to investigate whether manipulating covert attention modulates motion silencing for orientation. The results show a clear spatial cueing effect: Participants were able to discriminate orientation changes successfully at higher velocities when the cue was valid compared to neutral cues and performance was worst when the cue was invalid. These results show that motion silencing can be modulated by directing spatial attention toward a moving target and provides support for a role for higher level processes, such as attention, in motion silencing of orientation changes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"13"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11684489/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142865856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}