Jaelyn R Peiso, Stephanie E Palmer, Steven K Shevell
Our visual system usually provides a unique and functional representation of the external world. At times, however, there is more than one compelling interpretation of the same retinal stimulus; in such cases, neural populations compete for perceptual dominance to resolve the ambiguity. Spatial and temporal context can guide this perceptual experience. Recent evidence shows that ambiguous retinal stimuli are sometimes resolved by enhancing either similarities or differences among multiple ambiguous stimuli. Although rivalry has traditionally been attributed to differences in stimulus strength, color vision introduces nonlinearities that are difficult to reconcile with luminance-based models. Here, it is shown that a tuned divisive normalization framework can explain how perceptual selection flexibly yields either similarity-based "grouped" percepts or difference-enhanced percepts during binocular rivalry. Empirical and simulated results show that divisive normalization can account for either similarity enhancement (so-called grouping) or difference enhancement, offering a unified framework for these opposite perceptual outcomes.
{"title":"Perceptual resolution of ambiguity: A divisive normalization account for both interocular color grouping and difference enhancement.","authors":"Jaelyn R Peiso, Stephanie E Palmer, Steven K Shevell","doi":"10.1167/jov.26.1.8","DOIUrl":"10.1167/jov.26.1.8","url":null,"abstract":"<p><p>Our visual system usually provides a unique and functional representation of the external world. At times, however, there is more than one compelling interpretation of the same retinal stimulus; in this case, neural populations compete for perceptual dominance to resolve ambiguity. Spatial and temporal context can guide this perceptual experience. Recent evidence shows that ambiguous retinal stimuli are sometimes resolved by enhancing either similarities or differences among multiple ambiguous stimuli. Although rivalry has traditionally been attributed to differences in stimulus strength, color vision introduces nonlinearities that are difficult to reconcile with luminance-based models. Here, it is shown that a tuned, divisive normalization framework can explain how perceptual selection can flexibly yield either similarity-based \"grouped\" percepts or difference-enhanced percepts during binocular rivalry. Empirical and simulated results show that divisive normalization can account for perceptual representations of either similarity enhancement (so-called grouping) or difference enhancement, offering a unified framework for opposite perceptual outcomes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"8"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12811879/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145960620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cemre Yilmaz, Kerstin Maitz, Maximilian Gerschütz, Wilfried Grassegger, Anja Ischebeck, Andreas Bartels, Natalia Zaretskaya
Binocular rivalry occurs when the two eyes are presented with conflicting stimuli. Although the physical stimulation stays the same, the conscious percept changes over time. This property makes it a unique paradigm in both vision science and consciousness research. Two key parameters, contrast and attention, have repeatedly been shown to affect binocular rivalry dynamics in a similar manner; this was taken as evidence that attention acts by enhancing effective stimulus contrast. The brief transition periods between the two clear percepts have so far been much less investigated. In a previous study, we demonstrated that transition periods can take different forms depending on the stimulus type and the observer. In the current study, we investigated how attention and contrast affect transition appearance. Observers viewed binocular rivalry and reported their perception of the four most common transition types by button press while either the stimulus contrast or the locus of exogenous attention was manipulated. We show that contrast and attention similarly affect the overall binocular rivalry dynamics, but their effects on the appearance of transitions differ. These results suggest that the effect of attention is not a simple enhancement of stimulus strength, a difference that becomes evident only when the different transition types are considered.
{"title":"Differential effects of attention and contrast on transition appearance during binocular rivalry.","authors":"Cemre Yilmaz, Kerstin Maitz, Maximilian Gerschütz, Wilfried Grassegger, Anja Ischebeck, Andreas Bartels, Natalia Zaretskaya","doi":"10.1167/jov.26.1.14","DOIUrl":"10.1167/jov.26.1.14","url":null,"abstract":"<p><p>Binocular rivalry occurs when two eyes are presented with two conflicting stimuli. Although the physical stimulation stays the same, the conscious percept changes over time. This property makes it a unique paradigm in both vision science and consciousness research. Two key parameters, contrast and attention, were repeatedly shown to affect binocular rivalry dynamics in a similar manner. This was taken as evidence that attention acts by enhancing effective stimulus contrast. Brief transition periods between the two clear percepts have so far been much less investigated. In a previous study we demonstrated that transition periods can appear in different forms depending on the stimulus type and the observer. In the current study, we investigated how attention and contrast affect transition appearance. Observers viewed binocular rivalry and reported their perception of the four most common transition types by a button press while either the stimulus contrast or the locus of exogenous attention was manipulated. We show that contrast and attention similarly affect the overall binocular rivalry dynamics, but their effects on the appearance of transitions differ. These results suggest that the effect of attention is different from a simple enhancement of stimulus strength, which becomes evident only when different transition types are considered.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"14"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12854236/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146031281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fengping Hu, Joyce Y Chen, Denis G Pelli, Jonathan Winawer
Online vision testing enables efficient data collection from diverse participants, but often requires accurate fixation. When needed, fixation accuracy is traditionally ensured by using a camera to track gaze. That works well in the laboratory, but tracking during online testing with a built-in webcam is not yet sufficiently precise. Kurzawski, Pombo, et al. (2023) introduced a fixation task that improves fixation through hand-eye coordination, requiring participants to track a moving crosshair with a mouse-controlled cursor. This dynamic fixation task greatly reduces peeking at peripheral targets relative to a stationary fixation task, but does not eliminate it. Here, we introduce a crowded dynamic fixation task that further enhances fixation by adding clutter around the fixation mark. We assessed fixation accuracy during peripheral threshold measurement. Relative to the root mean square gaze error during the stationary fixation task, the dynamic fixation error was 55%, whereas the crowded dynamic fixation error was only 40%. With a 1.5° tolerance, peeking occurred on 7% of trials with stationary fixation, 1.5% with dynamic fixation, and 0% with crowded dynamic fixation. This improvement eliminated implausibly low peripheral thresholds, likely by preventing peeking. We conclude that crowded dynamic fixation provides accurate gaze control for online testing.
{"title":"EasyEyes: Crowded dynamic fixation for online psychophysics.","authors":"Fengping Hu, Joyce Y Chen, Denis G Pelli, Jonathan Winawer","doi":"10.1167/jov.26.1.18","DOIUrl":"10.1167/jov.26.1.18","url":null,"abstract":"<p><p>Online vision testing enables efficient data collection from diverse participants, but often requires accurate fixation. When needed, fixation accuracy is traditionally ensured by using a camera to track gaze. That works well in the laboratory, but tracking during online testing with a built-in webcam is not yet sufficiently precise. Kurzawski, Pombo, et al. (2023) introduced a fixation task that improves fixation through hand-eye coordination, requiring participants to track a moving crosshair with a mouse-controlled cursor. This dynamic fixation task greatly reduces peeking at peripheral targets relative to a stationary fixation task, but does not eliminate it. Here, we introduce a crowded dynamic fixation task that further enhances fixation by adding clutter around the fixation mark. We assessed fixation accuracy during peripheral threshold measurement. Relative to the root mean square gaze error during the stationary fixation task, the dynamic fixation error was 55%, whereas the crowded dynamic fixation error was only 40%. With a 1.5° tolerance, peeking occurred on 7% of trials with stationary fixation, 1.5% with dynamic fixation, and 0% with crowded dynamic fixation. This improvement eliminated implausibly low peripheral thresholds, likely by preventing peeking. We conclude that crowded dynamic fixation provides accurate gaze control for online testing.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"18"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12859709/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study examines the temporal and spatial components of microsaccade dynamics in homonymous hemianopia (HH) after ischemic stroke and their association with patients' visual impairments. Eye position data were recorded during visual field testing in 15 patients with HH and 15 controls. Microsaccade rate (temporal) and direction (spatial) dynamics in HH were analyzed across visual field sectors with varying defect depth and compared with controls. Support vector machines were trained to characterize the visual field defects in HH based on microsaccade dynamics. Compared with controls, patients exhibited stronger microsaccadic inhibition in the sighted areas, and delayed and stronger microsaccadic inhibition in areas of residual vision (ARVs). Meanwhile, a rebound was evident in the sighted areas but absent in the ARVs and blind areas. In controls, microsaccades surviving the inhibition were more attracted toward the stimulus, whereas microsaccades after the inhibition were directed away from the stimulus; this pattern was not observed in HH. The dissociated temporal and spatial impairments of microsaccade dynamics suggest multiple impairments of the visual and oculomotor networks in HH. Based on the microsaccadic phase signature underlying microsaccade rate dynamics, we characterized patients' visual field defects and discovered regions with residual function inside both the blind and sighted hemifields. These findings suggest that monitoring microsaccade dynamics may provide valuable supplementary information beyond that captured by behavioral responses.
{"title":"Dissociated temporal and spatial impairments of microsaccade dynamics in homonymous hemianopia following ischemic stroke.","authors":"Ying Gao, Huiguang He, Bernhard A Sabel","doi":"10.1167/jov.26.1.17","DOIUrl":"10.1167/jov.26.1.17","url":null,"abstract":"<p><p>This study examines the temporal and spatial components of microsaccade dynamics in homonymous hemianopia (HH) after ischemic stroke, and their association with patients' visual impairments. The eye position data were recorded during visual field testing in 15 patients with HH and 15 controls. Microsaccade rate (temporal) and direction (spatial) dynamics in HH were analyzed across visual field sectors with varying defect depth and compared with controls. Support vector machines were trained to characterize the visual field defects in HH based on microsaccade dynamics. Patients exhibited stronger microsaccadic inhibition in the sighted areas, postponed and stronger microsaccadic inhibition in areas of residual vision (ARVs) compared to controls. Meanwhile, a rebound was evident in the sighted areas but absent in the ARVs and blind areas. Microsaccades surviving the inhibition were more attracted toward the stimulus, whereas microsaccades after the inhibition were directed away from the stimulus in controls. Such pattern was not observed in HH. Dissociated temporal and spatial impairments of microsaccade dynamics suggest multi-fold impairments of the visual and oculomotor networks in HH. Based on the microsaccadic phase signature underlying microsaccade rate dynamics, we characterized patients' visual field defects and discovered regions with residual function inside both the blind and sighted hemifields. These findings suggest that monitoring microsaccade dynamics may provide valuable supplementary information beyond that captured by behavioral responses.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"17"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12859727/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146068421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study investigates how optical information and dynamical constraints influence movement production and perception. In Experiment 1, 16 volunteers walked or performed a Y-balance movement, with and without sight, on sturdy or foam-padded floors. The optical information and force environment affected the participants' kinematics: stride duration, stride length, stride width, gait speed, and joint ranges of motion for walking, and total movement duration and joint ranges of motion for Y-balance. Naïve observers then watched these movements on a point-light display and distinguished movements executed under different optical information (Experiment 2) and force environment (Experiment 3) conditions. They were able to pick out movements performed without sight, especially those performed on a padded floor, and they could discriminate movements performed on different supporting surfaces, especially when the actors were blindfolded. Thus, discriminating movement conditions from point-light displays was possible, and it was easier when kinematic variability was higher. Logistic regressions showed that discrimination relied on the kinematic variables that varied most between conditions. This information was valid and useful regardless of viewing perspective; whether the walking and Y-balance movements were displayed in the frontal or the side view, perceptual performance was equivalent. Thus, both optical information and dynamical constraints shape movement patterns in ways that are perceptible through kinematic variations.
{"title":"What affects the movement can be seen from the movement: Effects of optical information and dynamical constraints on movement production and perception.","authors":"Huiyuan Zhang, Feifei Jiang, Yijing Mao, Xian Yang, Jing Samantha Pan","doi":"10.1167/jov.26.1.6","DOIUrl":"10.1167/jov.26.1.6","url":null,"abstract":"<p><p>This study investigates how optical information and dynamical constraints influence movement production and perception. In Experiment 1, 16 volunteers walked or performed a Y-balance movement with and without sight on sturdy or foam-padded floors. The optical information and force environment affected the participants' kinematics, such as stride duration, stride length, stride width, gait speed, joint ranges of motion for walking, total movement duration, and joint ranges of motion for Y-balance. Naïve observers then watched these movements on a point-light display and distinguished movements executed under different optical information (Experiment 2) and force environment (Experiment 3) conditions. They were able to pick out movements performed without sight, especially for those performed on a padded floor; they were also able to discriminate movements performed on different supporting surfaces, especially when the actors were blindfolded. Thus, discriminating movement conditions from point-light displays was possible, and better with higher kinematic variability. Logistic regressions showed discriminating movements relied on the movement kinematics that varied the most between conditions. This information was valid and useful regardless of viewing perspective; that is, whether the walking and Y-balance were displayed in the frontal or side view, the perceptual performance was equivalent. Thus, both optical information and dynamical constraints shape movement patterns in ways that are perceptible through the kinematic variations.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"6"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12786393/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sophie Skriabine, Maxwell Shinn, Samuel Picard, Kenneth D Harris, Matteo Carandini
Studies of the early visual system often require characterizing the visual preferences of large populations of neurons. This task typically requires multiple stimuli such as sparse noise and drifting gratings, each of which probes only a limited set of visual features. Here, we introduce a new dynamic stimulus with sharp-edged stripes that we term Zebra noise and a new analysis model based on wavelets, and we show that in combination they are highly efficient for mapping multiple aspects of the visual preferences of thousands of neurons. We used two-photon calcium imaging to record the activity of neurons in the mouse visual cortex. Zebra noise elicited strong responses that were more repeatable than those evoked by traditional stimuli. The wavelet-based model captured the repeatable aspects of the resulting responses, providing measures of neuronal tuning for multiple stimulus features: position, orientation, size, spatial frequency, drift rate, and direction. The method proved efficient, requiring only 5 minutes of stimulus (repeated three times) to characterize the tuning of thousands of neurons across visual areas. In combination, the Zebra noise stimulus and the wavelet-based model provide a broadly applicable toolkit for the rapid characterization of visual representations, promising to accelerate future studies of visual function.
{"title":"Mapping the visual cortex with Zebra noise and wavelets.","authors":"Sophie Skriabine, Maxwell Shinn, Samuel Picard, Kenneth D Harris, Matteo Carandini","doi":"10.1167/jov.26.1.1","DOIUrl":"10.1167/jov.26.1.1","url":null,"abstract":"<p><p>Studies of the early visual system often require characterizing the visual preferences of large populations of neurons. This task typically requires multiple stimuli such as sparse noise and drifting gratings, each of which probes only a limited set of visual features. Here, we introduce a new dynamic stimulus with sharp-edged stripes that we term Zebra noise and a new analysis model based on wavelets, and we show that in combination they are highly efficient for mapping multiple aspects of the visual preferences of thousands of neurons. We used two-photon calcium imaging to record the activity of neurons in the mouse visual cortex. Zebra noise elicited strong responses that were more repeatable than those evoked by traditional stimuli. The wavelet-based model captured the repeatable aspects of the resulting responses, providing measures of neuronal tuning for multiple stimulus features: position, orientation, size, spatial frequency, drift rate, and direction. The method proved efficient, requiring only 5 minutes of stimulus (repeated three times) to characterize the tuning of thousands of neurons across visual areas. In combination, the Zebra noise stimulus and the wavelet-based model provide a broadly applicable toolkit for the rapid characterization of visual representations, promising to accelerate future studies of visual function.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"1"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12782197/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christopher DiMattina, Eden E Sterk, Madelyn G Arena, Francesca E Monteferrante
To correctly parse the visual scene, one must detect edges and determine their underlying cause. Previous work has demonstrated that neural networks trained to differentiate shadow and occlusion edges exhibit sensitivity to boundary sharpness and texture differences. Here, we investigate whether human observers are also sensitive to these cues using synthetic edge stimuli formed by quilting together two natural textures, allowing us to parametrically manipulate boundary sharpness, texture modulation, and luminance modulation. Observers classified five sets of synthetic boundary images as shadows, occlusions, or textures generated by varying these three cues in all possible combinations. The three cues interacted strongly in determining categorization. For sharp edges, increasing luminance modulation made it less likely that a patch would be classified as a texture and more likely that it would be classified as an occlusion, whereas for blurred edges, increasing luminance modulation made it more likely that a patch would be classified as a shadow. Boundary sharpness had a profound effect: in the presence of luminance modulation, increasing sharpness decreased the likelihood of classification as a shadow and increased the likelihood of classification as an occlusion. Texture modulation had little effect, except for sharp boundaries with zero luminance modulation. Results were consistent across all five stimulus sets, and human performance was well explained by a multinomial logistic regression model. Our results demonstrate that human observers use the same cues as previous machine learning models when detecting and determining the cause of an edge.
{"title":"Local cues enable classification of image patches as surfaces, object boundaries, or illumination changes.","authors":"Christopher DiMattina, Eden E Sterk, Madelyn G Arena, Francesca E Monteferrante","doi":"10.1167/jov.26.1.9","DOIUrl":"10.1167/jov.26.1.9","url":null,"abstract":"<p><p>To correctly parse the visual scene, one must detect edges and determine their underlying cause. Previous work has demonstrated that neural networks trained to differentiate shadow and occlusion edges exhibit sensitivity to boundary sharpness and texture differences. Here, we investigate whether human observers are also sensitive to these cues using synthetic edge stimuli formed by quilting together two natural textures, allowing us to parametrically manipulate boundary sharpness, texture modulation, and luminance modulation. Observers classified five sets of synthetic boundary images as shadows, occlusions, or textures generated by varying these three cues in all possible combinations. These three cues exhibited strong interactions to determine categorization. For sharp edges, increasing luminance modulation made it less likely the patch would be classified as a texture and more likely it would be classified as an occlusion, whereas for blurred edges, increasing luminance modulation made it more likely the patch would be classified as a shadow. Boundary sharpness had a profound effect, so that in the presence of luminance modulation, increasing sharpness decreased the likelihood of classification as a shadow and increased the likelihood of classification as an occlusion. Texture modulation had little effect, except for a sharp boundary with zero luminance modulation. Results were consistent across all five stimulus sets, and human performance was well explained by a multinomial logistic regression model. Our results demonstrate that human observers make use of the same cues as previous machine learning models when detecting and determining the cause of an edge.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"9"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12814982/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145991720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For goal-directed movements like throwing darts or shooting a soccer penalty, the optimal location to aim at depends on an individual's endpoint variability. Currently, there is no consensus on whether people can optimize their movement planning based on information about their motor variability. Here, we tested the role of different types of feedback for movement planning under risk. We measured saccades toward a bar that consisted of a reward and a penalty region. Participants received either error-based feedback about their endpoint or reinforcement feedback about the resulting reward. We additionally manipulated the feedback schedule to assess the role of feedback frequency and whether feedback focuses on individual trials or a group of trials. Participants with trial-by-trial reinforcement feedback performed best: they were less loss-averse, had the smallest endpoint deviation from optimality, and showed more consistent performance at the group level. This combination of reduced between-participant variability and improved alignment with optimality suggests that reinforcement feedback about a single movement is particularly effective for optimizing movement planning under risk.
{"title":"The role of feedback for sensorimotor decisions under risk.","authors":"Christian Wolf, Artem V Belopolsky, Markus Lappe","doi":"10.1167/jov.26.1.13","DOIUrl":"10.1167/jov.26.1.13","url":null,"abstract":"<p><p>For goal-directed movements like throwing darts or shooting a soccer penalty, the optimal location to aim depends on the endpoint variability of an individual. Currently, there is no consensus on whether people can optimize their movement planning based on information about their motor variability. Here, we tested the role of different types of feedback for movement planning under risk. We measured saccades toward a bar that consisted of a reward and a penalty region. Participants either received error-based feedback about their endpoint or reinforcement feedback about the resulting reward. We additionally manipulated the feedback schedule to assess the role of feedback frequency and whether feedback focusses on individual trials or a group of trials. Participants with trial-by-trial reinforcement feedback performed best. They were less loss-aversive, had the least endpoint deviation from optimality, and showed more consistent performance at the group level. This combination of reduced between-participant variability and the improved alignment with optimality suggests that reinforcement feedback about a single movement is particularly effective to optimize movement planning under risk.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"13"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12849826/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146020486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christopher J Whyte, Hugh R Wilson, James M Shine, David Alais
Visual rivalry paradigms provide a powerful tool for probing the mechanisms of visual awareness and perceptual suppression. Although the dynamics and determinants of perceptual switches in visual rivalry have been extensively studied and modeled, recent advances in experimental design, particularly those that quantify the depth and variability of perceptual suppression, have outpaced the development of computational models. Here we extend an existing dynamical model of binocular rivalry to encompass two novel experimental paradigms: a threshold detection variant of binocular rivalry, and tracking continuous flash suppression. Together, these tasks provide complementary measures of the dynamics and magnitude of perceptual suppression. Through numerical simulation, we demonstrate that a single mechanism, competitive (hysteretic) inhibition between slowly adapting monocular populations, is sufficient to account for the suppression depth findings across both paradigms. This unified model offers a foundation for the development of a quantitative theory of perceptual suppression in visual rivalry.
{"title":"A minimal physiological model of perceptual suppression and breakthrough in visual rivalry.","authors":"Christopher J Whyte, Hugh R Wilson, James M Shine, David Alais","doi":"10.1167/jov.26.1.7","DOIUrl":"10.1167/jov.26.1.7","url":null,"abstract":"<p><p>Visual rivalry paradigms provide a powerful tool for probing the mechanisms of visual awareness and perceptual suppression. Although the dynamics and determinants of perceptual switches in visual rivalry have been extensively studied and modeled, recent advances in experimental design-particularly those that quantify the depth and variability of perceptual suppression-have outpaced the development of computational models. Here we extend an existing dynamical model of binocular rivalry to encompass two novel experimental paradigms: a threshold detection variant of binocular rivalry, and tracking continuous flash suppression. Together, these tasks provide complementary measures of the dynamics and magnitude of perceptual suppression. Through numerical simulation, we demonstrate that a single mechanism, competitive (hysteretic) inhibition between slowly adapting monocular populations, is sufficient to account for the suppression depth findings across both paradigms. This unified model offers a foundation for the development of a quantitative theory of perceptual suppression in visual rivalry.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"7"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12805966/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Individuals with cerebral visual impairment (CVI) often struggle with visuospatial processing, particularly in highly cluttered or complex environments. These challenges are commonly assessed with visual search tasks, using global measures such as reaction time (RT), accuracy, and search area; accordingly, impaired search performance in CVI manifests as longer RTs, lower accuracy, and broader search areas. However, these measures capture the outcome of the impaired search process rather than elucidating its underlying mechanism. In the present study, we used eye-tracking data to compute detailed measures of fixation count and duration, aiming to characterize gaze pattern sequences and to determine whether prolonged RTs in CVI stem from slower visual scanning or from increased fixation counts. Our reanalysis of two previously published datasets reveals that longer RTs in CVI arise from elevated fixation counts, specifically on distractors, rather than from slower visual scanning. Our findings indicate recurrent disruptions in maintaining gaze on the target, likely reflecting difficulties in sustaining attention on the target, suppressing distractors, and preventing inhibition of return. Together, these findings highlight an inefficient search pattern that is biased toward distractors rather than focused on targets. By revealing these underlying mechanisms, gaze-based measures offer a deeper understanding of visuospatial processing deficits in CVI.
{"title":"Uncovering atypical gaze patterns in cerebral visual impairment: New insights from an exploratory gaze-based analysis.","authors":"Nilsu Saglam, Lotfi B Merabet, Zahide Pamir","doi":"10.1167/jov.26.1.5","DOIUrl":"10.1167/jov.26.1.5","url":null,"abstract":"<p><p>Individuals with cerebral visual impairment (CVI) often struggle with visuospatial processing, particularly in highly cluttered or complex environments. These challenges are commonly assessed through visual search tasks, using global measures such as reaction time (RT), accuracy, and search area. Accordingly, impaired search performance in CVI manifests as longer RTs, lower accuracy, and broader search areas. However, rather than elucidating the underlying mechanism of the impaired search process, these measures decode its outcome. In the present study, we utilized eye-tracking data to compute detailed measures of fixation count and duration, aiming to characterize gaze pattern sequences and determine whether prolonged RTs in CVI stem from slower visual scanning or increased fixation counts. Our reanalysis of two previously published datasets reveals that longer RTs in CVI arise from elevated fixation counts, specifically on distractors, rather than from slower visual scanning. Our findings indicate recurrent disruptions in maintaining gaze on the target, likely reflecting difficulties in sustaining attention on the target, suppressing distractors, and preventing inhibition of return. Together, these findings highlight an inefficient search pattern that is more biased toward distractors than focused on targets. By revealing these underlying mechanisms, gaze-based measures offer a deeper understanding of visuospatial processing deficits in CVI.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"5"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12786391/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145986004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}