Perceptual resolution of ambiguity: A divisive normalization account for both interocular color grouping and difference enhancement
Jaelyn R Peiso, Stephanie E Palmer, Steven K Shevell
Our visual system usually provides a unique and functional representation of the external world. At times, however, there is more than one compelling interpretation of the same retinal stimulus; in this case, neural populations compete for perceptual dominance to resolve ambiguity. Spatial and temporal context can guide this perceptual experience. Recent evidence shows that ambiguous retinal stimuli are sometimes resolved by enhancing either similarities or differences among multiple ambiguous stimuli. Although rivalry has traditionally been attributed to differences in stimulus strength, color vision introduces nonlinearities that are difficult to reconcile with luminance-based models. Here, it is shown that a tuned, divisive normalization framework can explain how perceptual selection can flexibly yield either similarity-based "grouped" percepts or difference-enhanced percepts during binocular rivalry. Empirical and simulated results show that divisive normalization can account for perceptual representations of either similarity enhancement (so-called grouping) or difference enhancement, offering a unified framework for opposite perceptual outcomes.
{"title":"Perceptual resolution of ambiguity: A divisive normalization account for both interocular color grouping and difference enhancement.","authors":"Jaelyn R Peiso, Stephanie E Palmer, Steven K Shevell","doi":"10.1167/jov.26.1.8","DOIUrl":"https://doi.org/10.1167/jov.26.1.8","url":null,"abstract":"<p><p>Our visual system usually provides a unique and functional representation of the external world. At times, however, there is more than one compelling interpretation of the same retinal stimulus; in this case, neural populations compete for perceptual dominance to resolve ambiguity. Spatial and temporal context can guide this perceptual experience. Recent evidence shows that ambiguous retinal stimuli are sometimes resolved by enhancing either similarities or differences among multiple ambiguous stimuli. Although rivalry has traditionally been attributed to differences in stimulus strength, color vision introduces nonlinearities that are difficult to reconcile with luminance-based models. Here, it is shown that a tuned, divisive normalization framework can explain how perceptual selection can flexibly yield either similarity-based \"grouped\" percepts or difference-enhanced percepts during binocular rivalry. Empirical and simulated results show that divisive normalization can account for perceptual representations of either similarity enhancement (so-called grouping) or difference enhancement, offering a unified framework for opposite perceptual outcomes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"8"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145960620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rapid ensemble encoding of average scene features
Vignash Tharmaratnam, Jason Haberman, Jonathan S Cant
Visual ensemble perception involves the rapid global extraction of summary statistics (e.g., average features) from groups of items, without requiring single-item recognition and working memory resources. One theory that helps explain global visual perception is the principle of feature diagnosticity. Under this principle, bottom-up visual features that are informative for the task at hand and consistent with one's top-down expectations are preferentially processed. Past literature has studied ensemble perception using groups of objects and faces and has shown that both low-level (e.g., average color, orientation) and high-level visual statistics (e.g., average crowd animacy, object economic value) can be efficiently extracted. However, no study has explored whether summary statistics can be extracted from stimuli higher in visual complexity, necessitating global, gist-based processing for perception. To investigate this, across five experiments we had participants extract various summary statistical features from ensembles of real-world scenes. We found that average scene content (i.e., perceived naturalness or manufacturedness of scene ensembles) and average spatial boundary (i.e., perceived openness or closedness of scene ensembles) could be rapidly extracted within 125 ms, without reliance on working memory. Interestingly, when we rotated the scenes, average scene orientation could not be extracted, likely because the perception of diagnostic edge information (i.e., cardinal edges for typically encountered upright scenes) was disrupted when rotating the scenes. These results suggest that ensemble perception is a flexible resource that can be used to extract summary statistical information across multiple stimulus types but also has limitations based on the principle of feature diagnosticity in global visual perception.
{"title":"Rapid ensemble encoding of average scene features.","authors":"Vignash Tharmaratnam, Jason Haberman, Jonathan S Cant","doi":"10.1167/jov.26.1.3","DOIUrl":"https://doi.org/10.1167/jov.26.1.3","url":null,"abstract":"<p><p>Visual ensemble perception involves the rapid global extraction of summary statistics (e.g., average features) from groups of items, without requiring single-item recognition and working memory resources. One theory that helps explain global visual perception is the principle of feature diagnosticity. This is when informative bottom-up visual features are preferentially processed to complete the task at hand by being consistent with one's top-down expectations. Past literature has studied ensemble perception using groups of objects and faces and has shown that both low-level (e.g., average color, orientation) and high-level visual statistics (e.g., average crowd animacy, object economic value) can be efficiently extracted. However, no study has explored whether summary statistics can be extracted from stimuli higher in visual complexity, necessitating global, gist-based processing for perception. To investigate this, across five experiments we had participants extract various summary statistical features from ensembles of real-world scenes. We found that average scene content (i.e., perceived naturalness or manufacturedness of scene ensembles) and average spatial boundary (i.e., perceived openness or closedness of scene ensembles) could be rapidly extracted within 125 ms, without reliance on working memory. Interestingly, when we rotated the scenes, average scene orientation could not be extracted, likely because the perception of diagnostic edge information (i.e., cardinal edges for typically encountered upright scenes) was disrupted when rotating the scenes. These results suggest that ensemble perception is a flexible resource that can be used to extract summary statistical information across multiple stimulus types but also has limitations based on the principle of feature diagnosticity in global visual perception.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"3"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What affects the movement can be seen from the movement: Effects of optical information and dynamical constraints on movement production and perception
Huiyuan Zhang, Feifei Jiang, Yijing Mao, Xian Yang, Jing Samantha Pan
This study investigates how optical information and dynamical constraints influence movement production and perception. In Experiment 1, 16 volunteers walked or performed a Y-balance movement with and without sight on sturdy or foam-padded floors. The optical information and force environment affected the participants' kinematics, such as stride duration, stride length, stride width, gait speed, joint ranges of motion for walking, total movement duration, and joint ranges of motion for Y-balance. Naïve observers then watched these movements on a point-light display and distinguished movements executed under different optical information (Experiment 2) and force environment (Experiment 3) conditions. They were able to pick out movements performed without sight, especially for those performed on a padded floor; they were also able to discriminate movements performed on different supporting surfaces, especially when the actors were blindfolded. Thus, discriminating movement conditions from point-light displays was possible, and was better with higher kinematic variability. Logistic regressions showed that discriminating movements relied on the movement kinematics that varied the most between conditions. This information was valid and useful regardless of viewing perspective; that is, whether the walking and Y-balance were displayed in the frontal or side view, the perceptual performance was equivalent. Thus, both optical information and dynamical constraints shape movement patterns in ways that are perceptible through the kinematic variations.
Journal of Vision 26(1):6 (2026). https://doi.org/10.1167/jov.26.1.6
Mapping the visual cortex with Zebra noise and wavelets
Sophie Skriabine, Maxwell Shinn, Samuel Picard, Kenneth D Harris, Matteo Carandini
Studies of the early visual system often require characterizing the visual preferences of large populations of neurons. This task typically requires multiple stimuli such as sparse noise and drifting gratings, each of which probes only a limited set of visual features. Here, we introduce a new dynamic stimulus with sharp-edged stripes that we term Zebra noise and a new analysis model based on wavelets, and we show that in combination they are highly efficient for mapping multiple aspects of the visual preferences of thousands of neurons. We used two-photon calcium imaging to record the activity of neurons in the mouse visual cortex. Zebra noise elicited strong responses that were more repeatable than those evoked by traditional stimuli. The wavelet-based model captured the repeatable aspects of the resulting responses, providing measures of neuronal tuning for multiple stimulus features: position, orientation, size, spatial frequency, drift rate, and direction. The method proved efficient, requiring only 5 minutes of stimulus (repeated three times) to characterize the tuning of thousands of neurons across visual areas. In combination, the Zebra noise stimulus and the wavelet-based model provide a broadly applicable toolkit for the rapid characterization of visual representations, promising to accelerate future studies of visual function.
{"title":"Mapping the visual cortex with Zebra noise and wavelets.","authors":"Sophie Skriabine, Maxwell Shinn, Samuel Picard, Kenneth D Harris, Matteo Carandini","doi":"10.1167/jov.26.1.1","DOIUrl":"https://doi.org/10.1167/jov.26.1.1","url":null,"abstract":"<p><p>Studies of the early visual system often require characterizing the visual preferences of large populations of neurons. This task typically requires multiple stimuli such as sparse noise and drifting gratings, each of which probes only a limited set of visual features. Here, we introduce a new dynamic stimulus with sharp-edged stripes that we term Zebra noise and a new analysis model based on wavelets, and we show that in combination they are highly efficient for mapping multiple aspects of the visual preferences of thousands of neurons. We used two-photon calcium imaging to record the activity of neurons in the mouse visual cortex. Zebra noise elicited strong responses that were more repeatable than those evoked by traditional stimuli. The wavelet-based model captured the repeatable aspects of the resulting responses, providing measures of neuronal tuning for multiple stimulus features: position, orientation, size, spatial frequency, drift rate, and direction. The method proved efficient, requiring only 5 minutes of stimulus (repeated three times) to characterize the tuning of thousands of neurons across visual areas. In combination, the Zebra noise stimulus and the wavelet-based model provide a broadly applicable toolkit for the rapid characterization of visual representations, promising to accelerate future studies of visual function.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"1"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local cues enable classification of image patches as surfaces, object boundaries, or illumination changes
Christopher DiMattina, Eden E Sterk, Madelyn G Arena, Francesca E Monteferrante
To correctly parse the visual scene, one must detect edges and determine their underlying cause. Previous work has demonstrated that neural networks trained to differentiate shadow and occlusion edges exhibit sensitivity to boundary sharpness and texture differences. Here, we investigate whether human observers are also sensitive to these cues using synthetic edge stimuli formed by quilting together two natural textures, allowing us to parametrically manipulate boundary sharpness, texture modulation, and luminance modulation. Observers classified five sets of synthetic boundary images as shadows, occlusions, or textures generated by varying these three cues in all possible combinations. These three cues exhibited strong interactions to determine categorization. For sharp edges, increasing luminance modulation made it less likely the patch would be classified as a texture and more likely it would be classified as an occlusion, whereas for blurred edges, increasing luminance modulation made it more likely the patch would be classified as a shadow. Boundary sharpness had a profound effect, so that in the presence of luminance modulation, increasing sharpness decreased the likelihood of classification as a shadow and increased the likelihood of classification as an occlusion. Texture modulation had little effect, except for a sharp boundary with zero luminance modulation. Results were consistent across all five stimulus sets, and human performance was well explained by a multinomial logistic regression model. Our results demonstrate that human observers make use of the same cues as previous machine learning models when detecting and determining the cause of an edge.
{"title":"Local cues enable classification of image patches as surfaces, object boundaries, or illumination changes.","authors":"Christopher DiMattina, Eden E Sterk, Madelyn G Arena, Francesca E Monteferrante","doi":"10.1167/jov.26.1.9","DOIUrl":"https://doi.org/10.1167/jov.26.1.9","url":null,"abstract":"<p><p>To correctly parse the visual scene, one must detect edges and determine their underlying cause. Previous work has demonstrated that neural networks trained to differentiate shadow and occlusion edges exhibit sensitivity to boundary sharpness and texture differences. Here, we investigate whether human observers are also sensitive to these cues using synthetic edge stimuli formed by quilting together two natural textures, allowing us to parametrically manipulate boundary sharpness, texture modulation, and luminance modulation. Observers classified five sets of synthetic boundary images as shadows, occlusions, or textures generated by varying these three cues in all possible combinations. These three cues exhibited strong interactions to determine categorization. For sharp edges, increasing luminance modulation made it less likely the patch would be classified as a texture and more likely it would be classified as an occlusion, whereas for blurred edges, increasing luminance modulation made it more likely the patch would be classified as a shadow. Boundary sharpness had a profound effect, so that in the presence of luminance modulation, increasing sharpness decreased the likelihood of classification as a shadow and increased the likelihood of classification as an occlusion. Texture modulation had little effect, except for a sharp boundary with zero luminance modulation. Results were consistent across all five stimulus sets, and human performance was well explained by a multinomial logistic regression model. Our results demonstrate that human observers make use of the same cues as previous machine learning models when detecting and determining the cause of an edge.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"9"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145991720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A minimal physiological model of perceptual suppression and breakthrough in visual rivalry
Christopher J Whyte, Hugh R Wilson, James M Shine, David Alais
Visual rivalry paradigms provide a powerful tool for probing the mechanisms of visual awareness and perceptual suppression. Although the dynamics and determinants of perceptual switches in visual rivalry have been extensively studied and modeled, recent advances in experimental design, particularly those that quantify the depth and variability of perceptual suppression, have outpaced the development of computational models. Here we extend an existing dynamical model of binocular rivalry to encompass two novel experimental paradigms: a threshold detection variant of binocular rivalry, and tracking continuous flash suppression. Together, these tasks provide complementary measures of the dynamics and magnitude of perceptual suppression. Through numerical simulation, we demonstrate that a single mechanism, competitive (hysteretic) inhibition between slowly adapting monocular populations, is sufficient to account for the suppression depth findings across both paradigms. This unified model offers a foundation for the development of a quantitative theory of perceptual suppression in visual rivalry.
{"title":"A minimal physiological model of perceptual suppression and breakthrough in visual rivalry.","authors":"Christopher J Whyte, Hugh R Wilson, James M Shine, David Alais","doi":"10.1167/jov.26.1.7","DOIUrl":"https://doi.org/10.1167/jov.26.1.7","url":null,"abstract":"<p><p>Visual rivalry paradigms provide a powerful tool for probing the mechanisms of visual awareness and perceptual suppression. Although the dynamics and determinants of perceptual switches in visual rivalry have been extensively studied and modeled, recent advances in experimental design-particularly those that quantify the depth and variability of perceptual suppression-have outpaced the development of computational models. Here we extend an existing dynamical model of binocular rivalry to encompass two novel experimental paradigms: a threshold detection variant of binocular rivalry, and tracking continuous flash suppression. Together, these tasks provide complementary measures of the dynamics and magnitude of perceptual suppression. Through numerical simulation, we demonstrate that a single mechanism, competitive (hysteretic) inhibition between slowly adapting monocular populations, is sufficient to account for the suppression depth findings across both paradigms. This unified model offers a foundation for the development of a quantitative theory of perceptual suppression in visual rivalry.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"7"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncovering atypical gaze patterns in cerebral visual impairment: New insights from an exploratory gaze-based analysis
Nilsu Saglam, Lotfi B Merabet, Zahide Pamir
Individuals with cerebral visual impairment (CVI) often struggle with visuospatial processing, particularly in highly cluttered or complex environments. These challenges are commonly assessed through visual search tasks, using global measures such as reaction time (RT), accuracy, and search area. Accordingly, impaired search performance in CVI manifests as longer RTs, lower accuracy, and broader search areas. However, rather than elucidating the underlying mechanism of the impaired search process, these measures capture only its outcome. In the present study, we utilized eye-tracking data to compute detailed measures of fixation count and duration, aiming to characterize gaze pattern sequences and determine whether prolonged RTs in CVI stem from slower visual scanning or increased fixation counts. Our reanalysis of two previously published datasets reveals that longer RTs in CVI arise from elevated fixation counts, specifically on distractors, rather than from slower visual scanning. Our findings indicate recurrent disruptions in maintaining gaze on the target, likely reflecting difficulties in sustaining attention on the target, suppressing distractors, and preventing inhibition of return. Together, these findings highlight an inefficient search pattern that is more biased toward distractors than focused on targets. By revealing these underlying mechanisms, gaze-based measures offer a deeper understanding of visuospatial processing deficits in CVI.
Journal of Vision 26(1):5 (2026). https://doi.org/10.1167/jov.26.1.5
Mask contrast and size do not alter suppression depth in the tracking continuous flash suppression paradigm
Jacob Coorey, Matthew Davidson, David Alais
Continuous flash suppression (CFS) is a form of interocular conflict in which a dynamic high-contrast mask presented to one eye prolongs suppression of a target presented to the other eye. A variant of CFS known as tracking continuous flash suppression (tCFS) was developed, allowing the depth of interocular suppression to be measured. Although previous research has measured how the duration of suppression may be modulated by the contrast and size of the masking stimulus, no study has assessed how mask features impact suppression depth. In the first experiment, we manipulated mask contrast to measure the consequent impact on suppression depth as measured by the tCFS procedure. We observed that high mask contrast increased the threshold required for a target to break into awareness. Critically, the decrease in contrast required to re-suppress each target was proportionately the same across all conditions, so that suppression depth (the ratio of the two thresholds) remained constant. In the second experiment, we manipulated the size of the masking stimulus and found no change in breakthrough/suppression thresholds or suppression depth (i.e., the difference between the thresholds when using log contrast). These findings clarify that, although changes in mask contrast may alter the threshold to enter awareness, there is no overall change in suppression depth because the changes in breakthrough threshold are matched by proportionately equivalent changes in suppression threshold. This result matches findings obtained with binocular rivalry showing that suppression depth is constant despite changes in stimulus contrast. Differing levels of mask contrast and size, therefore, can be used by researchers in CFS without altering the strength of suppression, consistent with the perspective that interocular suppression operates in small local spatial zones determined by receptive field size in the primary visual cortex.
Journal of Vision 26(1):10 (2026). https://doi.org/10.1167/jov.26.1.10
Low confidence for perceptual completion of partially occluded objects
Cemre Baykan, Pascal Mamassian, Alexander C Schütz
Pervasive gaps in sensory information are completed in perception. Interestingly, humans are unaware of that perceptual completion in cases of proximal gaps, which are caused by properties of their own sensory system, and report high confidence for the inferred information in those gaps. Here, we investigated whether such overconfidence is also observed in perceptual completion of visual information in distal gaps (i.e., those caused by the properties of the stimulus). In three experiments, we asked participants to perform a perceptual (type 1) task and report their confidence (type 2 task) using stimuli that were either intact (full stimulus), with a partial cutout (stimulus with gap), partially occluded (amodal completion), or induced (modal completion). We examined whether participants report high confidence for amodal and modal completion in comparison to a full stimulus or a stimulus with a gap. Over the three experiments, participants had the highest confidence for full stimuli, whereas amodal and modal completion led to confidence comparable to that for stimuli with a gap. These findings demonstrate that there was low confidence for stimuli whose distal gaps were perceptually filled in. In combination with previous research, our results suggest that the visibility of gaps in information influences confidence judgments.
{"title":"Low confidence for perceptual completion of partially occluded objects.","authors":"Cemre Baykan, Pascal Mamassian, Alexander C Schütz","doi":"10.1167/jov.26.1.4","DOIUrl":"https://doi.org/10.1167/jov.26.1.4","url":null,"abstract":"<p><p>Pervasive gaps in sensory information are completed in perception. Interestingly, humans are unaware of that perceptual completion in cases of proximal gaps, which are caused by properties of their own sensory system, and report high confidence for the inferred information in those gaps. Here, we investigated whether such overconfidence is also observed in perceptual completion of visual information in distal gaps (i.e., those caused by the properties of the stimulus). In three experiments, we asked participants to perform a perceptual (type 1) task and report their confidence (type 2 task) using stimuli that were either intact (full stimulus), with a partial cutout (stimulus with gap), partially occluded (amodal completion) or induced (modal completion). We examined whether participants report high confidence for amodal and modal completion in comparison to a full stimulus or stimulus with gap. Over three experiments, participants had the highest confidence for full stimuli, whereas amodal and modal completion led to comparable confidence as stimuli with gap. These findings demonstrate that there was low confidence for stimuli whose distal gaps are perceptually filled in. In combination with previous research, our results suggest that visibility of the gaps in information influences confidence judgments.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"4"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A discontinuity in motion perception during fixational drift
Josephine C D'Angelo, Pavan Tiruveedhula, Raymond J Weber, David W Arathorn, Jorge Otero-Millan, Austin Roorda
The human visual system is tasked with perceiving stable and moving objects despite ever-present eye movements. Normally, our visual system performs this task exceptionally well; indeed, under conditions with frames of reference, our ability to detect relative motion exceeds the sampling limits of foveal cones. However, during fixational drift, if an image is programmed to move in a direction consistent with retinal slip, little to no motion is perceived, even if this motion is amplified. We asked: Would a stimulus moving in a direction consistent with retinal slip, but with a smaller magnitude across the retina, also appear relatively stable? We used an adaptive optics scanning light ophthalmoscope to deliver stimuli that moved contingent on retinal motion and measured subjects' perceived motion under conditions with world-fixed background content. We also tested under conditions with background content closer to and farther from the stimuli. We found a sharp discontinuity in motion perception. Stimuli moving in a direction consistent with retinal slip, no matter how small, appear to have relatively little to no motion, whereas stimuli moving in the same direction as eye motion appear to be moving. Displacing background content to greater than 4° from the stimuli diminishes the effects of this phenomenon.
{"title":"A discontinuity in motion perception during fixational drift.","authors":"Josephine C D'Angelo, Pavan Tiruveedhula, Raymond J Weber, David W Arathorn, Jorge Otero-Millan, Austin Roorda","doi":"10.1167/jov.26.1.2","DOIUrl":"https://doi.org/10.1167/jov.26.1.2","url":null,"abstract":"<p><p>The human visual system is tasked with perceiving stable and moving objects despite ever-present eye movements. Normally, our visual system performs this task exceptionally well; indeed, under conditions with frames of reference, our ability to detect relative motion exceeds the sampling limits of foveal cones. However, during fixational drift, if an image is programmed to move in a direction consistent with retinal slip, little to no motion is perceived, even if this motion is amplified. We asked: Would a stimulus moving in a direction consistent with retinal slip, but with a smaller magnitude across the retina, also appear relatively stable? We used an adaptive optics scanning light ophthalmoscope to deliver stimuli that moved contingent to retinal motion and measured subjects' perceived motion under conditions with world-fixed background content. We also tested under conditions with background content closer to and farther from the stimuli. We found a sharp discontinuity in motion perception. Stimuli moving in a direction consistent with retinal slip, no matter how small, appear to have relatively little to no motion, whereas stimuli moving in the same direction as eye motion appear to be moving. Displacing background content to greater than 4° from the stimuli diminishes the effects of this phenomenon.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"2"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}