William Swann, Matthew Davidson, Gabriel Clouston, David Alais
Presenting a different visual stimulus to each eye induces a dynamic perceptual state in which only one image is perceived at a time while the other is suppressed from awareness. This phenomenon, known as interocular suppression, has allowed researchers to probe the dynamics of visual awareness and unconscious processing in the visual system. A key result is that different categories of visual stimuli may not be suppressed equally, but it remains debated whether low- or high-level visual features modulate interocular suppression. Here we quantify and compare the strength of suppression for various motion stimuli relative to biological motion stimuli that are rich in high-level semantic information. We employ the tracking continuous flash suppression method, which recently demonstrated uniform suppression depth for a variety of static images varying in semantic content. Across three experiments, we find that suppression depth varies not with the strength of the suppressor alone but with low-level visual motion features, in contrast to the uniform suppression depth previously shown for static images. Notably, disrupting high-level semantic information by inverting or rotating the biological motion did not alter suppression depth. Ultimately, our data show that suppression depth depends on local motion information, further supporting the low-level, local-precedence account of interocular suppression.
{"title":"Local motion governs visibility and suppression of biological motion in continuous flash suppression.","authors":"William Swann, Matthew Davidson, Gabriel Clouston, David Alais","doi":"10.1167/jov.25.12.25","DOIUrl":"10.1167/jov.25.12.25","url":null,"abstract":"<p><p>Presenting unique visual stimuli to each eye induces a dynamic perceptual state where only one image is perceived at a time, and the other is suppressed from awareness. This phenomenon, known as interocular suppression, has allowed researchers to probe the dynamics of visual awareness and unconscious processing in the visual system. A key result is that different categories of visual stimuli may not be suppressed equally, but there is still a wide debate as to whether low- or high-level visual features modulate interocular suppression. Here we quantify and compare the strength of suppression for various motion stimuli in comparison to biological motion stimuli that are rich in high-level semantic information. We employ the tracking continuous flash suppression method, which recently demonstrated uniform suppression depth for a variety of static images that varied in semantic content. The accumulative findings of our three experiments outline that suppression depth is varied not by the strength of the suppressor alone but with different low-level visual motion features, in contrast to the uniform suppression depth previously shown for static images. Notably, disrupting high-level semantic information via the inversion or rotation of biological motion did not alter suppression depth. Ultimately, our data support the dependency of suppression depth on local motion information, further supporting the low-level local-precedence hypothesis of interocular suppression.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 12","pages":"25"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12582193/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145379669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Martina Morea, Michael H Herzog, Gregory Francis, Mauro Manassi
Vision is often understood as a hierarchical, feedforward process, where visual processing proceeds from low-level features to high-level representations. Within tens of milliseconds, the fundamental features of the percept are established. Traditional models use this framework to explain visual crowding, where nearby elements impair target perception with minimal influence from stimulus duration. Here, we show that, at least for more complex displays, crowding involves highly dynamic processes. We determined vernier offset discrimination thresholds for different flanker configurations. In Experiment 1, for a 160-ms stimulus duration, crowding was lower for flanking Cubes/Rectangles compared to Lines, pointing toward underlying grouping processes. However, strong crowding occurred in all conditions at 20 ms, showing that grouping requires a minimum stimulus duration to occur. In Experiment 2, the crowded vernier (20 ms) was preceded by a 20-ms Cubes display. This brief preview led to uncrowding of the subsequently presented flanked vernier, but only for flankers that ungroup for longer durations (i.e., Cubes). This uncrowding effect occurred for time spans up to 1 s (Experiment 3) but could be interrupted by elements presented between the preview and the flanked vernier (Experiment 4). Our findings are well predicted by the LAMINART model, which employs recurrent segmentation processes unfolding over time to separate objects into distinct representation layers. Taken together, our novel preview effect highlights the importance of spatiotemporal grouping in crowding. In contrast to classic feedforward models, we propose that crowding is a dynamic process where multiple interpretations are modulated and gated by grouping mechanisms evolving over time.
{"title":"Dynamics of vision: Grouping takes longer than crowding.","authors":"Martina Morea, Michael H Herzog, Gregory Francis, Mauro Manassi","doi":"10.1167/jov.25.12.16","DOIUrl":"10.1167/jov.25.12.16","url":null,"abstract":"<p><p>Vision is often understood as a hierarchical, feedforward process, where visual processing proceeds from low-level features to high-level representations. Within tens of milliseconds, the fundamental features of the percept are established. Traditional models use this framework to explain visual crowding, where nearby elements impair target perception with minimal influence from stimulus duration. Here, we show that, at least for more complex displays, crowding involves highly dynamic processes. We determined vernier offset discrimination thresholds for different flanker configurations. In Experiment 1, for a 160-ms stimulus duration, crowding was lower for flanking Cubes/Rectangles compared to Lines, pointing toward underlying grouping processes. However, strong crowding occurred in all conditions at 20 ms, showing that grouping requires a minimum stimulus duration to occur. In Experiment 2, the crowded vernier (20 ms) was preceded by a 20-ms Cubes display. This brief preview led to uncrowding of the subsequently presented flanked vernier, but only for flankers that ungroup for longer durations (i.e., Cubes). This uncrowding effect occurred for time spans up to 1 s (Experiment 3) but could be interrupted by elements presented between the preview and the flanked vernier (Experiment 4). Our findings are well predicted by the LAMINART model, which employs recurrent segmentation processes unfolding over time to separate objects into distinct representation layers. Taken together, our novel preview effect highlights the importance of spatiotemporal grouping in crowding. In contrast to classic feedforward models, we propose that crowding is a dynamic process where multiple interpretations are modulated and gated by grouping mechanisms evolving over time.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 12","pages":"16"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12517361/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145253412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conrad Alting, Gernot Horstmann
A well-known finding from research on gaze perception in triadic gaze tasks is the overestimation of horizontal gaze directions: in general, a looker model's gaze appears to deviate more from the straight line of sight than is objectively the case. Although there is by now substantial evidence for what Anstis et al. (1969) termed the overestimation effect, results vary regarding the absolute overestimation factor. Starting from the occlusion hypothesis of Anstis et al. (1969), the present study examines the influence of horizontal iris movement range, operationalized as the sclera size index, on overestimation factors obtained for a sample of 40 looker models. The study yielded two main findings. First, horizontal iris movement range (sclera size index: M = 2.02, SD = 0.11, min = 1.79, max = 2.25) did not explain variance in the overestimation factors (M = 1.79, SD = 0.16, min = 1.49, max = 2.24) obtained separately for each looker model. Second, intraclass correlations revealed that variance in perceived gaze directions between observers was roughly 10 times larger (ICC = 0.189) than variance between looker models (ICC = 0.019). The results strongly emphasize the need for larger and more diverse observer samples and may serve as a post hoc justification for using only a few looker models, or not varying them at all, in triadic gaze judgment tasks.
{"title":"In the eye of the beholder? Gaze perception and the external morphology of the human eye.","authors":"Conrad Alting, Gernot Horstmann","doi":"10.1167/jov.25.12.24","DOIUrl":"10.1167/jov.25.12.24","url":null,"abstract":"<p><p>A well-known finding from research on gaze perception in triadic gaze tasks is the overestimation of horizontal gaze directions. In general, a looker model's gaze appears to deviate more from the straight line of sight than is objectively the case. Although there is, up to now, a substantial amount of evidence for what Anstis et al. (1969) termed the overestimation effect, results vary regarding the absolute overestimation factor. Starting from the occlusion hypothesis by Anstis et al. (1969), the present study examines the influence of horizontal iris movement range, operationalized as the sclera size index on overestimation factors acquired for a sample of 40 looker models. The study rendered two main findings. First, horizontal iris movement range (sclera size index: M = 2.02, SD = 0.11, min = 1.79, max = 2.25) proved not useful for the explanation of variance in the overestimation factors (M = 1.79, SD = 0.16, min = 1.49, max = 2.24) obtained separately for each of the looker models. Second, intraclass correlations revealed that variance in perceived gaze directions between observers was roughly 10 times larger (ICC = 0.189) than variance between looker models (ICC = 0.019). The results strongly emphasize the need for larger and more diverse observer samples and may serve as a post hoc justification for using only a few or no different looker models in triadic gaze judgment tasks.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 12","pages":"24"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12574756/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145372412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mark A Georgeson, Hiromi Sato, Ronald Chang, Frederick A A Kingdom
How are signals from the two eyes combined? We asked whether the mechanisms that limit detectability of simple binocular and dichoptic stimuli also set the limits for their identification. For example, at low contrasts, can we (a) identify monocular versus binocular stimulation and/or (b) identify stimuli that are the same in both eyes (e.g., both light discs or both dark) versus stimuli with opposite polarity (a light disc in one eye, a dark disc in the other)? For the same- versus opposite-polarity tasks, mean proportions of correct trials for detection and for identification were almost identical. This is the classic signature of separate mechanisms for the two stimuli in question. For the monocular versus binocular task, however, identification (one eye or two?) was notably worse than detection, but these very different outcomes do not demand fundamentally different explanations. We developed a model with binocular sum and difference channels and formulated the identification task in a two-dimensional decision space whose coordinates were the sum and difference channel responses. This space was ideally suited to the same- versus opposite-polarity tasks, having orthogonal response axes (90° apart) for these stimuli. But monocular discs stimulated both channels, producing greater overlap of monocular and binocular response distributions and hence greater perceptual confusion and poorer identification. When bias and uncertainty were also accounted for, the model fit to identification data was excellent. We conclude that the same binocular sum and difference channels are used in stimulus detection and in perceptually encoding the degree of difference between inputs to the two eyes.
{"title":"Detection and identification of monocular, binocular, and dichoptic stimuli are mediated by binocular sum and difference channels.","authors":"Mark A Georgeson, Hiromi Sato, Ronald Chang, Frederick A A Kingdom","doi":"10.1167/jov.25.12.22","DOIUrl":"10.1167/jov.25.12.22","url":null,"abstract":"<p><p>How are signals from the two eyes combined? We asked whether the mechanisms that limit detectability of simple binocular and dichoptic stimuli also set the limits for their identification. For example, at low contrasts, can we (a) identify monocular versus binocular stimulation and/or (b) identify stimuli that are the same in both eyes (e.g., both light discs or both dark) versus stimuli with opposite polarity (light disc in one eye, dark disc in the other). For the same- versus opposite-polarity tasks, mean proportions of correct trials for detection and for identification were almost identical. This is the classic signature of separate mechanisms for the two stimuli in question. For the monocular versus binocular task, however, identification (one eye or two?) was notably worse than detection, but these very different outcomes do not demand fundamentally different explanations. We developed a model with binocular sum and difference channels and formulated the identification task in a two-dimensional decision space whose coordinates were the sum and difference channel responses. This space was ideally suited to the same versus opposite polarity tasks, having orthogonal response axes (90° apart) for these stimuli. But monocular discs stimulated both channels, with greater overlap of monocular and binocular response distributions, hence greater perceptual confusion and poorer identification. When bias and uncertainty were also accounted for, the model fit to identification data was excellent. We conclude that the same binocular sum and difference channels are used in stimulus detection and in perceptually encoding the degree of difference between inputs to the two eyes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 12","pages":"22"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12582191/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145356566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rebecca B Esquenazi, Kimberly Meier, Michael Beyeler, Drake Wright, Geoffrey M Boynton, Ione Fine
A key limitation shared by both electronic and optogenetic sight recovery technologies is that they cause simultaneous rather than complementary firing within on- and off-center cells. Here, using "virtual patients" (sighted individuals viewing distorted input), we examine whether gamified training improves the ability to compensate for distortions in neuronal population coding. We measured perceptual learning using dichoptic input, filtered so that regions of the image that produced on-center responses in one eye produced off-center responses in the other eye. The Non-Gaming control group carried out an object discrimination task over five sessions using this filtered input. The Gaming group carried out an additional 25 hours of gamified training using a similarly filtered variant of the video game Fruit Ninja. Both groups showed improvements over time on the object discrimination task. However, there was no significant transfer of learning from the Fruit Ninja task to the object discrimination task. This lack of transfer from video game training to object recognition suggests that gamification-based rehabilitation for sight recovery technologies may have limited utility and may be most effective when targeted at learning specific visual tasks.
{"title":"Perceptual learning of prosthetic vision using video game training.","authors":"Rebecca B Esquenazi, Kimberly Meier, Michael Beyeler, Drake Wright, Geoffrey M Boynton, Ione Fine","doi":"10.1167/jov.25.12.12","DOIUrl":"10.1167/jov.25.12.12","url":null,"abstract":"<p><p>A key limitation shared by both electronic and optogenetic sight recovery technologies is that they cause simultaneous rather than complementary firing within on- and off-center cells. Here, using \"virtual patients\"-sighted individuals viewing distorted input-we examine whether gamified training improves the ability to compensate for distortions in neuronal population coding. We measured perceptual learning using dichoptic input, filtered so that regions of the image that produced on-center responses in one eye produced off-center responses in the other eye. The Non-Gaming control group carried out an object discrimination task over five sessions using this filtered input. The Gaming group carried out an additional 25 hours of gamified training using a similarly filtered variant of the video game Fruit Ninja. Both groups showed improvements over time in the object discrimination task. However, there was no significant transfer of learning from the \"Fruit Ninja\" task to the object discrimination task. The lack of transfer of learning from video game training to object recognition suggests that gamification-based rehabilitation for sight recovery technologies may have limited utility and may be most effective when targeted on learning specific visual tasks.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 12","pages":"12"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12514978/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145240446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sae Kaneko, Ichiro Kuriki, Søren K Andersen, David Henry Peterzell
We investigated how early human visual cortex processes color by analyzing individual variability in steady-state visual evoked potentials (SSVEPs). Sixteen participants viewed a flickering checkerboard whose hue swept around the isoluminant hue circle at three chromatic contrasts. The current study analyzed the individual variability in these SSVEP data to elucidate the hue-selective mechanisms in the early visual areas using a factor-analytic approach. Initial analyses of the correlations revealed that responses to nearby hues correlated highly, consistent with multiple overlapping color channels. The correlational pattern also showed consistent peaks and troughs at specific hue angles: 0° (+L-M), 30°, 120°, 180° (-L+M), 240°, and 300°. We further performed nonmetric multidimensional scaling, identifying four significant hue dimensions. Peaks and troughs of the dimension components were consistent with the hue angles revealed in the correlational pattern. Four additional hues also appeared in the last dimension: 90° (+S), 150°, 270° (-S), and 330°. The 10 (six plus four) hues suggested by these analyses may serve as the basis of early cortical color processing, including classical cone opponency and mechanisms tuned to the intermediate hues.
{"title":"Individual variability in steady-state VEP responses for hues sweeping around cardinal color axes: Clues to cortical color coding?","authors":"Sae Kaneko, Ichiro Kuriki, Søren K Andersen, David Henry Peterzell","doi":"10.1167/jov.25.12.2","DOIUrl":"10.1167/jov.25.12.2","url":null,"abstract":"<p><p>We investigated how early human visual cortex processes color by analyzing individual variability in steady-state visual evoked potentials (SSVEPs). Sixteen participants viewed a flickering checkerboard that swept around the isoluminant hue circle at three chromatic contrasts. The current study analyzed the individual variability in the SSVEP data from the study to elucidate the hue-selective mechanisms in the early visual areas using a factor-analytic approach. The initial analyses of the correlations revealed that the responses to the nearby hues correlated highly, which is consistent with multiple overlapping color channels. Also, the correlational pattern showed consistent peaks and troughs at specific hue angles: 0° (+L-M), 30°, 120°, 180° (-L+M), 240°, and 300°. We further performed nonmetric multidimensional scaling, identifying four significant hue dimensions. Peaks and troughs of the dimension components were consistent with the hue angles revealed in the correlational pattern. Additional four hues also appeared in the last dimension: 90° (+S), 150°, 270° (-S), and 330°. The 10 (six plus four) hues suggested in these analyses may subserve the basis of early cortical color processing, including classical cone opponency and the mechanisms tuned to the intermediate hues.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 12","pages":"2"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12510397/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145201881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lia E Tsotsos, Eugenie Roudaia, Allison B Sekuler, Patrick J Bennett
Motion perception is degraded in older adults. Previous studies suggest that this effect of aging may be due in part to an increase in the bandwidth of directionally selective mechanisms. We tested this idea by measuring directional masking in younger and older adults. Experiments 1-3 measured the contrast needed to discriminate the direction of coherently moving signal dots embedded in high-contrast mask dots. The distribution of mask dot directions was varied with notch filters, and directional selectivity was indexed by the slope of the threshold-versus-notch function. Thresholds were higher and directional selectivity of masking was lower in older compared to younger adults. However, age differences were eliminated when signal and mask contrast were expressed as multiples of discrimination thresholds for unmasked motion. Experiments 4-5 measured direction discrimination thresholds by varying the proportion of coherently moving dots embedded in a mask consisting of dots whose directions varied across conditions. All dots were high contrast, so age differences in masking are unlikely to be caused by differences in contrast sensitivity in these conditions. Nevertheless, Experiments 4-5 also found higher discrimination thresholds and reduced directional selectivity in older adults. The results are consistent with the hypothesis that directionally selective mechanisms become more broadly tuned during senescence.
{"title":"The effects of aging on directionally selective masking.","authors":"Lia E Tsotsos, Eugenie Roudaia, Allison B Sekuler, Patrick J Bennett","doi":"10.1167/jov.25.12.21","DOIUrl":"10.1167/jov.25.12.21","url":null,"abstract":"<p><p>Motion perception is degraded in older adults. Previous studies suggest that this effect of aging may be due in part to an increase in the bandwidth of directionally selective mechanisms. We tested this idea by measuring directional masking in younger and older adults. Experiments 1-3 measured the contrast needed to discriminate the direction of coherently moving signal dots embedded in high-contrast mask dots. The distribution of mask dot directions was varied with notch filters, and directional selectivity was indexed by the slope of the threshold-versus-notch function. Thresholds were higher and directional selectivity of masking was lower in older compared to younger adults. However, age differences were eliminated when signal and mask contrast were expressed as multiples of discrimination thresholds for unmasked motion. Experiments 4-5 measured direction discrimination thresholds by varying the proportion of coherently moving dots embedded in a mask consisting of dots whose directions varied across conditions. All dots were high contrast, so age differences in masking are unlikely to be caused by differences in contrast sensitivity in these conditions. Nevertheless, Experiments 4-5 also found higher discrimination thresholds and reduced directional selectivity in older adults. The results are consistent with the hypothesis that directionally selective mechanisms become more broadly tuned during senescence.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 12","pages":"21"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12551926/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145349625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maxwell J Greene, Vimal P Pandiyan, Ramkumar Sabesan, William S Tuten
The distribution of long (L)-, middle (M)-, and short (S)-wavelength sensitive cones in the retina determines how different frequencies of incident light are sampled across space and has been hypothesized to influence spatial and color vision. We examined how the detection and color naming of small, short-duration increment stimuli (λ = 543 or 680 nm) depend on the local spectral topography of the cone mosaic. Stimuli were corrected for optical aberrations by an adaptive optics system and targeted to locations in the parafovea where cone spectral types were known. We found that sensitivity to 680-nm light, normalized by sensitivity to 543-nm light, grew with the proportion of L cones at the stimulated locus, although intra- and intersubject variability was considerable. A similar trend was derived from a simple model of the achromatic (L+M) pathway, suggesting that small spot detection mainly relies on a non-opponent mechanism. Most stimuli were categorized as achromatic, with red and green responses becoming more common as stimulus intensity increased and as the local proportion of L and M cones became more balanced. The proximity of S cones to the stimulated region did not influence the likelihood of eliciting a chromatic percept. Our detection data confirm earlier reports that small spot psychophysics can reveal information about local cone topography, and our color naming findings suggest that chromatic sensitivity may improve when the L/M ratio approaches unity.
{"title":"Local variations in L/M ratio influence the detection and color naming of small spots.","authors":"Maxwell J Greene, Vimal P Pandiyan, Ramkumar Sabesan, William S Tuten","doi":"10.1167/jov.25.12.13","DOIUrl":"10.1167/jov.25.12.13","url":null,"abstract":"<p><p>The distribution of long (L)-, middle (M)-, and short (S)-wavelength sensitive cones in the retina determines how different frequencies of incident light are sampled across space and has been hypothesized to influence spatial and color vision. We examined how the detection and color naming of small, short-duration increment stimuli (λ = 543 or 680 nm) depend on the local spectral topography of the cone mosaic. Stimuli were corrected for optical aberrations by an adaptive optics system and targeted to locations in the parafovea where cone spectral types were known. We found that sensitivity to 680-nm light, normalized by sensitivity to 543-nm light, grew with the proportion of L cones at the stimulated locus, although intra- and intersubject variability was considerable. A similar trend was derived from a simple model of the achromatic (L+M) pathway, suggesting that small spot detection mainly relies on a non-opponent mechanism. Most stimuli were categorized as achromatic, with red and green responses becoming more common as stimulus intensity increased and as the local proportion of L and M cones became more balanced. The proximity of S cones to the stimulated region did not influence the likelihood of eliciting a chromatic percept. Our detection data confirm earlier reports that small spot psychophysics can reveal information about local cone topography, and our color naming findings suggest that chromatic sensitivity may improve when the L/M ratio approaches unity.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 12","pages":"13"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12514989/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145240426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rong Jiang, Ming Meng
Binocular integration and interocular suppression are fundamental processes underlying binocular vision, giving rise to stereopsis and binocular rivalry, respectively. To investigate how the visual system dynamically coordinates these processes to form a unified percept, we conducted four psychophysical experiments examining the temporal interactions between binocular rivalry and stereopsis. In Experiment 1a, binocular rivalry, especially with high-contrast stimuli, impaired subsequent stereopsis, significantly elevating average stereo detection thresholds from 60.5 to 111.8 arcsec. Experiment 1b revealed no effect on contrast detection, confirming that the suppression was specific to stereopsis rather than due to general attentional distraction. Experiment 2a revealed that preceding stereopsis rebalanced subsequent rivalry dynamics by reducing ocular dominance asymmetry and increasing mixed percepts, without affecting alternation rate. Experiment 2b further demonstrated that anti-correlated stereograms, which do not elicit stable stereopsis, exerted no effect on subsequent rivalry dynamics. These findings underscore a dynamic interplay between binocular integration and suppression in resolving perceptual ambiguity and achieving unified visual perception. Crucially, our results reinforce that stereopsis is not merely a passive consequence of binocular integration, but actively contributes to rebalancing ocular dominance, thus offering insights for interventions aimed at restoring binocular function.
{"title":"Breaking and restoring ocular balance: Temporal interactions in binocular rivalry and stereopsis.","authors":"Rong Jiang, Ming Meng","doi":"10.1167/jov.25.12.5","DOIUrl":"10.1167/jov.25.12.5","url":null,"abstract":"<p><p>Binocular integration and interocular suppression are fundamental processes underlying binocular vision, giving rise to stereopsis and binocular rivalry, respectively. To investigate how the visual system dynamically coordinates these processes to form a unified percept, we conducted four psychophysical experiments examining the temporal interactions between binocular rivalry and stereopsis. In Experiment 1a, binocular rivalry, especially with high-contrast stimuli, impaired subsequent stereopsis, significantly elevating average stereo detection thresholds from 60.5 to 111.8 arcsec. Experiment 1b revealed no effect on contrast detection, confirming that the suppression was specific to stereopsis rather than due to general attentional distraction. Experiment 2a revealed that preceding stereopsis rebalanced subsequent rivalry dynamics by reducing ocular dominance asymmetry and increasing mixed percepts, without affecting alternation rate. Experiment 2b further demonstrated that anti-correlated stereograms, which do not elicit stable stereopsis, exerted no effect on subsequent rivalry dynamics. These findings underscore a dynamic interplay between binocular integration and suppression in resolving perceptual ambiguity and achieving unified visual perception. Crucially, our results reinforce that stereopsis is not merely a passive consequence of binocular integration, but actively contributes to rebalancing ocular dominance, thus offering insights for interventions aimed at restoring binocular function.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 12","pages":"5"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12517368/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145214363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anna Schroeger, Alexander Goettker, Doris I Braun, Karl R Gegenfurtner
In everyday life, we must adapt our behavior to a continuous stream of tasks, timing motor responses and rest periods accordingly. To mimic these challenges, we used a continuous interception computer game (Pong) on an iPad. This allowed us to measure the coordination of eye, hand, and head movements during natural sequential behavior while maintaining the benefits of experimental control. Participants intercepted a moving ball by sliding a paddle at the bottom of the screen so that the ball bounced back toward the computerized opponent. We tested (i) how participants adapted their eye, hand, and head movements to this dynamic, continuous task, (ii) whether these adaptations were related to interception performance, (iii) how behavior changed under different conditions, and (iv) how it changed over time. We show that all movements are carefully adapted to the upcoming action. Pursuit eye movements provide crucial motion information and are emphasized shortly before participants must act, a strategy associated with better performance. Participants also relied increasingly on pursuit eye movements under more difficult conditions (fast targets and small paddles). Saccades, blinks, and head movements, which would lead to information loss, are minimized at critical times of interception. These strategic patterns are established intuitively and maintained over time and across manipulations. We conclude that humans carefully orchestrate their full repertoire of movements to aid performance and finely adjust them to the changing demands of the environment.
{"title":"Keeping your eye, head, and hand on the ball: Rapidly orchestrated visuomotor behavior in a continuous action task.","authors":"Anna Schroeger, Alexander Goettker, Doris I Braun, Karl R Gegenfurtner","doi":"10.1167/jov.25.12.20","DOIUrl":"10.1167/jov.25.12.20","url":null,"abstract":"<p><p>In everyday life, we must adapt our behavior to a continuous stream of tasks and time motor responses and periods of resting accordingly. To mimic these challenges, we used a continuous interception computer game (Pong) on an iPad. This allowed us to measure the coordination of eye, hand, and head movements during natural sequential behavior while maintaining the benefits of experimental control. Participants intercepted a moving ball by sliding a paddle at the bottom of the screen so that the ball bounced back and moved toward the computerized opponent. We tested (i) how participants adapted their eye, hand, and head movements to this dynamic, continuous task, (ii) whether these adaptations are related to interception performance, and (iii) how their behavior changed under different conditions and (iv) over time. We showed that all movements are carefully adapted to the upcoming action. Pursuit eye movements provide crucial motion information and are emphasized shortly before participants must act; a strategy associated with better performance. Participants also increasingly used pursuit eye movements under more difficult conditions (fast targets and small paddles). Saccades, blinks, and head movements, which would lead to information loss, are minimized at critical times of interception. These strategic patterns are intuitively established and maintained over time and across manipulations. We conclude that humans carefully orchestrate their full repertoire of movements to aid performance and finely adjust them to the changing demands of our environment.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 12","pages":"20"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12551928/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145338066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}