Sara C Sereno, Christopher J Hand, Aisha Shahid, Bo Yao
Despite more than five decades of research into eye movements in reading, questions remain about the relationship between lower-level lexical and higher-level semantic factors. We explored the simultaneous effects of word frequency (lower, higher), contextual predictability (lower, higher), and parafoveal preview (valid, invalid) on the processing of target words embedded in short passages of text. Using a repeated-measures design, 80 participants read 240 two-line passages, each containing a four- or five-letter target word. Corpus-based word frequencies and Cloze predictabilities were used as continuous variables in Bayesian mixed-effects analyses of fixation time and skipping measures. Key findings included robust main effects of frequency, predictability, and preview validity, as well as two-way Frequency × Preview interactions in gaze duration and Predictability × Preview interactions in gaze duration and skipping. Frequency effects on gaze duration were greater under invalid preview conditions, suggesting that higher-frequency words facilitate corrective processing when preview is misleading. Predictability effects on gaze duration and skipping were enhanced under valid preview, indicating that contextual facilitation depends on coherent parafoveal input. No interaction was observed between frequency and predictability, nor was there a three-way interaction, supporting the view that lexical access and contextual integration operate via distinct mechanisms. These findings highlight the critical role of parafoveal information in shaping the expression of lexical and contextual influences during reading.
"Parafoveal preview differentially modulates word frequency and contextual predictability effects during reading." Sara C Sereno, Christopher J Hand, Aisha Shahid, Bo Yao. Journal of Vision, 26(2), 13 (2026-02-02). doi:10.1167/jov.26.2.13. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12924140/pdf/
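Cloze predictability, used above as a continuous predictor, is conventionally estimated as the proportion of norming participants who complete the sentence frame with the target word. A minimal sketch of that computation (the tallying details are an assumption, not taken from the paper):

```python
from collections import Counter

def cloze_predictability(responses, target):
    """Proportion of norming responses matching the target word.

    `responses` is a list of single-word completions collected for one
    sentence frame; case and surrounding whitespace are ignored.
    """
    norm = [r.strip().lower() for r in responses]
    counts = Counter(norm)
    return counts[target.strip().lower()] / len(norm)
```

For example, if three of four norming participants complete a frame with "cake", `cloze_predictability(["Cake ", "cake", "pie", "cake"], "cake")` returns 0.75, which would enter the mixed-effects model as the target's predictability.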
Veronika Lukyanova, Julius Ameln, Jenny L Witten, Aleksandr Gutnikov, Maximilian Freiberg, Bilge Sayim, Wolf Harmening
The retinal area inspecting a visual stimulus and, consequently, the number of photoreceptors engaged in a visual task increase with presentation time, as fixational eye movements continuously move the retina across the retinal image. Here, we varied stimulus duration in a Tumbling-E visual acuity task while recording videos of the photoreceptor mosaic in seven participants with adaptive optics micro-psychophysical techniques to determine how far the retinal image must move across the cone mosaic before this motion begins to improve visual acuity. Five stimulus presentation durations were tested (3, 80, 220, 370, and 600 ms), while participants exhibited natural eye movements. Retinal slip amplitudes (i.e., the total displacement stimuli underwent) increased linearly with stimulus duration at individual rates. Higher cone density was associated with drift over smaller retinal areas, making the number of traversed cones more similar across participants at longer durations. At the shortest presentation duration, retinal slip was virtually absent and acuity was limited by retinal resolution, averaging 1.07 ± 0.08 times the cone row-to-row spacing (Nyquist limit of sampling). At an 80-ms duration, corresponding to approximately two cone diameters of retinal slip, acuity thresholds improved significantly, reaching 0.90 ± 0.10 of the Nyquist limit. Thresholds continued to improve with longer durations at a lower rate, reaching 0.75 ± 0.10 times the Nyquist limit at 600 ms. These results demonstrate that humans can extract visual information with sub-cone precision within less than 100 ms, with a retinal slip approaching single foveal cone spacing.
"Sub-cone visual acuity can be achieved with less than 1 arcmin retinal slip." Veronika Lukyanova, Julius Ameln, Jenny L Witten, Aleksandr Gutnikov, Maximilian Freiberg, Bilge Sayim, Wolf Harmening. Journal of Vision, 26(2), 7 (2026-02-02). doi:10.1167/jov.26.2.7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12922710/pdf/
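The Nyquist limit of cone sampling used as the acuity benchmark above can be made concrete. A sketch, assuming an idealized hexagonal mosaic; the packing formula and the density value in the usage note are textbook assumptions, not values reported in the paper:

```python
import math

def nyquist_from_cone_density(density_cones_per_deg2):
    """Estimate the Nyquist sampling limit of a hexagonal cone mosaic.

    Assumes ideal hexagonal packing, where center-to-center cone
    spacing s (deg) satisfies density = 2 / (sqrt(3) * s**2), and the
    row-to-row spacing is s * sqrt(3) / 2. The Nyquist frequency is
    one cycle per two row spacings.
    """
    s = math.sqrt(2.0 / (math.sqrt(3.0) * density_cones_per_deg2))  # cone spacing, deg
    row = s * math.sqrt(3.0) / 2.0                                  # row-to-row spacing, deg
    nyquist_cpd = 1.0 / (2.0 * row)                                 # cycles per degree
    return s * 60.0, row * 60.0, nyquist_cpd                        # arcmin, arcmin, cpd
```

With a nominal peak foveal density of roughly 16,800 cones/deg², this gives a cone spacing near 0.5 arcmin and a Nyquist limit near 70 cycles/deg, in line with the row-to-row spacing benchmark against which the acuity thresholds above are expressed.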
The perception of the duration of a visual stimulus is a very peculiar sensory experience built without dedicated sensors. Perhaps due to this distinctiveness, duration perception is often influenced by stimulus sensory features such as speed, temporal frequency, or stimulus contrast. For instance, stimulus speed is known to distort temporal judgments, with faster stimuli being perceived as lasting longer compared to static or slow-moving ones. In this study, we explored whether this effect depends on stimulus configuration and persists when salient sensory cues at interval onset and offset are available to solve the temporal task ("filled" vs. "flanked" condition). Additionally, given the strong link between speed and time, we wondered whether stimulus duration can affect speed judgments. To answer these questions, we ran two distinct experiments in which healthy volunteers discriminated either the duration or the speed of noisy incoherent random dot kinematograms whose duration and speed were manipulated orthogonally. The results of both experiments revealed that perceived duration was biased by the stimulus speed, as expected, and that this effect persisted across stimulus configurations. Moreover, we found that the duration of the stimulus influenced the perception of speed, albeit to a lesser degree. These findings emphasize the significance of sensory input integration and the temporal structure of stimuli in shaping both duration and speed perception.
"The interaction of speed and time in biasing the perception of dynamically changing visual inputs." Francesca Iris Bellotti, Domenica Bueti. Journal of Vision, 26(2), 12 (2026-02-02). doi:10.1167/jov.26.2.12. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12924134/pdf/
Frank H Durgin, Nichole Suero Gonzalez, Ping Wen, Alexander C Huk
Density information is a possible primitive for the perception of numerosity. It has been argued, however, that the perception of numerosity is more precise than density perception at low numbers, whereas density is more precise for high numbers. An interpretive problem with the stimuli used to make those claims is that actual stimulus density was often mis-specified owing to an ambiguity regarding the idealized versus actual filled area. This ambiguity had the effect of underestimating density precision at low numerosities. Here we used a novel method of stimulus generation that allows us to accurately specify stimulus density independent of patch size and number, while varying patch size from trial to trial to dissociate numerosity and density. For both numerosity discrimination and density discrimination, we presented single stimuli in central vision for comparison with an internal standard. Feedback was given after each judgment. Using well-defined densities, density discrimination was more precise than numerosity perception at all densities and showed no evidence of varying as a function of density, as previously hypothesized. This was found with 8 practiced observers, and then replicated in a pre-registered study with 32 observers. As expected, feedback nullified size biases on number judgments, showing that observers were adaptively combining density and size. Reanalysis of data from a recent investigation of downward sloping Weber fractions for numerosity showed that the square root-like effects in those sorts of studies were most likely owing to reductions in patch size variance that were correlated with increases in density.
"Texture density discrimination is more precise than number discrimination." Frank H Durgin, Nichole Suero Gonzalez, Ping Wen, Alexander C Huk. Journal of Vision, 26(2), 2 (2026-02-02). doi:10.1167/jov.26.2.2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12875346/pdf/
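The Weber-fraction analysis mentioned in the reanalysis reduces to a simple ratio of the just-noticeable increment to the base magnitude. A toy sketch with illustrative values only (not data from the study):

```python
def weber_fractions(bases, jnds):
    """Weber fractions w = JND / base for each base magnitude.

    A flat profile across bases indicates Weber's law; w shrinking
    like 1/sqrt(base) is the "downward-sloping" pattern discussed in
    the numerosity literature reanalyzed above.
    """
    return [j / b for b, j in zip(bases, jnds)]
```

For example, `weber_fractions([8, 32], [2, 8])` returns the constant profile `[0.25, 0.25]` expected under Weber's law, whereas `weber_fractions([8, 32], [2, 4])` returns the declining profile `[0.25, 0.125]` characteristic of square-root-like thresholds.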
Ashley M Clark, Sanjana Kapisthalam, Matthew R Cavanaugh, Krystel R Huxlin, Martina Poletti
Cortically induced blindness (CB) resulting from stroke damage to the early visual cortex leads to extensive, typically extrafoveal visual deficits and is known to alter large-scale oculomotor behavior. Here, we show that even with preserved foveal acuity, fixational oculomotor behavior is subtly altered in CB patients. Using high-precision eye tracking, we observed a small but consistent gaze offset toward the blind field during passive fixation, which disappeared during a high-acuity central task. Despite this offset, fixation precision in both tasks was comparable, and it was similar between CB patients and age-matched controls. Curiously, the underlying oculomotor dynamics were also similar across the two task conditions: Microsaccades exhibited nonsignificant directional tendencies, while ocular drift was biased away from the blind field. Our findings indicate that the adult oculomotor system dynamically adapts to asymmetric visual injury and/or input. We speculate that the small fixational offsets observed in CB may reflect an attentional pointer toward the blind field and/or a compensatory oculomotor rebalancing that counteracts an asymmetric visual drive following cortical damage. Together, these results reveal a surprising preservation of context-dependent fixation control following early visual cortex damage in adulthood.
"Systematic arcminute-scale fixational offsets in patients with early visual cortex damage." Ashley M Clark, Sanjana Kapisthalam, Matthew R Cavanaugh, Krystel R Huxlin, Martina Poletti. Journal of Vision, 26(2), 5 (2026-02-02). doi:10.1167/jov.26.2.5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12898927/pdf/
Reported perception can exhibit a repulsive bias away from a task-irrelevant prior stimulus. Previous research has suggested that this repulsive serial bias is driven by low-level adaptation, such that the prior stimulus repels the representation of the new stimulus during encoding. To test this account, the present study compared the repulsive serial bias with another perceptual bias that is known to be driven by an adaptation mechanism (e.g., the tilt aftereffect). We measured the repulsive serial bias using a common location delayed estimation task and the adaptation-driven bias using a location estimation task with an inducer stimulus. We found that, although both repulsive serial bias and adaptation-driven bias were evident, the two biases were not correlated. In addition, only the repulsive serial bias was associated with a response time effect, where responses were slower when the bias was stronger. Moreover, mouse-tracking data for the repulsive serial bias exhibited a pattern that started with a stronger repulsion and ended with a smaller one, which cannot be explained by an adaptation mechanism alone. Taken together, our findings suggest that repulsive serial bias in continuous estimation tasks involves post-perceptual decisional processes that are not present in the adaptation-driven bias.
"Is repulsive serial bias in visual perception driven by adaptation mechanisms?" Scott Janetsky, Kuo-Wei Chen, Gi-Yeul Bae. Journal of Vision, 26(2), 8 (2026-02-02). doi:10.1167/jov.26.2.8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12922714/pdf/
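Serial bias is typically quantified as the signed response error plotted against the previous-minus-current stimulus difference; a repulsive bias shows up as errors whose sign is opposite to that difference. A sketch of that analysis, assuming circular stimulus coordinates in degrees; the binning choices are illustrative, not the authors':

```python
import numpy as np

def serial_bias_curve(prev, curr, resp, n_bins=9):
    """Mean signed response error as a function of the previous-minus-
    current stimulus difference, both wrapped to (-180, 180] degrees.

    Returns (bin_centers, mean_errors); bins with no trials are NaN.
    """
    wrap = lambda x: (np.asarray(x, float) + 180.0) % 360.0 - 180.0
    delta = wrap(np.asarray(prev, float) - np.asarray(curr, float))
    error = wrap(np.asarray(resp, float) - np.asarray(curr, float))
    edges = np.linspace(-180.0, 180.0, n_bins + 1)
    idx = np.clip(np.digitize(delta, edges) - 1, 0, n_bins - 1)
    centers = (edges[:-1] + edges[1:]) / 2.0
    means = np.array([error[idx == k].mean() if np.any(idx == k) else np.nan
                      for k in range(n_bins)])
    return centers, means
```

On synthetic repulsive data (responses pushed away from the previous stimulus), the curve has positive error at negative deltas and negative error at positive deltas, the signature discussed above.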
Jaelyn R Peiso, Stephanie E Palmer, Steven K Shevell
Our visual system usually provides a unique and functional representation of the external world. At times, however, there is more than one compelling interpretation of the same retinal stimulus; in this case, neural populations compete for perceptual dominance to resolve ambiguity. Spatial and temporal context can guide this perceptual experience. Recent evidence shows that ambiguous retinal stimuli are sometimes resolved by enhancing either similarities or differences among multiple ambiguous stimuli. Although rivalry has traditionally been attributed to differences in stimulus strength, color vision introduces nonlinearities that are difficult to reconcile with luminance-based models. Here, it is shown that a tuned, divisive normalization framework can explain how perceptual selection can flexibly yield either similarity-based "grouped" percepts or difference-enhanced percepts during binocular rivalry. Empirical and simulated results show that divisive normalization can account for perceptual representations of either similarity enhancement (so-called grouping) or difference enhancement, offering a unified framework for opposite perceptual outcomes.
"Perceptual resolution of ambiguity: A divisive normalization account for both interocular color grouping and difference enhancement." Jaelyn R Peiso, Stephanie E Palmer, Steven K Shevell. Journal of Vision, 26(1), 8 (2026-01-05). doi:10.1167/jov.26.1.8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12811879/pdf/
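The tuned divisive normalization at the heart of this account can be sketched in a few lines. The weight matrix, semisaturation constant, and exponent below are generic illustrations, not the paper's fitted model:

```python
import numpy as np

def tuned_divisive_normalization(drive, pool_weights, sigma=0.5, n=2.0):
    """Normalized responses R_i = d_i^n / (sigma^n + sum_j w_ij * d_j^n).

    Row i of `pool_weights` sets how strongly each unit j suppresses
    unit i; tuning these weights (e.g., stronger suppression between
    similar chromaticities) shifts the population outcome between
    grouping-like and difference-enhancing solutions.
    """
    d = np.asarray(drive, float) ** n
    denom = sigma ** n + np.asarray(pool_weights, float) @ d
    return d / denom
```

For instance, with two units and purely mutual suppression, `tuned_divisive_normalization([1.0, 0.5], [[0, 1], [1, 0]])` amplifies the stronger input and suppresses the weaker one, a difference-enhancing outcome; weakening the cross-weights moves the solution toward similar (grouped) responses.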
Vignash Tharmaratnam, Jason Haberman, Jonathan S Cant
Visual ensemble perception involves the rapid global extraction of summary statistics (e.g., average features) from groups of items, without requiring single-item recognition and working memory resources. One theory that helps explain global visual perception is the principle of feature diagnosticity, under which bottom-up visual features that are informative for the task at hand, because they are consistent with one's top-down expectations, are preferentially processed. Past literature has studied ensemble perception using groups of objects and faces and has shown that both low-level (e.g., average color, orientation) and high-level visual statistics (e.g., average crowd animacy, object economic value) can be efficiently extracted. However, no study has explored whether summary statistics can be extracted from stimuli higher in visual complexity, necessitating global, gist-based processing for perception. To investigate this, across five experiments we had participants extract various summary statistical features from ensembles of real-world scenes. We found that average scene content (i.e., perceived naturalness or manufacturedness of scene ensembles) and average spatial boundary (i.e., perceived openness or closedness of scene ensembles) could be rapidly extracted within 125 ms, without reliance on working memory. Interestingly, when we rotated the scenes, average scene orientation could not be extracted, likely because the perception of diagnostic edge information (i.e., cardinal edges for typically encountered upright scenes) was disrupted when rotating the scenes. These results suggest that ensemble perception is a flexible resource that can be used to extract summary statistical information across multiple stimulus types but also has limitations based on the principle of feature diagnosticity in global visual perception.
"Rapid ensemble encoding of average scene features." Vignash Tharmaratnam, Jason Haberman, Jonathan S Cant. Journal of Vision, 26(1), 3 (2026-01-05). doi:10.1167/jov.26.1.3. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12782198/pdf/
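Defining an "average scene orientation," as in the rotated-scene condition, requires circular statistics because orientation is axial (periodic over 180°): tilts of 10° and 170° differ by only 20°, not 160°. One standard formulation, offered as an assumption about the analysis rather than the paper's method, doubles the angles before averaging:

```python
import math

def mean_orientation(angles_deg):
    """Circular mean of axial orientations (period 180 degrees).

    Angles are doubled before vector averaging so that, e.g., 10 and
    170 deg average to 0 (a near-horizontal tilt) rather than 90.
    """
    s = sum(math.sin(math.radians(2 * a)) for a in angles_deg)
    c = sum(math.cos(math.radians(2 * a)) for a in angles_deg)
    return (math.degrees(math.atan2(s, c)) / 2) % 180
```

For example, `mean_orientation([40, 60])` gives 50, while `mean_orientation([10, 170])` gives 0 (equivalently 180), as expected for two nearly horizontal tilts.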
Fengping Hu, Joyce Y Chen, Denis G Pelli, Jonathan Winawer
Online vision testing enables efficient data collection from diverse participants, but often requires accurate fixation. When needed, fixation accuracy is traditionally ensured by using a camera to track gaze. That works well in the laboratory, but tracking during online testing with a built-in webcam is not yet sufficiently precise. Kurzawski, Pombo, et al. (2023) introduced a fixation task that improves fixation through hand-eye coordination, requiring participants to track a moving crosshair with a mouse-controlled cursor. This dynamic fixation task greatly reduces peeking at peripheral targets relative to a stationary fixation task, but does not eliminate it. Here, we introduce a crowded dynamic fixation task that further enhances fixation by adding clutter around the fixation mark. We assessed fixation accuracy during peripheral threshold measurement. Relative to the root mean square gaze error during the stationary fixation task, the dynamic fixation error was 55%, whereas the crowded dynamic fixation error was only 40%. With a 1.5° tolerance, peeking occurred on 7% of trials with stationary fixation, 1.5% with dynamic fixation, and 0% with crowded dynamic fixation. This improvement eliminated implausibly low peripheral thresholds, likely by preventing peeking. We conclude that crowded dynamic fixation provides accurate gaze control for online testing.
{"title":"EasyEyes: Crowded dynamic fixation for online psychophysics.","authors":"Fengping Hu, Joyce Y Chen, Denis G Pelli, Jonathan Winawer","doi":"10.1167/jov.26.1.18","DOIUrl":"10.1167/jov.26.1.18","url":null,"abstract":"<p><p>Online vision testing enables efficient data collection from diverse participants, but often requires accurate fixation. When needed, fixation accuracy is traditionally ensured by using a camera to track gaze. That works well in the laboratory, but tracking during online testing with a built-in webcam is not yet sufficiently precise. Kurzawski, Pombo, et al. (2023) introduced a fixation task that improves fixation through hand-eye coordination, requiring participants to track a moving crosshair with a mouse-controlled cursor. This dynamic fixation task greatly reduces peeking at peripheral targets relative to a stationary fixation task, but does not eliminate it. Here, we introduce a crowded dynamic fixation task that further enhances fixation by adding clutter around the fixation mark. We assessed fixation accuracy during peripheral threshold measurement. Relative to the root mean square gaze error during the stationary fixation task, the dynamic fixation error was 55%, whereas the crowded dynamic fixation error was only 40%. With a 1.5° tolerance, peeking occurred on 7% of trials with stationary fixation, 1.5% with dynamic fixation, and 0% with crowded dynamic fixation. This improvement eliminated implausibly low peripheral thresholds, likely by preventing peeking. 
We conclude that crowded dynamic fixation provides accurate gaze control for online testing.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"18"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12859709/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study examines the temporal and spatial components of microsaccade dynamics in homonymous hemianopia (HH) after ischemic stroke, and their association with patients' visual impairments. Eye position data were recorded during visual field testing in 15 patients with HH and 15 controls. Microsaccade rate (temporal) and direction (spatial) dynamics in HH were analyzed across visual field sectors with varying defect depth and compared with controls. Support vector machines were trained to characterize the visual field defects in HH based on microsaccade dynamics. Compared with controls, patients exhibited stronger microsaccadic inhibition in the sighted areas, and postponed and stronger microsaccadic inhibition in areas of residual vision (ARVs). Meanwhile, a rebound was evident in the sighted areas but absent in the ARVs and blind areas. In controls, microsaccades surviving the inhibition were more attracted toward the stimulus, whereas microsaccades after the inhibition were directed away from the stimulus. Such a pattern was not observed in HH. Dissociated temporal and spatial impairments of microsaccade dynamics suggest multi-fold impairments of the visual and oculomotor networks in HH. Based on the microsaccadic phase signature underlying microsaccade rate dynamics, we characterized patients' visual field defects and discovered regions with residual function inside both the blind and sighted hemifields. These findings suggest that monitoring microsaccade dynamics may provide valuable supplementary information beyond that captured by behavioral responses.
{"title":"Dissociated temporal and spatial impairments of microsaccade dynamics in homonymous hemianopia following ischemic stroke.","authors":"Ying Gao, Huiguang He, Bernhard A Sabel","doi":"10.1167/jov.26.1.17","DOIUrl":"10.1167/jov.26.1.17","url":null,"abstract":"<p><p>This study examines the temporal and spatial components of microsaccade dynamics in homonymous hemianopia (HH) after ischemic stroke, and their association with patients' visual impairments. Eye position data were recorded during visual field testing in 15 patients with HH and 15 controls. Microsaccade rate (temporal) and direction (spatial) dynamics in HH were analyzed across visual field sectors with varying defect depth and compared with controls. Support vector machines were trained to characterize the visual field defects in HH based on microsaccade dynamics. Compared with controls, patients exhibited stronger microsaccadic inhibition in the sighted areas, and postponed and stronger microsaccadic inhibition in areas of residual vision (ARVs). Meanwhile, a rebound was evident in the sighted areas but absent in the ARVs and blind areas. In controls, microsaccades surviving the inhibition were more attracted toward the stimulus, whereas microsaccades after the inhibition were directed away from the stimulus. Such a pattern was not observed in HH. Dissociated temporal and spatial impairments of microsaccade dynamics suggest multi-fold impairments of the visual and oculomotor networks in HH. Based on the microsaccadic phase signature underlying microsaccade rate dynamics, we characterized patients' visual field defects and discovered regions with residual function inside both the blind and sighted hemifields. 
These findings suggest that monitoring microsaccade dynamics may provide valuable supplementary information beyond that captured by behavioral responses.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 1","pages":"17"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12859727/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146068421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}