Organizing visual input into coherent percepts requires dynamic grouping and segmentation mechanisms that operate across both spatial and temporal domains. Crowding occurs when nearby elements interfere with target perception, but specific flanker configurations can alleviate this effect through Gestalt-based grouping, a phenomenon known as uncrowding. Here, we examined the temporal dynamics underlying these spatial organization processes using a Vernier discrimination task. In Experiment 1, we varied stimulus duration and found that uncrowding emerged only after 160 ms, suggesting a time-consuming process. In Experiment 2, we manipulated the stimulus onset asynchrony (SOA) between the target and flankers. We found that presenting good-Gestalt flankers briefly before the target (as little as 32 ms) significantly boosted uncrowding, even in the absence of temporal overlap between the two stimuli. This effect was specific to conditions in which flankers preceded the target, ruling out pure temporal integration and masking accounts. These findings suggest that spatial segmentation can be dynamically facilitated when the temporal order of presentation allows grouping mechanisms to engage prior to target processing. Moreover, the observed time course indicates that segmentation is not purely feedforward, particularly for stimuli that are likely to recruit higher-level visual areas, pointing instead to the involvement of recurrent or feedback processes.
Santoni, A., Ronconi, L., & Samaha, J. (2025). Temporal windows of perceptual organization: Evidence from crowding and uncrowding. Journal of Vision, 25(14), 5. https://doi.org/10.1167/jov.25.14.5 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12704212/pdf/)
Lynn Schmittwilken, Anna L Haverkamp, Marianne Maertens
To interact with the world effectively, the human visual system must extract meaningful features from visual scenes. One key feature is edges: luminance or texture discontinuities in two-dimensional (2D) images that often correspond to object boundaries in three-dimensional scenes. Edge sensitivity has traditionally been studied with well-controlled stimuli and binary choice tasks, but it is unclear how well these insights transfer to real-world behavior. Recent studies have extended this approach using natural images but typically retained binary button presses. In this study, we extend the approach further and ask observers (N = 20) to trace edges in natural scenes, presented with or without 2D visual noise. To quantify edge detection performance, we use a signal detection theory-inspired approach. Participants' edge traces in the noise-free condition serve as an individualized "ground-truth" or signal, used to categorize edge traces from noise conditions into hits, false alarms, misses, and correct rejections. Observers produce remarkably consistent edge traces across conditions. Noise interference patterns mirror results from traditional edge sensitivity studies, especially for edges with spectral properties similar to natural scenes. This suggests that insights from controlled paradigms can transfer to naturalistic ones. We also examined edge traces to identify which image features drive edge perception, using interindividual variability as a pointer to relevant features. We conclude that line drawings are a powerful tool to investigate edge sensitivity and potentially other aspects of visual perception, enabling nuanced exploration of real-world visual behavior with few experimental trials.
Schmittwilken, L., Haverkamp, A. L., & Maertens, M. (2025). Line drawings as a tool to probe edge sensitivity in natural scenes. Journal of Vision, 25(14), 22. https://doi.org/10.1167/jov.25.14.22 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12743498/pdf/)
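The signal detection theory-inspired scoring described above, in which edge traces are categorized as hits, false alarms, misses, and correct rejections against each observer's own noise-free "ground truth", is conventionally summarized with the sensitivity index d'. A minimal sketch, assuming hypothetical trial counts and a log-linear correction for extreme rates (both are illustrative choices, not details from the paper):

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from classification counts, with a
    log-linear correction so rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical scoring of one observer's traces in a noise condition:
sensitivity = dprime(hits=80, misses=20, false_alarms=10, correct_rejections=90)
```

Higher d' indicates traces in noise that match the observer's own noise-free drawing more closely than chance placement would allow.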
Presaccadic attention enhances visual perception at the upcoming saccade target location. While this enhancement is often described as obligatory and temporally stereotyped, recent studies indicate that its strength varies depending on saccade direction. Here, we investigated whether the time course of presaccadic attention also differs across saccade directions. Participants performed a two-alternative forced-choice orientation discrimination task during saccade preparation. Tilt angles were individually titrated in a fixation baseline condition to equate task difficulty across the upper and lower vertical meridians. Sensitivity was then assessed at different time points relative to saccade onset and cue onset, allowing us to characterize the temporal dynamics of attentional enhancement. We found that presaccadic attention built up faster and reached higher levels preceding downward than upward saccades. Linear model fits revealed significant slope differences but no differences in intercepts, suggesting that the observed asymmetries reflect differences in attentional deployment during saccade preparation rather than preexisting differences in sensitivity. Saccade parameters did not account for these asymmetries. Our findings demonstrate that the temporal dynamics of presaccadic attention vary with saccade direction, which may underlie previously observed differences in presaccadic benefit at the upper and lower vertical meridians. This temporal flexibility challenges the view of a uniform presaccadic attention mechanism and suggests that presaccadic attentional deployment is shaped by movement goals. Our results provide new insights into how the visual and oculomotor systems coordinate under direction-specific demands.
Kwak, Y., Hanning, N. M., & Carrasco, M. (2025). Saccade direction modulates the temporal dynamics of presaccadic attention. Journal of Vision, 25(14), 2. https://doi.org/10.1167/jov.25.14.2 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12701607/pdf/)
Gabriela Mueller de Melo, Isabella de Oliveira Pitorri, Gustavo Rohenkohl
Lateral interactions are pervasive in early visual processing, contributing directly to processes such as object grouping and segregation. This study examines whether saccade preparation - known to affect visual perception - modulates lateral interactions. In a psychophysical task, participants were instructed to detect a Gabor target flanked by two adjacent Gabors, while they either prepared a saccade to the target or maintained central fixation. Flanker gratings could be iso- or orthogonally oriented to the target and were positioned at three different distances (4λ, 8λ, and 16λ). Contrast thresholds for target detection were estimated in each condition using a 3-down/1-up staircase procedure. The results showed that in both presaccadic and fixation conditions, the target was suppressed at the shortest flanker distance (4λ), revealed by markedly higher thresholds in iso-oriented compared to orthogonal flanker configurations. Lateral interaction effects were completely abolished at their largest separation (16λ). Interestingly, at the intermediate flanker distance (8λ), target suppression seemed to increase during the presaccadic period, whereas no such effect was observed during fixation. This result suggests that saccade preparation can modulate lateral interactions, promoting suppressive effects over larger distances. These findings are consistent with the visual remapping phenomenon observed before saccade execution, especially the convergent remapping of receptive fields in oculomotor and visual areas. Finally, this presaccadic expansion of inhibitory lateral interactions could assist target selection by suppressing homogeneous peripheral signals - such as iso-oriented collinear patterns - while prioritizing the processing of more salient visual information.
Mueller de Melo, G., de Oliveira Pitorri, I., & Rohenkohl, G. (2025). Presaccadic modulation of lateral interactions. Journal of Vision, 25(14), 7. https://doi.org/10.1167/jov.25.14.7 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12707330/pdf/)
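The 3-down/1-up staircase used above to estimate contrast thresholds lowers contrast after three consecutive correct responses and raises it after any error, converging near the 79.4%-correct point of the psychometric function. A minimal simulation sketch, assuming a logistic observer with a 50% guess rate (the observer model, step size, and threshold value are hypothetical, not the study's parameters):

```python
import math
import random

def staircase_3down1up(true_threshold, start=0.5, step=0.05,
                       n_reversals=12, seed=1):
    """Minimal 3-down/1-up staircase for a detection threshold:
    contrast decreases after three consecutive correct responses and
    increases after every error, converging near 79.4% correct."""
    rng = random.Random(seed)
    contrast, run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        # Simulated observer: logistic psychometric function with a
        # 50% guess rate (an assumption, not the paper's model).
        p = 0.5 + 0.5 / (1 + math.exp(-(contrast - true_threshold) / 0.03))
        if rng.random() < p:
            run += 1
            if run == 3:                      # three correct -> step down
                run = 0
                if direction == +1:
                    reversals.append(contrast)
                direction = -1
                contrast = max(contrast - step, 0.01)
        else:                                  # one error -> step up
            run = 0
            if direction == -1:
                reversals.append(contrast)
            direction = +1
            contrast += step
    # Threshold estimate: mean contrast over the later reversals.
    return sum(reversals[4:]) / len(reversals[4:])

estimate = staircase_3down1up(true_threshold=0.2)
```

The first few reversals are discarded because they occur while the staircase is still descending from its easy starting level; averaging the later reversals gives the threshold estimate.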
During self-movement, the visual system uses optic flow to identify scene-relative object motion and estimate the direction of self-movement (heading). Although both processes rely on optic flow, their relationship and the conditions under which independent object motion biases heading estimation remain unclear. The causal inference model predicts that misjudging object motion leads to its integration into heading estimation, causing errors in heading estimation, whereas correct judgments reduce these errors. However, most studies have examined these processes independently. Here we used a dual-task paradigm to investigate how visual cues affect the judgment of scene-relative object motion direction and concurrent heading estimation. Participants viewed a 90° × 90° display simulating self-movement through a three-dimensional cloud with a laterally moving object positioned at 8° or 16° from the simulated heading direction. They judged both the object's motion direction in the scene and their heading direction. Results show that increasing an object's speed and reducing its positional offset from the simulated heading direction improved the accuracy of scene-relative object motion direction judgment, but did not consistently improve the accuracy of heading estimation. Surprisingly, visual cues such as binocular disparity and object density improved scene-relative object motion direction judgment but reduced heading estimation accuracy. Furthermore, heading errors mostly peaked at object speeds where observers could reliably judge scene-relative object motion direction, challenging the predictions of the causal inference model. These findings provide strong evidence that scene-relative object motion judgment and heading estimation operate independently and question the generality of the causal inference model in explaining heading biases caused by independent object motion.
Yang, Y., Shan, Z., & Li, L. (2025). Effects of visual cues on scene-relative object motion judgments and concurrent heading estimation from optic flow. Journal of Vision, 25(14), 20. https://doi.org/10.1167/jov.25.14.20 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12742601/pdf/)
When visual input is uncertain, visual perception is biased toward the stimulation from the recent past. We can attend to stimuli either endogenously, based on an internal decision, or exogenously, triggered by an external event. Here, we asked whether serial dependencies are selective for the attentional mode with which we attend to stimuli. We studied overt attention shifts (saccades), recording either motor error corrections or visual orientation judgments. In Experiment 1, we assessed sensorimotor serial dependencies, focusing on how the postsaccadic error influences subsequent saccade amplitudes. In Experiment 2, we evaluated visual serial dependencies by measuring orientation judgments, contingent on the type of saccade performed. In separate sessions, participants performed either only voluntary saccades or only delayed saccades, or both saccade types alternated within a session. Our results revealed that sensorimotor serial dependencies were selective for the saccade type performed. When voluntary saccades had been performed in the preceding trial, serial dependencies were much stronger in the current trial if voluntary instead of delayed saccades were executed. In contrast, visual serial dependencies were not influenced by the type of saccade performed. Our findings reveal that shifts in exogenous and endogenous attention differentially impact sensorimotor serial dependencies, but visual serial dependencies remain unaffected.
Tyralla, S., & Zimmermann, E. (2025). Serial dependencies and overt attention shifts. Journal of Vision, 25(14), 12. https://doi.org/10.1167/jov.25.14.12 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12716447/pdf/)
Continuous flash suppression (CFS) is a popular method for suppressing visual stimuli from awareness for extended periods. It involves a dynamic, high-contrast masking stimulus presented to one eye that suppresses a target stimulus presented to the other. The strength of suppression is usually inferred from how long it takes for the target to break through from suppression into awareness (the bCFS threshold). A new variant known as tracking CFS (tCFS) directly measures the strength of suppression by measuring both breakthrough and suppression thresholds. Here, we employed the tCFS paradigm while varying the temporal frequency of the masking stimulus. Our data revealed two clear results: (a) CFS exhibits a clear temporal frequency tuning, with bCFS thresholds peaking for masks modulating at ∼1 Hz; and (b) suppression depth (the difference between breakthrough and suppression thresholds) remains constant despite changes in bCFS. The first result confirms an earlier finding that peak bCFS occurs for very low temporal frequencies. The second result provides valuable insight in showing that bCFS changes occur completely independently of suppression strength, which remains constant. In this study, suppression averaged 13 dB, around two to three times stronger than suppression reported in binocular rivalry studies.
Alais, D., & Kim, S. (2025). Breakthrough thresholds in continuous flash suppression are tuned to mask temporal frequency but suppression depth is constant. Journal of Vision, 25(14), 19. https://doi.org/10.1167/jov.25.14.19 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12721436/pdf/)
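Suppression depth in tCFS, as described above, is the decibel difference between the contrast at which the target breaks into awareness and the contrast at which it re-enters suppression. A small sketch of that arithmetic, using hypothetical thresholds chosen to land near the reported 13 dB average (the 22% and 5% contrasts are illustrative, not measured values):

```python
import math

def contrast_to_db(contrast):
    """Express a contrast (0-1) in decibels relative to 100% contrast."""
    return 20 * math.log10(contrast)

def suppression_depth_db(breakthrough_contrast, suppression_contrast):
    """tCFS suppression depth: dB gap between the breakthrough and
    re-suppression contrast thresholds."""
    return contrast_to_db(breakthrough_contrast) - contrast_to_db(suppression_contrast)

# Hypothetical thresholds: target breaks through at 22% contrast and
# slips back into suppression at 5%.
depth = suppression_depth_db(0.22, 0.05)
```

Because decibels are logarithmic, a 13 dB depth corresponds to roughly a 4.5-fold contrast ratio between the two thresholds.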
Rebecca Keogh, Lachlan Kay, Christian Meagher, Joel Pearson
Recent theories propose that, like endogenous and exogenous visual attention, voluntary and involuntary forms of phantom vision (e.g., mental imagery and dreams) are related and hence depend on overlapping mechanisms. However, the relationship between voluntary and involuntary phantom vision remains largely unknown. Here, we assess this relationship by examining how voluntary visual imagery relates to involuntary forms of phantom vision (specifically, visual illusions) in a unique population with no voluntary visual imagery (aphantasia). In our first study, we presented individuals with aphantasia with seven different visual illusions (Hermann grid, Ponzo illusion, Kanizsa triangles, Ebbinghaus illusion, watercolor effect, neon color-spreading, and rotating snakes). Compared to both a large group of undergraduates and an age-matched control sample, the only illusion in which individuals with aphantasia reported a significant reduction was the neon color illusion. In a large online follow-up study, we used the method of adjustment to obtain a more precise measure of the neon color-spreading illusion in individuals with aphantasia and those with visual imagery. We found that this measure of neon color was lower in those with aphantasia than in those with visual imagery, as were their subjective ratings of the illusion. Importantly, there were no differences between the groups for catch/mock neon color "illusion" trials or a separate color adjustment task. Together, these data provide evidence that individuals with aphantasia experience the neon color illusion at a lower intensity, supporting the hypothesis that some forms of voluntary and involuntary phantom vision depend on overlapping mechanisms.
Keogh, R., Kay, L., Meagher, C., & Pearson, J. (2025). Do you see what I see? Linking involuntary nonretinal (phantom) vision and mental imagery in aphantasia. Journal of Vision, 25(14), 10. https://doi.org/10.1167/jov.25.14.10
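The method of adjustment described above can be illustrated with a minimal simulation. This is not the authors' code; all numbers (illusion magnitudes, noise level, trial counts) are hypothetical, and it only sketches the logic: each setting is the observer's perceived illusion magnitude plus response noise, and catch/mock trials with no inducers provide a zero baseline.

```python
import random
import statistics

def simulate_adjustment_trials(true_illusion, noise_sd, n_trials, seed=0):
    """Simulate method-of-adjustment settings: on each trial the observer
    adjusts a comparison patch until it matches the perceived color spreading,
    so each setting is the perceived illusion magnitude plus response noise."""
    rng = random.Random(seed)
    return [true_illusion + rng.gauss(0, noise_sd) for _ in range(n_trials)]

def illusion_magnitude(settings):
    """Mean adjusted value; on catch/mock trials with no illusion,
    this should be close to zero."""
    return statistics.mean(settings)

# Hypothetical groups: imagers perceive a stronger illusion than aphantasics;
# catch trials (no inducers) serve as a zero baseline for both groups.
imagers = simulate_adjustment_trials(true_illusion=0.30, noise_sd=0.05, n_trials=40)
aphant = simulate_adjustment_trials(true_illusion=0.15, noise_sd=0.05, n_trials=40, seed=1)
catch_ = simulate_adjustment_trials(true_illusion=0.00, noise_sd=0.05, n_trials=40, seed=2)
```

Under this sketch, a group difference in mean settings with a near-zero catch baseline is the signature pattern the study reports.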
When faces are blurred, presenting them at smaller sizes improves recognition. We term this unexpected advantage the blur paradox, which has been replicated in studies where face images are digitally blurred and scaled. To examine whether the blur paradox persists in physically realistic viewing conditions, we conducted two experiments using physical blur filters and varied viewing distances for size manipulation. First, we tested recognition of blurred celebrity faces at two viewing distances and found that recognition accuracy was significantly greater in the far condition than in the close condition. Second, we examined whether the blur paradox reflects gradual improvement across viewing distances or a sharp change in recognition performance at a particular distance. Across four viewing conditions, we found a significant main effect of viewing distance, with the highest recognition accuracy at the farthest viewing condition and the lowest at the closest. Accuracy improved gradually, but nonlinearly, rather than showing an abrupt shift at a boundary. Exploration of participant demographics suggested a stronger effect among older participants (>50 years) and a weaker effect among left-handed participants. No significant sex differences were observed. These findings confirm the small-size advantage for recognition under blur and its persistence in physically realistic conditions, with accuracy improving gradually across a wide range of distances.
Long, C., Yuan, L., Wu, C., & Oruc, I. (2025). The blur paradox: Better recognition at a distance. Journal of Vision, 25(14), 3. https://doi.org/10.1167/jov.25.14.3
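The geometry behind the viewing-distance manipulation can be made concrete with the standard visual-angle formula, θ = 2·atan(s / 2d): increasing viewing distance shrinks the retinal (angular) size of the stimulus, which is how distance serves as a size manipulation. The stimulus size and distances below are hypothetical, not the study's actual parameters.

```python
import math

def visual_angle_deg(stimulus_size_cm, viewing_distance_cm):
    """Visual angle subtended by a stimulus of size s at distance d:
    theta = 2 * atan(s / (2 * d)), returned in degrees."""
    return math.degrees(2 * math.atan(stimulus_size_cm / (2 * viewing_distance_cm)))

# Hypothetical example: a 20 cm face image viewed at 60 cm vs. 240 cm.
near = visual_angle_deg(20, 60)   # roughly 18.9 deg
far = visual_angle_deg(20, 240)   # roughly 4.8 deg
```

Quadrupling the distance cuts the angular size to about a quarter, so the "far" condition presents the same physical blur at a much smaller retinal size.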
Despite decades of intense study, the spatiotemporal processing of letters in visual word recognition has yet to be elucidated, with the debate largely focusing on whether individual letters are processed serially or in parallel. The present study investigated the processing of individual letters and letter combinations through time in visual word recognition using displays where signal-to-noise ratio (SNR) varied randomly throughout a 200 ms exposure duration. In Experiment 1, SNR varied either homogeneously across all letters or independently for each letter position (hereafter, heterogeneous sampling). Reading accuracy was substantially greater with homogeneous than heterogeneous sampling. Experiment 2 again used heterogeneous sampling, and classification images (CIs) were calculated for individual letter positions or conjunctions thereof, reflecting processing efficiency as a function of time during target exposure. These CIs or their Fourier transforms were passed to a classifier to assess differences in the result patterns across individual letter positions or their conjunctions. Overall, the present results indicate the following: (1) significant parallel letter processing capacity throughout exposure duration; (2) dissociable processing mechanisms for each letter position; and (3) letter position-specific mechanisms for letter conjunctions that are distinct from those for individual letters. The results also provide evidence relevant to the neural code underlying the perceptual mechanisms that were uncovered.
Arguin, M., & Fortier-St-Pierre, S. (2025). Spatiotemporal letter processing in visual word recognition uncovered by perceptual oscillations. Journal of Vision, 25(14), 8. https://doi.org/10.1167/jov.25.14.8
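The temporal classification-image logic used in Experiment 2 can be sketched as a reverse-correlation computation: each trial contributes a per-timepoint SNR profile, and the CI is the mean profile on correct trials minus the mean profile on error trials, so time points where high SNR drove performance stand out. This is a minimal illustration with a simulated observer, not the authors' analysis pipeline; all parameters are made up.

```python
import random

def classification_image(snr_profiles, correct):
    """Reverse-correlation classification image over time: mean SNR profile
    on correct trials minus mean profile on error trials. Positive values
    mark time points where high SNR helped performance."""
    n_t = len(snr_profiles[0])
    def mean_profile(trials):
        return [sum(p[t] for p in trials) / len(trials) for t in range(n_t)]
    hits = [p for p, c in zip(snr_profiles, correct) if c]
    errs = [p for p, c in zip(snr_profiles, correct) if not c]
    return [h - e for h, e in zip(mean_profile(hits), mean_profile(errs))]

# Toy simulation: 2000 trials, 10 time samples of random SNR per trial, and
# an observer whose accuracy depends only on the SNR at time point 3.
rng = random.Random(0)
profiles = [[rng.random() for _ in range(10)] for _ in range(2000)]
correct = [p[3] > 0.5 for p in profiles]
ci = classification_image(profiles, correct)
```

In this toy case the CI peaks sharply at time point 3 and hovers near zero elsewhere, which is how the technique localizes the moments of efficient processing.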