Visual adaptation and attention are two processes that help manage the limited bioenergetic resources of the brain for perception. Visual perception is heterogeneous around the visual field: It is better along the horizontal than the vertical meridian (horizontal-vertical anisotropy [HVA]), and better along the lower than the upper vertical meridian (vertical meridian asymmetry [VMA]). Recently, we showed that visual adaptation is more pronounced at the horizontal than the vertical meridian, but whether and how this differential adaptation modulates the effects of covert spatial attention remain unknown. In this study, we investigated whether and how the effects of endogenous (voluntary) and exogenous (involuntary) covert attention on an orientation discrimination task vary at the cardinal meridians, with and without adaptation. We manipulated endogenous (Experiment 1) or exogenous (Experiment 2) attention via an informative central or uninformative peripheral cue, respectively. Results showed that (a) in the non-adapted condition, the typical HVA and VMA emerged in contrast thresholds; (b) the adaptation effect was stronger at the horizontal than the vertical meridian; and (c) regardless of adaptation, both endogenous and exogenous attention enhanced and impaired performance at the attended and unattended locations, respectively, to a similar degree at both cardinal meridians. Together, these findings reveal that, despite differences between endogenous and exogenous attention, their effects remain uniform across cardinal meridians, even under differential adaptation that reduces intrinsic asymmetries of visual field representations.
Hsing-Hao Lee & Marisa Carrasco (2026). Covert spatial attention is uniform across cardinal meridians despite differential adaptation. Journal of Vision, 26(1):15. doi:10.1167/jov.26.1.15. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12859706/pdf/
Contour erasure describes the phenomenon that after brief flicker adaptation at the edge of an object, the object disappears and is replaced by the background, highlighting the importance of edges in perceiving a surface. The underlying mechanism remains unknown. The current study investigates the characteristics and functional properties of contour erasure, and its relationship with related phenomena such as perceptual filling-in, forward masking, and contrast adaptation. We used a homogeneous disk as a target, and circles that corresponded to the outline of the target disk as the adapter. In a two-alternative forced-choice (2AFC) paradigm, each trial began with a counterphase flickering adapter, followed by the target randomly presented in one of two locations. Participants indicated the target location with a button press. The target detection threshold elevation relative to the no-adaptation condition was used as an index of the adaptation effect. We manipulated two spatial properties (eccentricity and adapter size) plus three temporal properties (adapter flicker rate, adaptation duration, and interstimulus interval [ISI]). Results indicated that the adaptation effect increased with eccentricity, flicker rate (plateauing at 6 hertz [Hz]), and adaptation duration, but decreased with longer ISIs and for adapter sizes larger than the target. The target threshold first increased then decreased as the adapter size decreased from that of the target, indicating a size tuning slightly smaller than the target. Our results indicate that contour erasure shares some of the key features of other well-known perceptual phenomena such as perceptual filling-in and contrast adaptation.
Yih-Shiuan Lin, Chien-Chung Chen & Mark W Greenlee (2025). The spatial and temporal properties of the contour erasure effect and perceptual filling-in. Journal of Vision, 25(14):4. doi:10.1167/jov.25.14.4. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12704214/pdf/
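The adaptation index above, detection threshold elevation relative to the no-adaptation condition, can be sketched in a few lines. The numeric thresholds and the decibel convention below are hypothetical illustrations, not values or conventions taken from the paper.

```python
import math

def threshold_elevation(adapted: float, baseline: float) -> tuple[float, float]:
    """Threshold elevation of an adapted condition relative to a
    no-adaptation baseline: the raw ratio and its decibel equivalent.
    A ratio > 1 (elevation > 0 dB) indicates an adaptation effect."""
    ratio = adapted / baseline
    return ratio, 20.0 * math.log10(ratio)

# Hypothetical detection thresholds (Michelson contrast), for illustration only.
ratio, elevation_db = threshold_elevation(adapted=0.08, baseline=0.02)
```

Here a fourfold threshold increase corresponds to about 12 dB of elevation; plotting this index against ISI, eccentricity, or adapter size would trace out the tuning functions the abstract describes.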
Multisensory information can help resolve perceptual ambiguity in situations such as the alternating visual experience during binocular rivalry. Across four experiments, participants viewed dichoptically presented spiky and round rival targets while simultaneously touching spiky, neutral, or round shapes in three-dimensional (3D) printed form. The primary aim was to investigate the influence of visuotactile shape congruence in the curvature dimension. In addition, the roles of voluntary action and spatial colocalization on successful crossmodal integration were investigated. Voluntary action was tested between active touch (Experiments 1 and 2) and passive touch (Experiments 3 and 4) conditions. Visual stimulus type differed between rapid successions of 3D-rendered images (Experiments 1 and 3) and real-world video recordings (Experiments 2 and 4), with the latter involving bodily cues to promote visuotactile colocalization. In general, the results showed that tactile shape congruence can lead to relative dominance of the corresponding visual target, especially when visuotactile colocalization was encouraged with video recordings as visual targets. The results suggest beneficial effects of crossmodal shape congruence on disambiguation, which seems to be generally comparable between the two modes of active versus passive touch. Using 3D stimuli and including free voluntary action, the study provides novel and connecting insights into the naturalistic object processing behavior of humans.
Seyoon Song, Haeji Shin & Chai-Youn Kim (2025). Visuotactile object processing in binocular rivalry: The role of shape congruence, voluntary action, and spatial colocalization. Journal of Vision, 25(14):11. doi:10.1167/jov.25.14.11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12721434/pdf/
The ability to quickly and precisely follow another person's gaze reflects critical evolutionary mechanisms underlying social interactions, such as attention modulation and the prediction of others' future actions. Recent studies show that observers use another person's gaze direction and peripheral scene information to make anticipatory saccades toward the gaze goal. However, it remains unclear how these eye movements are influenced by complex features of natural scenes, such as a foveal gazer, multiple peripheral gaze goals, and the relative distance between gazer and goal. We presented dynamic stimuli (videos) of real-world scenes with or without a gazer shifting their head to gaze at other individuals (gaze goals). Participants were instructed to search for a specific target individual in the videos while their eye movements were recorded. We measured the accuracy of the first saccade in locating the gaze goal. First, we found that the absence of a foveal gazer significantly increased saccade error, but only when the goal was at least approximately 9 degrees of visual angle from the initial fixation. First saccade amplitude and onset latency were higher in the gazer-present condition. Second, when there were multiple potential gaze goals in the periphery, the first saccade was directed to the individual closer to the initial fixation (gazer) location. Finally, the presence of multiple peripheral gaze goals shortened saccade latencies and increased the frequency of anticipatory saccades made before the gazer completed their head movement. These findings extend our understanding of gaze following in complex, naturalistic scenes and inform theories of attention and real-world decision-making.
Srijita Karmakar & Miguel P Eckstein (2025). The psychophysics of dynamic gaze-following saccades during search. Journal of Vision, 25(14):14. doi:10.1167/jov.25.14.14. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12721441/pdf/
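The saccade-error measure above, the distance between the first-saccade endpoint and the gaze-goal location in degrees of visual angle, reduces to screen geometry. A minimal sketch; the pixel density and viewing distance are hypothetical setup parameters, not those of the study.

```python
import math

def pixels_to_dva(offset_px: float, px_per_cm: float, view_dist_cm: float) -> float:
    """Convert an on-screen distance in pixels to degrees of visual angle."""
    size_cm = offset_px / px_per_cm
    return math.degrees(2.0 * math.atan(size_cm / (2.0 * view_dist_cm)))

def saccade_error_dva(endpoint_px, goal_px, px_per_cm, view_dist_cm):
    """Euclidean distance between the first-saccade endpoint and the
    gaze-goal location, expressed in degrees of visual angle."""
    dist_px = math.hypot(endpoint_px[0] - goal_px[0], endpoint_px[1] - goal_px[1])
    return pixels_to_dva(dist_px, px_per_cm, view_dist_cm)

# Hypothetical setup: 40 px/cm screen viewed at 57 cm (1 cm subtends ~1 degree).
error = saccade_error_dva((860, 512), (1240, 512), px_per_cm=40.0, view_dist_cm=57.0)
```

With these assumed parameters, a 380-pixel offset corresponds to roughly 9.5 degrees, around the gazer-to-goal separations at which the abstract reports foveal gazer absence mattering.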
Ulises Orbe, Hinze Hogendoorn, Stefan Bode, Gereon R Fink, Ralph Weidner, Simone Vossel
Attentive and predictive mechanisms crucially shape perception, but the interplay between these fundamental processes remains poorly understood. Studies on interactions between attention and prediction have yielded discrepant results, potentially because of differences in task demands. The present study examined whether perceptual load (i.e., task difficulty) affects predictive processing in task-relevant and task-irrelevant hemifields. To this end, we developed a novel delayed match-to-reference task that orthogonally manipulated task-relevance, prediction, and perceptual load. We hypothesized that a low-load condition should facilitate the processing of prediction violations (oddball effects) in task-irrelevant space because of the availability of spare processing resources. We analyzed accuracy and response time (RT) data from 28 healthy young participants with separate repeated-measures analyses of variance. The results confirmed the effectiveness of the load manipulation: high perceptual load significantly increased RTs and decreased accuracy. Notably, the accuracy analysis yielded a significant three-way interaction between task-relevance, prediction, and load. Post-hoc tests revealed that load modulated the processing of prediction violations in the task-irrelevant hemifield. Importantly, the prediction violation, induced by a low-frequency and task-irrelevant feature (orientation), reduced accuracy in the low-load but not in the high-load condition. This finding suggests that predictive processing in task-irrelevant space is contingent on the availability of processing resources, with high perceptual load inhibiting the processing of unexpected events in task-irrelevant regions. The present study shows that load is a crucial factor in the interaction between task-relevance and prediction.
Ulises Orbe, Hinze Hogendoorn, Stefan Bode, Gereon R Fink, Ralph Weidner & Simone Vossel (2025). Load-dependent processing of prediction violations in task-irrelevant space. Journal of Vision, 25(14):6. doi:10.1167/jov.25.14.6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12716448/pdf/
Organizing visual input into coherent percepts requires dynamic grouping and segmentation mechanisms that operate across both spatial and temporal domains. Crowding occurs when nearby elements interfere with target perception, but specific flanker configurations can alleviate this effect through Gestalt-based grouping, a phenomenon known as uncrowding. Here, we examined the temporal dynamics underlying these spatial organization processes using a Vernier discrimination task. In Experiment 1, we varied stimulus duration and found that uncrowding emerged only after 160 ms, suggesting a time-consuming process. In Experiment 2, we manipulated the stimulus onset asynchrony (SOA) between the target and flankers. We found that presenting good-Gestalt flankers briefly before the target (as little as 32 ms) significantly boosted uncrowding, even in the absence of temporal overlap between the two stimuli. This effect was specific to conditions in which flankers preceded the target, ruling out pure temporal integration and masking accounts. These findings suggest that spatial segmentation can be dynamically facilitated when the temporal order of presentation allows grouping mechanisms to engage prior to target processing. Moreover, the observed time course indicates that segmentation is not purely feedforward, particularly for stimuli that are likely to recruit higher level visual areas, pointing instead to the involvement of recurrent or feedback processes.
Alessia Santoni, Luca Ronconi & Jason Samaha (2025). Temporal windows of perceptual organization: Evidence from crowding and uncrowding. Journal of Vision, 25(14):5. doi:10.1167/jov.25.14.5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12704212/pdf/
Lynn Schmittwilken, Anna L Haverkamp, Marianne Maertens
To interact with the world effectively, the human visual system must extract meaningful features from visual scenes. One key feature is edges: luminance or texture discontinuities in two-dimensional (2D) images that often correspond to object boundaries in three-dimensional scenes. Edge sensitivity has traditionally been studied with well-controlled stimuli and binary choice tasks, but it is unclear how well these insights transfer to real-world behavior. Recent studies have extended this approach using natural images but typically retained binary button presses. In this study, we extend the approach further and ask observers (N = 20) to trace edges in natural scenes, presented with or without 2D visual noise. To quantify edge detection performance, we use a signal detection theory-inspired approach. Participants' edge traces in the noise-free condition serve as an individualized "ground-truth" or signal, used to categorize edge traces from noise conditions into hits, false alarms, misses, and correct rejections. Observers produce remarkably consistent edge traces across conditions. Noise interference patterns mirror results from traditional edge sensitivity studies, especially for edges with spectral properties similar to natural scenes. This suggests that insights from controlled paradigms can transfer to naturalistic ones. We also examined edge traces to identify which image features drive edge perception, using interindividual variability as a pointer to relevant features.
Lynn Schmittwilken, Anna L Haverkamp & Marianne Maertens (2025). Line drawings as a tool to probe edge sensitivity in natural scenes. Journal of Vision, 25(14):22. doi:10.1167/jov.25.14.22. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12743498/pdf/
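Once traces are scored as hits, false alarms, misses, and correct rejections, a standard sensitivity index can summarize performance. A sketch assuming the usual Gaussian signal detection model; the log-linear correction for extreme rates is our assumption, not the paper's stated procedure.

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity index d' from response counts.
    A log-linear correction (add 0.5 to each count) keeps the hit and
    false-alarm rates away from 0 and 1, where z-scores diverge."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

An observer whose traces under noise recover most ground-truth edges while adding few spurious ones yields a large positive d'; chance-level tracing yields d' near zero.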
Presaccadic attention enhances visual perception at the upcoming saccade target location. While this enhancement is often described as obligatory and temporally stereotyped, recent studies indicate that its strength varies depending on saccade direction. Here, we investigated whether the time course of presaccadic attention also differs across saccade directions. Participants performed a two-alternative forced-choice orientation discrimination task during saccade preparation. Tilt angles were individually titrated in a fixation baseline condition to equate task difficulty across the upper and lower vertical meridians. Sensitivity was then assessed at different time points relative to saccade onset and cue onset, allowing us to characterize the temporal dynamics of attentional enhancement. We found that presaccadic attention built up faster and reached higher levels preceding downward than upward saccades. Linear model fits revealed significant slope differences but no differences in intercepts, suggesting that the observed asymmetries reflect differences in attentional deployment during saccade preparation rather than preexisting differences in sensitivity. Saccade parameters did not account for these asymmetries. Our findings demonstrate that the temporal dynamics of presaccadic attention vary with saccade direction, which may be a potential mechanism underlying previously observed differences in presaccadic benefit at the upper and lower vertical meridians. This temporal flexibility challenges the view of a uniform presaccadic attention mechanism and suggests that presaccadic attentional deployment is shaped by movement goals. Our results provide new insights into how the visual and oculomotor systems coordinate under direction-specific demands.
Yuna Kwak, Nina M Hanning & Marisa Carrasco (2025). Saccade direction modulates the temporal dynamics of presaccadic attention. Journal of Vision, 25(14):2. doi:10.1167/jov.25.14.2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12701607/pdf/
Gabriela Mueller de Melo, Isabella de Oliveira Pitorri, Gustavo Rohenkohl
Lateral interactions are pervasive in early visual processing, contributing directly to processes such as object grouping and segregation. This study examines whether saccade preparation - known to affect visual perception - modulates lateral interactions. In a psychophysical task, participants were instructed to detect a Gabor target flanked by two adjacent Gabors, while they either prepared a saccade to the target or maintained central fixation. Flanker gratings could be iso- or orthogonally oriented to the target and were positioned at three different distances (4λ, 8λ, and 16λ). Contrast thresholds for target detection were estimated in each condition using a 3-down/1-up staircase procedure. The results showed that in both presaccadic and fixation conditions, the target was suppressed at the shortest flanker distance (4λ), as revealed by markedly higher thresholds in iso-oriented compared to orthogonal flanker configurations. Lateral interaction effects were completely abolished at the largest separation (16λ). Interestingly, at the intermediate flanker distance (8λ), target suppression seemed to increase during the presaccadic period, whereas no such effect was observed during fixation. This result suggests that saccade preparation can modulate lateral interactions, promoting suppressive effects over larger distances. These findings are consistent with the visual remapping phenomenon observed before saccade execution, especially the convergent remapping of receptive fields in oculomotor and visual areas. Finally, this presaccadic expansion of inhibitory lateral interactions could assist target selection by suppressing homogeneous peripheral signals - such as iso-oriented collinear patterns - while prioritizing the processing of more salient visual information.
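The 3-down/1-up staircase mentioned above is a standard adaptive procedure: contrast is lowered after three consecutive correct responses and raised after any error, so the track oscillates around threshold. A minimal sketch, with all parameters and the toy observer assumed for illustration rather than taken from the study:

```python
# Minimal 3-down/1-up staircase sketch (assumed parameters, not the study's).
def run_staircase(is_correct, start=0.5, step=0.05, n_trials=60, n_avg=6):
    """Lower contrast after 3 consecutive correct trials, raise it after any
    error; estimate threshold as the mean of the last `n_avg` reversals."""
    contrast = start
    streak = 0       # consecutive correct responses
    last_dir = 0     # -1 = last change was down, +1 = up, 0 = none yet
    reversals = []
    for _ in range(n_trials):
        if is_correct(contrast):
            streak += 1
            if streak == 3:
                streak = 0
                if last_dir == +1:       # direction flipped: record a reversal
                    reversals.append(contrast)
                last_dir = -1
                contrast -= step
        else:
            streak = 0
            if last_dir == -1:
                reversals.append(contrast)
            last_dir = +1
            contrast += step
    tail = reversals[-n_avg:]
    return sum(tail) / len(tail)

# Deterministic toy observer: correct whenever contrast reaches its threshold.
# With a stochastic observer, this rule instead converges on the ~79.4%-correct
# point of the psychometric function (0.5 ** (1/3) ≈ 0.794).
estimate = run_staircase(lambda c: c >= 0.32)
```

Here the track steps down to the observer's threshold and then oscillates one step around it, so averaging the final reversals recovers a value close to 0.32.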
"Presaccadic modulation of lateral interactions." Journal of Vision, 25(14):7, 2025-12-01. doi:10.1167/jov.25.14.7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12707330/pdf/
During self-movement, the visual system uses optic flow to identify scene-relative object motion and estimate the direction of self-movement (heading). Although both processes rely on optic flow, their relationship and the conditions under which independent object motion biases heading estimation remain unclear. The causal inference model predicts that misjudging object motion leads to its integration into heading estimation, causing errors in heading estimation, whereas correct judgments reduce these errors. However, most studies have examined these processes independently. Here we used a dual-task paradigm to investigate how visual cues affect the judgment of scene-relative object motion direction and concurrent heading estimation. Participants viewed a 90° × 90° display simulating self-movement through a three-dimensional cloud with a laterally moving object positioned at 8° or 16° from the simulated heading direction. They judged both the object's motion direction in the scene and their heading direction. Results show that increasing an object's speed and reducing its positional offset from the simulated heading direction improved the accuracy of scene-relative object motion direction judgment, but did not consistently improve the accuracy of heading estimation. Surprisingly, visual cues such as binocular disparity and object density improved scene-relative object motion direction judgment but reduced heading estimation accuracy. Furthermore, heading errors mostly peaked at object speeds where observers could reliably judge scene-relative object motion direction, challenging the predictions of the causal inference model. These findings provide strong evidence that scene-relative object motion judgment and heading estimation operate independently and question the generality of the causal inference model in explaining heading biases caused by independent object motion.
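The causal inference prediction being tested can be made concrete with a toy model-averaging scheme in the spirit of Körding-style causal inference: when the discrepancy between the flow-based heading signal and the object's motion is small, the observer infers a likely common cause and fuses the signals (biasing heading); when it is large, heading relies on flow alone. This is a simplified sketch with assumed noise parameters and an assumed width for the independent-motion likelihood, not the paper's implementation.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def heading_with_causal_inference(m_flow, m_obj, s_flow, s_obj, prior_common=0.5):
    """Model-averaged heading estimate (toy sketch).
    m_flow: heading measurement from background flow; m_obj: heading implied by
    the object's motion. If both are judged to share one cause (self-motion
    only), they are fused by reliability; otherwise heading relies on flow alone."""
    d = m_obj - m_flow
    s_d = math.hypot(s_flow, s_obj)
    like_common = gauss(d, 0.0, s_d)  # common cause: discrepancy is noise only
    # Independent cause: object may move over a broad range (assumed 5x wider)
    like_indep = gauss(d, 0.0, math.hypot(s_d, 5.0 * s_d))
    p_common = (prior_common * like_common /
                (prior_common * like_common + (1.0 - prior_common) * like_indep))
    # Reliability-weighted fusion under the common-cause hypothesis
    w_obj = s_obj ** -2 / (s_flow ** -2 + s_obj ** -2)
    fused = (1.0 - w_obj) * m_flow + w_obj * m_obj
    return p_common * fused + (1.0 - p_common) * m_flow

# A clearly moving object (large discrepancy) should barely bias heading;
# a near-ambiguous one should pull the estimate toward the fused value.
est_far = heading_with_causal_inference(0.0, 20.0, 2.0, 2.0)
est_near = heading_with_causal_inference(0.0, 1.0, 2.0, 2.0)
```

Under this scheme, heading errors should shrink whenever object motion is reliably judged as independent; the abstract's finding that errors peaked where object motion was judged reliably is what challenges this class of model.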
"Effects of visual cues on scene-relative object motion judgments and concurrent heading estimation from optic flow." Yinghua Yang, Zhoukuidong Shan, Li Li. Journal of Vision, 25(14):20, 2025-12-01. doi:10.1167/jov.25.14.20. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12742601/pdf/