The CRIP effect: How a pattern in central vision interferes with perception of a pattern in the periphery.
Carolina Maria Oletto, Giulio Contemori, Esma Dilara Yavuz, Luca Battaglini, Michael Herzog, Marco Bertamini
Our percept of the world is the result of interactions between central and peripheral vision. These can be facilitatory, because central vision is informative about what is in the periphery, or detrimental, as when shape elements are pooled. We introduce a novel phenomenon in which elements in the central region impair perception in the periphery (central region interference with periphery [CRIP]). We showed participants a square grid containing small lines (vertical or diagonal) or crosses in the central region and diagonal lines in the periphery. The regions were divided by a gap that varied in size and position. Participants reported the orientation of the diagonal lines in the periphery (/ or \). The central pattern caused interference and hindered discrimination. For a fixed eccentricity of the peripheral elements, the smaller the gap, the larger the impairment. The effect was present only when the central and peripheral lines shared an orientation class (i.e., both diagonal), suggesting that similarity plays a role. Surprisingly, performance was worse when central and peripheral lines had exactly the same orientation. We conclude that people do not rely on extrapolation when perceiving elements in the periphery and that iso-orientation may cause greater interference.
{"title":"The CRIP effect: How a pattern in central vision interferes with perception of a pattern in the periphery.","authors":"Carolina Maria Oletto, Giulio Contemori, Esma Dilara Yavuz, Luca Battaglini, Michael Herzog, Marco Bertamini","doi":"10.1167/jov.25.2.10","DOIUrl":"10.1167/jov.25.2.10","url":null,"abstract":"<p><p>Our percept of the world is the result of interactions between central and peripheral vision. They can be facilitatory, because central vision is informative about what is in the periphery, or detrimental, such as when shape elements are pooled. We introduce a novel phenomenon, in which elements in the central region impair perception in the periphery (central region interference with periphery [CRIP]). We showed participants a squared grid containing small lines (vertical or diagonal) or crosses in the central region and diagonal lines in the periphery. The regions were divided by a gap that varied in size and position. Participants reported the orientation of the diagonal lines in the periphery (/ or ). The central pattern caused interference and hindered discrimination. For a fixed eccentricity of the peripheral elements, the smaller the gap the larger the impairment. The effect was only present when the central and peripheral lines had a shared orientation (i.e., diagonal), suggesting that similarity plays a role. Surprisingly, performance was worse if central and peripheral lines had the same orientation. We conclude that people do not rely on extrapolation when perceiving elements in the periphery and that iso-orientation may cause greater interference.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 2","pages":"10"},"PeriodicalIF":2.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143505804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Target identification under high levels of amplitude, size, orientation and background uncertainty.
Can Oluk, Wilson S Geisler
Many natural tasks require the visual system to classify image patches accurately into target categories, including the category of no target. Natural target categories often involve high levels of within-category variability (uncertainty), making it challenging to uncover the underlying computational mechanisms. Here, we describe these tasks as identification from a set of exhaustive, mutually exclusive target categories, each partitioned into mutually exclusive subcategories. We derive the optimal decision rule and present a computational method to simulate performance for moderately large and complex tasks. We focus on the detection of an additive wavelet target in white noise with five dimensions of stimulus uncertainty: target amplitude, orientation, scale, background contrast, and spatial pattern. We compare the performance of the ideal observer with various heuristic observers. We find that a properly normalized heuristic MAX observer (SNN-MAX) approximates optimal performance. We also find that a convolutional neural network trained on this task approaches but does not reach optimal performance, even with extensive training. We measured human performance on a task with three of these dimensions of uncertainty (orientation, scale, and background pattern). Results show that the pattern of hits and correct rejections for the ideal and SNN-MAX observers (but not a simple MAX observer) aligns with the data. Additionally, we measured performance without scale and orientation uncertainty and found that the effect of uncertainty on performance was less than predicted by any model. This unexpectedly small effect can largely be explained by incorporating biologically plausible levels of intrinsic position uncertainty into the models.
{"title":"Target identification under high levels of amplitude, size, orientation and background uncertainty.","authors":"Can Oluk, Wilson S Geisler","doi":"10.1167/jov.25.2.3","DOIUrl":"10.1167/jov.25.2.3","url":null,"abstract":"<p><p>Many natural tasks require the visual system to classify image patches accurately into target categories, including the category of no target. Natural target categories often involve high levels of within-category variability (uncertainty), making it challenging to uncover the underlying computational mechanisms. Here, we describe these tasks as identification from a set of exhaustive, mutually exclusive target categories, each partitioned into mutually exclusive subcategories. We derive the optimal decision rule and present a computational method to simulate performance for moderately large and complex tasks. We focus on the detection of an additive wavelet target in white noise with five dimensions of stimulus uncertainty: target amplitude, orientation, scale, background contrast, and spatial pattern. We compare the performance of the ideal observer with various heuristic observers. We find that a properly normalized heuristic MAX observer (SNN-MAX) approximates optimal performance. We also find that a convolutional neural network trained on this task approaches but does not reach optimal performance, even with extensive training. We measured human performance on a task with three of these dimensions of uncertainty (orientation, scale, and background pattern). Results show that the pattern of hits and correct rejections for the ideal and SNN-MAX observers (but not a simple MAX observer) aligns with the data. Additionally, we measured performance without scale and orientation uncertainty and found that the effect of uncertainty on performance was less than predicted by any model. This unexpectedly small effect can largely be explained by incorporating biologically plausible levels of intrinsic position uncertainty into the models.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 2","pages":"3"},"PeriodicalIF":2.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11798335/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143081757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pupil dilation underlies the peripheral drift illusion.
George Mather, Patrick Cavanagh
A well-known motion illusion can be seen in stationary patterns that contain repeated asymmetrical luminance gradients, which create a sawtooth-like spatial luminance profile. Such patterns can appear to move episodically, triggered by saccadic eye movements and blinks. The illusion has been known since 1979, but its origin remains unclear. Our hypothesis is that episodes of illusory movement are caused by transitory changes in the retinal luminance of the pattern that accompany reflexive changes in pupil diameter after eye movements, blinks, and pattern onsets. Changes in retinal luminance are already known to cause illusory impressions of motion in patterns that contain asymmetrical luminance gradients. To test the hypothesis, participants viewed static illusion patterns and made controlled blinks or saccades, after which they pressed a button to indicate cessation of any illusion of movement. We measured changes in pupil diameter up to the point at which the illusion ceased. Results showed that both the amplitude and the duration of pupil dilation correlated well with illusion duration, consistent with a role for retinal luminance change in generating the illusion. This new explanation can account for the importance of eye movements and blinks, and for the effects of age and artificial pupils on the strength of the illusion. A simulation of the illusion in which pattern luminance is modulated with the same time course as that caused by blinks and saccades creates a marked impression of illusory motion, confirming the causal role of temporal luminance change in generating the illusion.
{"title":"Pupil dilation underlies the peripheral drift illusion.","authors":"George Mather, Patrick Cavanagh","doi":"10.1167/jov.25.2.13","DOIUrl":"10.1167/jov.25.2.13","url":null,"abstract":"<p><p>A well-known motion illusion can be seen in stationary patterns that contain repeated asymmetrical luminance gradients, which create a sawtooth-like spatial luminance profile. Such patterns can appear to move episodically, triggered by saccadic eye movements and blinks. The illusion has been known since 1979, but its origin remains unclear. Our hypothesis is that episodes of the illusory movement are caused by transitory changes in the retinal luminance of the pattern that accompany reflexive changes in pupil diameter after eye movements, blinks, and pattern onsets. Changes in retinal luminance are already known to cause illusory impressions of motion in patterns that contain asymmetrical luminance gradients. To test the hypothesis, participants viewed static illusion patterns and made controlled blinks or saccades, after which they pressed a button to indicate cessation of any illusion of movement. We measured changes in pupil diameter up to the point at which the illusion ceased. Results showed that both the amplitude and the duration of pupil dilation correlated well with illusion duration, consistent with the role of retinal luminance in generating in the illusions. This new explanation can account for the importance of eye movements and blinks, and for the effects of age and artificial pupils on the strength of the illusion. A simulation of the illusion in which pattern luminance is modulated with the same time-course as that caused by blinks and saccades creates a marked impression of illusory motion, confirming the causal role of temporal luminance change in generating the illusion.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 2","pages":"13"},"PeriodicalIF":2.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A model of audio-visual motion integration during active self-movement.
Maria Gallagher, Joshua D Haynes, John F Culling, Tom C A Freeman
Despite good evidence for optimal audio-visual integration in stationary observers, few studies have considered the impact of self-movement on this process. When the head and/or eyes move, the integration of vision and hearing is complicated, because the sensory measurements begin in different coordinate frames. To successfully integrate these signals, they must first be transformed into the same coordinate frame. We propose that audio and visual motion cues are separately transformed using self-movement signals before being integrated as body-centered cues to audio-visual motion. We tested this hypothesis using a psychophysical audio-visual integration task in which participants made left/right judgments of audio, visual, or audio-visual targets during self-generated yaw head rotations. Estimates of precision and bias from the audio and visual conditions were used to predict performance in the audio-visual conditions. We found that audio-visual performance was predicted well by models that assumed the transformation of cues into common coordinates, but could not be explained by a model without coordinate transformation before integration. We also found that precision, in particular, was better predicted by a model that accounted for shared noise arising from the signals encoding head movement. Taken together, our findings suggest that motion perception in active observers is based on the integration of partially correlated body-centered signals.
{"title":"A model of audio-visual motion integration during active self-movement.","authors":"Maria Gallagher, Joshua D Haynes, John F Culling, Tom C A Freeman","doi":"10.1167/jov.25.2.8","DOIUrl":"10.1167/jov.25.2.8","url":null,"abstract":"<p><p>Despite good evidence for optimal audio-visual integration in stationary observers, few studies have considered the impact of self-movement on this process. When the head and/or eyes move, the integration of vision and hearing is complicated, as the sensory measurements begin in different coordinate frames. To successfully integrate these signals, they must first be transformed into the same coordinate frame. We propose that audio and visual motion cues are separately transformed using self-movement signals, before being integrated as body-centered cues to audio-visual motion. We tested this hypothesis using a psychophysical audio-visual integration task in which participants made left/right judgments of audio, visual, or audio-visual targets during self-generated yaw head rotations. Estimates of precision and bias from the audio and visual conditions were used to predict performance in the audio-visual conditions. We found that audio-visual performance was predicted well by models that suggested the transformation of cues into common coordinates but could not be explained by a model that did not rely on coordinate transformation before integration. We also found that precision specifically was better predicted by a model that accounted for shared noise arising from signals encoding head movement. Taken together, our findings suggest that motion perception in active observers is based on the integration of partially correlated body-centered signals.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 2","pages":"8"},"PeriodicalIF":2.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11841688/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A robotics-inspired scanpath model reveals the importance of uncertainty and semantic object cues for gaze guidance in dynamic scenes.
Vito Mengers, Nicolas Roth, Oliver Brock, Klaus Obermayer, Martin Rolfs
The objects we perceive guide our eye movements when observing real-world dynamic scenes. Yet gaze shifts and selective attention are critical for perceiving details and refining object boundaries. Object segmentation and gaze behavior are, however, typically treated as two independent processes. Here, we present a computational model that simulates these processes in an interconnected manner and allows for hypothesis-driven investigations of distinct attentional mechanisms. Drawing on an information processing pattern from robotics, we use a Bayesian filter to recursively segment the scene, which also provides an uncertainty estimate for the object boundaries that we use to guide active scene exploration. We demonstrate that this model closely resembles observers' free viewing behavior on a dataset of dynamic real-world scenes, as measured by scanpath statistics: the foveation duration and saccade amplitude distributions used for parameter fitting, as well as higher-level statistics not used for fitting. The latter include the balance between object detections, inspections, and returns, and a delay of return saccades that emerges without any explicit implementation of temporal inhibition of return. Extensive simulations and ablation studies show that uncertainty promotes balanced exploration and that semantic object cues are crucial to forming the perceptual units used in object-based attention. Moreover, we show how our model's modular design allows for extensions, such as incorporating saccadic momentum or presaccadic attention, to further align its output with human scanpaths.
{"title":"A robotics-inspired scanpath model reveals the importance of uncertainty and semantic object cues for gaze guidance in dynamic scenes.","authors":"Vito Mengers, Nicolas Roth, Oliver Brock, Klaus Obermayer, Martin Rolfs","doi":"10.1167/jov.25.2.6","DOIUrl":"10.1167/jov.25.2.6","url":null,"abstract":"<p><p>The objects we perceive guide our eye movements when observing real-world dynamic scenes. Yet, gaze shifts and selective attention are critical for perceiving details and refining object boundaries. Object segmentation and gaze behavior are, however, typically treated as two independent processes. Here, we present a computational model that simulates these processes in an interconnected manner and allows for hypothesis-driven investigations of distinct attentional mechanisms. Drawing on an information processing pattern from robotics, we use a Bayesian filter to recursively segment the scene, which also provides an uncertainty estimate for the object boundaries that we use to guide active scene exploration. We demonstrate that this model closely resembles observers' free viewing behavior on a dataset of dynamic real-world scenes, measured by scanpath statistics, including foveation duration and saccade amplitude distributions used for parameter fitting and higher-level statistics not used for fitting. These include how object detections, inspections, and returns are balanced and a delay of returning saccades without an explicit implementation of such temporal inhibition of return. Extensive simulations and ablation studies show that uncertainty promotes balanced exploration and that semantic object cues are crucial to forming the perceptual units used in object-based attention. Moreover, we show how our model's modular design allows for extensions, such as incorporating saccadic momentum or presaccadic attention, to further align its output with human scanpaths.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 2","pages":"6"},"PeriodicalIF":2.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11812614/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143383836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal dynamics of human color processing measured using a continuous tracking task.
Michael A Barnett, Benjamin M Chin, Geoffrey K Aguirre, Johannes Burge, David H Brainard
We characterized the temporal dynamics of color processing using a continuous tracking paradigm by estimating subjects' temporal lag in tracking chromatic Gabor targets. To estimate the lag, we computed the cross-correlation between the velocities of the Gabor target's random walk and the velocities of the subject's tracking. Lag was taken as the time of the peak of the resulting cross-correlogram. We measured how the lag changes as a function of chromatic direction and contrast for stimuli in the LS cone contrast plane. In the same set of subjects, we also measured detection thresholds for stimuli with matched spatial, temporal, and chromatic properties. We created a model of tracking and detection performance to test whether a common representation of chromatic contrast accounts for both measures. The model summarizes the effect of chromatic contrast over different chromatic directions through elliptical isoperformance contours, the shapes of which are contrast independent. The fitted elliptical isoperformance contours have essentially the same orientation in the detection and tracking tasks. For the tracking task, however, there is a striking reduction in relative sensitivity to signals originating in the S cones.
{"title":"Temporal dynamics of human color processing measured using a continuous tracking task.","authors":"Michael A Barnett, Benjamin M Chin, Geoffrey K Aguirre, Johannes Burge, David H Brainard","doi":"10.1167/jov.25.2.12","DOIUrl":"10.1167/jov.25.2.12","url":null,"abstract":"<p><p>We characterized the temporal dynamics of color processing using a continuous tracking paradigm by estimating subjects' temporal lag in tracking chromatic Gabor targets. To estimate the lag, we computed the cross-correlation between the velocities of the Gabor target's random walk and the velocities of the subject's tracking. Lag was taken as the time of the peak of the resulting cross-correlogram. We measured how the lag changes as a function of chromatic direction and contrast for stimuli in the LS cone contrast plane. In the same set of subjects, we also measured detection thresholds for stimuli with matched spatial, temporal, and chromatic properties. We created a model of tracking and detection performance to test whether a common representation of chromatic contrast accounts for both measures. The model summarizes the effect of chromatic contrast over different chromatic directions through elliptical isoperformance contours, the shapes of which are contrast independent. The fitted elliptical isoperformance contours have essentially the same orientation in the detection and tracking tasks. For the tracking task, however, there is a striking reduction in relative sensitivity to signals originating in the S cones.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 2","pages":"12"},"PeriodicalIF":2.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anticipatory smooth pursuit eye movements scale with the probability of visual motion: The role of target speed and acceleration.
Vanessa Carneiro Morita, David Souto, Guillaume S Masson, Anna Montagnini
Sensory-motor systems can extract statistical regularities in dynamic uncertain environments, enabling quicker responses and anticipatory behavior for expected events. Anticipatory smooth pursuit eye movements (aSP) have been observed in primates when the temporal and kinematic properties of a forthcoming visual moving target are fully or partially predictable. To investigate the nature of the internal model of target kinematics underlying aSP, we tested the effect of varying the target kinematics and their predictability. Participants tracked a small visual target moving in a constant direction with either constant, accelerating, or decelerating speed. Across experimental blocks, we manipulated the probability of each kinematic condition, varying either speed or acceleration across trials: a block contained either a single kinematic condition (providing certainty) or a mixture of conditions, each with a fixed probability. We show that aSP is robustly modulated by target kinematics. With constant-velocity targets, aSP velocity scales linearly with target velocity in blocked sessions and matches the probability-weighted average in the mixture sessions. Predictable target acceleration also influences aSP, suggesting that the internal model of motion that drives anticipation carries some information about the changing target kinematics beyond the initial target speed. However, there is large variability across participants in the precision and consistency with which this information is taken into account to control anticipatory behavior.
{"title":"Anticipatory smooth pursuit eye movements scale with the probability of visual motion: The role of target speed and acceleration.","authors":"Vanessa Carneiro Morita, David Souto, Guillaume S Masson, Anna Montagnini","doi":"10.1167/jov.25.1.2","DOIUrl":"10.1167/jov.25.1.2","url":null,"abstract":"<p><p>Sensory-motor systems can extract statistical regularities in dynamic uncertain environments, enabling quicker responses and anticipatory behavior for expected events. Anticipatory smooth pursuit eye movements (aSP) have been observed in primates when the temporal and kinematic properties of a forthcoming visual moving target are fully or partially predictable. To investigate the nature of the internal model of target kinematics underlying aSP, we tested the effect of varying the target kinematics and its predictability. Participants tracked a small visual target in a constant direction with either constant, accelerating, or decelerating speed. Across experimental blocks, we manipulated the probability of each kinematic condition varying either speed or acceleration across trials; with either one kinematic condition (providing certainty) or with a mixture of conditions with a fixed probability within a block. We show that aSP is robustly modulated by target kinematics. With constant-velocity targets, aSP velocity scales linearly with target velocity in blocked sessions, and matches the probability-weighted average in the mixture sessions. Predictable target acceleration does also have an influence on aSP, suggesting that the internal model of motion that drives anticipation contains some information about the changing target kinematics, beyond the initial target speed. However, there is a large variability across participants in the precision and consistency with which this information is taken into account to control anticipatory behavior.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"2"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pupil responds spontaneously to visuospatial regularity.
Zhiming Kong, Chen Chen, Jianrong Jia
Beyond the light reflex, the pupil responds to various high-level cognitive processes. Multiple statistical regularities of stimuli have been found to modulate the pupillary response. However, most studies have used auditory or visual temporal sequences as stimuli, and it is unknown whether pupil size is modulated by statistical regularity in the spatial arrangement of stimuli. In three experiments, we created stimuli that were perceived as regular or irregular while matched in physical regularity, to investigate the effect of spatial regularity on pupillary responses during passive viewing. Experiments using orientation (Experiments 1 and 2) and size (Experiment 3) as stimulus features consistently showed that perceived irregular stimuli elicited more pupil constriction than regular stimuli. Furthermore, this effect was independent of the luminance of the stimuli. In conclusion, our study reveals that the pupil responds spontaneously to perceived visuospatial regularity, extending the influence of stimulus regularity on the pupillary response into the visuospatial domain.
{"title":"Pupil responds spontaneously to visuospatial regularity.","authors":"Zhiming Kong, Chen Chen, Jianrong Jia","doi":"10.1167/jov.25.1.14","DOIUrl":"10.1167/jov.25.1.14","url":null,"abstract":"<p><p>Beyond the light reflex, the pupil responds to various high-level cognitive processes. Multiple statistical regularities of stimuli have been found to modulate the pupillary response. However, most studies have used auditory or visual temporal sequences as stimuli, and it is unknown whether the pupil size is modulated by statistical regularity in the spatial arrangement of stimuli. In three experiments, we created perceived regular and irregular stimuli, matching physical regularity, to investigate the effect of spatial regularity on pupillary responses during passive viewing. Experiments using orientation (Experiments 1 and 2) and size (Experiment 3) as stimuli consistently showed that perceived irregular stimuli elicited more pupil constriction than regular stimuli. Furthermore, this effect was independent of the luminance of the stimuli. In conclusion, our study revealed that the pupil responds spontaneously to perceived visuospatial regularity, extending the stimulus regularity that influences the pupillary response into the visuospatial domain.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"14"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11756609/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of sign language learning on temporal resolution of visual attention.
Serpil Karabüklü, Sandra Wood, Chuck Bradley, Ronnie B Wilbur, Evie A Malaia
The visual environment of sign language users is markedly distinct from that of non-signers in its spatiotemporal parameters. Although the importance of temporal and spectral resolution in the auditory modality for language development is well established, the spectrotemporal parameters of visual attention necessary for sign language comprehension remain less understood. This study investigates visual temporal resolution in learners of American Sign Language (ASL) at various stages of acquisition to determine how experience with sign language affects perceptual sampling. Using a flicker paradigm, we assessed the accuracy of identifying visual objects flickering out of phase at frequencies up to 60 Hz. Our findings reveal that third-semester ASL learners show increased accuracy in detecting high-frequency flicker, indicating enhanced temporal resolution. Interestingly, as learners achieve higher proficiency in ASL, their perceptual sampling reverts to typical levels, likely because of a shift toward predictive processing mechanisms in sign language comprehension. These results suggest that the temporal resolution of visual attention is malleable and can be influenced by the process of learning a visual language.
{"title":"Effect of sign language learning on temporal resolution of visual attention.","authors":"Serpil Karabüklü, Sandra Wood, Chuck Bradley, Ronnie B Wilbur, Evie A Malaia","doi":"10.1167/jov.25.1.3","DOIUrl":"10.1167/jov.25.1.3","url":null,"abstract":"<p><p>The visual environment of sign language users is markedly distinct in its spatiotemporal parameters compared to that of non-signers. Although the importance of temporal and spectral resolution in the auditory modality for language development is well established, the spectrotemporal parameters of visual attention necessary for sign language comprehension remain less understood. This study investigates visual temporal resolution in learners of American Sign Language (ASL) at various stages of acquisition to determine how experience with sign language affects perceptual sampling. Using a flicker paradigm, we assessed the accuracy of identifying out-of-phase visual flicker objects at frequencies up to 60 Hz. Our findings reveal that third-semester ASL learners show increased accuracy in detecting high-frequency flicker, indicating enhanced temporal resolution. Interestingly, as learners achieve higher proficiency in ASL, their perceptual sampling reverts to typical levels, likely because of a shift toward predictive processing mechanisms in sign language comprehension. These results suggest that the temporal resolution of visual attention is malleable and can be influenced by the process of learning a visual language.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"3"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11706239/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Target interception in virtual reality is better for natural versus unnatural trajectory shapes and orientations.
Sofia Varon, Karsten Babin, Miriam Spering, Jody C Culham
Human performance in perceptual and visuomotor tasks is enhanced when stimulus motion follows the laws of gravitational physics, including acceleration consistent with Earth's gravity, g. Here we used a manual interception task in virtual reality to investigate the effects of trajectory shape and orientation on interception timing and accuracy. Participants punched to intercept a ball moving along one of four trajectories that varied in shape (parabola or tent) and orientation (upright or inverted). We also varied the location of visual fixation such that trajectories fell entirely within the lower or upper visual field. Reaction times were faster for more natural shapes and orientations, regardless of visual field. Overall accuracy was poorer and movement time was longer for the inverted tent condition than the other three conditions, perhaps because it was imperfectly reminiscent of a bouncing ball. A detailed analysis of spatial errors revealed that interception endpoints were more likely to fall along the path of the final trajectory in upright vs. inverted conditions, suggesting stronger expectations regarding the final trajectory direction for these conditions. Taken together, these results suggest that the naturalness of the shape and orientation of a trajectory contributes to performance in a virtual interception task.
{"title":"Target interception in virtual reality is better for natural versus unnatural trajectory shapes and orientations.","authors":"Sofia Varon, Karsten Babin, Miriam Spering, Jody C Culham","doi":"10.1167/jov.25.1.11","DOIUrl":"10.1167/jov.25.1.11","url":null,"abstract":"<p><p>Human performance in perceptual and visuomotor tasks is enhanced when stimulus motion follows the laws of gravitational physics, including acceleration consistent with Earth's gravity, g. Here we used a manual interception task in virtual reality to investigate the effects of trajectory shape and orientation on interception timing and accuracy. Participants punched to intercept a ball moving along one of four trajectories that varied in shape (parabola or tent) and orientation (upright or inverted). We also varied the location of visual fixation such that trajectories fell entirely within the lower or upper visual field. Reaction times were faster for more natural shapes and orientations, regardless of visual field. Overall accuracy was poorer and movement time was longer for the inverted tent condition than the other three conditions, perhaps because it was imperfectly reminiscent of a bouncing ball. A detailed analysis of spatial errors revealed that interception endpoints were more likely to fall along the path of the final trajectory in upright vs. inverted conditions, suggesting stronger expectations regarding the final trajectory direction for these conditions. Taken together, these results suggest that the naturalness of the shape and orientation of a trajectory contributes to performance in a virtual interception task.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"11"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11725989/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}