
Latest publications from Journal of Vision

The CRIP effect: How a pattern in central vision interferes with perception of a pattern in the periphery.
IF 2.0 | CAS Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2025-02-03 | DOI: 10.1167/jov.25.2.10
Carolina Maria Oletto, Giulio Contemori, Esma Dilara Yavuz, Luca Battaglini, Michael Herzog, Marco Bertamini

Our percept of the world is the result of interactions between central and peripheral vision. They can be facilitatory, because central vision is informative about what is in the periphery, or detrimental, such as when shape elements are pooled. We introduce a novel phenomenon, in which elements in the central region impair perception in the periphery (central region interference with periphery [CRIP]). We showed participants a square grid containing small lines (vertical or diagonal) or crosses in the central region and diagonal lines in the periphery. The regions were divided by a gap that varied in size and position. Participants reported the orientation of the diagonal lines in the periphery (/ or \). The central pattern caused interference and hindered discrimination. For a fixed eccentricity of the peripheral elements, the smaller the gap the larger the impairment. The effect was only present when the central and peripheral lines had a shared orientation (i.e., diagonal), suggesting that similarity plays a role. Surprisingly, performance was worse if central and peripheral lines had the same orientation. We conclude that people do not rely on extrapolation when perceiving elements in the periphery and that iso-orientation may cause greater interference.

Citations: 0
Target identification under high levels of amplitude, size, orientation and background uncertainty.
IF 2.0 | CAS Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2025-02-03 | DOI: 10.1167/jov.25.2.3
Can Oluk, Wilson S Geisler

Many natural tasks require the visual system to classify image patches accurately into target categories, including the category of no target. Natural target categories often involve high levels of within-category variability (uncertainty), making it challenging to uncover the underlying computational mechanisms. Here, we describe these tasks as identification from a set of exhaustive, mutually exclusive target categories, each partitioned into mutually exclusive subcategories. We derive the optimal decision rule and present a computational method to simulate performance for moderately large and complex tasks. We focus on the detection of an additive wavelet target in white noise with five dimensions of stimulus uncertainty: target amplitude, orientation, scale, background contrast, and spatial pattern. We compare the performance of the ideal observer with various heuristic observers. We find that a properly normalized heuristic MAX observer (SNN-MAX) approximates optimal performance. We also find that a convolutional neural network trained on this task approaches but does not reach optimal performance, even with extensive training. We measured human performance on a task with three of these dimensions of uncertainty (orientation, scale, and background pattern). Results show that the pattern of hits and correct rejections for the ideal and SNN-MAX observers (but not a simple MAX observer) aligns with the data. Additionally, we measured performance without scale and orientation uncertainty and found that the effect of uncertainty on performance was less than predicted by any model. This unexpectedly small effect can largely be explained by incorporating biologically plausible levels of intrinsic position uncertainty into the models.
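The simple MAX observer that the authors compare against can be illustrated with a toy sketch: respond "target present" if the maximum template response across the uncertainty conditions exceeds a criterion. This is a minimal illustration with assumed names and stimuli, not the authors' implementation; their SNN-MAX additionally applies a normalization not shown here.

```python
import numpy as np

def max_observer(image, templates, criterion):
    """Simple MAX observer: respond 'target present' if the maximum
    template response across all uncertainty conditions exceeds a criterion."""
    # Each response is the dot product of the image with a unit-energy
    # template (cross-correlation at the known target location).
    responses = np.array([np.sum(image * t) for t in templates])
    return responses.max() > criterion, int(responses.argmax())

# Toy 1-D "wavelet" targets at two scales (stand-ins for the real stimuli)
x = np.linspace(-1, 1, 64)
templates = []
for scale in (0.2, 0.4):
    t = np.exp(-x**2 / (2 * scale**2)) * np.cos(8 * x)
    templates.append(t / np.linalg.norm(t))  # normalize to unit energy

rng = np.random.default_rng(0)
signal = 3.0 * templates[0] + 0.1 * rng.standard_normal(64)
present, which = max_observer(signal, templates, criterion=1.0)
```

Sweeping the criterion traces out the observer's hit/correct-rejection trade-off, which is how such model observers are compared with human data.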

Citations: 0
Pupil dilation underlies the peripheral drift illusion.
IF 2.0 | CAS Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2025-02-03 | DOI: 10.1167/jov.25.2.13
George Mather, Patrick Cavanagh

A well-known motion illusion can be seen in stationary patterns that contain repeated asymmetrical luminance gradients, which create a sawtooth-like spatial luminance profile. Such patterns can appear to move episodically, triggered by saccadic eye movements and blinks. The illusion has been known since 1979, but its origin remains unclear. Our hypothesis is that episodes of the illusory movement are caused by transitory changes in the retinal luminance of the pattern that accompany reflexive changes in pupil diameter after eye movements, blinks, and pattern onsets. Changes in retinal luminance are already known to cause illusory impressions of motion in patterns that contain asymmetrical luminance gradients. To test the hypothesis, participants viewed static illusion patterns and made controlled blinks or saccades, after which they pressed a button to indicate cessation of any illusion of movement. We measured changes in pupil diameter up to the point at which the illusion ceased. Results showed that both the amplitude and the duration of pupil dilation correlated well with illusion duration, consistent with the role of retinal luminance in generating the illusion. This new explanation can account for the importance of eye movements and blinks, and for the effects of age and artificial pupils on the strength of the illusion. A simulation of the illusion in which pattern luminance is modulated with the same time-course as that caused by blinks and saccades creates a marked impression of illusory motion, confirming the causal role of temporal luminance change in generating the illusion.
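The two ingredients of the account, a sawtooth spatial luminance profile and a transient change in retinal luminance after a blink, can be sketched minimally as follows. All function names and parameter values here are illustrative assumptions, not the authors' stimulus specification.

```python
import numpy as np

def sawtooth_profile(n_pixels=360, period=60):
    """Sawtooth spatial luminance profile: repeated asymmetric ramps,
    the kind of gradient used in peripheral-drift stimuli."""
    x = np.arange(n_pixels)
    return (x % period) / (period - 1)  # luminance ramps 0 -> 1, then resets

def retinal_luminance(t, baseline=1.0, dip=0.3, tau=0.5):
    """Transient drop in retinal luminance after a blink, recovering
    exponentially toward baseline as the pupil settles (toy time course)."""
    return baseline - dip * np.exp(-t / tau)

profile = sawtooth_profile()
t = np.linspace(0.0, 2.0, 5)   # seconds after the blink
lum = retinal_luminance(t)
```

Multiplying the static profile by this time-varying luminance is the kind of modulation the abstract's simulation applies to reproduce the illusory motion.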

Citations: 0
A model of audio-visual motion integration during active self-movement.
IF 2.0 | CAS Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2025-02-03 | DOI: 10.1167/jov.25.2.8
Maria Gallagher, Joshua D Haynes, John F Culling, Tom C A Freeman

Despite good evidence for optimal audio-visual integration in stationary observers, few studies have considered the impact of self-movement on this process. When the head and/or eyes move, the integration of vision and hearing is complicated, as the sensory measurements begin in different coordinate frames. To successfully integrate these signals, they must first be transformed into the same coordinate frame. We propose that audio and visual motion cues are separately transformed using self-movement signals, before being integrated as body-centered cues to audio-visual motion. We tested this hypothesis using a psychophysical audio-visual integration task in which participants made left/right judgments of audio, visual, or audio-visual targets during self-generated yaw head rotations. Estimates of precision and bias from the audio and visual conditions were used to predict performance in the audio-visual conditions. We found that audio-visual performance was predicted well by models that suggested the transformation of cues into common coordinates but could not be explained by a model that did not rely on coordinate transformation before integration. We also found that precision specifically was better predicted by a model that accounted for shared noise arising from signals encoding head movement. Taken together, our findings suggest that motion perception in active observers is based on the integration of partially correlated body-centered signals.
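The integration step such models build on is standard reliability-weighted (maximum-likelihood) cue combination, in which each cue is weighted by its inverse variance. A minimal sketch with toy numbers; note the paper's model additionally transforms the cues into a common, body-centered coordinate frame before this step, which is not shown here.

```python
def integrate_cues(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted cue combination: each cue is weighted by its
    inverse variance, and the combined estimate is more precise than either."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    mu = w_a * mu_a + w_v * mu_v
    var = 1 / (1 / var_a + 1 / var_v)
    return mu, var

# Example: audio is less reliable (larger variance) than vision,
# so the combined estimate sits closer to the visual cue.
mu, var = integrate_cues(mu_a=2.0, var_a=4.0, mu_v=0.0, var_v=1.0)
```

Here the combined mean is 0.4 (pulled strongly toward vision) and the combined variance 0.8, below either single-cue variance, which is the signature of optimal integration the unimodal conditions are used to predict.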

Citations: 0
A robotics-inspired scanpath model reveals the importance of uncertainty and semantic object cues for gaze guidance in dynamic scenes.
IF 2.0 | CAS Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2025-02-03 | DOI: 10.1167/jov.25.2.6
Vito Mengers, Nicolas Roth, Oliver Brock, Klaus Obermayer, Martin Rolfs

The objects we perceive guide our eye movements when observing real-world dynamic scenes. Yet, gaze shifts and selective attention are critical for perceiving details and refining object boundaries. Object segmentation and gaze behavior are, however, typically treated as two independent processes. Here, we present a computational model that simulates these processes in an interconnected manner and allows for hypothesis-driven investigations of distinct attentional mechanisms. Drawing on an information processing pattern from robotics, we use a Bayesian filter to recursively segment the scene, which also provides an uncertainty estimate for the object boundaries that we use to guide active scene exploration. We demonstrate that this model closely resembles observers' free viewing behavior on a dataset of dynamic real-world scenes, measured by scanpath statistics, including foveation duration and saccade amplitude distributions used for parameter fitting and higher-level statistics not used for fitting. These include how object detections, inspections, and returns are balanced and a delay of returning saccades without an explicit implementation of such temporal inhibition of return. Extensive simulations and ablation studies show that uncertainty promotes balanced exploration and that semantic object cues are crucial to forming the perceptual units used in object-based attention. Moreover, we show how our model's modular design allows for extensions, such as incorporating saccadic momentum or presaccadic attention, to further align its output with human scanpaths.
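The recursive segmentation-plus-uncertainty loop can be illustrated with a toy binary Bayesian filter: update a per-location object probability with new evidence, then direct the next fixation to wherever that probability is most uncertain. This is a sketch of the general idea only; the authors' filter operates on full dynamic scenes and is far richer.

```python
import numpy as np

def bayes_update(prior, likelihood_obj, likelihood_bg):
    """Recursive Bayesian update of a per-location object probability."""
    post = prior * likelihood_obj
    return post / (post + (1 - prior) * likelihood_bg)

def uncertainty(p):
    """Binary entropy: highest where object vs. background is most ambiguous."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Three locations under a flat prior: strong object evidence at the first,
# strong background evidence at the second, ambiguous evidence at the third.
prior = np.full(3, 0.5)
post = bayes_update(prior,
                    likelihood_obj=np.array([0.9, 0.1, 0.5]),
                    likelihood_bg=np.array([0.1, 0.9, 0.5]))
next_fixation = int(np.argmax(uncertainty(post)))  # most uncertain location
```

The gaze target is the ambiguous third location, which is the sense in which uncertainty promotes balanced exploration in the model.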

Citations: 0
Temporal dynamics of human color processing measured using a continuous tracking task.
IF 2.0 | CAS Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2025-02-03 | DOI: 10.1167/jov.25.2.12
Michael A Barnett, Benjamin M Chin, Geoffrey K Aguirre, Johannes Burge, David H Brainard

We characterized the temporal dynamics of color processing using a continuous tracking paradigm by estimating subjects' temporal lag in tracking chromatic Gabor targets. To estimate the lag, we computed the cross-correlation between the velocities of the Gabor target's random walk and the velocities of the subject's tracking. Lag was taken as the time of the peak of the resulting cross-correlogram. We measured how the lag changes as a function of chromatic direction and contrast for stimuli in the LS cone contrast plane. In the same set of subjects, we also measured detection thresholds for stimuli with matched spatial, temporal, and chromatic properties. We created a model of tracking and detection performance to test whether a common representation of chromatic contrast accounts for both measures. The model summarizes the effect of chromatic contrast over different chromatic directions through elliptical isoperformance contours, the shapes of which are contrast independent. The fitted elliptical isoperformance contours have essentially the same orientation in the detection and tracking tasks. For the tracking task, however, there is a striking reduction in relative sensitivity to signals originating in the S cones.
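The lag estimate described above, the time of the peak of the cross-correlogram between target and response velocities, can be sketched on synthetic data (function name and parameters are illustrative, not the authors' code):

```python
import numpy as np

def tracking_lag(target_vel, response_vel, dt):
    """Estimate tracking lag as the time of the peak of the
    cross-correlogram between target and response velocities."""
    n = len(target_vel)
    xc = np.correlate(response_vel, target_vel, mode="full")
    lags = np.arange(-(n - 1), n) * dt  # lag axis in seconds
    return lags[np.argmax(xc)]

# Synthetic check: the response is the target velocity delayed by 5 samples
rng = np.random.default_rng(1)
target = rng.standard_normal(500)
response = np.roll(target, 5)  # pure delay (wrap-around is negligible here)
lag = tracking_lag(target, response, dt=0.01)
```

With a 10-ms sample period and a 5-sample delay the recovered lag is 50 ms; applied to real tracking data, the same computation yields the per-condition lags compared across chromatic directions.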

Citations: 0
Anticipatory smooth pursuit eye movements scale with the probability of visual motion: The role of target speed and acceleration.
IF 2.0 | CAS Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2025-01-02 | DOI: 10.1167/jov.25.1.2
Vanessa Carneiro Morita, David Souto, Guillaume S Masson, Anna Montagnini

Sensory-motor systems can extract statistical regularities in dynamic uncertain environments, enabling quicker responses and anticipatory behavior for expected events. Anticipatory smooth pursuit eye movements (aSP) have been observed in primates when the temporal and kinematic properties of a forthcoming visual moving target are fully or partially predictable. To investigate the nature of the internal model of target kinematics underlying aSP, we tested the effect of varying the target kinematics and its predictability. Participants tracked a small visual target in a constant direction with either constant, accelerating, or decelerating speed. Across experimental blocks, we manipulated the probability of each kinematic condition, varying either speed or acceleration across trials, with either a single kinematic condition (providing certainty) or a mixture of conditions with fixed probabilities within a block. We show that aSP is robustly modulated by target kinematics. With constant-velocity targets, aSP velocity scales linearly with target velocity in blocked sessions, and matches the probability-weighted average in the mixture sessions. Predictable target acceleration also influences aSP, suggesting that the internal model of motion that drives anticipation contains some information about the changing target kinematics, beyond the initial target speed. However, there is a large variability across participants in the precision and consistency with which this information is taken into account to control anticipatory behavior.
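The probability-weighted-average relationship reported for the mixture sessions can be sketched with toy numbers; the gain parameter is an assumption (anticipatory velocity is typically a scaled-down fraction of target velocity), not a value from the paper.

```python
import numpy as np

def anticipatory_velocity(target_speeds, probabilities, gain=1.0):
    """Anticipatory pursuit velocity modeled as the probability-weighted
    average of the possible target speeds (gain < 1 would scale it down)."""
    p = np.asarray(probabilities, dtype=float)
    assert np.isclose(p.sum(), 1.0), "probabilities must sum to 1"
    return gain * float(np.dot(p, target_speeds))

# Mixture block: 15 deg/s with p = 0.7, 5 deg/s with p = 0.3
v = anticipatory_velocity([15.0, 5.0], [0.7, 0.3])
```

Here the predicted anticipatory velocity is 12 deg/s, between the two possible target speeds and closer to the more probable one, which is the scaling with probability that the title refers to.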

Citations: 0
Pupil responds spontaneously to visuospatial regularity.
IF 2.0 | CAS Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2025-01-02 | DOI: 10.1167/jov.25.1.14
Zhiming Kong, Chen Chen, Jianrong Jia

Beyond the light reflex, the pupil responds to various high-level cognitive processes. Multiple statistical regularities of stimuli have been found to modulate the pupillary response. However, most studies have used auditory or visual temporal sequences as stimuli, and it is unknown whether the pupil size is modulated by statistical regularity in the spatial arrangement of stimuli. In three experiments, we created perceived regular and irregular stimuli, matching physical regularity, to investigate the effect of spatial regularity on pupillary responses during passive viewing. Experiments using orientation (Experiments 1 and 2) and size (Experiment 3) as stimuli consistently showed that perceived irregular stimuli elicited more pupil constriction than regular stimuli. Furthermore, this effect was independent of the luminance of the stimuli. In conclusion, our study revealed that the pupil responds spontaneously to perceived visuospatial regularity, extending the stimulus regularity that influences the pupillary response into the visuospatial domain.

Citations: 0
Effect of sign language learning on temporal resolution of visual attention.
IF 2.0 | CAS Tier 4 (Psychology) | Q2 OPHTHALMOLOGY | Pub Date: 2025-01-02 | DOI: 10.1167/jov.25.1.3
Serpil Karabüklü, Sandra Wood, Chuck Bradley, Ronnie B Wilbur, Evie A Malaia

The visual environment of sign language users is markedly distinct in its spatiotemporal parameters compared to that of non-signers. Although the importance of temporal and spectral resolution in the auditory modality for language development is well established, the spectrotemporal parameters of visual attention necessary for sign language comprehension remain less understood. This study investigates visual temporal resolution in learners of American Sign Language (ASL) at various stages of acquisition to determine how experience with sign language affects perceptual sampling. Using a flicker paradigm, we assessed the accuracy of identifying out-of-phase visual flicker objects at frequencies up to 60 Hz. Our findings reveal that third-semester ASL learners show increased accuracy in detecting high-frequency flicker, indicating enhanced temporal resolution. Interestingly, as learners achieve higher proficiency in ASL, their perceptual sampling reverts to typical levels, likely because of a shift toward predictive processing mechanisms in sign language comprehension. These results suggest that the temporal resolution of visual attention is malleable and can be influenced by the process of learning a visual language.
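The out-of-phase flicker manipulation described above can be sketched in a few lines. This is an illustrative reconstruction only, not the study's stimulus code: the square-wave luminance profile and the 240 Hz refresh rate are assumptions chosen so that a 60 Hz flicker cycle spans a whole number of frames.

```python
import numpy as np


def flicker_timecourse(freq_hz, phase_deg=0.0, duration_s=1.0, refresh_hz=240):
    """Square-wave luminance timecourse (0 = dark, 1 = bright) for one object.

    freq_hz    -- flicker frequency in full on/off cycles per second
    phase_deg  -- phase offset; 180 deg gives counter-phase (out-of-phase) flicker
    refresh_hz -- assumed display refresh rate (illustrative value)
    """
    n_frames = int(duration_s * refresh_hz)
    # sample at frame midpoints so no sample lands exactly on a zero crossing
    t = (np.arange(n_frames) + 0.5) / refresh_hz
    phase = np.deg2rad(phase_deg)
    # sign of a sinusoid -> square wave; remap from {-1, 1} to {0, 1}
    return (np.sign(np.sin(2 * np.pi * freq_hz * t + phase)) + 1) / 2


# at 60 Hz and a 240 Hz refresh, each object is bright for 2 frames, dark for 2
reference = flicker_timecourse(60, phase_deg=0)
target = flicker_timecourse(60, phase_deg=180)  # the out-of-phase object
# counter-phase: the target is dark exactly when the reference is bright
assert np.all(target + reference == 1.0)
```

Detecting which object carries the 180° phase offset is the judgment whose accuracy drops as `freq_hz` approaches the temporal resolution limit of visual attention.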

Citations: 0
Target interception in virtual reality is better for natural versus unnatural trajectory shapes and orientations.
IF 2.0 | CAS Tier 4 (Psychology) | Q2 OPHTHALMOLOGY | Pub Date: 2025-01-02 | DOI: 10.1167/jov.25.1.11
Sofia Varon, Karsten Babin, Miriam Spering, Jody C Culham

Human performance in perceptual and visuomotor tasks is enhanced when stimulus motion follows the laws of gravitational physics, including acceleration consistent with Earth's gravity, g. Here we used a manual interception task in virtual reality to investigate the effects of trajectory shape and orientation on interception timing and accuracy. Participants punched to intercept a ball moving along one of four trajectories that varied in shape (parabola or tent) and orientation (upright or inverted). We also varied the location of visual fixation such that trajectories fell entirely within the lower or upper visual field. Reaction times were faster for more natural shapes and orientations, regardless of visual field. Overall accuracy was poorer and movement time was longer for the inverted tent condition than the other three conditions, perhaps because it was imperfectly reminiscent of a bouncing ball. A detailed analysis of spatial errors revealed that interception endpoints were more likely to fall along the path of the final trajectory in upright vs. inverted conditions, suggesting stronger expectations regarding the final trajectory direction for these conditions. Taken together, these results suggest that the naturalness of the shape and orientation of a trajectory contributes to performance in a virtual interception task.
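The four trajectory conditions (parabola or tent shape, upright or inverted orientation) can be sketched as height profiles over time. This is an illustrative parameterization, not the study's stimulus code: the apex height, flight time, and normalized-time form below are assumptions; only the shapes and the link to ballistic motion under g come from the abstract.

```python
import numpy as np

G = 9.81  # Earth's gravitational acceleration, m/s^2


def trajectory(shape, orientation, peak=1.0, flight_time=1.0, n=101):
    """Height profile y(t) of a target that starts and ends at height 0.

    shape       -- "parabola" (constant acceleration, as under gravity) or
                   "tent" (linear rise then linear fall through the same apex)
    orientation -- "upright" or "inverted" (vertical mirror image)
    """
    u = np.linspace(0.0, 1.0, n)           # normalized time t / flight_time
    if shape == "parabola":
        y = 4 * peak * u * (1 - u)         # apex `peak` reached at mid-flight
    elif shape == "tent":
        y = peak * (1 - np.abs(2 * u - 1))  # same start, apex, and end points
    else:
        raise ValueError(f"unknown shape: {shape}")
    if orientation == "inverted":
        y = -y
    return u * flight_time, y


# an upright parabola with apex h is consistent with Earth's gravity when the
# flight time equals 2 * sqrt(2h / G) (time to fall height h, doubled)
natural_flight_time = 2 * np.sqrt(2 * 1.0 / G)
```

Crossing the two shapes with the two orientations yields the four conditions; only the upright parabola matches ballistic motion under g, which is the "natural" case the abstract reports as fastest and most accurate.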

Citations: 0