Our aim in this study was to understand how we perform visuospatial comparison tasks by analyzing ocular behavior and to examine how restrictions of macular or peripheral vision disturb ocular behavior and task performance. Two groups of 18 healthy participants with normal or corrected visual acuity performed visuospatial comparison tasks (a computerized version of the elementary visuospatial perception [EVSP] test; Pisella et al., 2013) with a gaze-contingent mask simulating either tubular vision (first group) or a macular scotoma (second group). After these simulations of pathological conditions, all participants also performed the EVSP test in full view, enabling direct comparison of their oculomotor behavior and performance. In terms of oculomotor behavior, compared with the full-view condition, alternation saccades between the two objects to be compared were less numerous in the absence of peripheral vision, whereas the number of within-object exploration saccades decreased in the absence of macular vision. The absence of peripheral vision did not affect accuracy except for midline judgments, but the absence of central vision impaired accuracy across all visuospatial subtests. Besides confirming the crucial role of the macula for visuospatial comparison tasks, these experiments provide important insights into how a sensory disorder modifies oculomotor behavior, with or without consequences for performance accuracy.
Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search tasks might explain some of the discrepancies in our understanding concerning the use of memory resources and the role of contextual objects during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, we found that repeated searches led to a decrease in search time and number of fixations and to a reduction of errors. Searching for those objects that had been visible in previous trials but were only tested at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, we found that body movements showed changes that reflected memory for target location: trajectories were shorter and movement velocities were higher, but only for those objects that had been searched for multiple times. We conclude that memory of 3D space and target location is a critical component of visual search and also modifies movement kinematics. In natural search, memory is used to optimize movement control and reduce energetic costs.
The eye has considerable chromatic aberration, meaning that the accommodative demand varies with wavelength. Given this, how does the eye accommodate to light of differing spectral content? Previous work is not conclusive but, in general, the eye focuses in the center of the visible spectrum for broadband light, and it focuses at a distance appropriate for individual wavelengths for narrowband light. For stimuli containing two colors, there are also mixed reports. This is the second in a series of two papers in which we investigate accommodation in relation to chromatic aberration (Fernandez-Alonso, Finch, Love, & Read, 2024). In this paper, for the first time, we measure how the eye accommodates to images containing two narrowband wavelengths with varying relative luminance under monocular conditions. We find that the eye tends to accommodate between the two extremes, weighted by the relative luminance. At first sight this seems reasonable, but we show that image quality would be maximized if the eye instead accommodated on the more luminous wavelength. Next, we explore several hypotheses as to what signal the eye might be using to drive accommodation and compare these with the experimental data. We show that the data are best explained if the eye seeks to maximize contrast at low spatial frequencies. We consider the implications of these results both for the mechanism behind accommodation and for modern displays containing narrowband illuminants.
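The luminance-weighted accommodation pattern described above can be sketched numerically. The snippet below estimates wavelength-dependent defocus with the reduced-eye chromatic aberration model of Thibos et al. (1992); the particular wavelengths, luminances, and the simple linear weighting rule are illustrative assumptions, not the paper's fitted model.

```python
# Sketch: luminance-weighted accommodation between two narrowband wavelengths.
# Defocus model: reduced-eye chromatic aberration (Thibos et al., 1992);
# the wavelengths and luminances below are hypothetical, not the study's stimuli.

def chromatic_defocus(wavelength_um):
    """Chromatic defocus (diopters), approximately zero near 589 nm."""
    return 1.68524 - 0.63346 / (wavelength_um - 0.21410)

def weighted_accommodation(wl1_um, lum1, wl2_um, lum2):
    """Linear luminance-weighted compromise between the two focal demands."""
    d1, d2 = chromatic_defocus(wl1_um), chromatic_defocus(wl2_um)
    return (lum1 * d1 + lum2 * d2) / (lum1 + lum2)

# A red/blue pair with red three times as luminous: the predicted
# accommodation lies between the two monochromatic demands, closer to red.
d_red = chromatic_defocus(0.620)
d_blue = chromatic_defocus(0.460)
d_mix = weighted_accommodation(0.620, 3.0, 0.460, 1.0)
assert min(d_red, d_blue) < d_mix < max(d_red, d_blue)
```

The paper's point is that such a compromise is suboptimal: focusing at `d_red` alone (the more luminous component) would yield higher image quality than the weighted intermediate value.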
The present study investigated the role of early visual experience in the development of postural control (balance) and locomotion (gait). In a cross-sectional design, balance and gait were assessed in 59 participants (ages 7-43 years) with a history of (a) transient congenital blindness, (b) transient late-onset blindness, (c) permanent congenital blindness, or (d) permanent late-onset blindness, as well as in normally sighted controls. Cataract-reversal participants who experienced a transient phase of blindness and gained sight through cataract removal surgery showed worse balance performance compared with sighted controls even when tested with eyes closed. Individuals with reversed congenital cataracts performed worse than individuals with reversed developmental (late-emerging) cataracts. Balance performance in congenitally cataract-reversal participants tested with eyes closed was not significantly different from that in permanently blind participants. In contrast, their gait parameters did not differ significantly from those of sighted controls. The present findings highlight both the need for visual calibration of the proprioceptive and vestibular systems and the crossmodal adaptability of locomotor functions.
The flash-lag effect (FLE) occurs when a flash's position seems to be delayed relative to a continuously moving object, even though both are physically aligned. Although several studies have demonstrated that reduced attention increases FLE magnitude, the precise mechanism underlying these attention-dependent effects remains elusive. In this study, we investigated the influence of visual attention on the FLE by manipulating the level of attention allocated to multiple stimuli moving simultaneously in different locations. Participants were cued to either focus on one moving stimulus or split their attention among two, three, or four moving stimuli presented in different quadrants. We measured trial-wise FLE to explore potential changes in the magnitude of perceived displacement and its trial-to-trial variability under different attention conditions. Our results reveal that FLE magnitudes were significantly greater when attention was divided among multiple stimuli compared with when attention was focused on a single stimulus, suggesting that divided attention considerably augments the perceptual illusion. However, FLE variability, measured as the coefficient of variation, did not differ between conditions, indicating that the consistency of the illusion is unaffected by divided attention. We discuss the interpretations and implications of our findings in the context of widely accepted explanations of the FLE within a dynamic environment.
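The two summary statistics reported in the abstract above, mean flash-lag magnitude and its coefficient of variation, can be computed per condition as sketched below; the trial values are made-up numbers chosen to illustrate the reported pattern, not data from the study.

```python
# Sketch: mean flash-lag-effect (FLE) magnitude and coefficient of
# variation (CV) per attention condition. Trial values (degrees of
# visual angle) are hypothetical, not the study's data.
from statistics import mean, pstdev

def summarize(trial_magnitudes):
    """Return (mean, CV) of trial-wise FLE magnitudes.

    CV = SD / mean: a dimensionless index of trial-to-trial
    variability that remains comparable across conditions even
    when mean magnitudes differ."""
    m = mean(trial_magnitudes)
    return m, pstdev(trial_magnitudes) / m

focused = [0.8, 1.0, 1.2, 1.0]   # attention on one moving stimulus
divided = [1.6, 2.0, 2.4, 2.0]   # attention split over four stimuli

m_f, cv_f = summarize(focused)
m_d, cv_d = summarize(divided)
# Larger mean under divided attention, but identical CV: the pattern
# the abstract reports (magnitude changes, consistency does not).
assert m_d > m_f and abs(cv_f - cv_d) < 1e-9
```

Using the CV rather than the raw standard deviation is what licenses the abstract's conclusion: a condition with a larger mean illusion will usually also have a larger SD, so only the normalized measure can show that consistency per se is unchanged.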
Numerals, that is, semantic expressions of numbers, enable us to represent exact quantities of things. Visual processing of numerals plays an indispensable role in the recognition and interpretation of numbers. Here, we investigate how visual information from numerals is processed to achieve semantic understanding. We first found that partial occlusion of some digital numerals introduces bistable interpretations. Next, using a visual adaptation method, we investigated the origin of this bistability in human participants. We showed that adaptation to digital and normal Arabic numerals, as well as to homologous shapes, but not to Chinese numerals, biases the interpretation of a partially occluded digital numeral. We suggest that this bistable interpretation is driven by intermediate shape-processing stages of vision, that is, by features more complex than local visual orientations but more basic than the abstract concepts of numerals.