Sex differences in fixational eye movements following concussion.
Richard Johnston, Cameran Thompson, Anthony P Kontos, Min Zhang, Cyndi L Holland, Aaron J Zynda, Christy K Sheehy, Ethan A Rossi
Recent research supports impairments in fixational eye movements (FEMs), small motions of the eye that occur during periods when gaze is maintained on a fixed target, as an objective biomarker of concussion. Preliminary work has demonstrated that fixational saccades are larger following a concussion; however, sex differences in FEMs and fixational saccades have not been examined. In this study, we used retinal image-based eye tracking, with a tracking scanning laser ophthalmoscope (TSLO), to record FEMs while adolescents with concussion (n = 44; age range, 13-27 years) and age- and sex-matched healthy controls (n = 44; age range, 13-27 years) fixated the center or corner of the TSLO imaging raster. To improve reliability and overcome errors associated with the manual labeling of FEMs, an objective velocity-based algorithm was used to detect fixational saccades. Concussion patients made larger fixational saccades than controls but only on the center task. Females made larger fixational saccades than males on this task irrespective of injury group, whereas no significant difference was supported for the corner task. Females also made fewer horizontal and more oblique fixational saccades than males on the corner task. These findings highlight the importance of controlling for task- and sex-specific differences when evaluating FEMs as a biomarker for concussion.
{"title":"Sex differences in fixational eye movements following concussion.","authors":"Richard Johnston, Cameran Thompson, Anthony P Kontos, Min Zhang, Cyndi L Holland, Aaron J Zynda, Christy K Sheehy, Ethan A Rossi","doi":"10.1167/jov.25.14.9","DOIUrl":"10.1167/jov.25.14.9","url":null,"abstract":"<p><p>Recent research supports impairments in fixational eye movements (FEMs), small motions of the eye that occur during periods when gaze is maintained on a fixed target, as an objective biomarker of concussion. Preliminary work has demonstrated that fixational saccades are larger following a concussion; however, sex differences in FEMs and fixational saccades have not been examined. In this study, we used retinal image-based eye tracking, with a tracking scanning laser ophthalmoscope (TSLO), to record FEMs while adolescents with concussion (n = 44; age range, 13-27 years) and age- and sex-matched healthy controls (n = 44; age range, 13-27 years) fixated the center or corner of the TSLO imaging raster. To improve reliability and overcome errors associated with the manual labeling of FEMs, an objective velocity-based algorithm was used to detect fixational saccades. Concussion patients made larger fixational saccades than controls but only on the center task. Females made larger fixational saccades than males on this task irrespective of injury group, whereas no significant difference was supported for the corner task. Females also made fewer horizontal and more oblique fixational saccades than males on the corner task. These findings highlight the importance of controlling for task- and sex-specific differences when evaluating FEMs as a biomarker for concussion.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 14","pages":"9"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710789/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145758138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effect of spatial attention on saccadic adaptation.
Ali Batikh, Éric Koun, Roméo Salemme, Alessandro Farnè, Denis Pélisson
Eye movements and spatial attention are both crucial to visual perception. Orienting gaze to objects of interest is achieved by voluntary saccades (VSs) driven by internal goals or reactive saccades (RSs) triggered automatically by sudden environmental changes. Both VSs and RSs are known to undergo plastic adjustments to maintain their accuracy throughout life, driven by saccadic adaptation processes. Spatial attention enhances visual processing within a restricted zone, and it can be shifted voluntarily following our internal goals (endogenous) or automatically in response to unexpected changes in sensory stimulation (exogenous). Despite the widely accepted notion that saccadic and attentional shifts are governed by distinct but highly interconnected systems, the relationship between saccadic adaptation and spatial attention is still unclear. To address this relationship, we conducted two experiments combining modified versions of the double-step adaptation paradigm and the attention-orienting paradigm. Experiment 1 tested the effect on RS adaptation of shifting exogenous attention with a tactile cue presented near to or away from the saccade target. Experiment 2 also used tactile cueing, this time to investigate the effect of shifting endogenous attention on VS adaptation. Although we were unable to obtain direct evidence for an effect of spatial attention on saccadic adaptation, correlation analyses indicated that both the rate and magnitude of saccadic adaptation were positively correlated with the allocation of attention toward the saccade target and negatively correlated with attention directed away from the target.
{"title":"The effect of spatial attention on saccadic adaptation.","authors":"Ali Batikh, Éric Koun, Roméo Salemme, Alessandro Farnè, Denis Pélisson","doi":"10.1167/jov.25.14.13","DOIUrl":"10.1167/jov.25.14.13","url":null,"abstract":"<p><p>Eye movements and spatial attention are both crucial to visual perception. Orienting gaze to objects of interest is achieved by voluntary saccades (VSs) driven by internal goals or reactive saccades (RSs) triggered automatically by sudden environmental changes. Both VSs and RSs are known to undergo plastic adjustments to maintain their accuracy throughout life, driven by saccadic adaptation processes. Spatial attention enhances visual processing within a restricted zone, and it can be shifted voluntarily following our internal goals (endogenous) or automatically in response to unexpected changes in sensory stimulation (exogenous). Despite the widely accepted notion that saccadic and attention shifts are governed by distinct but highly interconnected systems, the relationship between saccadic adaptation and spatial attention is still unclear. To address this relationship, we conducted two experiments combining modified versions of the double-step adaptation paradigm and the attention-orienting paradigm. Experiment 1 tested the effect of shifting exogenous attention by a tactile cue near or away from the saccade's target on RS adaptation. Experiment 2 also used tactile cueing but now to investigate the effect of shifting endogenous attention on VS adaptation. Although we were unable to obtain direct evidence for an effect of spatial attention on saccadic adaptation, correlation analyses indicated that both the rate and magnitude of saccadic adaptation were positively correlated with the allocation of attention toward the saccade target and negatively correlated with attention directed away from the target.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 14","pages":"13"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12721433/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145769678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attention can shift the reference eye under binocular fusion failure: A case report.
Jiahao Wu, Tengfei Han, Qian Wang, Lian Tang, Yumei Zhang, Zhanjun Zhang, Zaizhu Han
Binocular fusion normally relies on a "cyclopean eye" that integrates image disparities between the two eyes into a single coherent percept. When fusion fails, how the brain chooses its spatial reference frame remains unclear. Here, we report a rare case of a 44-year-old man who developed multidirectional diplopia following surgical resection of a cerebellar vermis hemangioblastoma. Clinical tests showed deficits in several extraocular muscles. Experimentally, in binocular and dichoptic viewing, perception was always anchored to the left eye with the right eye's image misaligned, whereas monocular viewing produced no diplopia. Crucially, the patient could voluntarily switch to the right eye as reference, independent of stimulus shape similarity, stimulus exposure order, or participant response demands. This case offers a unique window onto the relationship between automatic sensory integration and top-down control in binocular vision: When cyclopean fusion breaks down, visual perception adapts to a single-eye reference frame that can be flexibly influenced by attention.
{"title":"Attention can shift the reference eye under binocular fusion failure: A case report.","authors":"Jiahao Wu, Tengfei Han, Qian Wang, Lian Tang, Yumei Zhang, Zhanjun Zhang, Zaizhu Han","doi":"10.1167/jov.25.14.15","DOIUrl":"10.1167/jov.25.14.15","url":null,"abstract":"<p><p>Binocular fusion normally relies on a \"cyclopean eye\" that integrates image disparities between the two eyes into a single coherent percept. When fusion fails, how the brain chooses its spatial reference frame remains unclear. Here, we report a rare case of a 44-year-old man who developed multiple-directions diplopia following surgical resection of a cerebellar vermis hemangioblastoma. Clinical tests showed deficits in several extraocular muscles. Experimentally, in binocular and dichoptic viewing, perception was always anchored to the left eye with the right eye's image misaligned, whereas monocular viewing produced no diplopia. Crucially, the patient could voluntarily switch to the right eye as reference, which was independent of stimulus shape similarity, stimulus exposure order, or participant response demands. This case offers a unique window to understand the relationship between automatic sensory integration and top-down control in binocular vision: When cyclopean fusion breaks down, visual perception adapts to a single-eye reference frame that can be flexibly influenced by attention.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 14","pages":"15"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12721435/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145769721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Representational dynamics of the main dimensions of object space: Face/body selectivity aligns temporally with animal taxonomy but not with animacy.
Gaëlle Leys, Chiu-Yueh Chen, Andreas von Leupoldt, J Brendan Ritchie, Hans Op de Beeck
Object representations are organized according to multiple dimensions, with an important role for the distinction between animate and inanimate objects and for selectivity for faces versus bodies. For other dimensions, questions remain about how they stand relative to these two primary dimensions. One such dimension is a graded selectivity for the taxonomic level that an animal belongs to. Earlier research suggested that animacy can be understood as a graded selectivity for animal taxonomy, although a recent functional magnetic resonance imaging study suggested that taxonomic effects are instead due to face/body selectivity. Here we investigated the temporal profile at which these distinctions emerge with multivariate electroencephalography (N = 25), using a stimulus set that dissociates taxonomy from face/body selectivity and from animacy as a binary distinction. Our findings reveal a very similar temporal profile for taxonomy and face/body selectivity, with a peak around 150 ms. The binary animacy distinction has a more continuous and delayed temporal profile. These findings strengthen the conclusion that effects of animal taxonomy are in large part due to face/body selectivity, whereas selectivity for animate versus inanimate objects is delayed when it is dissociated from these other dimensions.
{"title":"Representational dynamics of the main dimensions of object space: Face/body selectivity aligns temporally with animal taxonomy but not with animacy.","authors":"Gaëlle Leys, Chiu-Yueh Chen, Andreas von Leupoldt, J Brendan Ritchie, Hans Op de Beeck","doi":"10.1167/jov.25.13.2","DOIUrl":"10.1167/jov.25.13.2","url":null,"abstract":"<p><p>Object representations are organized according to multiple dimensions, with an important role for the distinction between animate and inanimate objects and for selectivity for faces versus bodies. For other dimensions, questions remain how they stand relative to these two primary dimensions. One such dimension is a graded selectivity for the taxonomic level that an animal belongs to. Earlier research suggested that animacy can be understood as a graded selectivity for animal taxonomy, although a recent functional magnetic resonance imaging study suggested that taxonomic effects are instead due to face/body selectivity. Here we investigated the temporal profile at which these distinctions emerge with multivariate electroencephalography (N = 25), using a stimulus set that dissociates taxonomy from face/body selectivity and from animacy as a binary distinction. Our findings reveal a very similar temporal profile for taxonomy and face/body selectivity with a peak around 150 ms. The binary animacy distinction has a more continuous and delayed temporal profile. These findings strengthen the conclusion that effects of animal taxonomy are in large part due to face/body selectivity, whereas selectivity for animate versus inanimate objects is delayed when it is dissociated from these other dimensions.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 13","pages":"2"},"PeriodicalIF":2.3,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12598827/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145432791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Allocentric spatial representations dominate when switching between real and virtual worlds.
Meaghan McManus, Franziska Seifert, Immo Schütz, Katja Fiehler
After removing a virtual reality headset, people can be surprised to find that they are facing a different direction than expected. Here, we investigated whether people can maintain spatial representations of one environment while immersed in another. In the first three experiments, stationary participants were asked to point to previously seen targets in one environment, either the real world or a virtual environment, while in the other environment. We varied the amount of misalignment between the two environments (detectable or undetectable), the virtual environment itself (lab or kitchen), and the instructions (general or egocentric priming). Pointing endpoints were based primarily on the locations of objects in the currently seen environment, suggesting a strong reliance on allocentric cues. In the fourth experiment, participants moved in virtual reality while keeping track of an unseen real-world target. We confirmed that the pointing errors were due to a reliance on the currently seen environment. It appears that people hardly ever keep track of object positions in a previously seen environment and instead primarily rely on currently available spatial information to plan their actions.
Journal of Vision, 25(13), 7 (2025). doi:10.1167/jov.25.13.7. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12629136/pdf/
Contrast negation increases face pareidolia rates in natural scenes.
Benjamin Balas, Myra Morton, Molly Setchfield, Lily Roshau, Emily Westrick
Face pareidolia, the phenomenon of seeing face-like patterns in non-face images, has a dual nature: Pareidolic patterns are experienced as face-like, even while observers can recognize the true nature of the stimulus (Stuart et al., 2025). Although pareidolic faces seem to result largely from the canonical arrangement of eye spots and a mouth, we hypothesized that competition between veridical and face-like interpretations of pareidolic patterns may constrain face pareidolia in natural scenes and textures. Specifically, we predicted that contrast negation, which disrupts multiple aspects of mid- to high-level recognition, may increase rates of face pareidolia in complex natural textures by weakening the veridical, non-face stimulus interpretation. We presented adult participants (n = 27) and 5- to 12-year-old children (n = 67) with a series of natural images depicting textures such as grass, leaves, shells, and rocks. We asked participants to circle any patterns in each image that looked face-like, with no constraints on response time or pattern size, position, and orientation. We found that, across our adult and child samples, contrast-negated images yielded more pareidolic face detections than positive images. We conclude that disrupting veridical object and texture recognition enhances pareidolia in children and adults by compromising half of the dual nature of a pareidolic pattern.
{"title":"Contrast negation increases face pareidolia rates in natural scenes.","authors":"Benjamin Balas, Myra Morton, Molly Setchfield, Lily Roshau, Emily Westrick","doi":"10.1167/jov.25.13.5","DOIUrl":"10.1167/jov.25.13.5","url":null,"abstract":"<p><p>Face pareidolia, the phenomenon of seeing face-like patterns in non-face images, has a dual nature: Pareidolic patterns are experienced as face-like, even while observers can recognize the true nature of the stimulus (Stuart et al., 2025). Although pareidolic faces seem to result largely from the canonical arrangement of eye spots and a mouth, we hypothesized that competition between veridical and face-like interpretations of pareidolic patterns may constrain face pareidolia in natural scenes and textures. Specifically, we predicted that contrast negation, which disrupts multiple aspects of mid- to high-level recognition, may increase rates of face pareidolia in complex natural textures by weakening the veridical, non-face stimulus interpretation. We presented adult participants (n = 27) and 5- to 12-year-old children (n = 67) with a series of natural images depicting textures such as grass, leaves, shells, and rocks. We asked participants to circle any patterns in each image that looked face-like, with no constraints on response time or pattern size, position, and orientation. We found that, across our adult and child samples, contrast-negated images yielded more pareidolic face detections than positive images. We conclude that disrupting veridical object and texture recognition enhances pareidolia in children and adults by compromising half of the dual nature of a pareidolic pattern.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 13","pages":"5"},"PeriodicalIF":2.3,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12617666/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145507220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal dynamics and readout latency in perception and iconic memory.
Karla Matic, Issam Tafech, Peter König, John-Dylan Haynes
After the offset of complex visual stimuli, rich stimulus information remains briefly available to the observer, reflecting a rapidly decaying iconic memory trace. Here we found that reportable information already begins to decay even when the readout cues are presented in the final stage of stimulus presentation. Using closely spaced readout cues and a theoretical model of information availability, we observed that a cue has to be presented around 10 to 30 milliseconds before stimulus offset to access the full sensory information. We suggest that this does not reflect an early loss in sensory encoding but is instead a consequence of a latency in the processing of the cue that postpones the readout of the sensory representation by 10 to 30 milliseconds. Our analysis also shows that the spatial proximity of items in complex arrays impacts sensory representation during both perceptual encoding and initial memory decay. Overall, these results provide a theoretical and empirical characterization of the readout from visual representations and offer detailed insight into the transition from perception to iconic memory.
{"title":"Temporal dynamics and readout latency in perception and iconic memory.","authors":"Karla Matic, Issam Tafech, Peter König, John-Dylan Haynes","doi":"10.1167/jov.25.13.3","DOIUrl":"10.1167/jov.25.13.3","url":null,"abstract":"<p><p>After the offset of complex visual stimuli, rich stimulus information remains briefly available to the observer, reflecting a rapidly decaying iconic memory trace. Here we found that even if the cues are presented in the final stage of the stimulus presentation, the reportable information already starts decaying. Using closely spaced readout cues and a theoretical model of information availability, we observed that a cue has to be presented around 10 to 30 milliseconds before stimulus offset to access the full sensory information. We suggest that this does not reflect an early loss in sensory encoding, but instead it is a consequence of a latency in the processing of the cue that postpones the readout of the sensory representation by 10 to 30 milliseconds. Our analysis also shows that spatial proximity of items in complex arrays impacts sensory representation during both perceptual encoding and initial memory decay. Overall, these results provide a theoretical and empirical characterization of the readout from visual representations and offer a detailed insight into the transition from perception into iconic memory.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 13","pages":"3"},"PeriodicalIF":2.3,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12603959/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145460314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual field of the ferret (Mustela putorius furo), rat (Rattus norvegicus), and tree shrew (Tupaia belangeri).
Jacob M Morris, Esteban Fernández-Juricic, Caryn E Plummer, Bret A Moore
Our purpose was to describe the visual field of three common model species in vision science, to understand the organization of their visual perceptual experience and to contribute to continued studies of visual processing. Visual fields were measured using an ophthalmoscopic reflex technique in four common ferrets, four albino rats, and six northern tree shrews. Animals were anesthetized to avoid stress, and the midpoint between their eyes was centered inside a spherical space. A rotating perimetric arm was moved in 10° increments around the head. At each increment, direct ophthalmoscopy was used to visualize the limits of the retinal reflex for each eye, the overlap being the binocular visual field. Mean binocularity in the horizontal plane was 63.7 ± 5.1°, 79.1 ± 7.4°, and 53.6 ± 12.0° in the ferret, rat, and shrew, respectively. Maximum mean binocularity was 69.0 ± 1.6° in the ferret, 90.0 ± 3.1° in the rat, and 53.6 ± 12.2° in the shrew, located at 10° above, 40° above, and at the horizontal plane, respectively. Binocularity extended to 160°, 200°, and 180° in the sagittal plane in the ferret, rat, and shrew, respectively, from at least below the nose to above the head in all animals. Establishing the extent of the visual field accessible to the retina provides insight into the egocentric perceptual experience of animals. In describing the visual field, we provide a reference for the representation of visual space in different cortical and retinal regions, many of which represent specific subregions of the visual field.
{"title":"Visual field of the ferret (Mustela putorius furo), rat (Rattus norvegicus), and tree shrew (Tupaia belangeri).","authors":"Jacob M Morris, Esteban Fernández-Juricic, Caryn E Plummer, Bret A Moore","doi":"10.1167/jov.25.13.8","DOIUrl":"10.1167/jov.25.13.8","url":null,"abstract":"<p><p>To describe the visual field of three common model species in vision science to understand the organization of their visual perceptual experience and contribute to continued studies of visual processing. Visual fields were measured using an ophthalmoscopic reflex technique in four common ferrets, four albino rats, and six northern tree shrews. Animals were anesthetized to avoid stress and the midpoint between their eyes was centered inside a spherical space. A rotating perimetric arm was manipulated in 10° increments around the head. At each increment, direct ophthalmoscopy was used to visualize limits of the retinal reflex for each eye, the overlap being the binocular visual field. Mean binocularity in the horizontal plane was 63.7 ± 5.1°, 79.1 ± 7.4°, and 53.6 ± 12.0° in the ferret, rat, and shrew, respectively. Maximum mean binocularity was 69.0 ± 1.6° in the ferret, 90.0 ± 3.1° in the rat, and 53.6 ± 12.2° in the shrew, located at 10° above, 40° above, and at the horizontal plane, respectively. Binocularity extended to 160°, 200°, and 180° in the sagittal plane in the ferret, rat, and shrew, respectively, from at least below the nose to above the head in all animals. Establishing the extent of the visual field accessible to the retina provides insight into the egocentric perceptual experience of animals. In describing the visual field, we provide a reference for the representation of the visual space in different cortical and retinal regions, many of which represent specific subregions of the visual field.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 13","pages":"8"},"PeriodicalIF":2.3,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12629130/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145514104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Violated expectations during locomotion through virtual environments: Age effects on gaze guidance.
Sophie Meissner, Jochen Miksch, Lena Würbach, Sascha Feder, Sabine Grimm, Wolfgang Einhäuser, Jutta Billino
Gaze behavior during locomotion must balance the sampling of relevant information and the need for a stable gait. To maintain a safe gait in the light of declining resources, older adults might shift this balance toward the uptake of gait-related information. We investigated how violations of expectations affect gaze behavior and information uptake across age groups by asking younger and older adults to locomote through a virtual hallway, where they encountered expected and unexpected objects. We found that older adults looked at the floor more, even though the translational, though not the rotational, component of locomotion was virtual. Dwell times on unexpected objects were increased in both age groups compared to expected objects. Although older adults showed shorter dwell times on expected objects, dwell times on unexpected objects were similar across age groups. Thus, the difference between expected and unexpected objects was greater in older adults. Gaze distributions were more influenced by cognitive control capacities than by motor control capacities. Our findings indicate that unexpected information attracts attention during locomotion, particularly in older adults. However, during actual locomotion in the real world, increased information processing might come at the cost of reduced gait safety if processing resources are shifted away from stabilizing gait.
{"title":"Violated expectations during locomotion through virtual environments: Age effects on gaze guidance.","authors":"Sophie Meissner, Jochen Miksch, Lena Würbach, Sascha Feder, Sabine Grimm, Wolfgang Einhäuser, Jutta Billino","doi":"10.1167/jov.25.13.11","DOIUrl":"10.1167/jov.25.13.11","url":null,"abstract":"<p><p>Gaze behavior during locomotion must balance the sampling of relevant information and the need for a stable gait. To maintain a safe gait in the light of declining resources, older adults might shift this balance toward the uptake of gait-related information. We investigated how violations of expectations affect gaze behavior and information uptake across age groups by asking younger and older adults to locomote through a virtual hallway, where they encountered expected and unexpected objects. We found that older adults look more on the floor, despite the translational locomotion, though not the rotational, being virtual. Dwell times on unexpected objects were increased in both age groups compared to expected objects. Although older adults showed shorter dwell times on expected objects, dwell times on unexpected objects were similar across age groups. Thus the difference between expected and unexpected objects was greater in older adults. Gaze distributions were more influenced by cognitive control capacities than by motor control capacities. Our findings indicate that unexpected information attracts attention during locomotion-particularly in older adults. However, during actual locomotion in the real world, increased information processing might come at the cost of reduced gait safety if processing resources are shifted away from stabilizing gait.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 13","pages":"11"},"PeriodicalIF":2.3,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12663891/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145566014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal recalibration to delayed visual consequences of saccades.
Wiebke Nörenberg, Richard Schweitzer, Martin Rolfs
The accurate inference of causality between actions and their sensory outcomes requires determining their temporal relationship correctly despite variable delays within and across sensory modalities. Temporal recalibration, the perceptual realignment of actions with delayed sensory feedback, has been demonstrated across various sensorimotor domains. Here, we investigate whether this mechanism extends to saccadic eye movements and sensory events contingent on them. In three experiments, participants made horizontal saccades that triggered high-contrast flashes at varying delays. They then reported whether the flashes occurred during or after the saccade, allowing us to track perceived event timing. Exposure to consistent delays between saccade onset and the flash led to a shift in perceptual reports: flashes presented after saccade offset were more often judged as occurring during the movement. This recalibration effect was robust even when we manipulated relevant visual cues such as the presence of a structured background or the continuity of the saccade target. In a replay condition, we found a significant but much smaller recalibration effect between replayed saccades and flashes, demonstrating the importance of action execution for visuomotor temporal recalibration. These findings highlight the visual system's remarkable adaptability to temporal delays between eye movements and their sensory consequences. A similar recalibration mechanism may support perceptual stability in natural vision by dynamically realigning saccades with their resulting visual input, even amid changing visual conditions.
{"title":"Temporal recalibration to delayed visual consequences of saccades.","authors":"Wiebke Nörenberg, Richard Schweitzer, Martin Rolfs","doi":"10.1167/jov.25.13.4","DOIUrl":"10.1167/jov.25.13.4","url":null,"abstract":"<p><p>The accurate inference of causality between actions and their sensory outcomes requires determining their temporal relationship correctly despite variable delays within and across sensory modalities. Temporal recalibration-the perceptual realignment of actions with delayed sensory feedback-has been demonstrated across various sensorimotor domains. Here, we investigate whether this mechanism extends to saccadic eye movements and sensory events contingent on them. In three experiments, participants made horizontal saccades that triggered high-contrast flashes at varying delays. They then reported whether the flashes occurred during or after the saccade, allowing us to track perceived event timing. Exposure to consistent delays between saccade onset and the flash led to a shift in perceptual reports: flashes presented after saccade offset were more often judged as occurring during the movement. This recalibration effect was robust even when we manipulated relevant visual cues such as the presence of a structured background or the continuity of the saccade target. In a replay condition, we found a significant but much smaller recalibration effect between replayed saccades and flash, demonstrating the importance of action execution for visuomotor temporal recalibration. These findings highlight the visual system's remarkable adaptability to temporal delays between eye movements and their sensory consequences. A similar recalibration mechanism may support perceptual stability in natural vision by dynamically realigning saccades with their resulting visual input, even amid changing visual conditions.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 13","pages":"4"},"PeriodicalIF":2.3,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12603963/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145460373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}