Does radial bias contribute to fast saccades toward faces in the periphery?
Marius Grandjean, Louise Kauffmann, Alexia Roux-Sibilon, Valérie Goffaux
Journal of Vision, 25(14):16, December 2025. doi:10.1167/jov.25.14.16

Saccadic choice studies have shown that humans initiate faster saccades toward faces than toward other visual categories. Here, we tested whether the saccadic advantage for faces observed in past studies is partly due to stimuli typically being presented along the horizontal meridian (HM). Our previous work suggests that the radial bias along the HM facilitates access to the horizontal structure of faces, which optimally drives human face-specialized processing. We expected to corroborate the saccadic advantage for faces along the HM, where the radial bias facilitates access to horizontal content, and to observe a reduction of this advantage along the vertical meridian (VM), especially in participants showing strong horizontal tuning for face recognition. Fifty participants performed a saccadic choice task targeting faces or vehicles presented at 15° eccentricity along the HM and VM. We also assessed the strength of the radial bias and of the horizontal tuning for face identity recognition in each individual. As expected, saccades were faster and more accurate toward faces than toward vehicles; they were also faster along the HM than the VM. Contrary to our hypothesis, the saccadic face advantage did not differ between meridians, suggesting that the advantage is robust. However, the saccadic face advantage along the VM correlated with the strength of horizontal tuning in face identity recognition. Additionally, the radial bias predicted saccade latency toward faces along the HM. These findings indicate that low-level radial biases and high-level face-specialized mechanisms independently contribute to distinct functional aspects of ultra-fast saccadic responses toward faces.
Attractive and repulsive history effects in categorical and continuous estimates of orientation perception.
Mert Can, Thérèse Collins
Journal of Vision, 25(14):23, December 2025. doi:10.1167/jov.25.14.23

Perceptual reports can be attracted toward or repulsed from previous stimuli and responses. We investigated the conditions under which attractive and repulsive history effects occur with oriented Gabors by manipulating response type and frequency, as well as stimulus duration. When subjects adjusted a continuous response cue to match orientation, we observed repulsion from the previous stimulus when the stimulus was presented for 50 ms and attraction toward the previous stimulus and response when it was presented for 500 ms, regardless of whether responses were given to every stimulus or every other stimulus. Both effects increased with the relative orientation between events. With a categorical clockwise/counterclockwise response, there was attraction toward the previous response and repulsion from the previous stimulus. Attraction toward the previous response was stronger with sequential responses and small relative orientations. Repulsion was constant across all stimulus durations and response frequencies, and it increased with relative orientation. The overall history effect of the previous response and stimulus was repulsive with alternating categorical responses and attractive with sequential categorical responses. Our results replicate and synthesize seminal findings of the serial dependence and adaptation literature, and they show independent history effects working with and against each other, determined by whether the response is categorical or continuous.
Saccade endpoints reflect attentional templates in visual search: Evidence from feature distribution learning.
Léa Entzmann, Árni Kristjánsson, Árni Gunnar Ásgeirsson
Journal of Vision, 25(14):18, December 2025. doi:10.1167/jov.25.14.18

In visual search, our gaze is guided by mental representations of stimulus features, known as attentional templates. These templates are thought to be probabilistic, shaped by environmental regularities. For example, participants can learn to distinguish between the shapes of different distractor color distributions in visual search. The present study assessed whether such subtle differences in distractor color distributions (Gaussian vs. uniform) are reflected in saccade endpoints. We conducted two experiments, each consisting of learning trials, designed to prime a specific distractor color distribution, and test trials, in which the target color varied in its distance from the mean of previously presented distractor distributions. Saccade endpoint deviations were measured through the global effect, whereby saccades tend to land between two nearby stimuli. The experiments differed in difficulty, with test trials in Experiment 2 involving more distractors and colors. During test trials, reaction times and saccade endpoints were affected by the target's distance from the mean of the preceding distractor distribution: the farther the target color was from this mean, the less the saccade deviated from the target and the shorter the reaction times. However, saccade endpoints did not reflect the shape of distractor color distributions, an effect observed only on reaction times in Experiment 2. Overall, color priming affects both reaction times and saccade deviations, but distractor feature distribution learning depends on search difficulty and response measures, with saccade endpoints less sensitive to subtle differences in the shape of color distributions.
Divergent roles of visual structure and conceptual meaning in scene detection and categorization.
Sage Aronson, Maria S Adkins, Michelle R Greene
Journal of Vision, 25(14):21, December 2025. doi:10.1167/jov.25.14.21

Human observers can recognize the meaning of a complex visual scene in a fraction of a second, but not all scenes are equally easy to recognize at a glance. What governs this variability? We tested the hypothesis that scene understanding is modulated by two distinct forms of information: visual information, defined as the structural complexity of the image, and semantic information, defined as the richness of the conceptual content of the scene. We quantified visual information using image compressibility and semantic information using the complexity of human-written scene descriptions. Across four behavioral experiments, participants performed either a rapid detection task (distinguishing intact scenes from phase-scrambled masks) or a basic-level categorization task. High visual information impaired both detection and categorization, consistent with a perceptual bottleneck. In contrast, high semantic information facilitated detection but not categorization, suggesting that conceptual richness supports early perceptual processes without necessarily improving recognition. These findings reveal a dissociation between visual and semantic scene attributes and suggest that top-down expectations can selectively support early perceptual processing.
Boundary extension during naturalistic viewing.
Akseli Pullinen, Riikka Mononen, Jaana Simola, Linda Henriksson
Journal of Vision, 25(14):17, December 2025. doi:10.1167/jov.25.14.17

Boundary extension refers to a phenomenon in which individuals tend to remember an image as having had more content beyond its actual borders, mistakenly adding details that might have been just beyond the original edges. Despite decades of research on the topic, most studies have used simple two-dimensional (2D) images as stimuli. Consequently, there is insufficient evidence that boundary extension generalizes to real-world scenarios with naturalistic viewing behavior. To address this gap, we designed a virtual reality (VR) experiment in which participants (N = 60) were free to visually explore naturalistic three-dimensional indoor environments surrounding them. Each participant visited each of the 20 virtual rooms twice: first to view the scene and then to complete a task. The task during the second visit was to move to the location from which they had originally viewed the scene, thereby matching their view of the scene to what they remembered seeing before. Especially for close-up views, participants ended the task at a location where their field of view of the scene was wider than in the initial view, indicating boundary extension. The effect was also greater when participants moved forward into position, starting from a viewpoint with a wider field of view than the original one. Both findings are consistent with previous research and demonstrate that boundary extension is not limited to looking at 2D images but can also occur in naturalistic viewing scenarios. Because our stimuli contained no visible boundaries, our results suggest that such boundaries are not critical for eliciting boundary extension.
Spatiotemporal predictability of saccades modulates postsaccadic feature interference.
Tzu-Yao Chiu, Isabel Jaen, Julie D Golomb
Journal of Vision, 25(14):1, December 2025. doi:10.1167/jov.25.14.1

Spatial attention and eye movements jointly contribute to efficient sampling of visual information in the environment, but maintaining precise spatial attention across saccades is challenging because of the drastic retinal shifts involved. Previous studies have provided evidence that spatial attention may remap imperfectly across saccades, introducing systematic feature interference into ongoing perception, yet the role of saccade predictability remains largely untested. In the current study, we investigated whether the spatiotemporal predictability of saccades influences postsaccadic remapping and feature perception. In two preregistered experiments, we implemented the postsaccadic feature report paradigm and manipulated the spatiotemporal predictability of saccades. Experiment 1 manipulated spatial and temporal saccade predictability together, whereas Experiment 2 dissociated the roles of spatial and temporal predictability in separate conditions. Beyond spatial and temporal saccade predictability both improving general task performance, we found that spatial saccade predictability specifically modulated postsaccadic feature interference. When saccades were spatially unpredictable, "swap errors" occurred at the early postsaccadic time point, with participants misreporting the retinotopic color instead of the spatiotopic target color. These swap errors were reduced when saccades were made spatially predictable. The results suggest that the systematic feature interference associated with postsaccadic remapping is malleable to expectations about the upcoming saccade target location, highlighting the role of predictions in maintaining perceptual stability across saccades.
Sex differences in fixational eye movements following concussion.
Richard Johnston, Cameran Thompson, Anthony P Kontos, Min Zhang, Cyndi L Holland, Aaron J Zynda, Christy K Sheehy, Ethan A Rossi
Journal of Vision, 25(14):9, December 2025. doi:10.1167/jov.25.14.9

Recent research supports impairments in fixational eye movements (FEMs), the small motions of the eye that occur while gaze is maintained on a fixed target, as an objective biomarker of concussion. Preliminary work has demonstrated that fixational saccades are larger following a concussion; however, sex differences in FEMs and fixational saccades have not been examined. In this study, we used retinal image-based eye tracking with a tracking scanning laser ophthalmoscope (TSLO) to record FEMs while adolescents with concussion (n = 44; age range, 13-27 years) and age- and sex-matched healthy controls (n = 44; age range, 13-27 years) fixated the center or corner of the TSLO imaging raster. To improve reliability and avoid the errors associated with manual labeling of FEMs, an objective velocity-based algorithm was used to detect fixational saccades. Concussion patients made larger fixational saccades than controls, but only on the center task. Females made larger fixational saccades than males on this task irrespective of injury group, whereas no significant difference was found for the corner task. Females also made fewer horizontal and more oblique fixational saccades than males on the corner task. These findings highlight the importance of controlling for task- and sex-specific differences when evaluating FEMs as a biomarker for concussion.
The effect of spatial attention on saccadic adaptation.
Ali Batikh, Éric Koun, Roméo Salemme, Alessandro Farnè, Denis Pélisson
Journal of Vision, 25(14):13, December 2025. doi:10.1167/jov.25.14.13

Eye movements and spatial attention are both crucial to visual perception. Orienting gaze to objects of interest is achieved by voluntary saccades (VSs), driven by internal goals, or by reactive saccades (RSs), triggered automatically by sudden environmental changes. Both VSs and RSs are known to undergo plastic adjustments that maintain their accuracy throughout life, driven by saccadic adaptation processes. Spatial attention enhances visual processing within a restricted zone, and it can be shifted voluntarily following our internal goals (endogenous attention) or automatically in response to unexpected changes in sensory stimulation (exogenous attention). Despite the widely accepted notion that saccadic and attentional shifts are governed by distinct but highly interconnected systems, the relationship between saccadic adaptation and spatial attention remains unclear. To address this relationship, we conducted two experiments combining modified versions of the double-step adaptation paradigm and the attention-orienting paradigm. Experiment 1 tested the effect of shifting exogenous attention with a tactile cue near or away from the saccade target on RS adaptation. Experiment 2 also used tactile cueing, this time to investigate the effect of shifting endogenous attention on VS adaptation. Although we were unable to obtain direct evidence for an effect of spatial attention on saccadic adaptation, correlation analyses indicated that both the rate and the magnitude of saccadic adaptation were positively correlated with the allocation of attention toward the saccade target and negatively correlated with attention directed away from the target.
Attention can shift the reference eye under binocular fusion failure: A case report.
Jiahao Wu, Tengfei Han, Qian Wang, Lian Tang, Yumei Zhang, Zhanjun Zhang, Zaizhu Han
Journal of Vision, 25(14):15, December 2025. doi:10.1167/jov.25.14.15

Binocular fusion normally relies on a "cyclopean eye" that integrates image disparities between the two eyes into a single coherent percept. When fusion fails, how the brain chooses its spatial reference frame remains unclear. Here, we report a rare case of a 44-year-old man who developed multidirectional diplopia following surgical resection of a cerebellar vermis hemangioblastoma. Clinical tests showed deficits in several extraocular muscles. Experimentally, under binocular and dichoptic viewing, perception was always anchored to the left eye, with the right eye's image misaligned, whereas monocular viewing produced no diplopia. Crucially, the patient could voluntarily switch to the right eye as the reference, independent of stimulus shape similarity, stimulus exposure order, or response demands. This case offers a unique window onto the relationship between automatic sensory integration and top-down control in binocular vision: when cyclopean fusion breaks down, visual perception adapts to a single-eye reference frame that can be flexibly influenced by attention.
Representational dynamics of the main dimensions of object space: Face/body selectivity aligns temporally with animal taxonomy but not with animacy.
Gaëlle Leys, Chiu-Yueh Chen, Andreas von Leupoldt, J Brendan Ritchie, Hans Op de Beeck
Journal of Vision, 25(13):2, November 2025. doi:10.1167/jov.25.13.2

Object representations are organized according to multiple dimensions, with an important role for the distinction between animate and inanimate objects and for selectivity for faces versus bodies. For other dimensions, questions remain about how they relate to these two primary dimensions. One such dimension is a graded selectivity for the taxonomic level to which an animal belongs. Earlier research suggested that animacy can be understood as a graded selectivity for animal taxonomy, although a recent functional magnetic resonance imaging study suggested that taxonomic effects are instead due to face/body selectivity. Here we investigated the temporal profile at which these distinctions emerge using multivariate electroencephalography (N = 25) and a stimulus set that dissociates taxonomy from face/body selectivity and from animacy as a binary distinction. Our findings reveal very similar temporal profiles for taxonomy and face/body selectivity, both peaking around 150 ms. The binary animacy distinction has a more continuous and delayed temporal profile. These findings strengthen the conclusion that effects of animal taxonomy are in large part due to face/body selectivity, whereas selectivity for animate versus inanimate objects is delayed when dissociated from these other dimensions.