Despite decades of intense study, the spatiotemporal processing of letters in visual word recognition has yet to be elucidated, with the debate largely focusing on whether individual letters are processed serially or in parallel. The present study investigated the processing of individual letters and letter combinations through time in visual word recognition using displays where signal-to-noise ratio (SNR) varied randomly throughout a 200 ms exposure duration. In Experiment 1, SNR varied either homogeneously across all letters or independently for each letter position (heterogeneous sampling). Reading accuracy was substantially greater with homogeneous than heterogeneous sampling. Experiment 2 again used heterogeneous sampling, and classification images (CIs) were calculated for individual letter positions or conjunctions thereof, reflecting processing efficiency as a function of time during target exposure. These CIs or their Fourier transforms were passed to a classifier to assess differences in the result patterns across individual letter positions or their conjunctions. Overall, the present results indicate the following: (1) significant parallel letter processing capacity throughout exposure duration; (2) dissociable processing mechanisms for each letter position; and (3) letter position-specific mechanisms for letter conjunctions that are distinct from those for individual letters. The results also provide evidence relevant to the neural code underlying the perceptual mechanisms that were uncovered.
Martin Arguin, Simon Fortier-St-Pierre. "Spatiotemporal letter processing in visual word recognition uncovered by perceptual oscillations." Journal of Vision, 25(14):8, December 1, 2025. doi:10.1167/jov.25.14.8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710787/pdf/
Human observers can recognize the meaning of a complex visual scene in a fraction of a second, but not all scenes are equally easy to recognize at a glance. What governs this variability? We tested the hypothesis that scene understanding is modulated by two distinct forms of information: visual information, defined as the structural complexity of the image, and semantic information, defined as the richness of the conceptual content of the scene. We quantified visual information using image compressibility and quantified semantic information from the complexity of human-written scene descriptions. Across four behavioral experiments, participants performed either a rapid detection task (distinguishing intact scenes from phase-scrambled masks) or a basic-level categorization task. High visual information impaired both detection and categorization, consistent with a perceptual bottleneck. In contrast, high semantic information facilitated detection but not categorization, suggesting that conceptual richness facilitates early perceptual processes without necessarily improving recognition. These findings reveal a dissociation between visual and semantic scene attributes and suggest that top-down expectations can selectively support early perceptual processing.
Sage Aronson, Maria S Adkins, Michelle R Greene. "Divergent roles of visual structure and conceptual meaning in scene detection and categorization." Journal of Vision, 25(14):21, December 1, 2025. doi:10.1167/jov.25.14.21. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12743497/pdf/
Akseli Pullinen, Riikka Mononen, Jaana Simola, Linda Henriksson
Boundary extension refers to a phenomenon in which individuals are likely to remember an image as having more content beyond its actual borders, mistakenly adding details that might have been just beyond the original edges. Despite the wealth of research published about the topic over many decades, most research has used simple two-dimensional (2D) images as stimuli. Consequently, there is insufficient evidence that boundary extension as a phenomenon generalizes to real-world scenarios with naturalistic viewing behavior. To address this gap, we designed a virtual reality (VR) experiment during which the participants (N = 60) could freely explore naturalistic three-dimensional indoor environments surrounding them. In the experiment, each participant visited each of the 20 virtual rooms twice: first to view the scene and then to complete a task. Their task during the second visit was to move to the location from which they had originally viewed the scene, hence matching their view of the scene to what they remembered seeing before. Especially for close-up views, participants ended their task at a location where their field of view of the scene was wider than the initial view, hence indicating boundary extension. The effect was also greater when the movement direction was forward from a wider field of view than that of the original view. Both findings are consistent with previous research and demonstrate that boundary extension is not limited to looking at 2D images but can also occur during naturalistic viewing scenarios. As our method showed no visible boundaries in the stimuli, our results suggest that the existence of such boundaries is not critical for eliciting boundary extension.
"Boundary extension during naturalistic viewing." Journal of Vision, 25(14):17, December 1, 2025. doi:10.1167/jov.25.14.17. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12716452/pdf/
Spatial attention and eye movements jointly contribute to efficient sampling of visual information in the environment, but maintaining precise spatial attention across saccades becomes challenging due to the drastic retinal shifts. Previous studies have provided evidence that spatial attention may remap imperfectly across saccades, incurring systematic feature interference with ongoing perception, yet the role of saccade predictability remains largely untested. In the current study, we investigated whether spatiotemporal predictability of saccades influences postsaccadic remapping and feature perception. In two preregistered experiments, we implemented the postsaccadic feature report paradigm and manipulated spatiotemporal predictability of saccades. Experiment 1 manipulated spatial and temporal saccade predictability together, whereas Experiment 2 dissociated the roles of spatial and temporal predictability in separate conditions. In addition to spatial and temporal saccade predictability both improving general task performance, we found that spatial saccade predictability specifically modulated postsaccadic feature interference. When saccades were spatially unpredictable, "swap errors" occurred at the early postsaccadic time point, where participants misreported the retinotopic color instead of the spatiotopic target color. However, these swap errors were reduced when saccades were made spatially predictable. These results suggest that systematic feature interference associated with postsaccadic remapping is malleable to expectations of the upcoming saccade target location, highlighting the role of predictions in maintaining perceptual stability across saccades.
Tzu-Yao Chiu, Isabel Jaen, Julie D Golomb. "Spatiotemporal predictability of saccades modulates postsaccadic feature interference." Journal of Vision, 25(14):1, December 1, 2025. doi:10.1167/jov.25.14.1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12697699/pdf/
Richard Johnston, Cameran Thompson, Anthony P Kontos, Min Zhang, Cyndi L Holland, Aaron J Zynda, Christy K Sheehy, Ethan A Rossi
Recent research supports impairments in fixational eye movements (FEMs), small motions of the eye that occur during periods when gaze is maintained on a fixed target, as an objective biomarker of concussion. Preliminary work has demonstrated that fixational saccades are larger following a concussion; however, sex differences in FEMs and fixational saccades have not been examined. In this study, we used retinal image-based eye tracking, with a tracking scanning laser ophthalmoscope (TSLO), to record FEMs while adolescents with concussion (n = 44; age range, 13-27 years) and age- and sex-matched healthy controls (n = 44; age range, 13-27 years) fixated the center or corner of the TSLO imaging raster. To improve reliability and overcome errors associated with the manual labeling of FEMs, an objective velocity-based algorithm was used to detect fixational saccades. Concussion patients made larger fixational saccades than controls but only on the center task. Females made larger fixational saccades than males on this task irrespective of injury group, whereas no significant difference was supported for the corner task. Females also made fewer horizontal and more oblique fixational saccades than males on the corner task. These findings highlight the importance of controlling for task- and sex-specific differences when evaluating FEMs as a biomarker for concussion.
"Sex differences in fixational eye movements following concussion." Journal of Vision, 25(14):9, December 1, 2025. doi:10.1167/jov.25.14.9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710789/pdf/
Ali Batikh, Éric Koun, Roméo Salemme, Alessandro Farnè, Denis Pélisson
Eye movements and spatial attention are both crucial to visual perception. Orienting gaze to objects of interest is achieved by voluntary saccades (VSs) driven by internal goals or reactive saccades (RSs) triggered automatically by sudden environmental changes. Both VSs and RSs are known to undergo plastic adjustments to maintain their accuracy throughout life, driven by saccadic adaptation processes. Spatial attention enhances visual processing within a restricted zone, and it can be shifted voluntarily following our internal goals (endogenous) or automatically in response to unexpected changes in sensory stimulation (exogenous). Despite the widely accepted notion that saccadic and attention shifts are governed by distinct but highly interconnected systems, the relationship between saccadic adaptation and spatial attention is still unclear. To address this relationship, we conducted two experiments combining modified versions of the double-step adaptation paradigm and the attention-orienting paradigm. Experiment 1 tested the effect of shifting exogenous attention by a tactile cue near or away from the saccade's target on RS adaptation. Experiment 2 also used tactile cueing but now to investigate the effect of shifting endogenous attention on VS adaptation. Although we were unable to obtain direct evidence for an effect of spatial attention on saccadic adaptation, correlation analyses indicated that both the rate and magnitude of saccadic adaptation were positively correlated with the allocation of attention toward the saccade target and negatively correlated with attention directed away from the target.
"The effect of spatial attention on saccadic adaptation." Journal of Vision, 25(14):13, December 1, 2025. doi:10.1167/jov.25.14.13. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12721433/pdf/
Jiahao Wu, Tengfei Han, Qian Wang, Lian Tang, Yumei Zhang, Zhanjun Zhang, Zaizhu Han
Binocular fusion normally relies on a "cyclopean eye" that integrates image disparities between the two eyes into a single coherent percept. When fusion fails, how the brain chooses its spatial reference frame remains unclear. Here, we report a rare case of a 44-year-old man who developed diplopia in multiple directions following surgical resection of a cerebellar vermis hemangioblastoma. Clinical tests showed deficits in several extraocular muscles. Experimentally, in binocular and dichoptic viewing, perception was always anchored to the left eye with the right eye's image misaligned, whereas monocular viewing produced no diplopia. Crucially, the patient could voluntarily switch to the right eye as reference, independent of stimulus shape similarity, stimulus exposure order, or participant response demands. This case offers a unique window to understand the relationship between automatic sensory integration and top-down control in binocular vision: When cyclopean fusion breaks down, visual perception adapts to a single-eye reference frame that can be flexibly influenced by attention.
"Attention can shift the reference eye under binocular fusion failure: A case report." Journal of Vision, 25(14):15, December 1, 2025. doi:10.1167/jov.25.14.15. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12721435/pdf/
After removing a virtual reality headset, people can be surprised to find that they are facing a different direction than expected. Here, we investigated whether people can maintain spatial representations of one environment while immersed in another. In the first three experiments, stationary participants were asked to point to previously seen targets in one environment, either the real world or a virtual environment, while in the other environment. We varied the amount of misalignment between the two environments (detectable or undetectable), the virtual environment itself (lab or kitchen), and the instructions (general or egocentric priming). Pointing endpoints were based primarily on the locations of objects in the currently seen environment, suggesting a strong reliance on allocentric cues. In the fourth experiment, participants moved in virtual reality while keeping track of an unseen real-world target. We confirmed that the pointing errors were due to a reliance on the currently seen environment. It appears that people hardly ever keep track of object positions in a previously seen environment and instead primarily rely on currently available spatial information to plan their actions.
Meaghan McManus, Franziska Seifert, Immo Schütz, Katja Fiehler. "Allocentric spatial representations dominate when switching between real and virtual worlds." Journal of Vision, 25(13):7, November 3, 2025. doi:10.1167/jov.25.13.7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12629136/pdf/
Gaëlle Leys, Chiu-Yueh Chen, Andreas von Leupoldt, J Brendan Ritchie, Hans Op de Beeck
Object representations are organized according to multiple dimensions, with an important role for the distinction between animate and inanimate objects and for selectivity for faces versus bodies. For other dimensions, questions remain about how they relate to these two primary dimensions. One such dimension is a graded selectivity for the taxonomic level that an animal belongs to. Earlier research suggested that animacy can be understood as a graded selectivity for animal taxonomy, although a recent functional magnetic resonance imaging study suggested that taxonomic effects are instead due to face/body selectivity. Here we investigated the temporal profile at which these distinctions emerge with multivariate electroencephalography (N = 25), using a stimulus set that dissociates taxonomy from face/body selectivity and from animacy as a binary distinction. Our findings reveal a very similar temporal profile for taxonomy and face/body selectivity, with a peak around 150 ms. The binary animacy distinction has a more continuous and delayed temporal profile. These findings strengthen the conclusion that effects of animal taxonomy are in large part due to face/body selectivity, whereas selectivity for animate versus inanimate objects is delayed when it is dissociated from these other dimensions.
Representational dynamics of the main dimensions of object space: Face/body selectivity aligns temporally with animal taxonomy but not with animacy. Journal of Vision, 25(13), 2. doi:10.1167/jov.25.13.2. Published 2025-11-03. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12598827/pdf/
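The time-resolved multivariate decoding described in this abstract can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' pipeline: the function name `timepoint_decoding`, the nearest-class-mean classifier, and the simulated epoch shapes are all stand-ins for whatever decoder the study actually used.

```python
import numpy as np

def timepoint_decoding(X, y, n_folds=5, rng=None):
    """Cross-validated decoding accuracy at each time point.

    X : array (n_trials, n_channels, n_times) -- EEG-like epochs
    y : array (n_trials,) -- binary condition labels (0/1)

    A nearest-class-mean classifier is trained and tested separately
    at every time sample, yielding an accuracy-over-time profile like
    those used to compare when different stimulus dimensions emerge.
    """
    rng = np.random.default_rng(rng)
    n_trials, _, n_times = X.shape
    order = rng.permutation(n_trials)
    folds = np.array_split(order, n_folds)
    acc = np.zeros(n_times)
    for t in range(n_times):
        correct = 0
        for test_idx in folds:
            train_idx = np.setdiff1d(order, test_idx)
            # Class means on the training trials at this time point.
            m0 = X[train_idx][y[train_idx] == 0, :, t].mean(axis=0)
            m1 = X[train_idx][y[train_idx] == 1, :, t].mean(axis=0)
            # Assign each test trial to the nearer class mean.
            d0 = np.linalg.norm(X[test_idx][:, :, t] - m0, axis=1)
            d1 = np.linalg.norm(X[test_idx][:, :, t] - m1, axis=1)
            correct += np.sum((d1 < d0) == (y[test_idx] == 1))
        acc[t] = correct / n_trials
    return acc
```

On simulated epochs where a condition difference is injected only in a late time window, the returned profile sits near chance early and rises where the signal appears, which is the logic behind statements such as "a peak around 150 ms."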
Benjamin Balas, Myra Morton, Molly Setchfield, Lily Roshau, Emily Westrick
Face pareidolia, the phenomenon of seeing face-like patterns in non-face images, has a dual nature: Pareidolic patterns are experienced as face-like, even while observers can recognize the true nature of the stimulus (Stuart et al., 2025). Although pareidolic faces seem to result largely from the canonical arrangement of eye spots and a mouth, we hypothesized that competition between veridical and face-like interpretations of pareidolic patterns may constrain face pareidolia in natural scenes and textures. Specifically, we predicted that contrast negation, which disrupts multiple aspects of mid- to high-level recognition, may increase rates of face pareidolia in complex natural textures by weakening the veridical, non-face stimulus interpretation. We presented adult participants (n = 27) and 5- to 12-year-old children (n = 67) with a series of natural images depicting textures such as grass, leaves, shells, and rocks. We asked participants to circle any patterns in each image that looked face-like, with no constraints on response time or pattern size, position, and orientation. We found that, across our adult and child samples, contrast-negated images yielded more pareidolic face detections than positive images. We conclude that disrupting veridical object and texture recognition enhances pareidolia in children and adults by compromising half of the dual nature of a pareidolic pattern.
Contrast negation increases face pareidolia rates in natural scenes. Journal of Vision, 25(13), 5. doi:10.1167/jov.25.13.5. Published 2025-11-03. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12617666/pdf/
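Computationally, the contrast negation used as the manipulation in this study amounts to flipping the luminance polarity of every pixel while leaving spatial structure intact. A minimal sketch, assuming 8-bit grayscale input (the function name `contrast_negate` is illustrative, not from the paper):

```python
import numpy as np

def contrast_negate(img):
    """Return the photographic negative of an 8-bit grayscale image.

    Light pixels become dark and vice versa, so surface and texture
    cues are disrupted while edges and geometry are preserved --
    which is why negation can weaken the veridical interpretation of
    a pattern without destroying its face-like configuration.
    """
    img = np.asarray(img, dtype=np.uint8)
    return 255 - img
```

Negation is an involution: applying it twice restores the original image, so `contrast_negate(contrast_negate(img))` equals `img`.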