Pub Date: 2025-09-12 | eCollection Date: 2025-09-01 | DOI: 10.1177/20416695251376196
Kazumichi Matsumiya, Nanami Nakashima
To localize tactile events in external space, our perceptual system must transform skin-based locations into an external frame of reference. Such a transformation has been reported to involve reference frames that are unrelated to tactile sensations, such as eye position, which supports the idea that a visual reference frame is a single unified frame of reference for transforming spatial information from all sensory modalities. However, it remains unclear how tactile events are perceptually localized during saccadic eye movements. In this study, we presented a single tactile stimulus at a fixed location on the skin and investigated the time course of its localization before, during, and after a saccade. Participants reported the perceived location of the tactile stimulus in a visually aligned virtual space. We found that the tactile stimulus was mislocalized in the direction of the saccade. This mislocalization appeared even before the presentation of the saccade target and continued until 500 ms after saccade onset. These findings demonstrate that tactile localization is influenced by saccade planning or preparation and suggest that the time course of tactile localization during a saccade may differ from previously reported patterns of visual localization during a saccade.
"Localization of a single tactile stimulus during saccadic eye movements." I-Perception, 16(5), 20416695251376196. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12432307/pdf/
Pub Date: 2025-09-04 | eCollection Date: 2025-09-01 | DOI: 10.1177/20416695251372334
Pierre-Pascal Forster, Simon J Hazenberg, Vebjørn Ekroll, Rob van Lier
Some occluders evoke the compelling impression that the space behind them is empty. Stage magicians use this illusion of absence to produce objects out of thin air. The generic view principle predicts that the illusion of absence should increase with decreasing occluder size. We investigated this prediction in experiments where participants saw a partly occluded scene and the same scene without the occluder, revealing a piece of fruit. They then rated (1) how easy it felt to imagine that the fruit was hidden behind the occluder and (2) how likely they thought it was that the fruit was hidden behind the occluder. Both ratings increased with increasing occluder area. This shows that the illusion of absence increases with decreasing occluder area, as predicted by the generic view principle. These findings could provide a starting point for future studies aiming to understand and prevent road accidents involving obstructions of view.
"The illusory perception of occluded space as empty depends on the occluded area." I-Perception, 16(5), 20416695251372334. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12411715/pdf/
Pub Date: 2025-08-25 | eCollection Date: 2025-07-01 | DOI: 10.1177/20416695251364725
Takahiro Kawabe
In dynamic visual scenes, many materials, including cloth, jelly-like bodies, and flowing liquids, undergo non-rigid deformations that convey information about their physical state. Among such cues, we focus on deformation-based motion, defined as the spatial shifts of image deformation. Studying deformation-based motion is essential because it lies at the intersection of motion perception and material perception. This study examines how two fundamental properties, spatial frequency and displacement speed, jointly shape the perception of deformation-based motion. We focused on these parameters because, in luminance-based motion perception, spatial frequency and displacement speed have been shown to critically influence motion sensitivity. Across three experiments using sequentially deformed 1/f noise images as a neutral background, we systematically manipulated the spatial frequency components of the deformation and the speed at which these deformations were displaced. Results showed that direction discrimination performance was strongly modulated by the interaction between spatial frequency and displacement speed. Suppressing local deformation cues improved discrimination at low frequencies, suggesting that local signals may interfere with global motion inference. These findings reveal how the spatial structure and dynamics of image deformation constrain motion perception and provide insights into how the brain interprets dynamic visual information from non-rigid materials.
"Perceiving direction of deformation-based motion." I-Perception, 16(4), 20416695251364725. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12378610/pdf/
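The carrier stimuli in this study were 1/f noise images. As a minimal, self-contained sketch (parameter values are illustrative, not taken from the paper), such a carrier can be synthesized by imposing a 1/f amplitude spectrum on random phases:

```python
import numpy as np

def noise_1f(size=256, seed=0):
    """Grayscale noise image with an approximately 1/f amplitude spectrum."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                       # avoid division by zero at DC
    amplitude = 1.0 / f                 # 1/f amplitude falloff
    phase = rng.uniform(0.0, 2 * np.pi, (size, size))
    img = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
    return (img - img.min()) / (img.max() - img.min())  # normalize to [0, 1]

carrier = noise_1f()
```

Deformation-based motion would then be introduced by warping such an image with a spatially band-limited displacement field and shifting that field over frames; the sketch above covers only the carrier.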
Pub Date: 2025-08-25 | eCollection Date: 2025-07-01 | DOI: 10.1177/20416695251364206
Hao Wang, Zhigang Yang
Face pareidolia refers to perceiving facial features on inanimate objects. Previous studies have identified gender differences in pareidolia, but the factors behind these differences remain unclear. This study examined potential influences, including task requirement, low-frequency information encoding ability, and cognitive style. University student participants reported what they saw in face-like object images and rated their face-likeness. A delayed matching task with blurred faces assessed encoding ability, and the Navon task examined cognitive style. Results showed that gender differences were influenced by task demands: women were more likely than men to perceive faces in objects, and this was not related to facial configuration processing. Additionally, a global processing tendency predicted higher pareidolia in women but not in men. Our findings suggest that gender differences in pareidolia are shaped by judgment criteria, with women adopting more relaxed criteria. This research contributes to understanding gender differences in social cognition.
"Gender differences in face pareidolia: The effect of cognitive style and judgment criteria." I-Perception, 16(4), 20416695251364206. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12378617/pdf/
Pub Date: 2025-08-06 | DOI: 10.1177/20416695251364202
Xing Peng, Yaowei Liang, Xiuyi Li, Jiaying Sun, Xiaoyu Tang, Aijun Wang, Chengyi Zeng
Pilots show superior visual processing capabilities in many visual-domain tasks. However, the extent to which this perceptual advantage extends to multisensory processing requires validation. In this study, we examined multisensory integration of auditory and visual information in pilot and control groups using two sound-induced flash illusion (SIFI) tasks: the fission illusion, in which one flash paired with two beeps is perceived as two flashes, and the fusion illusion, in which two flashes paired with a single beep are perceived as one flash. Sixty-six participants were instructed to report whether they saw one or two flashes while ignoring the irrelevant auditory beeps, across six conditions: one flash (1F), two flashes (2F), one flash/one beep (1F1B), one flash/two beeps (1F2B), two flashes/one beep (2F1B), and two flashes/two beeps (2F2B). We varied the stimulus onset asynchrony (SOA) between auditory and visual events across six levels (25-150 ms) to assess each participant's temporal binding window (TBW). Signal detection theory was employed to analyze group differences in illusion reports. The findings suggest that while pilots are less susceptible to SIFI in both the fission and fusion conditions, they exhibit a narrower TBW only in the fusion condition, where their susceptibility changed more gradually as SOA increased. In the fission condition, the group difference was driven primarily by visual sensitivity, whereas in the fusion condition it also likely reflected pilots' distinct multisensory integration mechanisms. Two alternative possibilities are discussed to explain the group differences and the different multisensory integration patterns in the fission and fusion conditions.
"Altered multisensory integration in pilots: Examining susceptibility to fission and fusion sound-induced flash illusions." I-Perception, 16(4), 20416695251364202. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12332369/pdf/
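Signal detection theory was used to analyze illusion reports. A minimal sketch of computing sensitivity (d′) and criterion (c) from hit and false-alarm counts, using the common log-linear correction to avoid infinite z-scores (the counts below are hypothetical, not the study's data):

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from response counts,
    with a log-linear correction (add 0.5 to each cell)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

# Hypothetical example: "two flashes" responses, with 2F2B trials as
# signal and 1F2B (fission) trials as noise.
d, c = dprime(hits=45, misses=15, false_alarms=20, correct_rejections=40)
```

With equal hit and false-alarm rates the function returns d′ = 0, i.e., no sensitivity, as expected.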
Pub Date: 2025-07-25 | eCollection Date: 2025-07-01 | DOI: 10.1177/20416695251349685
Nicholas J Wade
Julesz constructed stereograms in which surfaces in depth could be seen with two eyes but not with either eye alone. He noted that such enclosed surfaces in depth never occur in natural scenes. In contrast, extended stereoscopic surfaces are a natural feature of binocular vision. Examples of constructed textured surface stereograms are presented as anaglyphs. They satisfy the criterion that the depth revealed to two eyes is concealed from each eye alone. A wide range of carrier patterns can be employed to construct complex stereoscopic surfaces. Stereoscopic inclusions can be embedded within modulated surface depths in the same anaglyphs, and conventional stereoscopic images (photographs) can be incorporated within constructed stereograms. Textured surface stereograms offer the possibility of extending the artistic expression of stereoscopy.
"Textured surface stereoscopy." I-Perception, 16(4), 20416695251349685. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12304618/pdf/
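The Julesz construction that this work starts from can be sketched in a few lines (a minimal hypothetical example, not the article's anaglyphs): a central square of random dots is shifted horizontally between the two eyes' images, so depth is visible binocularly while each image alone looks like uniform noise.

```python
import numpy as np

def random_dot_stereogram(size=100, disparity=4, seed=0):
    """Julesz-style random-dot stereogram: the central square region is
    shifted horizontally in the right eye's image, creating a surface in
    depth that only binocular viewing reveals."""
    rng = np.random.default_rng(seed)
    left = rng.integers(0, 2, (size, size)).astype(float)
    right = left.copy()
    r0, r1 = size // 4, 3 * size // 4
    # Shift the central square by `disparity` pixels in the right eye.
    right[r0:r1, r0:r1 - disparity] = left[r0:r1, r0 + disparity:r1]
    # Fill the revealed strip with fresh random dots.
    right[r0:r1, r1 - disparity:r1] = rng.integers(0, 2, (r1 - r0, disparity))
    return left, right

left, right = random_dot_stereogram()
```

Presented as a red-cyan anaglyph (left image in the red channel, right in green/blue), the square appears to float in front of the background.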
Pub Date: 2025-07-24 | eCollection Date: 2025-07-01 | DOI: 10.1177/20416695251359216
Vebjørn Ekroll, Rob van Lier
When climbing a mountain, one is sometimes surprised at how the mountain turns out to be much taller than one initially believed. Wishful thinking easily comes to mind as an explanation for this, but we illustrate how this misjudgment may also be explained as a consequence of the perceptual experience of amodal volume completion.
"The three rules of mountaineering and amodal volume completion." I-Perception, 16(4), 20416695251359216. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12301588/pdf/
Pub Date: 2025-07-23 | eCollection Date: 2025-07-01 | DOI: 10.1177/20416695251352129
Ziwei Chen, Mengxin Wen, Xun Liu, Di Fu
In everyday life, people perceive nonexistent faces in face-like objects, a phenomenon called face pareidolia. Face-like objects, like averted gazes, can direct an observer's attention. However, the similarities and differences in the attentional shifts induced by these two types of stimuli remain underexplored. Using a gaze cueing task, this study compares the cueing effects of face-like objects and averted-gaze faces, revealing both commonalities and distinct underlying mechanisms. Our findings demonstrate that while both types of stimuli elicit attentional shifts, the mechanisms differ: averted-gaze faces rely on the processing of local features such as gaze direction, whereas face-like objects leverage their global configuration, triggering eye-like features that enhance attentional shifts. These findings advance our understanding of the processing mechanisms underlying the perception of face-like objects and of how the brain represents facial attributes even when physical facial stimuli are absent. This study provides a valuable theoretical foundation for future investigations into the broader applications of face-like stimuli in human perception and attention.
"How face-like objects and averted gaze faces orient our attention: The role of global configuration and local features." I-Perception, 16(4), 20416695251352129. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12350056/pdf/
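In a gaze cueing task, the cueing effect is conventionally the reaction-time difference between invalid trials (target opposite the cued direction) and valid trials (target at the cued location). A minimal sketch with hypothetical RTs in milliseconds (not the study's data):

```python
from statistics import mean

def cueing_effect(valid_rts, invalid_rts):
    """Attentional cueing effect (ms): positive values indicate faster
    responses at the cued location than at the uncued one."""
    return mean(invalid_rts) - mean(valid_rts)

effect = cueing_effect(valid_rts=[320, 335, 310], invalid_rts=[350, 360, 345])
```

Comparing this effect across cue types (face-like objects vs. averted-gaze faces) is what reveals whether the two stimulus classes orient attention to the same degree.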
Pub Date: 2025-07-22 | eCollection Date: 2025-07-01 | DOI: 10.1177/20416695251355381
Sabrina Hansmann-Roth, Pascal Mamassian
Image intensity depends not only on the illumination and the reflectance properties of objects, but also on the reflectance and absorption properties of any intervening media. In this study, we presented observers with glossy objects behind partially transmissive materials. The transparent layer causes an achromatic color shift and a compression of luminance contrast, both of which can affect the perception of the specular reflections of the object behind the layer. In two distinct experiments, we examined how this achromatic color shift and contrast compression affect perceived gloss. Using the maximum likelihood conjoint measurement paradigm, we estimated the contamination introduced by different transparent layers on perceived gloss. In a follow-up experiment, observers matched the albedo and gloss of surfaces seen in plain view to surfaces seen behind a transparent layer. Our results indicate a high degree of gloss constancy, with a small but significant contribution of the transparent layer to gloss estimates, especially for light-colored transparent layers. Overall, gloss was significantly overestimated.
"Perceiving gloss through transparency." I-Perception, 16(4), 20416695251355381. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12290337/pdf/
Pub Date: 2025-07-15 | DOI: 10.1177/20416695251349737
Takuma Morimoto, Masayuki Sato, Shoji Sunaga, Keiji Uchikawa
Illumination conditions inside and outside cast shadows typically differ substantially in both intensity and chromaticity. However, our daily experience suggests that we generally have no difficulty perceiving surface color stably within cast shadows. In this study, two experiments measured the extent to which color constancy holds within cast shadows. We constructed a scene with colored hexagons illuminated by two projectors simulating "sunlight" and "skylight." Part of the scene lay in a cast shadow, illuminated only by the skylight, where a subjective white point was measured. We also created a condition in which the cast shadow was not perceived as a shadow. Results showed that color constancy generally holds well in shadows, and that the effect of the skylight's color varied across observers. Whether the cast shadow was perceived as a shadow had no effect. Overall, these findings are consistent with our daily experience of stably judging object color even within cast shadows.
"Human color constancy in cast shadows." I-Perception, 16(4), 20416695251349737. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12264327/pdf/
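A common way to quantify the degree of constancy from such white-point settings (not necessarily these authors' exact metric) is a Brunswik-style constancy index, where 1 means perfect constancy and 0 means none:

```python
import math

def constancy_index(match, perfect, no_constancy):
    """Brunswik-style index in a 2-D chromaticity plane:
    1 - (distance of the observer's match from the perfect-constancy
    point) / (distance of the no-constancy point from it)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return 1.0 - dist(match, perfect) / dist(no_constancy, perfect)

# Hypothetical chromaticity coordinates (x, y), for illustration only:
ci = constancy_index(match=(0.31, 0.33), perfect=(0.30, 0.32),
                     no_constancy=(0.34, 0.36))
```

A match at the perfect-constancy point yields 1.0; a match at the no-constancy point yields 0.0.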