Look first, feel faster: Prior visual information accelerates haptic material exploration
Michaela Jeschke, Knut Drewing
Pub Date: 2025-10-14 | eCollection Date: 2025-09-01 | DOI: 10.1177/20416695251385816
I-Perception, 16(5), 20416695251385816 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12534811/pdf/
Humans use distinct exploratory procedures (EPs) in active touch, which are typically specialized for materials with particular properties: for example, pressing for deformable objects such as cushions, or stroking to test a fabric's smoothness. Further, humans can use abstract visual priors to fine-tune exploratory movement parameters such as exploration direction. Here we test the use of visual priors in the planning of material-specific EPs, using real-life materials and a naturalistic visual virtual reality environment. We show that humans are better at selecting specialized EPs at initial touch when they have access to valid prior visual information about the material: they used specialized EPs earlier, with higher probability, and explored materials for a shorter time. We conclude that prior visual information increases the efficiency of haptic exploration through anticipatory planning of appropriate movement schemes.
Wave after wave: The suggestibility of noise in the experience of multisensory hallucinations under multimodal Ganzfeld stimulation
Eleftheria Pistolas, Liv Smets, Johan Wagemans
Pub Date: 2025-09-23 | eCollection Date: 2025-09-01 | DOI: 10.1177/20416695251376600
I-Perception, 16(5), 20416695251376600 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12457770/pdf/
A multimodal Ganzfeld (MMGF) consists of homogeneous stimulation in both the visual and auditory modalities. Exposure to this unique perceptual environment can elicit the awareness of hallucinatory percepts. The nature of these hallucinatory percepts, and specifically the frequency of visual, auditory, and multisensory hallucinations, remains unclear. In this study, MMGF refers to the stimulation paradigm itself. The perceptual experiences it elicits, however, can be unimodal (occurring in one modality), multisensory (simultaneous but thematically unrelated across modalities), or multimodal (thematically integrated across modalities), allowing us to assess multisensory integration in the MMGF. Employing a multimethod approach combining quantitative and qualitative measures, we conducted three experiments using a between-subjects design with three noise conditions: no noise, white noise, and brown noise. Experiments 1 and 2 were conducted in a laboratory Ganzfeld (GF) space; Experiment 3 was conducted in a GF art installation in a museum context. We conducted half-open interviews, analyzed using inductive content analysis, to capture the subjective experience and assess the congruency of visual and auditory hallucinations. We found that visual hallucinations were frequently reported, whereas auditory hallucinations were less common. The most consistently reported auditory hallucinations, and importantly, multisensory integrated hallucinations, were water-related, suggesting a potential influence of noise, particularly brown noise, possibly due to its resemblance to water sounds. Our findings also indicate a predominantly unimodal focus on the visual aspect among participants, alongside instances of attention switching between modalities.
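The suggested role of brown noise rests on a spectral property worth stating precisely: brown (Brownian) noise has power falling off as roughly 1/f², so its energy piles up at low frequencies, much like the rumble of moving water, whereas white noise is spectrally flat. A minimal numerical sketch with NumPy (the band edges are arbitrary choices for illustration, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16
white = rng.standard_normal(n)
# Brown (Brownian) noise: the running sum of white noise; its power
# spectrum falls off roughly as 1/f^2, concentrating energy at low
# frequencies, which is what makes it sound deep and water-like.
brown = np.cumsum(white)

def band_power(x, lo, hi):
    """Mean spectral power in the normalized frequency band [lo, hi)."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].mean()

# Low-band vs. high-band power ratio: huge for brown noise, near 1 for white.
ratio_brown = band_power(brown, 0.001, 0.01) / band_power(brown, 0.1, 0.5)
ratio_white = band_power(white, 0.001, 0.01) / band_power(white, 0.1, 0.5)
```

Running this shows the brown-noise ratio several orders of magnitude above the white-noise ratio, quantifying the low-frequency dominance the abstract appeals to.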
Self-motion direction estimation from optic flow is a result of capacity-free and implicit ensemble coding
Qian Sun, Haojiang Ying, Qi Sun
Pub Date: 2025-09-15 | eCollection Date: 2025-09-01 | DOI: 10.1177/20416695251377199
I-Perception, 16(5), 20416695251377199 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12437248/pdf/
Numerous studies have explored the mechanisms of heading estimation from optic flow and of ensemble coding for other features, yet none have examined the role of ensemble coding in heading estimation. This study addressed this gap in two experiments. Participants sequentially viewed three (Experiment 1) or five or seven (Experiment 2) optic flow-simulated headings, then reported specific directions. Results revealed that individual heading accuracy declined as the number of headings increased, while estimates closely matched ensemble representations, demonstrating ensemble coding in heading estimation. Notably, ensemble coding accuracy remained unaffected by the number of headings, indicating that it is capacity-free, unlike capacity-limited individual heading processing. These summary statistics of motion may help us better understand navigation in complex environments (e.g., how pedestrians and drivers judge their self-motion directions), with potential real-world implications.
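The claim that ensemble coding is capacity-free has a simple statistical intuition: averaging pools independent encoding noise across items, so a summary estimate becomes more precise as set size grows even though each individual item does not. A toy simulation of that intuition (not the authors' model; the heading range and noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def estimation_errors(set_size, trials=20_000, encoding_noise=10.0):
    """Toy sketch: each heading is encoded with independent noise, so the
    ensemble (mean) estimate pools noise across items and its error shrinks
    as 1/sqrt(set_size), while the error for any single item stays fixed."""
    true_headings = rng.uniform(-30.0, 30.0, size=(trials, set_size))  # degrees
    encoded = true_headings + rng.normal(0.0, encoding_noise, size=(trials, set_size))
    item_error = np.mean(np.abs(encoded[:, 0] - true_headings[:, 0]))
    ensemble_error = np.mean(np.abs(encoded.mean(axis=1) - true_headings.mean(axis=1)))
    return item_error, ensemble_error

e3 = estimation_errors(3)
e7 = estimation_errors(7)
```

Under these assumptions the single-item error is the same at set sizes 3 and 7, while the ensemble error is smaller and keeps shrinking with set size, which is the statistical signature of a capacity-free summary.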
Localization of a single tactile stimulus during saccadic eye movements
Kazumichi Matsumiya, Nanami Nakashima
Pub Date: 2025-09-12 | eCollection Date: 2025-09-01 | DOI: 10.1177/20416695251376196
I-Perception, 16(5), 20416695251376196 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12432307/pdf/
To localize tactile events in external space, our perceptual system must transform skin-based locations into an external frame of reference. Such a transformation has been reported to involve reference frames that are unrelated to tactile sensations, such as eye position, which supports the idea that a visual reference frame is a single unified frame of reference for transforming spatial information from all sensory modalities. However, it remains unclear how tactile events are perceptually localized during saccadic eye movements. In this study, we presented a single tactile stimulus at a fixed location on the skin and investigated the time course of its localization before, during, and after a saccade. Participants reported the perceived location of the tactile stimulus in a visually aligned virtual space. We found that the tactile stimulus was mislocalized in the direction of the saccade. This mislocalization appeared even before the presentation of the saccade target and continued until 500 ms after saccade onset. These findings demonstrate that tactile localization is influenced by saccade planning or preparation and suggest that the time course of tactile localization during a saccade may differ from previously reported patterns of visual localization during a saccade.
The illusory perception of occluded space as empty depends on the occluded area
Pierre-Pascal Forster, Simon J Hazenberg, Vebjørn Ekroll, Rob van Lier
Pub Date: 2025-09-04 | eCollection Date: 2025-09-01 | DOI: 10.1177/20416695251372334
I-Perception, 16(5), 20416695251372334 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12411715/pdf/
Some occluders evoke the compelling impression that the space behind them is empty. Stage magicians use this illusion of absence to produce objects out of thin air. The generic view principle predicts that the illusion of absence should increase with decreasing occluder size. We investigated this prediction in experiments where participants saw a partly occluded scene and the same scene without the occluder, revealing a piece of fruit. They then rated (1) how easy it felt to imagine that the fruit was hidden behind the occluder and (2) how likely they thought it was that the fruit was hidden behind the occluder. Both ratings increased with increasing occluder area. This shows that the illusion of absence increases with decreasing occluder area, as predicted by the generic view principle. These findings could provide a starting point for future studies aiming to understand and prevent road accidents involving obstructions of view.
Perceiving direction of deformation-based motion
Takahiro Kawabe
Pub Date: 2025-08-25 | eCollection Date: 2025-07-01 | DOI: 10.1177/20416695251364725
I-Perception, 16(4), 20416695251364725 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12378610/pdf/
In dynamic visual scenes, many materials (including cloth, jelly-like bodies, and flowing liquids) undergo non-rigid deformations that convey information about their physical state. Among such cues, we focus on deformation-based motion, defined as the spatial shifts of image deformation. Studying deformation-based motion is essential because it lies at the intersection of motion perception and material perception. This study examines how two fundamental properties, spatial frequency and displacement speed, jointly shape the perception of deformation-based motion. We focused on these parameters because, in luminance-based motion perception, spatial frequency and displacement speed have been shown to critically influence motion sensitivity. Across three experiments using sequentially deformed 1/f noise images as a neutral background, we systematically manipulated the spatial frequency components of the deformation and the speed at which these deformations were displaced. Results showed that direction discrimination performance was strongly modulated by the interaction between spatial frequency and displacement speed. Suppressing local deformation cues improved discrimination at low frequencies, suggesting that local signals may interfere with global motion inference. These findings reveal how the spatial structure and dynamics of image deformation constrain motion perception and provide insights into how the brain interprets dynamic visual information from non-rigid materials.
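The 1/f noise images used as a neutral background can be built by shaping white noise in the frequency domain so that the amplitude spectrum falls off as 1/f, which approximates the spectral statistics of natural images. A generic construction with NumPy (a sketch, not the author's stimulus code; the image size and seed are arbitrary):

```python
import numpy as np

def one_over_f_image(size=256, seed=0):
    """Generate a 1/f noise image: shape white noise in the frequency
    domain so the amplitude spectrum falls off as 1/f (power as 1/f^2),
    mimicking the spectral statistics of natural images."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(rng.standard_normal((size, size)))
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0  # avoid dividing by zero at the DC term
    img = np.real(np.fft.ifft2(spectrum / f))
    # Rescale to [0, 1] for display as a grayscale texture.
    return (img - img.min()) / (img.max() - img.min())

img = one_over_f_image()
```

Warping such an image with a band-limited deformation field, and shifting that field across frames, would then give deformation-based motion stimuli of the general kind described above.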
Gender differences in face pareidolia: The effect of cognitive style and judgment criteria
Hao Wang, Zhigang Yang
Pub Date: 2025-08-25 | eCollection Date: 2025-07-01 | DOI: 10.1177/20416695251364206
I-Perception, 16(4), 20416695251364206 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12378617/pdf/
Face pareidolia refers to perceiving facial features on inanimate objects. Previous studies have identified gender differences in pareidolia, but the factors behind these differences remain unclear. This study examined potential influences, including task requirement, low-frequency information encoding ability, and cognitive style. University student participants reported what they saw in face-like object images and rated their face-likeness. A delayed matching task with blurred faces assessed encoding ability, and the Navon task examined cognitive style. Results showed that gender differences were influenced by task demands: women were more likely than men to perceive faces in objects, and this was not related to facial configuration processing. Additionally, a global processing tendency predicted higher pareidolia in women but not in men. Our findings suggest that gender differences in pareidolia are shaped by judgment criteria, with women adopting more relaxed criteria. This research contributes to understanding gender differences in social cognition.
Altered multisensory integration in pilots: Examining susceptibility to fission and fusion sound-induced flash illusions
Xing Peng, Yaowei Liang, Xiuyi Li, Jiaying Sun, Xiaoyu Tang, Aijun Wang, Chengyi Zeng
Pub Date: 2025-08-06 | DOI: 10.1177/20416695251364202
I-Perception, 16(4), 20416695251364202 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12332369/pdf/
Pilots show superior visual processing capabilities in many visual tasks. However, the extent to which this perceptual advantage extends to multisensory processing requires validation. In this study, we examined the multisensory integration of auditory and visual information in pilot and control groups, utilizing two sound-induced flash illusion (SIFI) tasks: the fission illusion, in which one flash coupled with two beeps is perceived as two flashes, and the fusion illusion, in which two flashes with a single beep are perceived as one flash. Sixty-six participants were instructed to report whether they saw one or two flashes while ignoring the irrelevant auditory beeps, across six conditions: one flash (1F), two flashes (2F), one flash/one beep (1F1B), one flash/two beeps (1F2B), two flashes/one beep (2F1B), and two flashes/two beeps (2F2B). We varied six stimulus onset asynchronies (SOAs) between the auditory and visual events (25-150 ms) to assess participants' temporal binding window (TBW). Signal detection theory was employed to analyze group differences in illusion reports. The findings suggest that, although pilots are less susceptible to the SIFI in both the fission and fusion conditions, they exhibit a narrower TBW only in the fusion condition, where their susceptibility changed more gradually as the SOA increased. In the fission condition, the group difference was driven primarily by visual sensitivity, whereas in the fusion condition it also likely reflected pilots' distinct multisensory integration mechanisms. Two alternative explanations for the group differences and for the different multisensory integration patterns in the fission and fusion conditions are discussed.
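Signal detection theory, as used here, separates perceptual sensitivity from response criterion. For flash-count judgments, sensitivity d' is the difference between the z-transformed hit and false-alarm rates. A minimal sketch with hypothetical counts (the correction scheme and the example numbers are assumptions for illustration, not data from the study):

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (add 0.5 to each count) keeps the
    z-transform finite when a rate would otherwise be 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one observer in a fission-type analysis:
# "signal" trials = two physical flashes, "noise" trials = one flash
# with two beeps; reporting "two flashes" on a noise trial is the illusion.
d = dprime(hits=45, misses=5, false_alarms=20, correct_rejections=30)
```

On this scheme, a group that is less susceptible to the illusion produces fewer false alarms at the same hit rate, which shows up as a higher d'.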
Textured surface stereoscopy
Nicholas J Wade
Pub Date: 2025-07-25 | eCollection Date: 2025-07-01 | DOI: 10.1177/20416695251349685
I-Perception, 16(4), 20416695251349685 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12304618/pdf/
Julesz constructed stereograms in which surfaces in depth could be seen with two eyes but not with either eye alone. He noted that such enclosed surfaces in depth never occur in natural scenes. In contrast, extended stereoscopic surfaces are a natural feature of binocular vision. Examples of constructed textured surface stereograms are presented as anaglyphs. They satisfy the criterion of revealing depth seen with two eyes which is concealed from each eye alone. A wide range of carrier patterns can be employed to construct complex stereoscopic surfaces. Stereoscopic inclusions can be embedded within modulated surface depths in the same anaglyphs, and conventional stereoscopic images (photographs) can be incorporated within constructed stereograms. Textured surface stereograms offer the possibility of extending the artistic expression of stereoscopy.
The three rules of mountaineering and amodal volume completion
Vebjørn Ekroll, Rob van Lier
Pub Date: 2025-07-24 | eCollection Date: 2025-07-01 | DOI: 10.1177/20416695251359216
I-Perception, 16(4), 20416695251359216 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12301588/pdf/
When climbing a mountain, one is sometimes surprised at how the mountain turns out to be much taller than one initially believed. Wishful thinking easily comes to mind as an explanation for this, but we illustrate how this misjudgment may also be explained as a consequence of the perceptual experience of amodal volume completion.