Pub Date: 2025-12-03 | eCollection Date: 2025-11-01 | DOI: 10.1177/20416695251395442
Benjamin Balas
The human visual system is sensitive to statistical regularities in natural images. These include general properties, such as the characteristic 1/f power-spectrum fall-off observed across diverse natural scenes, and category-specific properties, such as the bias favoring horizontal contrast energy in face recognition. Here, we examined the sensitivity of face pareidolia in adult observers to these image properties using fractal noise images and an unconstrained pareidolic face detection task. In separate experiments, we presented participants with noise patterns with varying spectral fall-off coefficients (Experiment 1) and noise patterns with bandpass orientation filtering that limited either horizontal or vertical contrast energy (Experiment 2). In both experiments, face pareidolia rates were sensitive to these manipulations. In Experiment 1, fractal noise patterns with steeper fall-off coefficients (favoring a coarser appearance) led to lower rates of pareidolic face detection. In Experiment 2, despite the clear bias favoring horizontal contrast energy in a wide range of face recognition tasks, both horizontal and vertical orientation bandpass filtering reduced rates of face pareidolia relative to isotropic images. These results suggest that detecting pareidolic faces depends on the availability of face-like information across many low-level channels rather than on a face-specific preferred scale or orientation.
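The abstract does not include the authors' stimulus code; the following is a minimal NumPy sketch of how stimuli of the kind described might be generated: isotropic noise with a 1/f^alpha amplitude spectrum (larger alpha = steeper fall-off, coarser appearance) and a filter that removes contrast energy in an orientation band (e.g., horizontal or vertical). All function names and parameter values are illustrative, not the study's actual implementation.

```python
import numpy as np

def fractal_noise(size=256, alpha=1.0, rng=None):
    """Isotropic noise with a 1/f**alpha amplitude spectrum.

    alpha is the spectral fall-off coefficient; larger alpha gives a
    steeper fall-off and a coarser-looking image.
    """
    rng = np.random.default_rng(rng)
    # Start from white noise and shape its amplitude spectrum.
    spectrum = np.fft.fft2(rng.standard_normal((size, size)))
    fx = np.fft.fftfreq(size)
    fy = np.fft.fftfreq(size)
    f = np.hypot(*np.meshgrid(fx, fy, indexing="ij"))
    f[0, 0] = 1.0                      # avoid division by zero at DC
    shaped = spectrum / f**alpha
    shaped[0, 0] = 0.0                 # zero-mean image
    img = np.real(np.fft.ifft2(shaped))
    # Normalize to [0, 1] for display.
    return (img - img.min()) / (img.max() - img.min())

def orientation_filter(img, center_deg, bandwidth_deg=30.0):
    """Remove contrast energy in an orientation band around center_deg."""
    size = img.shape[0]
    fx = np.fft.fftfreq(size)
    fy = np.fft.fftfreq(size)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    theta = np.degrees(np.arctan2(FY, FX)) % 180.0
    # Angular distance to the band center, wrapped so 0 and 180 coincide.
    d = np.abs(theta - center_deg)
    d = np.minimum(d, 180.0 - d)
    mask = np.where(d < bandwidth_deg / 2.0, 0.0, 1.0)
    mask[0, 0] = 1.0                   # keep the DC component
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))
```

Because the mask depends only on orientation modulo 180 degrees, it preserves the Hermitian symmetry of the spectrum, so the filtered image stays real-valued.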
Title: Face pareidolia is sensitive to spectral power and orientation energy.
I-Perception, 16(6). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12678727/pdf/
Pub Date: 2025-11-27 | eCollection Date: 2025-11-01 | DOI: 10.1177/20416695251391640
Nikolaus F Troje, Lucie Preißler, Gudrun Schwarzer
Earlier research has shown that seven-month-old infants prefer to look at real objects over their referents. Which visual cues determine that preference? Motivated by research on adult observers highlighting the significance of motion parallax over other depth cues contributing to a sense of presence and place, we tested the hypothesis that motion parallax alone is sufficient to cause preferential looking to real objects in infants. We presented pairs of displays of toys in different formats: (a) the real three-dimensional toy; (b) a realistic image of that toy presented on screen; (c) the same image, but with added depth-from-motion-parallax. Infants preferred (a) over (b) (57% vs. 43%, p < .01) and (c) over (b) (52% vs. 48%, p < .05), but showed no significant preference between (a) and (c) (51% vs. 49%, n.s.). This supports the hypothesis that motion parallax alone can induce a looking preference comparable to that observed for real objects.
Title: Motion parallax allows 7-8-month-old infants to distinguish pictures from their referents.
I-Perception, 16(6). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12660657/pdf/
Pub Date: 2025-11-26 | eCollection Date: 2025-11-01 | DOI: 10.1177/20416695251399118
Biye Wang, Tao Tao, Wei Guo
Olfactory function plays a vital role in daily life but tends to decline with age, affecting health and wellbeing. While previous studies suggest a link between physical activity and olfactory function in older adults, the relationship between cognitive activity and olfactory function remains unclear, as do the combined effects of both activities. This cross-sectional study examined associations between physical and cognitive activity and three domains of olfaction (identification, sensitivity, and memory) in 583 community-dwelling older adults. Both types of activity were positively associated with overall olfactory performance. Physical activity exhibited the strongest link with olfactory identification, while cognitive activity was more closely related to olfactory memory. Furthermore, participants engaging in moderate-to-high levels of both activities achieved the best overall olfactory scores. These findings suggest that a combined lifestyle of physical exertion and cognitive engagement may help preserve olfactory function in aging, with implications for autonomy, safety, and quality of life.
Title: Associations between physical and cognitive activities and olfactory function in older adults.
I-Perception, 16(6). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12657852/pdf/
Pub Date: 2025-11-21 | eCollection Date: 2025-11-01 | DOI: 10.1177/20416695251391634
Hongyu Zhou, Qingqing Li, Yuanyuan Tang, Yu Tian
The rise of social media has raised concerns about its addictive potential and its impairment of mental health and cognitive functions, including distortions in time processing. Emerging evidence suggests that social media addicts tend to misestimate the amount of time they spend on social media, hinting at possible problems with cognitive time processing. This study investigated the impact of social media addiction on basic time perception using controlled experimental paradigms. Forty participants scoring ≥24 on the Bergen Social Media Addiction Scale with ≥5 hr of daily usage were recruited, alongside 40 controls. After excluding individuals reporting craving, fear of missing out regarding social media, or test anxiety during the experiment, the final samples included 36 addicts and 37 controls. Time reproduction (motor timing) and bisection (perceptual timing) tasks were administered, distinguishing subsecond (<1 s) and suprasecond (>1 s) intervals. Tasks used neutral gray stimuli to avoid social media cues and included pretask rest to control physiological arousal. Social media addicts exhibited significant deficits in the suprasecond bisection task, demonstrated by lower points of subjective equality (1,430.69 vs. 1,549.32 ms) and higher Weber ratios (0.41 vs. 0.29), indicating both time overestimation and reduced time sensitivity. No significant group differences were observed in the reproduction tasks or the subsecond bisection task. These findings establish that social media addiction selectively impairs suprasecond perceptual timing, characterized by overestimation and diminished sensitivity — a novel cognitive deficit linked to addictive social media use, with potential clinical implications for interventions targeting distorted time processing.
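The abstract reports points of subjective equality (PSE) and Weber ratios from the bisection task but not the fitting procedure. As a rough sketch (not the study's actual pipeline), PSE and a Weber ratio are commonly derived by fitting a psychometric function to the proportion of "long" responses; function names and sample values below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, pse, slope):
    """Probability of responding 'long' at probe duration t (ms)."""
    return 1.0 / (1.0 + np.exp(-(t - pse) / slope))

def fit_bisection(durations, p_long):
    """Fit a logistic psychometric function to bisection data.

    Returns the point of subjective equality (the 50% point) and a
    Weber ratio defined here as half the 25%-75% interval (the just
    noticeable difference) divided by the PSE; other definitions exist.
    """
    (pse, slope), _ = curve_fit(logistic, durations, p_long,
                                p0=[np.median(durations), 100.0])
    # For a logistic, the 75% point lies at pse + slope*ln(3),
    # and the 25% point at pse - slope*ln(3).
    jnd = slope * np.log(3.0)
    return pse, jnd / pse
```

A lower fitted PSE means shorter probe durations are already judged "long" (overestimation of elapsed time); a larger Weber ratio means a shallower psychometric slope (reduced temporal sensitivity) — the pattern the study reports for addicts in the suprasecond range.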
Title: Selective suprasecond timing deficit in social media addicts: Bisection task reveals overestimation and impaired sensitivity without subsecond effects.
I-Perception, 16(6). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12639200/pdf/
Pub Date: 2025-11-20 | eCollection Date: 2025-11-01 | DOI: 10.1177/20416695251381548
Ronald Hübner
This study investigates how the method used by participants to assess the beauty of pictures influences their preference for the compositional rules of symmetry, balance, and proximity. The hypothesis that production methods (actively arranging picture elements) prompt a local perspective, favoring proximity, while evaluation tasks (rating precomposed pictures) elicit a global perspective, favoring symmetry and balance, was tested in two experiments. Experiment 1 demonstrated that (positional) symmetry was preferred over balance, and balance over proximity, when participants rated precomposed pictures. Experiment 2, employing a production method with movable elements, showed a frequent use of proximity, yet also a tendency toward (positional) symmetry. The combined results indicate that assessment methods substantially impact the preferred composition rules.
Title: Preference for symmetry, balance, or proximity in picture aesthetics depends on the method of evaluation.
I-Perception, 16(6). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12638637/pdf/
Pub Date: 2025-11-18 | eCollection Date: 2025-11-01 | DOI: 10.1177/20416695251396873
Tama Kanematsu, Hiroyuki Ito
We discovered a new type of assimilative color induction. An achromatic target on a white background was placed at the center of a concentric chromatic gradient that causes the glare effect. The target frequently appeared to take on the same hue as the gradient. We discuss lower-level factors, such as lateral inhibition and spatial summation functions, and higher-level factors, such as illumination estimation.
Title: Concentric chromatic gradient affects color appearance of central targets.
I-Perception, 16(6). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12627348/pdf/
Pub Date: 2025-11-05 | DOI: 10.1177/20416695251388577
Ichiro Kuriki, Hikari Saito, Rui Okubo, Hiroaki Kiyokawa, Takashi Shinozaki
The color appearance of the #TheDress image varies across individuals. The colors of its pixels are distributed mostly along the blue-achromatic-yellow direction, as are the perceived color variations. One potential cause is differences in the degree to which observers perceive light-blue pixels as part of white clothing under a skylight, referred to as "blue bias." A deep neural network (DNN) application was used to simulate individual differences in blue bias by varying the percentage of such scenes in the training-image set. A style-transfer DNN was used to simulate a "color naming" procedure by learning pairs of natural images and their color-name labels as pixel-by-pixel maps. The models trained with different ratios of blue-bias scenes were tested on the #TheDress image. The averaged results across trials showed a progressive change from blue/black to white (gray)/gold, indicating that exposure or attention to blue-bias scenes could have caused the individual differences in the color perception of the #TheDress image. In an additional experiment, we trained the DNN on artificially blue- or yellow-tinted images in varying proportions, instead of varying the ratio of blue-bias scenes. If blue-bias scenes were equivalent to blue-tinted images of scenes taken under daylight, this manipulation should yield a similar result. However, the resulting outputs did not produce a white/gold image at all. This suggests that exposure to skylight scenes alone is insufficient; the scenes must contain unequivocally white objects (such as snow, white clothing, or white road signs) in order to establish a "blue bias" in human observers.
Title: Can DNN models simulate appearance variations of #TheDress?
I-Perception, 16(6). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12589789/pdf/
Pub Date: 2025-10-14 | eCollection Date: 2025-09-01 | DOI: 10.1177/20416695251385816
Michaela Jeschke, Knut Drewing
Humans use distinct exploratory procedures (EPs) in active touch, which are typically specialized for materials with particular properties: for example, pressing for deformable objects such as cushions, or stroking to test a fabric's smoothness. Further, humans can use abstract visual priors to fine-tune exploratory movement parameters such as exploration direction. Here we test the use of visual priors in the planning of material-specific EPs, using real-life materials and a naturalistic visual virtual reality environment. We show that humans are better at selecting specialized EPs at initial touch when they have access to valid prior visual information about the material: they used specialized EPs earlier, with higher probability, and explored materials for a shorter time. We conclude that visual prior information increases the efficiency of haptic exploration through anticipatory planning of appropriate movement schemes.
Title: Look first, feel faster: Prior visual information accelerates haptic material exploration.
I-Perception, 16(5). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12534811/pdf/
Pub Date: 2025-09-23 | eCollection Date: 2025-09-01 | DOI: 10.1177/20416695251376600
Eleftheria Pistolas, Liv Smets, Johan Wagemans
A multimodal Ganzfeld (MMGF) consists of homogeneous stimulation in both the visual and auditory modalities. Exposure to this unique perceptual environment can elicit the awareness of hallucinatory percepts. The nature of these hallucinatory percepts, and specifically the frequency of visual, auditory, and multisensorial hallucinations, remains unclear. In this study, an MMGF refers to the stimulation paradigm itself. The perceptual experiences elicited, however, can be unimodal (occurring in one modality), multisensory (simultaneous but thematically unrelated across modalities), or multimodal (thematically integrated across modalities), allowing us to assess multisensory integration in the MMGF. Employing a multimethod approach combining quantitative and qualitative measures, we conducted three experiments using a between-subjects design with three noise conditions: no noise, white noise, and brown noise. Experiments 1 and 2 were conducted in a laboratory Ganzfeld (GF) space; Experiment 3 was conducted in a GF art installation in a museum context. We conducted half-open interviews, analyzed using inductive content analysis, to grasp the subjective experience and assess the congruency of visual and auditory hallucinations. We found that visual hallucinations were frequently reported, but auditory hallucinations were less common. The most consistently reported auditory hallucinations, and importantly, multisensory integrated hallucinations, were water-related, suggesting a potential influence of noise, particularly brown noise, possibly due to its resemblance to water sounds. Our findings also indicate a predominantly unimodal focus on the visual aspect among participants, alongside instances of attention switching between modalities.
Title: Wave after wave: The suggestibility of noise in the experience of multisensory hallucinations under multimodal Ganzfeld stimulation.
I-Perception, 16(5). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12457770/pdf/
Pub Date: 2025-09-15 | eCollection Date: 2025-09-01 | DOI: 10.1177/20416695251377199
Qian Sun, Haojiang Ying, Qi Sun
Numerous studies have explored the mechanisms of heading estimation from optic flow and of ensemble coding for other features, yet none have examined ensemble coding's role in heading estimation. This study addressed this gap in two experiments. Participants sequentially viewed three (Experiment 1) or five/seven (Experiment 2) optic-flow-simulated headings, then reported specific directions. Results revealed that individual heading accuracy declined as the number of headings increased, while estimates closely matched ensemble representations, demonstrating ensemble coding in heading estimation. Notably, ensemble coding accuracy remained unaffected by the number of headings, indicating its capacity-free nature, unlike capacity-limited individual heading processing. These summary statistics of motion may help us better understand navigation in complex environments (e.g., how pedestrians and drivers judge their self-motion directions), with potential real-world implications.
Title: Self-motion direction estimation from optic flow is a result of capacity-free and implicit ensemble coding.
I-Perception, 16(5). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12437248/pdf/