Pub Date: 2026-02-01. Epub Date: 2025-10-29. DOI: 10.1177/03010066251379949
Testing location invariance of the flashed face distortion effect.
Yong Hoon Chung, Nicole C Anaya Sosa, Viola S Störmer
Spatially aligned faces presented in a continuous stream in the periphery appear distorted and grotesque. This flashed face distortion effect ("FFDE") was first reported over 10 years ago, yet little is known about the underlying mechanisms. Here we investigate whether the FFDE persists across visual field locations when there is a change in position. Face streams were presented at one location for several seconds and then either remained at the same location, or were shifted to a new location, either across visual half-fields (Experiment 1) or within the same visual half-field (Experiment 2). We assessed the perceived illusion magnitudes continuously throughout each trial using a joystick as a response device and found that the illusion decreased significantly when the location changed. In the third experiment we added a control condition that did not elicit an illusion and found that the decrease in reported distortions for location-shift trials was of the same magnitude as this baseline condition. Together, our results suggest that the FFDE may be bound to retinotopic locations, at least when location changes are relatively large.
{"title":"Testing location invariance of the flashed face distortion effect.","authors":"Yong Hoon Chung, Nicole C Anaya Sosa, Viola S Störmer","doi":"10.1177/03010066251379949","DOIUrl":"10.1177/03010066251379949","url":null,"abstract":"<p><p>Spatially aligned faces presented in a continuous stream in the periphery appear distorted and grotesque. This flashed face distortion effect (\"FFDE\") was first reported over 10 years ago, yet little is known about the underlying mechanisms. Here we investigate whether the FFDE persists across visual field locations when there is a change in position. Face streams were presented at one location for several seconds and then either remained at the same location, or were shifted to a new location, either across visual half-fields (Experiment 1) or within the same visual half-field (Experiment 2). We assessed the perceived illusion magnitudes continuously throughout each trial using a joystick as a response device and found that the illusion decreased significantly when the location changed. In the third experiment we added a control condition that did not elicit an illusion and found that the decrease in reported distortions for location-shift trials was of the same magnitude as this baseline condition. Together, our results suggest that the FFDE may be bound to retinotopic locations, at least when location changes are relatively large.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"159-177"},"PeriodicalIF":1.1,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145402549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-01. Epub Date: 2025-10-07. DOI: 10.1177/03010066251378983
Visual expertise for aerial- and ground-views of houses: No evidence for mental rotation, but experts were more diligent than novices.
Emil Skog, Andrew J Schofield, Timothy S Meese
Ordnance Survey (OS) remote sensing surveyors have extensive experience with aerial views of scenes and objects. Building on our previous work with this group, we investigated whether their expertise influenced performance on a same/different object recognition task involving houses. In an online study, these stimuli were shown from both familiar ground-level viewpoints and from what are, for most people, unfamiliar aerial viewpoints. OS experts and novices compared achromatic, disparity-free images with aerial perspectives rotated around the clock against canonical ground-views; we measured response times (RTs) and sensitivities (d'). In two 'grounding' tasks using rotated letters, we found conventional outcomes for both groups, validating the online approach. Experiment 1 (non-matching letters) yielded ceiling-level performance with no signs of mental rotation, consistent with a feature-based recognition strategy. In Experiment 2 (mirror-reversed letters), both groups showed orientation-dependent performance, but experts exhibited a speed-accuracy trade-off, responding more cautiously than novices. In the main house task (Experiment 3), we found (a) the same speed-accuracy trade-off observed in Experiment 2, (b) substantially longer RTs overall, and (c) no evidence for mental rotation in either group, mirroring Experiment 1. Contrary to our earlier findings on aerial depth perception, expertise in remote sensing did not yield a distinctive recognition strategy for the experiments here. However, experts displayed more diligent tactics in Experiments 2 and 3. We suggest that all participants in Experiment 3 engaged in cognitively challenging feature comparisons across viewpoints, presumably supported by volumetric or surface-connected prototypes of houses as the basis for these comparisons.
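For readers unfamiliar with the sensitivity measure d' reported here, the sketch below shows the standard equal-variance signal-detection computation from hit and false-alarm counts in a same/different task; the example counts and the log-linear correction are illustrative assumptions, not values or choices taken from the paper.

```python
# Minimal sketch: sensitivity (d') from same/different responses, using the
# standard equal-variance signal-detection formula. The log-linear correction
# and the example counts are assumptions, not taken from the paper.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps z-scores finite when rates hit 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 40 "different" trials (28 hits) and 40 "same" trials (6 false alarms).
print(d_prime(hits=28, misses=12, false_alarms=6, correct_rejections=34))
```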
{"title":"Visual expertise for aerial- and ground-views of houses: No evidence for mental rotation, but experts were more diligent than novices.","authors":"Emil Skog, Andrew J Schofield, Timothy S Meese","doi":"10.1177/03010066251378983","DOIUrl":"10.1177/03010066251378983","url":null,"abstract":"<p><p>Ordnance Survey (OS) remote sensing surveyors have extensive experience with aerial views of scenes and objects. Building on our previous work with this group, we investigated whether their expertise influenced performance on a same/different object recognition task involving houses. In an online study, these stimuli were shown from both familiar ground-level viewpoints and from what is for most people, unfamiliar aerial viewpoints. OS experts and novices compared achromatic, disparity-free images with aerial perspectives rotated around the clock against canonical ground-views; we measured response times (RTs) and sensitivities (<i>d'</i>). In two 'grounding' tasks using rotated letters, we found conventional outcomes for both groups, validating the online approach. Experiment 1 (non-matching letters) yielded ceiling-level performance with no signs of mental rotation, consistent with a feature-based recognition strategy. In Experiment 2 (mirror reversed letters), both groups showed orientation-dependent performance, but experts exhibited a speed-accuracy trade-off, responding more cautiously than novices. In the main house task (Experiment 3), we found (a) the same speed-accuracy trade-off observed in Experiment 2, (b) substantially longer RTs overall, and (c) no evidence for mental rotation in either group, mirroring Experiment 1. Contrary to our earlier findings on aerial depth perception, expertise in remote sensing did not yield a distinctive recognition strategy for the experiments here. However, experts displayed more diligent tactics in Experiments 2 and 3. We suggest that all participants in Experiment 3 engaged in cognitively challenging feature comparisons across viewpoints, presumably supported by volumetric or surface-connected prototypes of houses as the basis for feature comparisons.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"111-138"},"PeriodicalIF":1.1,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12816412/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145245652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-01. Epub Date: 2025-11-20. DOI: 10.1177/03010066251384492
Multiscale structural complexity as a quantitative measure of visual complexity.
Anna Kravchenko, Andrey A Bagrov, Mikhail I Katsnelson, Veronica Dudarev
While intuitive for humans, the concept of visual complexity is hard to define and quantify formally. We suggest adopting the multiscale structural complexity (MSSC) measure, an approach that defines the structural complexity of an object as the amount of dissimilarity between distinct scales in its hierarchical organization. In this work, we apply MSSC to the case of visual stimuli, using an open dataset of images with subjective complexity scores obtained from human participants (SAVOIAS). We demonstrate that MSSC correlates with subjective complexity on par with other computational complexity measures, while being more intuitive by definition, consistent across categories of images, and easier to compute. We discuss objective and subjective elements inherently present in human perception of complexity and the domains where the two are more likely to diverge. We show how the multiscale nature of MSSC allows further investigation of complexity as it is perceived by humans.
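As a rough illustration of the multiscale idea (not the authors' exact definition or implementation), the sketch below coarse-grains an image repeatedly and sums the dissimilarity between consecutive scales; the block-averaging scheme, number of scales, and squared-difference metric are all assumptions.

```python
# Rough sketch of a multiscale structural-complexity-style measure: coarse-grain
# an image step by step and accumulate the dissimilarity between consecutive
# scales. A simplified reading of the MSSC idea, not the published definition.
import numpy as np

def coarse_grain(img, factor=2):
    # Block-average the image, dropping any rows/columns that do not fit.
    h, w = img.shape
    h2, w2 = (h // factor) * factor, (w // factor) * factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

def multiscale_complexity(img, n_scales=5):
    current = img.astype(float)
    total = 0.0
    for _ in range(n_scales):
        coarser = coarse_grain(current)
        # Upsample the coarser scale back to the finer resolution for comparison.
        upsampled = np.kron(coarser, np.ones((2, 2)))
        fine = current[:upsampled.shape[0], :upsampled.shape[1]]
        # Per-scale dissimilarity: mean squared difference between the scales.
        total += np.mean((fine - upsampled) ** 2)
        current = coarser
    return total

# Example on a random grayscale "image".
rng = np.random.default_rng(0)
print(multiscale_complexity(rng.random((256, 256))))
```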
{"title":"Multiscale structural complexity as a quantitative measure of visual complexity.","authors":"Anna Kravchenko, Andrey A Bagrov, Mikhail I Katsnelson, Veronica Dudarev","doi":"10.1177/03010066251384492","DOIUrl":"10.1177/03010066251384492","url":null,"abstract":"<p><p>While intuitive for humans, the concept of visual complexity is hard to define and quantify formally. We suggest adopting the multiscale structural complexity (MSSC) measure, an approach that defines structural complexity of an object as the amount of dissimilarities between distinct scales in its hierarchical organization. In this work, we apply MSSC to the case of visual stimuli, using an open dataset of images with subjective complexity scores obtained from human participants (SAVOIAS). We demonstrate that MSSC correlates with subjective complexity on par with other computational complexity measures, while being more intuitive by definition, consistent across categories of images, and easier to compute. We discuss objective and subjective elements inherently present in human perception of complexity and the domains where the two are more likely to diverge. We show how the multiscale nature of MSSC allows further investigation of complexity as it is perceived by humans.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"139-158"},"PeriodicalIF":1.1,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12816411/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145566187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-21. DOI: 10.1177/03010066251408297
Are you a visual "shader" or a "bolder"? Different visual routines create everyday hallucinations in "scaffolded attention".
Andrea S Ying, Joan Danielle K Ongchoco
The colors and lines that compose perceptual experience result from the interplay between visual processing pathways and the light that hits the retina. So it is striking that many individuals seem to also experience these visual properties even in the absence of explicit sensory cues, as in the phenomenon of "scaffolded attention." When observing a uniform grid of squares, people report perceiving the squares as grouped into shapes or patterns, where the squares sometimes appear brighter or colored (for "shaders"), or bolded or outlined (for "bolders"). With 100 observers, we used an interactive grid to characterize the prevalence and magnitude of these experiences. Results showed that people's experiences could be modulated by grid contrast: 89% of hallucinators reported "bolding" on a black grid, compared with only 36% on a white one. Thus, stimulus factors may influence what gets selected as the raw material for "everyday hallucinations" in scaffolded attention: the squares (for shaders) or the lines (for bolders).
{"title":"Are you a visual \"shader\" or a \"bolder\"? Different visual routines create everyday hallucinations in \"scaffolded attention\".","authors":"Andrea S Ying, Joan Danielle K Ongchoco","doi":"10.1177/03010066251408297","DOIUrl":"https://doi.org/10.1177/03010066251408297","url":null,"abstract":"<p><p>The colors and lines that compose perceptual experience result from the interplay between visual processing pathways and the light that hits the retina. So it is striking that many individuals seem to also experience these visual properties even in the absence of explicit sensory cues-as in the phenomenon of \"scaffolded attention.\" When observing a uniform grid of squares, people report perceiving the squares as grouped into shapes or patterns, where the squares sometimes appear brighter or colored (for \"shaders\"), or bolded or outlined (for \"bolders\"). With 100 observers, we used an interactive grid to characterize the prevalence and magnitude of these experiences. Results showed that people's experiences could be modulated by grid contrast, that is, 89% of hallucinators reporting \"bolding\" on a black grid, while only 36% on a white one. Thus, stimulus factors may influence what gets selected-the squares (for shaders) or the lines (for bolders)-as the raw material for \"everyday hallucinations\" in scaffolded attention.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066251408297"},"PeriodicalIF":1.1,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146020487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-14. DOI: 10.1177/03010066251410899
Pupillometric evidence for perceptual simulation in language comprehension: Sensory and emotional meanings of Japanese adjectives.
Keiyu Niikuni, Manami Sato
Previous research has demonstrated that words associated with brightness (e.g., "sun") elicit smaller pupil diameters than those related to darkness (e.g., "night"). The present study aimed to determine whether these language-induced pupillary responses are driven by the luminance of the mentally simulated content (referred to here as sensory interpretation) or by the conceptual brightness linked to the words' emotional valence (termed emotional interpretation). To address this question, we utilized the Japanese adjectives akarui and kurai, which can denote both luminance, as in the noun phrase akarui/kurai gamen ("bright/dark screen"), and emotional valence, as in akarui/kurai seikaku ("cheerful/gloomy personality"). Participants were presented with noun phrases composed of these adjectives and various nouns (akarui/kurai + noun). A significant main effect of the adjective indicated that phrases containing akarui yielded smaller pupil diameters than those containing kurai. Furthermore, although the interaction effect did not reach significance, the adjective effect was observed only when the adjectives conveyed luminance, not when they conveyed emotional valence. These findings suggest that sensory, rather than emotional, interpretation better explains language-induced changes in pupil size. The use of pupillometry as a measure of perceptual simulation offers more direct and compelling evidence in support of the central claim of embodied language theories: that during language comprehension, readers and listeners spontaneously generate sensorimotor simulations of the described content. Future studies are warranted to examine whether these findings extend to sentence- and discourse-level processing, as well as to simulations of information conveyed implicitly or indirectly through language.
{"title":"Pupillometric evidence for perceptual simulation in language comprehension: Sensory and emotional meanings of Japanese adjectives.","authors":"Keiyu Niikuni, Manami Sato","doi":"10.1177/03010066251410899","DOIUrl":"10.1177/03010066251410899","url":null,"abstract":"<p><p>Previous research has demonstrated that words associated with brightness (e.g., \"sun\") elicit smaller pupil diameters than those related to darkness (e.g., \"night\"). The present study aimed to determine whether these language-induced pupillary responses are driven by the luminance of the mentally simulated content-referred to here as <i>sensory interpretation</i>-or by the conceptual brightness linked to the words' emotional valence, termed <i>emotional interpretation</i>. To address this question, we utilized the Japanese adjectives <i>akarui</i> and <i>kurai</i>, which can denote both luminance, as in the noun phrase <i>akarui/kurai gamen</i> (\"bright/dark screen\"), and emotional valence, as in <i>akarui/kurai seikaku</i> (\"cheerful/gloomy personality\"). Participants were presented with noun phrases composed of these adjectives and various nouns (<i>akarui/kurai</i> + noun). A significant main effect of the adjective indicated that phrases containing <i>akarui</i> yielded smaller pupil diameters than those containing <i>kurai</i>. Furthermore, although the interaction effect did not reach significance, the adjective effect was observed only when the adjectives conveyed luminance, not when they conveyed emotional valence. These findings suggest that sensory, rather than emotional, interpretation better explains language-induced changes in pupil size. The use of pupillometry as a measure of perceptual simulation offers more direct and compelling evidence in support of the central claim of embodied language theories: that during language comprehension, readers and listeners spontaneously generate sensorimotor simulations of the described content. Future studies are warranted to examine whether these findings extend to sentence- and discourse-level processing, as well as to simulations of information conveyed implicitly or indirectly through language.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066251410899"},"PeriodicalIF":1.1,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12. DOI: 10.1177/03010066251408252
Touching the unseen: Exploring affective responses to haptic stimuli with and without visual input.
Chaery Park, Jongwan Kim
The sense of touch is fundamental to human experience, influencing emotions, behaviors, and social interactions. While previous studies on texture and emotion have focused on the precise discrimination of tactile stimuli, the emotional aspects have been less explored. In this study, we reanalyzed data from a previously published study to map haptic and visuo-haptic stimuli onto a two-dimensional affective space of valence and arousal and to compare the affective representations of unimodal and bimodal stimuli. We used multivariate methods, including multidimensional scaling and classification, to explore whether the affective dimensions of haptic and visuo-haptic stimuli support core affect theory and whether they share affective representations. The results of multidimensional scaling indicated that the roughness and hardness dimensions corresponded to valence and arousal, supporting core affect theory. Within-condition classification analyses indicated that both haptic and visuo-haptic stimuli could be predicted by tactile and emotion scales. Cross-condition classification revealed that the roughness and hardness of tactile stimuli could be accurately predicted from tactile and emotional ratings of visuo-haptic stimuli, and vice versa. These findings provide empirical evidence for a modality-general representation of affective and haptic responses, highlighting the interconnected nature of sensory and emotional experiences.
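To make the analysis pipeline concrete, the sketch below illustrates the two multivariate steps named in the abstract, multidimensional scaling of inter-stimulus dissimilarities and cross-condition classification, on simulated placeholder ratings; the data, labels, and linear-SVM choice are assumptions, not the authors' materials or model.

```python
# Hedged sketch of the two multivariate steps described in the abstract:
# (1) multidimensional scaling of inter-stimulus dissimilarities and
# (2) cross-condition classification. All data here are simulated placeholders.
import numpy as np
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Simulated ratings: 20 stimuli x 6 scales for each condition.
haptic = rng.normal(size=(20, 6))
visuo_haptic = haptic + rng.normal(scale=0.3, size=(20, 6))  # correlated condition
labels = np.repeat([0, 1], 10)  # placeholder classes, e.g., rough vs. smooth

# (1) Two-dimensional MDS solution from inter-stimulus dissimilarities.
dissim = pairwise_distances(haptic)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

# (2) Cross-condition classification: train on haptic, test on visuo-haptic ratings.
clf = SVC(kernel="linear").fit(haptic, labels)
print("MDS coordinates:", coords.shape)
print("Cross-condition accuracy:", clf.score(visuo_haptic, labels))
```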
{"title":"Touching the unseen: Exploring affective responses to haptic stimuli with and without visual input.","authors":"Chaery Park, Jongwan Kim","doi":"10.1177/03010066251408252","DOIUrl":"https://doi.org/10.1177/03010066251408252","url":null,"abstract":"<p><p>The sense of touch is fundamental to human experience, influencing emotions, behaviors, and social interactions. While previous studies on texture and emotion have focused on the precise discrimination of tactile stimuli, the emotional aspects have been less explored. In this study, we reanalyzed data from a previously published study to map haptic and visuo-haptic stimuli onto a two-dimensional affective space of valence and arousal and to compare the affective representations of unimodal and bimodal stimuli. We used multivariate methods, including multidimensional scaling and classification, to explore whether the affective dimensions of haptic and visuo-haptic stimuli support core affect theory and whether they share affective representations. The results of multidimensional scaling indicated that the roughness and hardness dimensions corresponded to valence and arousal, supporting core affect theory. Within-condition classification analyses indicated that both haptic and visuo-haptic stimuli could be predicted by tactile and emotion scales. Cross-condition classification revealed that the roughness and hardness of tactile stimuli could be accurately predicted from tactile and emotional ratings of visuo-haptic stimuli, and vice versa. These findings provide empirical evidence for a modality-general representation of affective and haptic responses, highlighting the interconnected nature of sensory and emotional experiences.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066251408252"},"PeriodicalIF":1.1,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145960525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-07. DOI: 10.1177/03010066251401483
A behavioral study on the impact of spatial frequency and age on cuteness perception.
Jie Xiang, Jiani Guo, Qingqing Li, Yulong Liu, Huazhi Li, Mengni Zhou
Cuteness acts as a key protective mechanism, enhancing the survival of fully dependent infants. Characteristic facial features trigger neural responses that promote caregiving behaviors. Therefore, understanding which facial features are perceived as 'cute' is of particular importance. This study investigates the role of spatial frequency (SF) in cuteness perception and examines whether this effect is influenced by age (young vs. old). We selected infant facial images and processed them into versions with different cuteness levels (by baby schema) and SF content. Participants completed a two-alternative forced-choice task to measure their cuteness perception ability: they observed two infant faces for 2000 ms and then responded which face was cuter. The results revealed that broad SF faces were more effective for cuteness perception than filtered facial images. Additionally, young people demonstrated significantly higher cuteness perception ability than old people. Notably, young people showed slightly higher accuracy for high SF images than for low SF images, whereas no such difference was observed in old people. These findings suggest that cuteness perception relies on information from both low and high SF, with the weighting of this information varying by age.
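Low- and high-spatial-frequency versions of face images are commonly produced by blurring an image and taking the residual; the sketch below shows one such decomposition with a Gaussian filter. The cutoff (sigma) and the placeholder image are assumptions and may not match the filtering used in the study.

```python
# Illustrative sketch: splitting an image into low- and high-spatial-frequency
# components with a Gaussian blur and its residual. The sigma value and the
# random placeholder image are assumptions, not the study's stimuli or filter.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(img, sigma=8.0):
    img = img.astype(float)
    low_sf = gaussian_filter(img, sigma=sigma)  # blurred image keeps low SF
    high_sf = img - low_sf                      # residual keeps high SF
    return low_sf, high_sf

rng = np.random.default_rng(2)
placeholder_face = rng.random((128, 128))
low, high = split_spatial_frequencies(placeholder_face)
print(low.shape, high.shape)
```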
{"title":"A behavioral study on the impact of spatial frequency and age on cuteness perception.","authors":"Jie Xiang, Jiani Guo, Qingqing Li, Yulong Liu, Huazhi Li, Mengni Zhou","doi":"10.1177/03010066251401483","DOIUrl":"https://doi.org/10.1177/03010066251401483","url":null,"abstract":"<p><p>Cuteness acts as a key protective mechanism, enhancing the survival of fully dependent infants. Characteristic facial features trigger neural responses that promote caregiving behaviors. Therefore, understanding what kinds of facial features are perceived as 'cuteness' is of particular importance. This study investigates the role of spatial frequency (SF) in cuteness perception and examines whether this effect is influenced by age (young vs. old). We selected infant facial images and processed them into versions with different cuteness levels (by <i>baby schema</i>) and SF. Participants were invited to complete a two-alternative forced-choice task to measure their cuteness perception ability. They observed two infant faces for 2000 ms, then were asked to respond which face was cuter. The results revealed that broad SF faces were more effective for cuteness perception than filtered facial images. Additionally, young people demonstrated significantly higher cuteness perception ability compared to old people. Notably, young people showed a slightly higher accuracy for high SF images compared to low SF images, whereas no such difference was observed in old people. These findings suggest that cuteness perception relies on information from both low and high SF with the weighting of this information varying by age.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066251401483"},"PeriodicalIF":1.1,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145919004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-07. DOI: 10.1177/03010066251409616
Chromatic induction and retinal image motion.
Y Howard Li, Michele Rucci, Borja Aguado, Cristina M Maho, Martina Poletti, Eli Brenner
As the eyes drift across a scene, borders between surfaces slide across the retina. Consequently, near these borders, parts of the retina that have adapted to the light on one side of the border are exposed to the light on the other side. Such changes in exposure might increase the judged contrast. Retinal image motion might therefore contribute to chromatic induction, the influence that adjacent colours have on a surface's apparent colour, by increasing the apparent colour contrast. We conducted two experiments to evaluate this possibility. The experiments examined how artificially increasing or decreasing the extent to which certain surface borders shift across the retina influences the perceived colour. Neither increasing nor decreasing the extent to which selected borders shift across the retina had a substantial influence on the perceived colour. This implies that chromatic induction does not arise from overestimating the contrast between adjacent surfaces when small eye movements shift the border between those surfaces across the retina.
{"title":"Chromatic induction and retinal image motion.","authors":"Y Howard Li, Michele Rucci, Borja Aguado, Cristina M Maho, Martina Poletti, Eli Brenner","doi":"10.1177/03010066251409616","DOIUrl":"https://doi.org/10.1177/03010066251409616","url":null,"abstract":"<p><p>As the eyes drift across a scene, borders between surfaces slide across the retina. Consequently, near borders' edges, parts of the retina that have adapted to the light at one side of the border are exposed to the light at the other side of the border. Such changes in exposure might increase the judged contrast. Retinal image motion might therefore contribute to chromatic induction, the influence that adjacent colours have on a surface's apparent colour, by increasing the apparent colour contrast. We conducted two experiments to evaluate this possibility. The experiments examined how artificially increasing or decreasing the extent to which certain surface borders shift across the retina influences the perceived colour. Neither increasing nor decreasing the extent to which selected borders shift across the retina had a substantial influence on the perceived colour. This implies that chromatic induction does not arise from overestimating the contrast between adjacent surfaces when small eye movements shift the border between those surfaces across the retina.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066251409616"},"PeriodicalIF":1.1,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145918984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01. Epub Date: 2025-12-09. DOI: 10.1177/03010066251405671
Introducing Philosophy Corner.
Tim S Meese, Pascal Mamassian, Isabelle Mareschal, Frans A J Verstraten
{"title":"Introducing Philosophy Corner.","authors":"Tim S Meese, Pascal Mamassian, Isabelle Mareschal, Frans A J Verstraten","doi":"10.1177/03010066251405671","DOIUrl":"10.1177/03010066251405671","url":null,"abstract":"","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3-6"},"PeriodicalIF":1.1,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145716424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01. Epub Date: 2025-10-09. DOI: 10.1177/03010066251384790
Task-specific effects of looming audio: Influences on visual contrast and orientation sensitivity.
Patrick Seebold, Yingchen He
Looming sounds are known to influence visual processing in various ways. Prior work suggests that performance on an orientation sensitivity task may be improved if visual presentation is preceded by looming audio, but not by non-looming audio. However, our recent work revealed that looming and non-looming alert sounds have a similar impact on performance in contrast sensitivity tasks. In the current study, we aim to reconcile these findings by comparing the effects of looming and non-looming sounds on contrast and orientation discrimination tasks within participants. Participants viewed tilted sinusoidal gratings and made judgments about their orientation (left/right). The gratings for the contrast discrimination task had low contrast and a large deviation from vertical (±45°), whereas for the orientation discrimination task, they had a small deviation (less than ±2° from vertical) and full contrast. Immediately before visual stimulus presentation, there could be no sound, a stationary sound, or a looming sound. Sensitivity was measured as d' and compared across tasks and sound types. Our results indicate that neither task benefited more from looming sounds than from stationary sounds, yielding no evidence for a looming bias in this domain. However, we found a differential effect between tasks, indicating that contrast discrimination was improved more by alert sounds than orientation discrimination was, likely reflecting perceptual differences between the task types. Factors that may influence the effectiveness of looming sounds are discussed.
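The sketch below shows one common way to generate the kind of tilted sinusoidal grating described here, with orientation expressed as deviation from vertical and Michelson contrast around a mid-gray background; the image size, spatial frequency, and tilt values are illustrative assumptions, not the study's parameters.

```python
# Minimal sketch: a tilted sinusoidal grating with a given Michelson contrast
# and orientation (deviation from vertical), around a mid-gray background.
# Size, spatial frequency, and tilt values are illustrative assumptions.
import numpy as np

def make_grating(size=256, cycles=8, tilt_deg=1.5, contrast=0.1):
    theta = np.deg2rad(tilt_deg)  # 0 deg = vertical bars (modulation along x)
    y, x = np.mgrid[0:size, 0:size] / size
    phase = 2 * np.pi * cycles * (x * np.cos(theta) + y * np.sin(theta))
    return 0.5 + 0.5 * contrast * np.sin(phase)  # luminance in [0.5 - c/2, 0.5 + c/2]

low_contrast_45 = make_grating(tilt_deg=45, contrast=0.05)               # contrast-task-like
full_contrast_near_vertical = make_grating(tilt_deg=1.0, contrast=1.0)   # orientation-task-like
print(low_contrast_45.min(), low_contrast_45.max())
```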
{"title":"Task-specific effects of looming audio: Influences on visual contrast and orientation sensitivity.","authors":"Patrick Seebold, Yingchen He","doi":"10.1177/03010066251384790","DOIUrl":"10.1177/03010066251384790","url":null,"abstract":"<p><p>Looming sounds are known to influence visual processing in various ways. Prior work suggests that performance on an orientation sensitivity task may be improved if visual presentation is preceded by looming audio, but not by non-looming audio. However, our recent work revealed that looming and non-looming alert sounds have a similar impact on performance in contrast sensitivity tasks. In the current study, we aim to reconcile these findings by comparing the effects of looming and non-looming sounds on contrast and orientation discrimination tasks within participants. Participants viewed tilted sinusoidal gratings and made judgments about their orientation (left/right). The gratings for the contrast discrimination task had low contrast and high deviation from vertical (±45°), whereas for the orientation discrimination task, they had a low deviation (less than ±2° from vertical) and full contrast. Immediately before visual stimulus presentation, there could be no sound, stationary sound, or looming sound. Sensitivity was measured as <i>d</i>' and compared across tasks and sound types. Our results indicate that neither task benefited more from looming sounds over stationary sounds, yielding no evidence for a looming bias in this domain. However, we found a differential effect between tasks, indicating that contrast discrimination was improved more by alert sounds than orientation discrimination, likely reflecting perceptual differences in the task types. Factors that may influence the effectiveness of looming sounds are discussed.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"77-89"},"PeriodicalIF":1.1,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145259469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}