Pub Date: 2025-12-01 | DOI: 10.1177/03010066251396877
Guangyao Zu, Tianyang Zhang, Aijun Wang, Ming Zhang
Evidence suggests that stimulus-response bindings can occur automatically as a result of the co-occurrence of a stimulus and a response, without requiring additional attentional involvement for features or objects. Given that stimuli used in previous research often involved high-discriminability features processed automatically, the current study investigated the role of feature type in attention-modulated stimulus-response binding. Using the classic partial repetition cost (PRC) paradigm, the study manipulated the task relevance of features during the binding phase to modulate feature-based attention, with color and Landolt-C gap orientation as the experimental features. When the stimulus feature was color (a high-discriminability feature), the PRC effect during the retrieval phase did not differ significantly whether or not attention was directed to the color during the binding phase. When the stimulus feature was the gap orientation of the Landolt-C (a fine-grained feature), the PRC effect appeared during the retrieval phase regardless of attention to gap orientation during the binding phase. However, the PRC effect was stronger when attention was directed to gap orientation, indicating that feature-based attention during the binding phase enhanced the binding strength between the gap orientation of the Landolt-C and the response. This study suggests that stimulus-response binding occurs automatically, but that its binding strength is modulated by attention, with the type of stimulus feature playing a critical role in this process. Stimulus-driven and goal-driven factors jointly influence the strength of stimulus-response binding.
Title: Feature-based attention enhances the binding between fine-grained features and responses. (Perception)
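The PRC paradigm mentioned above is commonly quantified as an interaction contrast over repetition conditions: responses are slower when one feature repeats while the other changes than when both repeat or both change. This framing (and all numbers below) is an assumption based on the standard binding literature, not taken from the abstract itself; a minimal sketch:

```python
# Hypothetical mean RTs (ms) per condition: stimulus repeated/changed
# crossed with response repeated/changed. These values are made up for
# illustration only.
mean_rt = {
    ("stim_rep", "resp_rep"): 480.0,  # complete repetition
    ("stim_rep", "resp_chg"): 530.0,  # partial repetition
    ("stim_chg", "resp_rep"): 525.0,  # partial repetition
    ("stim_chg", "resp_chg"): 490.0,  # complete change
}

# PRC as the interaction contrast: partial repetitions minus the
# complete-repetition and complete-change baselines.
prc = (mean_rt[("stim_rep", "resp_chg")] + mean_rt[("stim_chg", "resp_rep")]
       - mean_rt[("stim_rep", "resp_rep")] - mean_rt[("stim_chg", "resp_chg")])
print(prc)  # 85.0 ms with these made-up numbers
```

A positive contrast indicates a binding effect; the study's attention manipulation would then be tested as a modulation of this contrast across conditions.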
Pub Date: 2025-11-28 | DOI: 10.1177/03010066251399885
Wilder Daniel, Matthew R Longo
Visual adaptation to extreme body types is known to produce contrastive adaptation aftereffects on the subsequent perception of human bodies. This approach has been exploited to probe the perceptual mechanisms underlying body perception by measuring the extent to which aftereffects occur when the adapting and test stimuli differ in specific characteristics (cross adaptation). The present study used this approach to investigate the body-part specificity of adaptation to body muscularity. Participants made judgments of the muscularity of torsos and arms both before and after adaptation to muscular torsos (Experiment 1) or muscular arms (Experiment 2). Across experiments, we report a double dissociation in the effects of adaptation. In Experiment 1, adaptation to muscular torsos produced aftereffects for torso judgments, but not arm judgments. In Experiment 2, adaptation to muscular arms produced aftereffects for arm judgments, but not torso judgments. These results demonstrate body-part specificity of the visual mechanisms underlying perception of body muscularity.
Title: Visual adaptation after effects for muscularity are body-part specific. (Perception)
Pub Date: 2025-11-25 | DOI: 10.1177/03010066251395946
Mounia Ziat, Grace Shim, Rishika Mini Thulasi, Ilja Frissen
Can you tell what's inside a sealed container just by touching it? Prior work in "container haptics" has focused on quantity: how many marbles are rolling around, or how full a bottle is. Here, we explore whether humans can make qualitative judgments (what kind of thing is inside) without seeing it. Across three studies, participants explored containers filled with dry food items (e.g., flour or granola) using touch, with or without sound. Surprisingly, even with no visual (or auditory) cues, participants could often identify, or at least describe, the contents based on texture, size, and density. These findings suggest that your hands are better at guessing container contents than you might think.
Title: "Definitely a toaster": Identifying container contents by touch and sound. (Perception)
Pub Date: 2025-11-21 | DOI: 10.1177/03010066251395418
Jiaxin Xu, Yani Liu, Yanju Ren
Prior research employing emotional faces as distractors within the emotion-induced blindness paradigm has yielded mixed findings, prompting the present investigation into the impact of distinct types of emotional faces on target perception in this framework. Experiment 1 utilized happy faces, neutral faces, baseline stimuli, and inverted emotional faces as distractors, while Experiment 2 employed angry faces, neutral faces, and inverted emotional faces. Results demonstrated that neither happy faces (Experiment 1) nor angry faces (Experiment 2) significantly impaired target perception. By contrast, inverted emotional faces induced a statistically significant reduction in the accuracy of target orientation judgments. These findings demonstrate that emotional distractor faces do not automatically elicit blindness under certain conditions, highlighting the importance of both the saliency and the task relevance of the distractor in the occurrence of blindness. This study challenges the hypothesis of automatic attentional capture by emotional faces, discusses probable reasons for these counterintuitive patterns, such as arousal, physical salience, and task relevance, and emphasizes the boundary conditions under which emotional distractor faces induce blindness.
Title: Can irrelevant emotional distractor faces induce blindness? The role of distractor saliency and task relevance. (Perception)
Pub Date: 2025-11-19 | DOI: 10.1177/03010066251395028
EunJi Baek, Min Hee Shim, Ecem Altan, Gene Tangtartharakul, Katherine Storrs, Paul Michael Corballis, Dietrich Samuel Schwarzkopf
Most humans have only two ears. To know where a sound is in external space, our auditory system must therefore rely on the limited information received by these ears alone. In an adventurous late-night attempt to test blindfolded humans' ability to achieve this feat, we discovered that we mishear the sound of two spoons being hit right in front of us as coming from behind us.
Title: The spoon illusion: A consistent rearward bias in human sound localisation. (Perception)
Pub Date: 2025-11-04 | DOI: 10.1177/03010066251391730
Kyuto Uno, Ryoichi Nakashima
Previous research has shown that task-irrelevant auditory/haptic input semantically congruent with a target visual object facilitates visual search, indicating that cross-modal congruency influences goal-directed attentional control. The present study examined whether haptic input involuntarily shifts spatial attention to the congruent visual object even though it was not a search target. Participants identified the arrow direction presented above or below a central gaze fixation point while clasping a specifically shaped item in their hand. Two task-irrelevant pictures with specific shapes preceded the arrow. Results showed a significant interaction between visual and haptic shapes: Participants responded faster when the visual object shared the shape of the item clasped in their hand than when the two shapes differed, indicating that haptic-visual shape congruency modulates spatial attention. Thus, cross-modal congruency can affect involuntary attentional orienting as well as goal-directed attentional control.
Title: Cross-modal congruency between haptic and visual objects affects involuntary shifts in spatial attention. (Perception)
Pub Date: 2025-11-01 | Epub Date: 2025-07-15 | DOI: 10.1177/03010066251355391
Hüseyin O Elmas, Sena Er, Ada D Rezaki, Aysesu Izgi, Buse M Urgen, Huseyin Boyaci, Burcu A Urgen
Biological motion perception plays a crucial role in understanding the actions of other animals, facilitating effective social interactions. Although traditionally viewed as a bottom-up driven process, recent research suggests that top-down mechanisms, including attention and expectation, significantly influence biological motion perception at all levels, particularly under complex or ambiguous conditions. In this study, we investigated the effect of expectation on biological motion perception using a cued individuation task with point-light display (PLD) stimuli. We conducted three experiments investigating how prior information regarding the action, emotion, and gender of PLD stimuli modulates perceptual processing. We observed a statistically significant congruency effect when preceding cues informed about the action of the upcoming biological motion stimulus; participants responded more slowly in incongruent trials than in congruent trials. This effect seems to be driven mainly by the 75% congruency condition relative to the non-informative 50% (chance-level) validity condition. The congruency effect observed in the action experiment was absent in the emotion and gender experiments. These findings highlight the nuanced role of prior information in biological motion perception, particularly emphasizing that action-related cues, when moderately reliable, can influence biological motion perception. Our results are in line with the predictive processing framework, suggesting that the integration of top-down and bottom-up processes is context-dependent and influenced by the nature of prior information. Our results also emphasize the need to develop more comprehensive frameworks that incorporate naturalistic, complex, and dynamic stimuli to build better models of biological motion perception.
Title: Predictive processing in biological motion perception: Evidence from human behavior. (Perception, pp. 844-862)
Pub Date: 2025-11-01 | Epub Date: 2025-06-24 | DOI: 10.1177/03010066251345677
Emil Skog, Andrew J Schofield, Timothy S Meese
Human object recognition often exhibits viewpoint invariance. However, unfamiliar aerial viewpoints pose challenges because diagnostic features are often obscured. Here, we investigated the gist perception of scenes viewed from above and at ground level, comparing novices against remote sensing surveyors with expertise in aerial photogrammetry. In a randomly interleaved single-interval, 14-choice design, briefly presented target images were followed by a backward white-noise mask. The targets and choices were selected from seven natural and seven man-made categories. Performance across expertise and viewpoint was between 46.0% and 82.6% correct, and confusions were sparsely distributed across the 728 (2 × 2 × 14 × 13) possibilities. Both groups performed better with ground views than with aerial views, and different confusions were made across viewpoints, but experts outperformed novices only for aerial views, displaying no transfer of expertise to ground views. Where novices underperformed by comparison, this tended to involve mistaking natural for man-made scenes in aerial views. There was also an overall tendency for categorisation to be better for the man-made categories than the natural categories. These and a few other notable exceptions aside, the main result was that detailed sub-category patterns of successes and confusions were very similar across participant groups: the experimental effects related more to viewpoint than expertise. This contrasts with our recent finding for perception of 3D relief, where comparable groups of experts and novices used very different strategies. It seems that expertise in gist perception (for aerial images at least) is largely a matter of degree rather than kind.
Title: Performance and confusion effects for gist perception of scenes: An investigation of expertise, viewpoint and image categories. (Perception, pp. 817-843)
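The 728 figure in the abstract above decomposes as 2 expertise groups × 2 viewpoints × 14 true categories × 13 possible incorrect responses per category. A minimal sketch of that count (the condition labels are illustrative, not taken from the paper):

```python
# Count the cells of the off-diagonal (confusion) part of the design:
# for each (group, viewpoint, true category), a confusion is any of the
# 13 incorrect category responses. Labels are hypothetical.
groups = ["novice", "expert"]        # participant groups
viewpoints = ["aerial", "ground"]    # viewing conditions
n_categories = 14                    # 7 natural + 7 man-made scene types

confusion_cells = (len(groups) * len(viewpoints)
                   * n_categories * (n_categories - 1))
print(confusion_cells)  # 2 * 2 * 14 * 13 = 728
```

With confusions spread over 728 cells, sparse off-diagonal counts are expected even with many trials, which is why the paper compares the overall pattern of confusions rather than individual cells.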
Pub Date: 2025-11-01 | Epub Date: 2025-07-29 | DOI: 10.1177/03010066251359214
Algis Bertulis, Arunas Bielevicius
The study builds upon previous research on the perceived size of visual objects of various shapes compared to an empty spatial interval. In psychophysical experiments using a size-matching procedure, the relative size of an object (relative to an equivalent empty space) was consistently overestimated when testing visual objects such as rectangles, circles, ellipses, rhombuses, and triangles, in both filled and outline formats. The strength of the illusion did not depend on whether the shapes were filled, but rather varied with the shape itself. Objects with open contours, such as angles of different orientations, and narrow stimuli such as straight, tangled, defocused, and divided lines all produced the expansion effect. The overestimation manifested when testing stimuli of various contour types, including those defined by spatial contrast of luminance, colour, and texture, as well as those determined by perceptual grouping and by illusory outlines of the Kanizsa and Oppel-Kundt varieties. Finally, the expansion effect was found to be more pronounced with increasing length and height of the stimuli. The data supported the assumption that the object contour is the primary inducer of perceived size expansion and that the overestimation effect is a regular phenomenon rather than an incidental event.
Title: Expansion of perceived size of visual stimuli: Objects look wider than equivalent empty spaces. (Perception, pp. 863-887)
Pub Date: 2025-11-01 | Epub Date: 2025-07-29 | DOI: 10.1177/03010066251360131
Marcello Maniglia, Russell Cohen Hoffing
Maniglia and colleagues reported a significant reduction in visual crowding following perceptual learning training on contrast detection using a lateral masking configuration with collinear flankers. They interpreted this reduction within a framework of shared cortical mechanisms between collinear inhibition, elicited by lateral masking with closely spaced flankers, and crowding. We reanalyzed their data to directly test this hypothesis by examining correlations between learning gains at short target-to-flankers separations (reduced contrast detection thresholds) and crowding reduction. Surprisingly, individual analyses revealed an inverse correlation: participants with greater reduction in collinear inhibition showed smaller reductions in crowding. We suggest that these participants exhibited separation-specific learning, which previous studies indicate may hinder effective transfer. Thus, while collinear inhibition and crowding may share mechanisms, distributed improvement across separations might be necessary to observe transfer of learning to crowding.
Title: A bridge between collinear inhibition and visual crowding: Hints from perceptual learning. (Perception, pp. 888-899)