Pub Date: 2025-11-21 | DOI: 10.1177/03010066251395418
Can irrelevant emotional distractor faces induce blindness? The role of distractor saliency and task relevance.
Jiaxin Xu, Yani Liu, Yanju Ren
Prior research employing emotional faces as distractors within the emotion-induced blindness paradigm has yielded mixed findings, prompting the present investigation into the impact of distinct types of emotional faces on target perception in this framework. Experiment 1 utilized happy faces, neutral faces, baseline stimuli, and inverted emotional faces as distractors, while Experiment 2 employed angry faces, neutral faces, and inverted emotional faces. Results demonstrated that neither happy faces (Experiment 1) nor angry faces (Experiment 2) significantly impaired target perception. By contrast, inverted emotional faces induced a statistically significant reduction in the accuracy of target-orientation judgments. These findings indicate that, under certain conditions, emotional distractor faces do not automatically elicit blindness, highlighting the importance of both the saliency and the task relevance of the distractor in the occurrence of blindness. This study challenges the hypothesis of automatic attentional capture by emotional faces, discusses probable reasons for these counterintuitive patterns (such as arousal, physical salience, and task relevance), and emphasizes the boundary conditions under which emotional distractor faces induce blindness.
{"title":"Can irrelevant emotional distractor faces induce blindness? The role of distractor saliency and task relevance.","authors":"Jiaxin Xu, Yani Liu, Yanju Ren","doi":"10.1177/03010066251395418","DOIUrl":"https://doi.org/10.1177/03010066251395418","url":null,"abstract":"<p><p>Prior research employing emotional faces as distractors within the emotion-induced blindness paradigm has yielded mixed findings, prompting the present investigation into the impact of distinct types of emotional faces on target perception in this framework. Experiment 1 utilized happy faces, neutral faces, baseline stimuli, and inverted emotional faces as distractors, while Experiment 2 employed angry faces, neutral faces, and inverted emotional faces. Results demonstrated that neither happy faces (Experiment 1) nor angry faces (Experiment 2) significantly impaired target perception. By contrast, inverted emotional faces induced a statistically significant reduction in accuracy of target orientation judgments. These findings demonstrate that emotional distractor faces do not automatically elicit blindness under certain conditions, highlighting the importance of both the saliency and task relevance of the distractor in the occurrence of blindness. This study challenges the hypothesis of automatic attentional capture by emotional faces, comprehensively discusses probable reasons underlying these counterintuitive patterns, such as arousal, physical salience, task relevance, and emphasizes the boundary conditions of emotional distractor faces induce blindness.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066251395418"},"PeriodicalIF":1.1,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145574904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-19 | DOI: 10.1177/03010066251395028
The spoon illusion: A consistent rearward bias in human sound localisation.
EunJi Baek, Min Hee Shim, Ecem Altan, Gene Tangtartharakul, Katherine Storrs, Paul Michael Corballis, Dietrich Samuel Schwarzkopf
Most humans have only two ears. To know where a sound is in external space, our auditory system must therefore rely on the limited information received by these ears alone. In an adventurous late-night attempt to test blindfolded humans' ability to achieve this feat, we discovered that we mishear the sound of two spoons being hit right in front of us as coming from behind us.
{"title":"The spoon illusion: A consistent rearward bias in human sound localisation.","authors":"EunJi Baek, Min Hee Shim, Ecem Altan, Gene Tangtartharakul, Katherine Storrs, Paul Michael Corballis, Dietrich Samuel Schwarzkopf","doi":"10.1177/03010066251395028","DOIUrl":"https://doi.org/10.1177/03010066251395028","url":null,"abstract":"<p><p>Most humans have only two ears. To know where a sound is in external space, our auditory system must therefore rely on the limited information received by these ears alone. In an adventurous late-night attempt to test blindfolded humans' ability to achieve this feat, we discovered that we mishear the sound of two spoons being hit right in front of us as coming from behind us.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066251395028"},"PeriodicalIF":1.1,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145558223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-04 | DOI: 10.1177/03010066251391730
Cross-modal congruency between haptic and visual objects affects involuntary shifts in spatial attention.
Kyuto Uno, Ryoichi Nakashima
Previous research has shown that task-irrelevant auditory/haptic input semantically congruent with a target visual object facilitates visual search, indicating that cross-modal congruency influences goal-directed attentional control. The present study examined whether haptic input involuntarily shifts spatial attention to a congruent visual object even when that object is not a search target. Participants identified the direction of an arrow presented above or below a central fixation point while clasping a specifically shaped item in their hand. Two task-irrelevant pictures with specific shapes preceded the arrow. Results showed a significant interaction between visual and haptic shapes: participants responded faster when the visual object shared the shape of the item clasped in their hand than when the two shapes differed, indicating that haptic-visual shape congruency modulates spatial attention. Thus, cross-modal congruency can affect involuntary attentional orienting as well as goal-directed attentional control.
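The key statistic here is the congruency effect on reaction times, i.e., the slowdown when the visual shape does not match the clasped item. A minimal Python sketch of that computation, with invented reaction times standing in for the real data:

    import numpy as np

    # Hypothetical per-trial reaction times (ms), grouped by whether the
    # visual object's shape matched the item clasped in the hand.
    rt = {
        "congruent": np.array([512, 498, 530, 505]),
        "incongruent": np.array([541, 527, 553, 536]),
    }

    # A positive value means responses were faster on shape-matching trials.
    effect = rt["incongruent"].mean() - rt["congruent"].mean()
    print(f"congruency effect = {effect:.1f} ms")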
{"title":"Cross-modal congruency between haptic and visual objects affects involuntary shifts in spatial attention.","authors":"Kyuto Uno, Ryoichi Nakashima","doi":"10.1177/03010066251391730","DOIUrl":"https://doi.org/10.1177/03010066251391730","url":null,"abstract":"<p><p>Previous research has shown that task-irrelevant auditory/haptic input semantically congruent with a target visual object facilitates visual search, indicating that cross-modal congruency influences goal-directed attentional control. The present study examined whether haptic input involuntarily shifts spatial attention to the congruent visual object even though it was not a search target. Participants identified the arrow direction presented above or below a central gaze fixation point while clasping a specifically shaped item in their hand. Two task-irrelevant pictures with specific shapes preceded the arrow. Results showed a significant interaction between visual and haptic shapes: Participants responded faster when the visual object shared the shape of the item clasped in their hand than when the two shapes differed, indicating that haptic-visual shape congruency modulates spatial attention. Thus, cross-modal congruency can affect involuntary attentional orienting as well as goal-directed attentional control.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066251391730"},"PeriodicalIF":1.1,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145439950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01 | Epub Date: 2025-07-15 | DOI: 10.1177/03010066251355391
Predictive processing in biological motion perception: Evidence from human behavior.
Hüseyin O Elmas, Sena Er, Ada D Rezaki, Aysesu Izgi, Buse M Urgen, Huseyin Boyaci, Burcu A Urgen
Biological motion perception plays a crucial role in understanding the actions of other animals, facilitating effective social interactions. Although traditionally viewed as a bottom-up driven process, recent research suggests that top-down mechanisms, including attention and expectation, significantly influence biological motion perception at all levels, particularly under complex or ambiguous conditions. In this study, we investigated the effect of expectation on biological motion perception using a cued individuation task with point-light display (PLD) stimuli. We conducted three experiments investigating how prior information regarding the action, emotion, and gender of PLD stimuli modulates perceptual processing. We observed a statistically significant congruency effect when preceding cues informed about the action of the upcoming biological motion stimulus; participants responded more slowly in incongruent trials than in congruent trials. This effect seems to be driven mainly by the 75% validity condition, relative to the non-informative 50% (chance-level) validity condition. The congruency effect observed in the action experiment was absent in the emotion and gender experiments. These findings highlight the nuanced role of prior information in biological motion perception, particularly emphasizing that action-related cues, when moderately reliable, can influence biological motion perception. Our results are in line with the predictive processing framework, suggesting that the integration of top-down and bottom-up processes is context-dependent and influenced by the nature of prior information. Our results also emphasize the need to develop more comprehensive frameworks that incorporate naturalistic, complex, and dynamic stimuli to build better models of biological motion perception.
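The claim that the effect is carried by the informative cues amounts to comparing the congruency effect across validity conditions. A rough Python sketch with invented means (real values would come from the action experiment's data):

    # Hypothetical mean RTs (ms) per cue-validity condition.
    rts = {
        75: {"congruent": 640.0, "incongruent": 668.0},  # informative cues
        50: {"congruent": 655.0, "incongruent": 657.0},  # chance-level cues
    }

    for validity, cells in rts.items():
        effect = cells["incongruent"] - cells["congruent"]
        print(f"{validity}% validity: congruency effect = {effect:.1f} ms")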
{"title":"Predictive processing in biological motion perception: Evidence from human behavior.","authors":"Hüseyin O Elmas, Sena Er, Ada D Rezaki, Aysesu Izgi, Buse M Urgen, Huseyin Boyaci, Burcu A Urgen","doi":"10.1177/03010066251355391","DOIUrl":"10.1177/03010066251355391","url":null,"abstract":"<p><p>Biological motion perception plays a crucial role in understanding the actions of other animals, facilitating effective social interactions. Although traditionally viewed as a bottom-up driven process, recent research suggests that top-down mechanisms, including attention and expectation, significantly influence biological motion perception at all levels, particularly highlighted under complex or ambiguous conditions. In this study, we investigated the effect of expectation on biological motion perception using a cued individuation task with point-light display (PLD) stimuli. We conducted three experiments investigating how prior information regarding action, emotion, and gender of PLD stimuli modulates perceptual processing. We observed a statistically significant congruency effect when preceding cues informed about action of the upcoming biological motion stimulus; participants performed slower in incongruent trials compared to congruent trials. This effect seems to be mainly driven from the 75% congruency condition compared to the non-informative 50% (chance level) validity condition. The congruency effect that was observed in the action experiment was absent in the emotion and gender experiments. These findings highlight the nuanced role of prior information in biological motion perception, particularly emphasizing that action-related cues, when moderately reliable, can influence biological motion perception. Our results are in line with the predictive processing framework, suggesting that the integration of top-down and bottom-up processes is context-dependent and influenced by the nature of prior information. Our results also emphasize the need to develop more comprehensive frameworks that incorporate naturalistic, complex and dynamic, stimuli to build better models of biological motion perception.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"844-862"},"PeriodicalIF":1.1,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144638498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01 | Epub Date: 2025-06-24 | DOI: 10.1177/03010066251345677
Performance and confusion effects for gist perception of scenes: An investigation of expertise, viewpoint and image categories.
Emil Skog, Andrew J Schofield, Timothy S Meese
Human object recognition often exhibits viewpoint invariance. However, unfamiliar aerial viewpoints pose challenges because diagnostic features are often obscured. Here, we investigated the gist perception of scenes viewed from above and at ground level, comparing novices against remote sensing surveyors with expertise in aerial photogrammetry. In a randomly interleaved single-interval, 14-choice design, briefly presented target images were followed by a backward white-noise mask. The targets and choices were selected from seven natural and seven man-made categories. Performance across expertise and viewpoint was between 46.0% and 82.6% correct, and confusions were sparsely distributed across the 728 (2 × 2 × 14 × 13) possibilities. Both groups performed better with ground views than with aerial views, and different confusions were made across viewpoints, but experts outperformed novices only for aerial views, displaying no transfer of expertise to ground views. Where novices underperformed by comparison, this tended to involve mistaking natural for man-made scenes in aerial views. There was also an overall tendency for categorisation to be better for the man-made categories than for the natural categories. These and a few other notable exceptions aside, the main result was that detailed sub-category patterns of successes and confusions were very similar across participant groups: the experimental effects related more to viewpoint than to expertise. This contrasts with our recent finding for perception of 3D relief, where comparable groups of experts and novices used very different strategies. It seems that expertise in gist perception (for aerial images at least) is largely a matter of degree rather than kind.
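The 728 figure is the count of possible confusion cells: each of the 2 (expertise) × 2 (viewpoint) combinations has a 14 × 14 stimulus-response matrix, and every cell off its diagonal is a possible confusion. A one-line Python check:

    # Off-diagonal cells of a 14x14 confusion matrix, per 2x2 condition.
    n_categories = 14
    print(2 * 2 * n_categories * (n_categories - 1))  # 728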
{"title":"Performance and confusion effects for gist perception of scenes: An investigation of expertise, viewpoint and image categories.","authors":"Emil Skog, Andrew J Schofield, Timothy S Meese","doi":"10.1177/03010066251345677","DOIUrl":"10.1177/03010066251345677","url":null,"abstract":"<p><p>Human object recognition often exhibits viewpoint invariance. However, unfamiliar aerial viewpoints pose challenges because diagnostic features are often obscured. Here, we investigated the gist perception of scenes when viewed from above and at the ground level, comparing novices against remote sensing surveyors with expertise in aerial photogrammetry. In a randomly interleaved single-interval, 14-choice design, briefly presented target images were followed by a backward white-noise mask. The targets and choices were selected from seven natural and seven man-made categories. Performance across expertise and viewpoint was between 46.0% and 82.6% correct and confusions were sparsely distributed across the 728 (2 × 2 × 14 × 13) possibilities. Both groups performed better with ground views than with aerial views and different confusions were made across viewpoints, but experts outperformed novices only for aerial views, displaying no transfer of expertise to ground views. Where novices underperformed by comparison, this tended to involve mistaking natural for man-made scenes in aerial views. There was also an overall effect for categorisation to be better for the man-made categories than the natural categories. These, and a few other notable exceptions aside, the main result was that detailed sub-category patterns of successes and confusions were very similar across participant groups: the experimental effects related more to viewpoint than expertise. This contrasts with our recent finding for perception of 3D relief, where comparable groups of experts and novices used very different strategies. It seems that expertise in gist perception (for aerial images at least) is largely a matter of degree rather than kind.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"817-843"},"PeriodicalIF":1.1,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12497919/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144477576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01 | Epub Date: 2025-07-29 | DOI: 10.1177/03010066251359214
Expansion of perceived size of visual stimuli: Objects look wider than equivalent empty spaces.
Algis Bertulis, Arunas Bielevicius
The study builds upon previous research on the perceived size of visual objects of various shapes compared to an empty spatial interval. In psychophysical experiments using a size-matching procedure, the effect of overestimating the relative size of an object (relative to an equivalent empty space) was consistently observed when testing visual objects such as rectangles, circles, ellipses, rhombuses, and triangles, in both filled and empty formats. The strength of the illusion did not depend on whether the shapes were filled, but rather varied with the shape itself. Objects with open contours, such as angles of different orientations, and narrow stimuli such as straight, tangled, defocused, and divided lines all produced the expansion effect. The overestimation was evident for stimuli with various contour types, including spatial contrast of luminance, colour, and texture, as well as contours determined by perceptual grouping and the illusory outlines of the Kanizsa and Oppel-Kundt varieties. Finally, the expansion effect was more pronounced with increasing length and height of the stimuli. The data supported the assumption that the object contour is the primary inducer of perceived size expansion and that the overestimation effect is a regular phenomenon rather than an incidental event.
{"title":"Expansion of perceived size of visual stimuli: Objects look wider than equivalent empty spaces.","authors":"Algis Bertulis, Arunas Bielevicius","doi":"10.1177/03010066251359214","DOIUrl":"10.1177/03010066251359214","url":null,"abstract":"<p><p>The study builds upon previous research on the perceived size of visual objects of various shapes compared to an empty spatial interval. In psychophysical experiments using the size-matching procedure, the effect of overestimating the relative size of an object (relative to an equivalent empty space) was consistently observed when testing visual objects, such as rectangles, circles, ellipses, rhombuses, and triangles, in both filled and empty formats. The strength of the illusion did not depend on whether the shapes were filled or not, but rather it varied with the shape itself. Objects with open contours, such as angles of different orientations and narrow stimuli like straight, tangled, defocused, and divided lines, all produced the expansion effect. The overestimation manifested during testing stimuli of various contour types, including spatial contrast of luminance, colour, and texture, as well as those determined by perceptual grouping and illusory outlines of Kanizsa and Oppel-Kundt versions. Finally, the expansion effect was found to be more pronounced with increasing length and height of the stimuli. The data supported the assumption that the object contour is the primary inducer of perceived size expansion and that the overestimation effect is a regular phenomenon rather than an incidental event.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"863-887"},"PeriodicalIF":1.1,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12497920/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144734944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01 | Epub Date: 2025-07-29 | DOI: 10.1177/03010066251360131
A bridge between collinear inhibition and visual crowding: Hints from perceptual learning.
Marcello Maniglia, Russell Cohen Hoffing
Maniglia and colleagues reported a significant reduction in visual crowding following perceptual learning training on contrast detection using a lateral masking configuration with collinear flankers. They interpreted this reduction within a framework of shared cortical mechanisms between collinear inhibition, elicited by lateral masking with closely spaced flankers, and crowding. We reanalyzed their data to directly test this hypothesis by examining correlations between learning gains at short target-to-flankers separations (reduced contrast detection thresholds) and crowding reduction. Surprisingly, individual analyses revealed an inverse correlation: participants with greater reduction in collinear inhibition showed smaller reductions in crowding. We suggest that these participants exhibited separation-specific learning, which previous studies indicate may hinder effective transfer. Thus, while collinear inhibition and crowding may share mechanisms, distributed improvement across separations might be necessary to observe transfer of learning to crowding.
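The reanalysis described here boils down to correlating two per-participant change scores. A minimal Python sketch with synthetic numbers (the participant count, variable names, and the built-in negative slope are all invented purely to illustrate the computation):

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic change scores; larger = more improvement after training.
    inhibition_gain = rng.normal(0.3, 0.1, size=20)
    crowding_reduction = 0.5 - 0.8 * inhibition_gain + rng.normal(0, 0.05, size=20)

    # An inverse relation shows up as a negative Pearson correlation.
    r = np.corrcoef(inhibition_gain, crowding_reduction)[0, 1]
    print(f"Pearson r = {r:.2f}")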
{"title":"A bridge between collinear inhibition and visual crowding: Hints from perceptual learning.","authors":"Marcello Maniglia, Russell Cohen Hoffing","doi":"10.1177/03010066251360131","DOIUrl":"10.1177/03010066251360131","url":null,"abstract":"<p><p>Maniglia and colleagues reported a significant reduction in visual crowding following perceptual learning training on contrast detection using a lateral masking configuration with collinear flankers. They interpreted this reduction within a framework of shared cortical mechanisms between collinear inhibition, elicited by lateral masking with closely spaced flankers, and crowding. We reanalyzed their data to directly test this hypothesis by examining correlations between learning gains at short target-to-flankers separations (reduced contrast detection thresholds) and crowding reduction. Surprisingly, individual analyses revealed an inverse correlation: participants with greater reduction in collinear inhibition showed smaller reductions in crowding. We suggest that these participants exhibited separation-specific learning, which previous studies indicate may hinder effective transfer. Thus, while collinear inhibition and crowding may share mechanisms, distributed improvement across separations might be necessary to observe transfer of learning to crowding.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"888-899"},"PeriodicalIF":1.1,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12497918/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144734943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-31 | DOI: 10.1177/03010066251387848
The facial information underlying economic decision-making.
Vicki Ledrou-Paquet, Daniel Fiset, Mélissa Carré, Joël Guérette, Caroline Blais
Faces are rapidly and automatically assessed on multiple social dimensions, including trustworthiness. The high inter-rater agreement on this social judgment suggests a systematic association between facial appearance and perceived trustworthiness. The facial information used by observers during explicit trustworthiness judgments has been studied before. However, it remains unknown whether the same perceptual strategies are used during decisions that involve trusting another individual, without necessitating an explicit trustworthiness judgment. To explore this, 53 participants completed the Trust Game, an economic decision task, while facial information was randomly sampled using the Bubbles method. Our results show that economic decisions based on facial cues rely on visual information similar to that used during explicit trustworthiness judgments. We then manipulated facial features identified as diagnostic for trust to test their influence on perceived trustworthiness (Experiment 2) and on trust-related behaviors (Experiment 3). Across all experiments, subtle, targeted changes to facial features systematically shifted both impressions and monetary trust decisions. These findings demonstrate that the same perceptual strategies underlie explicit judgments and trust behaviors, highlighting the applied relevance of even minimal alterations in facial appearance. These findings should be replicated with real faces from diverse demographic backgrounds to confirm their generalizability.
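The Bubbles method mentioned here reveals a stimulus through randomly positioned Gaussian apertures, so that performance can later be regressed onto which regions happened to be visible. A single-scale Python sketch (the published technique samples per spatial-frequency band, and all parameter values below are illustrative, not the study's):

    import numpy as np

    def bubbles_mask(height, width, n_bubbles, sigma, rng):
        """Sum of randomly centred Gaussian apertures, clipped to [0, 1]."""
        ys, xs = np.mgrid[0:height, 0:width]
        mask = np.zeros((height, width))
        for _ in range(n_bubbles):
            cy, cx = rng.integers(0, height), rng.integers(0, width)
            mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        return np.clip(mask, 0.0, 1.0)

    rng = np.random.default_rng(1)
    face = rng.random((128, 128))  # stand-in for a face image
    revealed = face * bubbles_mask(128, 128, n_bubbles=10, sigma=8, rng=rng)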
{"title":"The facial information underlying economic decision-making.","authors":"Vicki Ledrou-Paquet, Daniel Fiset, Mélissa Carré, Joël Guérette, Caroline Blais","doi":"10.1177/03010066251387848","DOIUrl":"https://doi.org/10.1177/03010066251387848","url":null,"abstract":"<p><p>Faces are rapidly and automatically assessed on multiple social dimensions, including trustworthiness. The high inter-rater agreement on this social judgment suggests a systematic association between facial appearance and perceived trustworthiness. The facial information used by observers during explicit trustworthiness judgments has been studied before. However, it remains unknown whether the same perceptual strategies are used during decisions that involve trusting another individual, without necessitating an explicit trustworthiness judgment. To explore this, 53 participants completed the Trust Game, an economic decision task, while facial information was randomly sampled using the Bubbles method. Our results show that economic decisions based on facial cues rely on similar visual information as that used during explicit trustworthiness judgments. We then manipulated facial features identified as diagnostic for trust to test their influence on perceived trustworthiness (Experiment 2) and on trust-related behaviors (Experiment 3). Across all experiments, subtle, targeted changes to facial features systematically shifted both impressions and monetary trust decisions. These findings demonstrate that the same perceptual strategies underlie explicit judgments and trust behaviors, highlighting the applied relevance of even minimal alterations in facial appearance. These findings should be replicated with real faces from diverse demographic backgrounds to confirm their generalizability.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066251387848"},"PeriodicalIF":1.1,"publicationDate":"2025-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145423237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-31 | DOI: 10.1177/03010066251391729
Comparing ChatGPT and human ratings of affective images.
Jongwan Kim
As ChatGPT continues to impress with its ability to generate human-like text, its capabilities in emotion recognition remain an open question. Unlike previous research comparing ChatGPT and humans on tasks with objective answers, I explored an affective domain where no correct answer exists: emotional ratings of images, a task requiring visual-perceptual analysis of complex input to recover an affective judgment. Using the MATTER database, whose images carry human ratings on the valence and arousal dimensions, I prompted ChatGPT-4 to provide the same ratings. The results revealed that ChatGPT rated images as less positive and less arousing than humans on average, particularly for images categorized as 'mirthful,' 'fearful,' and 'disgusting.' These findings suggest that while ChatGPT is able to process affective information, its responses reflect an analytical rather than experiential framework, differing from human interpretations.
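The comparison reported here is, at its core, a per-image paired difference between the two raters. A toy Python sketch (all numbers invented; a 1-9 rating scale is assumed for illustration only):

    import numpy as np

    # Hypothetical per-image mean valence ratings, human vs. ChatGPT-4.
    human = np.array([7.8, 2.1, 3.0, 6.5, 5.2])
    gpt = np.array([6.9, 2.3, 2.4, 6.0, 4.8])

    # A positive mean difference mirrors the reported tendency for
    # ChatGPT to rate images as less positive than humans do.
    print(f"mean human-minus-ChatGPT difference = {(human - gpt).mean():.2f}")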
{"title":"Comparing ChatGPT and human ratings of affective images.","authors":"Jongwan Kim","doi":"10.1177/03010066251391729","DOIUrl":"https://doi.org/10.1177/03010066251391729","url":null,"abstract":"<p><p>As ChatGPT continues to impress with its ability to generate human-like text, its capabilities in emotion recognition remain an open question. Unlike previous research comparing ChatGPT and humans on tasks with objective answers, we explored an affective domain where no correct answer exists: emotional ratings of images, a task requiring visual-perceptual analysis of complex input to recover an affective judgment. Using the MATTER database, rated on valence and arousal dimensions, I prompted ChatGPT-4 to do the same. The results revealed that ChatGPT rated images as less positive and less arousing than humans on average, particularly for images categorized as 'mirthful,' 'fearful,' and 'disgusting.' These findings suggest that while ChatGPT is able to process affective information, its responses reflect an analytical rather than experiential framework, differing from human interpretations.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066251391729"},"PeriodicalIF":1.1,"publicationDate":"2025-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145423232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-29 | DOI: 10.1177/03010066251390106
Increased prevalence of synaesthesia in musicians.
Linden Williamson, Scott Bailey, Jamie Ward
Although synaesthesia has been linked to increased creativity and engagement with the arts, most of the evidence has come from visual arts rather than music. Here we show for the first time that synaesthesia is far more prevalent in musicians than non-musicians (an odds ratio of about 4). We show that this result holds true for all three different kinds of synaesthesia that we considered (grapheme-colour, sequence-space, sound-colour) including for types of synaesthesia unrelated to music. That is, it is not simply the case that the ability to 'see' music drives the higher prevalence, although this may have a role. Instead, we speculate that the cognitive profile of synaesthetes is conducive to musicality. We provide an estimate of the prevalence of sound-colour synaesthesia in non-musicians of between 0.3% and 1.3%, depending on the threshold applied, with comparable figures for musicians of 1.3% to 7.3%.
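The odds ratio quoted here relates directly to the prevalence figures in the final sentence. A quick Python check using those figures, assuming the two ranges pair by threshold (lower with lower, upper with upper):

    def odds_ratio(p_group1, p_group2):
        """Odds ratio between two prevalence estimates given as proportions."""
        return (p_group1 / (1 - p_group1)) / (p_group2 / (1 - p_group2))

    # Sound-colour synaesthesia: musicians vs. non-musicians.
    print(f"lenient threshold: OR = {odds_ratio(0.073, 0.013):.2f}")  # ~5.98
    print(f"strict threshold:  OR = {odds_ratio(0.013, 0.003):.2f}")  # ~4.38

On these figures the sound-colour odds ratios fall in the 4-6 range, consistent in magnitude with the overall odds ratio of about 4 quoted for synaesthesia across all three types.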
{"title":"Increased prevalence of synaesthesia in musicians.","authors":"Linden Williamson, Scott Bailey, Jamie Ward","doi":"10.1177/03010066251390106","DOIUrl":"https://doi.org/10.1177/03010066251390106","url":null,"abstract":"<p><p>Although synaesthesia has been linked to increased creativity and engagement with the arts, most of the evidence has come from visual arts rather than music. Here we show for the first time that synaesthesia is far more prevalent in musicians than non-musicians (an odds ratio of about 4). We show that this result holds true for all three different kinds of synaesthesia that we considered (grapheme-colour, sequence-space, sound-colour) including for types of synaesthesia unrelated to music. That is, it is not simply the case that the ability to 'see' music drives the higher prevalence, although this may have a role. Instead, we speculate that the cognitive profile of synaesthetes is conducive to musicality. We provide an estimate of the prevalence of sound-colour synaesthesia in non-musicians of between 0.3% and 1.3%, depending on the threshold applied, with comparable figures for musicians of 1.3% to 7.3%.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066251390106"},"PeriodicalIF":1.1,"publicationDate":"2025-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145402622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}