Greta Stuart, Blake W Saurels, Amanda K Robinson, Jessica Taubert
Humans are so sensitive to faces and face-like patterns in the environment that we sometimes mistakenly see a face where none exists, a common illusion called "face pareidolia." Examples of face pareidolia, or "illusory faces," occur in everyday objects such as trees and food and contain two identities: an illusory face and an object. In this study, we examined illusory faces in a rapid serial visual presentation paradigm over three experiments to explore their detectability under various task conditions and presentation speeds. The first experiment revealed rapid and reliable detection of illusory faces even from only a glimpse, suggesting that face pareidolia arises from an error in rapidly detecting faces. Experiment 2 demonstrated that illusory facial structures within food items did not interfere with recognition of the objects' veridical identity, affirming that examples of face pareidolia maintain their objecthood. Experiment 3 directly compared behavioral responses to illusory faces under different task conditions. The data indicate that, with extended viewing time, the object identity dominates perception. From a behavioral perspective, the findings reveal that illusory faces have two distinct identities, as both faces and objects, that may be processed in parallel. Future research could explore the neural representation of these unique stimuli under varying circumstances and attentional demands, providing deeper insights into the encoding of visual stimuli for detection and recognition. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Title: One object with two identities: The rapid detection of face pareidolia in face and food detection tasks.
Journal: Journal of Experimental Psychology: Human Perception and Performance
DOI: 10.1037/xhp0001296
Published: 2025-03-03
Recent empirical findings demonstrate that, in visual search for a target in an array of distractors, observers exploit information about object relations to increase search efficiency. We investigated how people search for interacting people in a crowd, and how the eccentricity of the target affects this search (Experiments 1-3). Participants briefly viewed crowded arrays and had to search for an interacting dyad (two bodies face-to-face) among noninteracting dyads (back-to-back distractors), or vice versa, with the target presented in the attended central location or at a peripheral location. With central targets, we found a search asymmetry, whereby interacting people among noninteracting people were detected better than noninteracting people among interacting people. With peripheral targets, the advantage disappeared, or even tended to reverse in favor of noninteracting dyads. In Experiments 4-5, we asked whether the search asymmetry generalized to object pairs whose spatial relations did or did not form a functionally interacting set (a computer screen above a keyboard vs. a computer screen below a keyboard). We found no advantage for interacting over noninteracting sets in either central or peripheral locations for objects but, if anything, evidence for the opposite effect. Thus, the effect of relational information on visual search is contingent on both stimulus category and attentional focus: The presentation of social interaction (but not of nonsocial interaction) at the attended (central) location readily captures an individual's attention.
Title: Category-specific effects of high-level relations in visual search.
Authors: Nicolas Goupil, Daniel Kaiser, Liuba Papeo
Journal: Journal of Experimental Psychology: Human Perception and Performance
DOI: 10.1037/xhp0001300
Published: 2025-03-03
Many studies have linked musical expertise with nonmusical abilities such as speech perception, memory, or executive functions. Far fewer have examined associations with basic auditory skills. Here, we asked whether psychoacoustic thresholds predict four aspects of musical expertise: music training, melody perception, rhythm perception, and self-reported musical abilities and behaviors (other than training). A total of 138 participants completed nine psychoacoustic tasks, as well as the Musical Ear Test (melody and rhythm subtests) and the Goldsmiths Musical Sophistication Index. We also measured and controlled for demographics, general cognitive abilities, and personality traits. The psychoacoustic tasks assessed discrimination thresholds for pitch and temporal perception (both assessed with three tasks), and for timbre, intensity, and backward masking (each assessed with one task). Both music training and melody perception predicted better performance on the pitch-discrimination tasks. Rhythm perception was associated with better performance on several temporal and nontemporal tasks, although none had unique associations when the others were held constant. Self-reported musical abilities and behaviors were associated with performance on one of the temporal tasks: duration discrimination. The findings indicate that basic auditory skills correlate with individual differences in musical expertise, whether expertise is defined as music training or musical ability.
Title: Associations between musical expertise and auditory processing.
Authors: Aíssa M Baldé, César F Lima, E Glenn Schellenberg
Journal: Journal of Experimental Psychology: Human Perception and Performance
DOI: 10.1037/xhp0001312
Published: 2025-03-03
Pub Date: 2025-03-01 | Epub Date: 2025-01-13 | DOI: 10.1037/xhp0001270
Taiji Ueno, Richard J Allen
Multi-item retro-cueing effects refer to better working memory performance for multiple items when they are cued after their offset, compared to a neutral condition in which all items are cued. However, several studies have reported boundary conditions, and findings have sometimes failed to replicate. We hypothesized that a strategy of focusing on only one of the cued items could yield these inconsistent patterns. In Study 1, a Monte Carlo simulation showed that randomly selecting one of the cued items as the focus on each trial increased the chance of obtaining a significant "multi-item retro-cueing effect" in mean accuracy across trials, inviting an incorrect conclusion if interpreted as evidence that all cued items were attended. The high probability of obtaining such data fits with the inconsistent patterns in the literature. To circumvent this problem, we conducted two new experiments (Studies 2A and 2B) in which participants were explicitly instructed to fixate their gaze on all the cued positions, verified through eye tracking (Study 2B). These produced robust multi-item retro-cueing effects regardless of previously identified boundary conditions. Notably, gazes were clearly fixated on multiple cued positions within each trial. Nevertheless, simulation revealed that our accuracy patterns could, in principle, also be produced by single-item enhancement on each trial. The present study forms a first step toward disentangling overt gaze-based allocation of attention from single-item focusing strategies, while also highlighting the need for improved methodologies to probe genuine multiplicity in working memory.
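The logic of the simulation described in this abstract can be illustrated with a short Monte Carlo sketch. This is not the authors' actual code; the accuracy parameters and trial counts below are assumed purely for illustration. The point it demonstrates is that a participant who focuses on only one of two cued items per trial still shows a mean-accuracy advantage over the neutral condition, mimicking a "multi-item" retro-cue benefit.

```python
import random

random.seed(1)

# Hypothetical parameters (not from the paper): accuracy when the probed
# item was the single item in the focus of attention vs. merely stored.
P_FOCUSED = 0.95   # probed item happens to be the one focused item
P_BASELINE = 0.60  # probed item was cued but not focused / neutral condition

N_CUED = 2         # two items are retro-cued; the probe tests one of them
N_TRIALS = 2000

def simulate_single_item_strategy(n_trials=N_TRIALS):
    """Mean accuracy if the participant focuses on only ONE cued item per
    trial (chosen at random) while the probe tests a random cued item."""
    correct = 0
    for _ in range(n_trials):
        # With probability 1/N_CUED, the focused item is the probed item.
        focused_is_probed = random.random() < 1 / N_CUED
        p = P_FOCUSED if focused_is_probed else P_BASELINE
        correct += random.random() < p
    return correct / n_trials

def simulate_neutral(n_trials=N_TRIALS):
    """Mean accuracy in the neutral condition (all items cued, no focusing)."""
    return sum(random.random() < P_BASELINE for _ in range(n_trials)) / n_trials

cued = simulate_single_item_strategy()
neutral = simulate_neutral()
# The single-item strategy still produces a "multi-item retro-cue benefit"
# at the level of mean accuracy, without any multi-item enhancement.
print(f"cued: {cued:.3f}, neutral: {neutral:.3f}")
```

Because mean accuracy aggregates over trials, this strategy is indistinguishable from genuine multi-item enhancement at the group level, which is why the abstract argues for trial-level methods such as eye tracking.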
Title: Running after two hares in visual working memory: Exploring retrospective attention to multiple items using simulation, behavioral outcomes, and eye tracking.
Journal: Journal of Experimental Psychology: Human Perception and Performance, pp. 405-420
The study of attentional allocation due to external stimulation has a long history in psychology. Early research by Yantis and Jonides suggested that abrupt onsets constitute a unique class of stimuli that captures attention in a stimulus-driven fashion unless attention is proactively directed elsewhere. Since then, the study of visual attention has evolved significantly. This article revisits the core conclusions by Yantis and Jonides in light of subsequent findings and highlights emerging issues for future investigation. These issues include clarifying key concepts of visual attention, adopting measures with greater spatiotemporal precision, exploring how past experiences modulate the effects of abrupt onsets, and understanding individual differences in attentional allocation. Addressing these issues is challenging but crucial, and we offer some perspectives on how one might choose to study these issues going forward. Finally, we call for more investigation into abrupt onsets. Perhaps due to their strong potential to capture attention, abrupt onsets are often set aside in pursuit of other conditions that show attenuation of distractor interference. However, given their real-world relevance, abrupt onsets represent the exact type of stimuli that we need to study more to connect laboratory attention research to real life.
Title: Attentional capture by abrupt onsets: Foundations and emerging issues.
Authors: Han Zhang, A Kane York, John Jonides
Journal: Journal of Experimental Psychology: Human Perception and Performance, 51(3), 283-299
DOI: 10.1037/xhp0001275
Published: 2025-03-01
The question of whether low-level perceptual processes are involved in language comprehension remains unclear. Here, we introduce a promising paradigm in which the role of motion perception in phrase understanding may be causally inferred without interpretational ambiguity. After participants had adapted to either leftward or rightward drifting motion, resulting in reduced responsiveness of motion neurons coding for the adapted direction, they were asked to indicate whether a subsequent verb phrase denoted leftward or rightward motion. When the adapting stimulus was blocked from visual awareness under continuous flash suppression, wherein only the influence of low-level perceptual processes existed, we found inhibited responses in the adapted direction across diverse verb phrases, indicating that desensitization of motion perception impaired the understanding of verb phrases. Our findings provide evidence for the functional relevance of motion perception to phrase understanding. However, when the adapting stimulus was consciously perceived, wherein the influences of low-level perceptual processes and high-level cognitive processes coexisted but counteracted each other, we found different results for diverse verb phrases. Our findings highlight the importance of considering the influence of conscious awareness on how visual perception affects language comprehension.
Title: Adaptation to invisible motion impairs the understanding of verb phrases.
Authors: Shuyue Huang, Chen Huang, Yanliang Sun, Shena Lu
Journal: Journal of Experimental Psychology: Human Perception and Performance, 51(3), 303-313
DOI: 10.1037/xhp0001304
Published: 2025-03-01
Pub Date: 2025-03-01 | Epub Date: 2025-01-23 | DOI: 10.1037/xhp0001274
James M Webb, Ediz Sohoglu
Human listeners have a remarkable capacity to adapt to severe distortions of the speech signal. Previous work indicates that perceptual learning of degraded speech reflects changes to sublexical representations, though the precise format of these representations has not yet been established. Inspired by the neurophysiology of auditory cortex, we hypothesized that perceptual learning involves changes to perceptual representations that are tuned to acoustic modulations of the speech signal. We systematically filtered speech to control modulation content during training and test blocks. Perceptual learning was highly specific to the modulation filter heard during training, consistent with the hypothesis that learning involves changes to representations of speech modulations. In further experiments, we used modulation filtering and different feedback regimes (clear speech vs. written feedback) to investigate the role of talker-specific cues for cross-talker generalization of learning. Our results suggest that learning partially generalizes to speech from novel (untrained) talkers but that talker-specific cues can enhance generalization. These findings are consistent with the proposal that perceptual learning entails the adjustment of internal models that map acoustic features to phonological categories. These models can be applied to degraded speech from novel talkers, particularly when listeners can account for talker-specific variability in the acoustic signal.
Title: Perceptual learning of modulation filtered speech.
Journal: Journal of Experimental Psychology: Human Perception and Performance, pp. 314-340
Pub Date: 2025-03-01 | Epub Date: 2025-01-23 | DOI: 10.1037/xhp0001271
Maura Nevejans, Jan R Wiersema, Jan De Houwer, Emiel Cracco
Motivational theories of imitation state that we imitate because this led to positive social consequences in the past. Because movement imitation typically only leads to these consequences when perceived by the imitated person, it should increase when the interaction partner sees the imitator. Current evidence for this hypothesis is mixed, potentially due to the low ecological validity of previous studies. We conducted two experiments (Experiment 1: N = 94; Experiment 2: N = 110) in which we resolved this limitation by placing participants in a virtual environment with a seeing and a blindfolded virtual agent, where they reacted to auditory cues with a head movement to the left or right, while the agent(s) also made a left or right head movement. We tested the effect of model eyesight (Experiments 1 and 2) and social reward (Experiment 2) on imitation. Data were collected in 2023 and 2024. As expected, participants tended to imitate the agents. However, we found only limited evidence for the effect of model eyesight on automatic imitation in Experiment 1 and no evidence for the effect of model eyesight or social reward in Experiment 2. These findings challenge claims made by motivational theories.
Title: The impact of model eyesight and social reward on automatic imitation in virtual reality.
Journal: Journal of Experimental Psychology: Human Perception and Performance, pp. 370-385
Pub Date: 2025-03-01; Epub Date: 2025-01-16; DOI: 10.1037/xhp0001273
Margherita Adelaide Musco, Eraldo Paulesu, Lucia Maria Sacheli
Collaborative motor interactions (joint actions) require relating to another person (social dimension) whose contribution is needed to achieve a shared goal (goal-related dimension). We explored if and how these dimensions modulate interactive behavior by exploring posterror interpersonal adaptations. In two experiments carried out in 2022 (N₁ = 23; N₂ = 24, preregistered), participants played sequences of notes in turn-taking with a coactor either described as another participant or the computer (human vs. nonhuman coactor, social manipulation) while pursuing shared or individual goals (goal-related manipulation). The coactor was programmed to make a mistake in 50% of the trials. We found that, only in the shared goal condition, participants were slower when interacting with a human than a nonhuman coactor depending on how strongly they believed the human coactor was a real participant. Moreover, the general slowdown following a partner's error was absent when the action required from the participant corresponded to what the coactor should have done (correction tendency effect). This effect was found only in the shared goal condition without differences between coactors, suggesting it was driven by goal-related representations. The social and goal-related dimensions thus independently but significantly shape interpersonal adaptations during joint action. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
{"title":"Social and goal-related foundations of interpersonal adaptation during joint action.","authors":"Margherita Adelaide Musco, Eraldo Paulesu, Lucia Maria Sacheli","doi":"10.1037/xhp0001273","DOIUrl":"10.1037/xhp0001273","url":null,"abstract":"<p><p>Collaborative motor interactions (joint actions) require relating to another person (social dimension) whose contribution is needed to achieve a shared goal (goal-related dimension). We explored if and how these dimensions modulate interactive behavior by exploring posterror interpersonal adaptations. In two experiments carried out in 2022 (<i>N</i>₁ = 23; <i>N</i>₂ = 24, preregistered), participants played sequences of notes in turn-taking with a coactor either described as another participant or the computer (human vs. nonhuman coactor, social manipulation) while pursuing shared or individual goals (goal-related manipulation). The coactor was programmed to make a mistake in 50% of the trials. We found that, only in the shared goal condition, participants were slower when interacting with a human than a nonhuman coactor depending on how strongly they believed the human coactor was a real participant. Moreover, the general slowdown following a partner's error was absent when the action required from the participant corresponded to what the coactor should have done (correction tendency effect). This effect was found only in the shared goal condition without differences between coactors, suggesting it was driven by goal-related representations. The social and goal-related dimensions thus independently but significantly shape interpersonal adaptations during joint action. 
(PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":50195,"journal":{"name":"Journal of Experimental Psychology-Human Perception and Performance","volume":" ","pages":"341-356"},"PeriodicalIF":2.1,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual mental imagery is a core topic of cognitive psychology and cognitive neuroscience. Several early behavioral contributions were published in the Journal of Experimental Psychology: Human Perception and Performance, and they continue to influence the field despite the advent of new technologies and statistical models used in contemporary research on mental imagery. Future research will lead to new discoveries showing a broader importance of mental imagery, spanning consciousness, problem-solving, expectations, perception, and reality monitoring. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
{"title":"The long-lasting legacy of early experimental studies in visual mental imagery.","authors":"Corinna S Martarelli, Fred W Mast","doi":"10.1037/xhp0001276","DOIUrl":"https://doi.org/10.1037/xhp0001276","url":null,"abstract":"<p><p>Visual mental imagery is a core topic of cognitive psychology and cognitive neuroscience. Several early behavioral contributions were published in the <i>Journal of Experimental Psychology: Human Perception and Performance</i>, and they continue to influence the field despite the advent of new technologies and statistical models that are used in contemporary research on mental imagery. Future research will lead to new discoveries showing a broader importance of mental imagery, ranging from consciousness, problem-solving, expectations, perception, and reality monitoring. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":50195,"journal":{"name":"Journal of Experimental Psychology-Human Perception and Performance","volume":"51 3","pages":"300-302"},"PeriodicalIF":2.1,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}