Prospective reward in dual task induces a bias towards action at the cost of less accurate Task 2 performance.
Pub Date: 2025-12-01 | Epub Date: 2025-02-07 | DOI: 10.1177/17470218251322167
Devu Mahesan, Rico Fischer
In dual tasks with a visual-manual choice reaction time task as Task 1 and a go/no-go task as Task 2, not responding in Task 2 can have adverse effects on Task 1 performance, as demonstrated by no-go backward crosstalk effects (no-go BCE). Here, the response inhibition required to withhold the Task 2 response spills over and slows response execution in Task 1. Across three experiments, we investigated whether the prospect of reward, a potent modulator of cognitive control, influences the no-go BCE. In Experiment 1, reward for fast and accurate responses in both tasks was manipulated as a within-subject factor; in Experiments 2 and 3, as a between-subject factor. The results revealed three major insights. First, in all three experiments, reward led to faster Task 1 and Task 2 performance. Second, despite this speeding, the no-go BCE was not modulated by reward. Finally, reward led to more errors in Task 2 no-go trials. These results reveal a reward-induced bias for action, suggesting greater preparedness to respond and, consequently, more commission errors in Task 2 no-go trials. The absence of a reward-based modulation of the no-go BCE indicates that this bias for action does not necessarily translate into increased response inhibition. These findings point towards the complex interactions between reward and inhibitory control and shed light on the potential and limitations of reward-based modulation of dual-task interference.
{"title":"Prospective reward in dual task induces a bias towards action at the cost of less accurate Task 2 performance.","authors":"Devu Mahesan, Rico Fischer","doi":"10.1177/17470218251322167","DOIUrl":"10.1177/17470218251322167","url":null,"abstract":"<p><p>In dual tasks, with a visual-manual choice reaction time task in Task 1 and a go/no-go task in Task 2, not responding to Task 2 can have adverse effects on Task 1 performance, as demonstrated by no-go backward crosstalk effects (no-go BCE). Here, the response inhibition required to not respond to Task 2 spills over and slows response execution in Task 1. Over three experiments, we investigated whether the prospect of reward, which is a potent cognitive control modulator, influences no-go BCE. In Experiment 1, reward for fast and accurate responses in both tasks was modulated as a within-subject factor, and in Experiments 2 and 3, as a between-subject factor. The results revealed three major insights. In all three experiments, reward led to faster Task 1 and Task 2 performance. Second, despite this speeding, the no-go BCE was not modulated by reward. Finally, the reward led to more errors in Task 2 no-go trials. These results reveal a reward-induced bias for action, suggesting better preparedness to respond and, consequently, larger commission errors in Task 2 no-go trials. The absence of a reward-based modulation of the no-go BCE indicates that the reward-induced bias for action does not necessarily translate into larger response inhibition. These findings point towards the complex interactions between reward and inhibitory control and shed light on the potentials and limitations of reward-based modulation of dual-task interference.</p>","PeriodicalId":20869,"journal":{"name":"Quarterly Journal of Experimental Psychology","volume":" ","pages":"2713-2731"},"PeriodicalIF":1.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12638453/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143365788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time after time: Voice perception from first impressions to identity recognition.
Pub Date: 2025-12-01 | Epub Date: 2025-09-02 | DOI: 10.1177/17470218251379036
Nadine Lavan
When we hear someone speak, we do not just hear 'a voice'. If the voice is unfamiliar, we form an often complex first impression by inferring various characteristics about the person. If the voice is familiar, at least to some degree, we may be able to recognise and identify the person to whom the voice belongs. Even though first impression formation and identity recognition can thus be seen as being situated at two opposing ends of a 'familiarity continuum', first impressions and identity recognition functionally serve the same purpose: making sense of who another person is. Theories and empirical work examining impression formation and identity perception from voices have, however, developed largely in isolation from one another, with relatively limited cross-talk. In this paper, I will review some recent findings from the literature on first impression formation from unfamiliar voices and on voice identity learning and recognition from familiar(ised) voices. I will ask how impression perception and identity perception may interact and interface with one another along this 'familiarity continuum' between completely unfamiliar and very familiar voices, in an attempt to bring these two literatures together. Specifically, I will consider what happens to first impressions when we become increasingly familiar with a person, whether first impressions might have an impact on how (well) voices can be learned and recognised, and when and how identity recognition might take over from ad hoc impression formation.
{"title":"Time after time: Voice perception from first impressions to identity recognition.","authors":"Nadine Lavan","doi":"10.1177/17470218251379036","DOIUrl":"10.1177/17470218251379036","url":null,"abstract":"<p><p>When we hear someone speak, we do not just hear 'a voice'. If the voice is unfamiliar, we form an often complex first impression by inferring various characteristics about the person. If the voice is familiar, at least to some degree, we may be able to recognise and identify the person to whom the voice belongs. Even though first impression formation and identity recognition can thus be seen as being situatied at two opposing ends of a 'familiarity continuum', first impressions and identity recognition functionally serve the same purpose: making sense of who another person is. Theories and empirical work examining impression formation and identity perception from voices have, however, developed largely in isolation from one another, with relatively limited cross-talk. In this paper, I will review some recent findings from the literature on first impression formation from unfamiliar voices and voice identity learning and recognition from familiar(ised) voices. I will ask how impression perception and identity perception may interact and interface with one another along this 'familiarity continuum' between completely unfamiliar and very familiar voices, trying to bring together these two literatures. Specifically, I will consider what happens to first impressions when we become increasingly familiar with a person, whether first impressions might have an impact on how (well) voices can be learned and recognised, and when and how identity recognition might take over from ad-hoc impression formation.</p>","PeriodicalId":20869,"journal":{"name":"Quarterly Journal of Experimental Psychology","volume":" ","pages":"2583-2593"},"PeriodicalIF":1.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12638452/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144966596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensory attenuation of self-initiated tactile feedback is modulated by stimulus strength and temporal delay in a virtual reality environment.
Pub Date: 2025-12-01 | Epub Date: 2025-03-15 | DOI: 10.1177/17470218251330237
Fabian Kiepe, Guido Hesselmann
Despite extensive research across various modalities, the precise mechanisms of sensory attenuation (SA) remain debated. Specifically, it remains unclear to what extent SA is influenced by stimulus predictability alone, as opposed to the distinct impact of self-generated actions. Forward models suggest that efference copies of motor commands enable the brain to predict and distinguish anticipated changes in self-initiated sensory input. Predictive processing proposes that predictions about upcoming changes in sensory input are not based solely on efference copies, but are instead generated by a generative model that also integrates external, contextual factors. This study investigated the underlying mechanisms of SA in the tactile domain, specifically examining self-initiation and temporal predictions within a virtual reality (VR) framework. This setup allowed for precise control over sensory feedback in response to movement. Participants (N = 33) engaged in an active condition, moving their hands to elicit a virtual touch. Importantly, visual perception was modified in VR so that participants touched their rendered, but not physical, hands. The virtual touch triggered test vibrations on a touch controller (intensities: 0.2, 0.35, 0.5, 0.65, and 0.8, in arbitrary units), the intensity of which was then compared with that of a standard stimulus (intensity: 0.5). In the passive condition, vibrations were presented without movement and were preceded by a visual cue. Furthermore, test vibrations appeared either immediately or after a variable onset delay (700–800 ms). Our results revealed a significant effect of the factor "onset delay" on perceived vibration intensity. In addition, we observed interactions between the factors "agency" and "test vibration intensity" and between the factors "agency" and "onset delay," with attenuation effects for immediate vibrations at high intensities and enhancement effects for delayed vibrations at low intensities. These findings emphasize the impact of external, contextual factors and support the notion of a broader, attention-oriented predictive mechanism for the perception of self-initiated stimuli.
{"title":"Sensory attenuation of self-initiated tactile feedback is modulated by stimulus strength and temporal delay in a virtual reality environment.","authors":"Fabian Kiepe, Guido Hesselmann","doi":"10.1177/17470218251330237","DOIUrl":"10.1177/17470218251330237","url":null,"abstract":"<p><p>Despite extensive research across various modalities, the precise mechanisms of sensory attenuation (SA) remain debated. Specifically, it remains unclear to what extent SA is influenced by stimulus predictability alone, as opposed to the distinct impact of self-generated actions. Forward models suggest that efference copies of motor commands enable the brain to predict and distinguish anticipated changes in self-initiated sensory input. Predictive processing proposes that predictions about upcoming changes in sensory input are not solely based on efference copies, but rather generated in the form of a generative model integrating external, contextual factors, as well. This study investigated the underlying mechanisms of SA in the tactile domain, specifically examining self-initiation and temporal predictions within a virtual reality (VR) framework. This setup allowed for precise control over sensory feedback in response to movement. Participants (<i>N</i> = 33) engaged in an active condition, moving their hands to elicit a virtual touch. Importantly, visual perception was modified in VR, so that participants touched their rendered-but not physical-hands. The virtual touch triggered the test vibrations on a touch controller (intensities: 0.2, 0.35, 0.5, 0.65, 0.8; in arbitrary units.), the intensity of which was then compared to that of a standard stimulus (intensity: 0.5). In the passive condition, vibrations were presented without movement and were preceded by a visual cue. Further, test vibrations appeared either immediately or after a variable onset delay (700-800ms). Our results revealed a significant effect of the factor \"onset delay\" on perceived vibration intensity. In addition, we observed interactions between the factors \"agency\" and \"test vibration intensity\" and between the factors \"agency\" and \"onset delay,\" with attenuation effects for immediate vibrations at high intensities and enhancement effects for delayed vibrations at low intensities. These findings emphasize the impact of external, contextual factors and support the notion of a broader, attention-oriented predictive mechanism for the perception of self-initiated stimuli.</p>","PeriodicalId":20869,"journal":{"name":"Quarterly Journal of Experimental Psychology","volume":" ","pages":"2829-2840"},"PeriodicalIF":1.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143634453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Does oral breathing disrupt memory consolidation during waking rest? A registered report.
Pub Date: 2025-12-01 | Epub Date: 2025-03-11 | DOI: 10.1177/17470218251328994
Bethanie Richards, Henning Holle, Shane Lindsay
Studies of waking rest, in which passive rest is compared with an active task, have shown a benefit for declarative memory during short waking rest periods, which has been argued to result from the active task disrupting slow oscillations that occur during rest. Arshamian et al. (2018) found that nasal breathing while resting for an hour led to an advantage for olfactory memory consolidation compared with oral breathing, which has likewise been attributed to the disruption of slow oscillations during oral breathing. In the present pre-registered research, we examined whether this oral-breathing disruption extends to impair declarative memory consolidation, and whether it is modulated by the presence of an active task. We used a 2 × 2 within-participants counterbalanced design with two sessions separated by a week, in which participants breathed either orally (induced by a nose clip) or nasally (induced by tape over the mouth). Each session involved learning two sets of pseudowords, followed by either waking rest or an active task (N-back) for 15 min during the breathing manipulation. Memory performance was assessed by a recognition task. Our results show that the nasal advantage did not generalise to pseudowords, nor were we able to replicate the waking rest advantage or show an interaction between these factors. This study contributes to a growing body of evidence that challenges the consistency of the waking rest advantage and highlights the need for further exploration of the influence of breathing pathway on memory processes.
{"title":"Does oral breathing disrupt memory consolidation during waking rest? A registered report.","authors":"Bethanie Richards, Henning Holle, Shane Lindsay","doi":"10.1177/17470218251328994","DOIUrl":"10.1177/17470218251328994","url":null,"abstract":"<p><p>Studies of waking rest, whereby passive rest is compared with an active task, have shown a benefit for declarative memory during short waking rest periods, which has been argued to result from the active task disrupting slow oscillations that occur during rest. Arshamian et al. (2018) found that nasal breathing while resting for an hour led to an advantage for olfactory memory consolidation compared with oral breathing, which has also been argued to result from the disruption of slow oscillations during oral breathing. In the present pre-registered research, we looked to see whether this oral breathing disruption extended to impair declarative memory consolidation, and if it is modulated by the presence of an active task. We used a 2 × 2 within-participants counterbalanced design of two sessions separated by a week where participants breathed either orally (induced by a nose clip) or nasally (induced through tape over the mouth). Each session involved learning two sets of pseudowords followed by either waking rest or an active task (N-back) for 15 min during the breathing manipulation. Memory performance was assessed by a recognition task. Our results show that the nasal advantage did not generalise to pseudowords, nor were we able to replicate the waking rest advantage or show an interaction between these factors. This study contributes to a growing body of evidence that challenges the consistency of the waking rest advantage and highlights the need for further exploration of the influence of breathing pathway on memory processes.</p>","PeriodicalId":20869,"journal":{"name":"Quarterly Journal of Experimental Psychology","volume":" ","pages":"2610-2626"},"PeriodicalIF":1.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143606255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breaking the silence: Exploring the influence of auditory singularity on visual search.
Pub Date: 2025-12-01 | Epub Date: 2025-02-07 | DOI: 10.1177/17470218251322504
Mengying Yuan, Min Gao, Xinzhong Cui, Sa Lu, Xiaoyu Tang
The pip-and-pop effect describes the phenomenon of auditory pure-tone stimuli (pips) causing a simultaneously presented visual target to pop out. This study used a dynamic visual search paradigm and conducted two eye movement experiments (Experiment 1: set size = 24 items; Experiment 2: set size = 48 items) to explore the influence of auditory singularity on the pip-and-pop effect through a single-sound condition (singularity) and a multiple-sound condition (non-singularity). In Experiment 1, there were no significant differences between the no-sound, single-sound, and multiple-sound conditions in terms of reaction time, accuracy, or fixation number. In Experiment 2, compared with the no-sound condition, both the single-sound and multiple-sound conditions significantly reduced search times (RTs), accuracy, and fixation numbers when the target was present. Both Experiments 1 and 2 revealed that fixation duration under the single-sound condition was significantly longer than that under the no-sound condition. These findings suggest that the singularity of auditory stimuli is not a necessary condition for the pip-and-pop effect. Audiovisual interaction is more likely to be a prerequisite for the occurrence of the pip-and-pop effect.
{"title":"Breaking the silence: Exploring the influence of auditory singularity on visual search.","authors":"Mengying Yuan, Min Gao, Xinzhong Cui, Sa Lu, Xiaoyu Tang","doi":"10.1177/17470218251322504","DOIUrl":"10.1177/17470218251322504","url":null,"abstract":"<p><p>The pip-and-pop effect describes the phenomenon of auditory pure-tone stimuli (pip) causing simultaneously visual target to pop out. This study utilised a dynamic visual search paradigm and conducted two eye movement experiments (Experiment 1: set size = 24 items; Experiment 2: set size = 48 items) to explore the influence of auditory singularity on the Pip-and-Pop effect through single-sound condition (singularity) and multiple-sound condition (non-singularity). In Experiment 1, there were no significant differences between the no-sound, single-sound, and multiple-sound conditions in terms of reaction time, accuracy, or fixation number. In Experiment 2, compared with the no-sound condition, both the single-sound and multiple-sound conditions significantly reduced the Search time (RTs), accuracy, and fixation numbers when the target was present. Both Experiments 1 and 2 revealed that the fixation duration under the single-sound condition was significantly longer than that under the no-sound condition. These findings suggest that the singularity of auditory stimuli is not a necessary condition for the pip-and-pop effect. Audiovisual interaction is more likely to be a prerequisite for the occurrence of the pip-and-pop effect.</p>","PeriodicalId":20869,"journal":{"name":"Quarterly Journal of Experimental Psychology","volume":" ","pages":"2627-2642"},"PeriodicalIF":1.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143374509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Processing of incongruent emotional expressions in voice and semantics: The dominant modality and integration with facial expressions.
Pub Date: 2025-12-01 | Epub Date: 2025-03-15 | DOI: 10.1177/17470218251330422
Mariko Kikutani, Machiko Ikemoto
This research concerns three channels of emotional communication: voice, semantics, and facial expressions. We used speech in which the emotions conveyed by voice and semantics did not match, and we investigated the dominant modality and how these channels interact with facial expressions. The study used voices emoting anger, happiness, or sadness while saying, "I'm angry," "I'm pleased," or "I'm sad." A facial image accompanied the voice, expressing either the same emotion as the voice (voice = face condition), the same emotion as the semantics (semantic = face condition), or a morph of the emotions shown in the voice and semantics (morph condition). The phrases were articulated in the participants' native language (Japanese), second language (English), and an unfamiliar language (Khmer). In Study 1, participants rated how much they agreed that the speaker expressed anger, happiness, and sadness; their attention was not controlled. In Study 2, participants were told to attend to either voice or semantics. The morph condition of Study 1 revealed semantic dominance for the native language stimuli. The semantic = face and voice = face conditions in Studies 1 and 2 revealed that, when the semantics were in an understandable language, an emotion expressed solely in semantics (while a different emotion was shown in face and voice) had a more substantial impact on assessments of the speaker's emotion than an emotion expressed solely in voice.
{"title":"Processing of incongruent emotional expressions in voice and semantics: The dominant modality and integration with facial expressions.","authors":"Mariko Kikutani, Machiko Ikemoto","doi":"10.1177/17470218251330422","DOIUrl":"10.1177/17470218251330422","url":null,"abstract":"<p><p>This research concerns three channels for emotional communication: voice, semantics, and facial expressions. We used speech in which the emotion in voice and semantics did not match, and we investigated the dominant modality and how they interact with facial expressions. The study used voices emoting anger, happiness, or sadness while saying, \"I'm angry,\" \"I'm pleased,\" or \"I'm sad.\" A facial image accompanied the voice, and it expressed either the same emotion to the voice (voice = face condition), the same emotion to the semantics (semantic = face condition), or a mixed emotion shown in the voice and semantics (morph condition). The phrases were articulated in the participants' native language (Japanese), second language (English), and unfamiliar language (Khmer). In Study 1, participants answered how much they agreed that the speaker expressed anger, happiness, and sadness. Their attention was not controlled. In Study 2, participants were told to attend to either voice or semantics. The morph condition of study 1 found semantic dominance for the native language stimuli. The semantic = face and voice = face conditions in Studies 1 and 2 revealed that an emotion solely expressed in semantics (while a different emotion was shown in face and voice) had more substantial impacts on assessing the speaker's emotion than an emotion solely expressed in voice when the semantics were in understandable languages.</p>","PeriodicalId":20869,"journal":{"name":"Quarterly Journal of Experimental Psychology","volume":" ","pages":"2841-2856"},"PeriodicalIF":1.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143634449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The social self: Categorisation of family members examined through the self-bias effect in new mothers.
Pub Date: 2025-12-01 | Epub Date: 2025-04-02 | DOI: 10.1177/17470218251332905
Mengyin Jiang, Jie Sui
Self-concept is the basis for many cognitive and behavioural processes, such as the processing of self-related information (e.g. one's own face, one's own name) and the categorisation of people into various social groups (e.g. self vs. other, family vs. non-family). Previous research suggests that one's self-concept is construed not only from individual characteristics but also from one's social experiences and group memberships. Thus, important life experiences such as childbirth and becoming a parent have significant impacts on one's self-concept and subsequently influence the categorisation of information regarding the self and others. In two experiments, women who had given birth within the last 2 years were recruited and tested on a series of categorisation tasks using names (Experiment 1) or faces (Experiment 2) as stimuli. Results consistently revealed faster reaction times in response to the self regardless of stimulus type (name or face) and response category (self vs. other, family vs. non-family, familiar vs. non-familiar). A family bias favouring the names of one's own baby and one's own mother over that of a friend was observed in the family versus non-family, but not in the familiar versus non-familiar, categorisation task. These findings indicate that information regarding the self and one's family members receives preferential processing in social categorisation. They contribute to current understandings of the evolving self-concept through social experiences and its influence on group membership categorisations and response behaviour.
{"title":"The social self: Categorisation of family members examined through the self-bias effect in new mothers.","authors":"Mengyin Jiang, Jie Sui","doi":"10.1177/17470218251332905","DOIUrl":"10.1177/17470218251332905","url":null,"abstract":"<p><p>Self-concept is the basis for many cognitive and behavioural processes, such as the processing of self-related information (e.g. one's own face, one's own name) and the categorisation of people into various social groups (e.g. self vs. other, family vs. non-family). Previous research suggests that one's self-concept is not only construed from individual characteristics but also from one's social experiences and group memberships. Thus, important life experiences such as childbirth and becoming a parent have significant impacts on one's self-concept and subsequently influence the categorisation of information regarding the self and others. In two experiments, women who gave birth within the last 2 years were recruited and tested on a series of categorisation tasks using names (Experiment 1) or faces (Experiment 2) as stimuli. Results consistently revealed faster reaction times in response to the self regardless of stimulus type (name or face) and response category (self vs. other, family vs. non-family, familiar vs. non-familiar). A family bias for one's own baby name and one's own mother name over friend was observed in the family versus non-family but not in the familiar versus non-familiar categorisation tasks. These findings indicate that information regarding the self and one's family members receives preferential processing in social categorisation. These findings contribute to current understandings of the evolving self-concept through social experiences and its influence on group membership categorisations and response behaviour.</p>","PeriodicalId":20869,"journal":{"name":"Quarterly Journal of Experimental Psychology","volume":" ","pages":"2816-2828"},"PeriodicalIF":1.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143765037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial attractiveness influenced cooperative behavior in the Stag Hunt game: Evidence from neural electrophysiology.
Pub Date: 2025-12-01 | Epub Date: 2025-02-28 | DOI: 10.1177/17470218251326501
Xianjia Wang, Wei Cui, Shuochen Wang, Yang Liu, Hao Yu, Jian Song
Facial attractiveness plays a significant role in interpersonal interactions, influencing various aspects of life. This study is the first to explore, from a neurological perspective, the impact of facial attractiveness on individual cooperative behavior in the context of the Stag Hunt game. Twenty-six participants took part in a two-person Stag Hunt experimental task while their electroencephalogram (EEG) data were recorded. Participants had to decide whether to cooperate with or defect from a virtual partner in the game, with photos of these partners (high or low attractiveness) shown before each decision. Analysis of the behavioral data indicates that faces with high attractiveness can promote individual cooperative behavior. EEG data analysis revealed that during the facial stimulus presentation phase, low attractiveness faces elicited more negative N2 amplitudes, smaller late positive potential amplitudes, and larger alpha oscillations compared with high attractiveness faces. During the outcome feedback phase, high attractiveness faces elicited smaller feedback-related negativity (FRN) amplitudes, larger P300 amplitudes, and stronger theta oscillations than low attractiveness faces, while loss feedback elicited more negative FRN amplitudes, smaller P300 amplitudes, and larger theta oscillations than gain feedback. These findings indicate that the processing of facial attractiveness occurs early and automatically, and that it also influences individuals' evaluation of behavioral outcomes.
{"title":"Facial attractiveness influenced cooperative behavior in the Stag Hunt game: Evidence from neural electrophysiology.","authors":"Xianjia Wang, Wei Cui, Shuochen Wang, Yang Liu, Hao Yu, Jian Song","doi":"10.1177/17470218251326501","DOIUrl":"10.1177/17470218251326501","url":null,"abstract":"<p><p>Facial attractiveness plays a significant role in interpersonal interactions, influencing various aspects of life. This study is the first to explore, from a neurological perspective, the impact of facial attractiveness on individual cooperative behavior in the context of the Stag Hunt game. Twenty-six participants took part in a two-person Stag Hunt experimental task, while their electroencephalogram (EEG) data were recorded. Participants had to decide whether to cooperate with or to defect from a virtual partner in the game, with photos of these partners (high or low attractiveness) shown before the decision. Analysis of the behavioral data indicates that faces with high attractiveness can promote individual cooperative behavior. EEG data analysis revealed that during the facial stimulus presentation phase, low attractiveness faces elicited more negative N2 amplitudes, smaller late positive potential amplitudes, and larger alpha oscillations compared to high attractiveness faces. During the outcome feedback phase, high attractiveness faces elicited smaller feedback-related negativity (FRN) amplitudes, larger P300 amplitudes, and stronger theta oscillations than low attractiveness faces, while loss feedback elicited more negative FRN amplitudes, smaller P300 amplitudes, and larger theta oscillations than gain feedback. These findings indicate that the processing of facial attractiveness occurs early and automatically, and it also influences individuals' evaluation of behavioral outcomes.</p>","PeriodicalId":20869,"journal":{"name":"Quarterly Journal of Experimental Psychology","volume":" ","pages":"2758-2771"},"PeriodicalIF":1.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143531864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is trypophobia more related to disgust than to fear? Assessing the disease avoidance and ancestral fear hypotheses.
Pub Date: 2025-12-01 | Epub Date: 2025-02-11 | DOI: 10.1177/17470218251323236
Gaëtan Thiebaut, Alain Méot, Pavol Prokop, Patrick Bonin
We examined fear and disgust responses in trypophobia to distinguish between two hypotheses concerning the origin of this phenomenon. According to the hypothesis that trypophobia stems from an ancestral fear of dangerous animals, fear predominates over disgust, whereas the opposite is true according to the disease avoidance hypothesis. Which of the two emotions plays the more significant role in trypophobia remains unclear. Adults rated, on Likert-type scales, their level of disgust and fear when presented with photographs of frightening or disgusting stimuli, trypophobia-inducing stimuli (i.e., clusters of holes), or neutral stimuli. They also rated the difficulty of viewing these images. Higher levels of disgust than fear were found for the trypophobic images, both in the overall sample and in the participants reporting the highest levels of discomfort when viewing them. Trypophobic images had a special status for these latter participants, as they were rated more disgusting than non-trypophobic disgusting images and more frightening than non-trypophobic frightening images. Although disgust is the dominant emotion in trypophobia, fear is not negligible either.
{"title":"Is trypophobia more related to disgust than to fear? Assessing the disease avoidance and ancestral fear hypotheses.","authors":"Gaëtan Thiebaut, Alain Méot, Pavol Prokop, Patrick Bonin","doi":"10.1177/17470218251323236","DOIUrl":"10.1177/17470218251323236","url":null,"abstract":"<p><p>We examined fear and disgust responses in trypophobia to distinguish between two hypotheses concerning the origin of this phenomenon. According to the hypothesis that trypophobia stems from an ancestral fear of dangerous animals, fear predominates over disgust, whereas the opposite is true according to the disease aversion hypothesis. Currently, the question of which of the two plays a more significant role in trypophobia remains unclear. Adults had to rate on Likert-type scales their level of disgust and fear when presented with photographs of frightening or disgusting stimuli, trypophobia-inducing stimuli, i.e., clusters of holes, or neutral stimuli. They also had to rate the difficulty of viewing these images. Higher levels of disgust than fear were found for the trypophobic images in both the overall sample and in the participants reporting the highest levels of discomfort when viewing them. Trypophobic images had a special status for these latter participants, as they were rated more disgusting than non-trypophobic disgusting images and more frightening than non-trypophobic frightening images. Although disgust is the dominant emotion in trypophobia, fear is also not negligible.</p>","PeriodicalId":20869,"journal":{"name":"Quarterly Journal of Experimental Psychology","volume":" ","pages":"2681-2687"},"PeriodicalIF":1.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143391508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electrophysiological markers of adaptive co-representation in joint language production: Evidence from human-robot interaction.
Pub Date: 2025-12-01 | DOI: 10.1177/17470218251322347
Giusy Cirillo, Elin Runnqvist, Kristof Strijkers, Noël Nguyen, Cristina Baus
This study aimed to assess the extent to which human participants co-represent the lexico-semantic processing of a humanoid robot partner. Specifically, we investigated whether participants would engage their speech production system to predict the robot's upcoming words, and how they would progressively adapt to the robot's verbal behaviour. In the experiment, a human participant and a robot alternated in naming pictures of objects from 15 semantic categories, while the participant's electrophysiological activity was recorded. We manipulated word frequency as a measure of lexical access, with half of the pictures associated with high-frequency names and the other half with low-frequency names. In addition, the robot was programmed to provide semantic category labels (e.g., "tool" for the picture of a hammer) instead of the more typical basic-level names (e.g., "hammer") for items in five categories. Analysis of the stimulus-locked activity revealed a comparable event-related potential (ERP) associated with word frequency both when it was the participant's and the robot's turn to speak. Analysis of the response-locked activity showed a different pattern for the category and basic-level responses in the first but not in the second part of the experiment, suggesting that participants adapted to the robot's lexico-semantic patterns over time. These findings provide empirical evidence for two key points: (1) participants engage their speech production system to predict the robot's upcoming words and (2) partner-adaptive behaviour facilitates comprehension of the robot's speech.
{"title":"Electrophysiological markers of adaptive co-representation in joint language production: Evidence from human-robot interaction.","authors":"Giusy Cirillo, Elin Runnqvist, Kristof Strijkers, Noël Nguyen, Cristina Baus","doi":"10.1177/17470218251322347","DOIUrl":"10.1177/17470218251322347","url":null,"abstract":"<p><p>This study aimed to assess the extent to which human participants co-represent the lexico-semantic processing of a humanoid robot partner. Specifically, we investigated whether participants would engage their speech production system to predict the robot's upcoming words, and how they would progressively adapt to the robot's verbal behaviour. In the experiment, a human participant and a robot alternated in naming pictures of objects from 15 semantic categories, while the participant's electrophysiological activity was recorded. We manipulated word frequency as a measure of lexical access, with half of the pictures associated with high-frequency names and the other half with low-frequency names. In addition, the robot was programmed to provide semantic category labels (e.g., \"tool\" for the picture of a hammer) instead of the more typical basic-level names (e.g., \"hammer\") for items in five categories. Analysis of the stimulus-locked activity revealed a comparable event-related potential (ERP) associated with word frequency both when it was the participant's and the robot's turn to speak. Analysis of the response-locked activity showed a different pattern for the category and basic-level responses in the first but not in the second part of the experiment, suggesting that participants adapted to the robot's lexico-semantic patterns over time. These findings provide empirical evidence for two key points: (1) participants engage their speech production system to predict the robot's upcoming words and (2) partner-adaptive behaviour facilitates comprehension of the robot's speech.</p>","PeriodicalId":20869,"journal":{"name":"Quarterly Journal of Experimental Psychology","volume":" ","pages":"2643-2659"},"PeriodicalIF":1.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143374532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}