Visual deception in online dating: How gender shapes AI-generated image detection
Lidor Ivan
Pub Date: 2025-09-12 | DOI: 10.1016/j.chbah.2025.100208
The rise of AI-generated images is reshaping online interactions, particularly in dating contexts where visual authenticity plays a central role. While prior research has focused on textual deception, less is known about users’ ability to detect synthetic images. Grounded in Truth-Default Theory and the notion of visual realism, this study explores how users evaluate authenticity in images that challenge conventional expectations of photographic trust.
An online experiment was conducted with 831 American heterosexual online daters. Participants were shown both real and AI-generated profile photos, rated their perceived origin, and provided open-ended justifications. Overall, detection accuracy for AI-generated images was low, falling below chance. Women outperformed men in identifying AI-generated images, but were also more likely to misclassify real ones—suggesting heightened, but sometimes misplaced, skepticism. Participants relied on three main strategies: identifying visual inconsistencies, signs of perfection, and technical flaws. These heuristics often failed to keep pace with improving AI realism. To conceptualize this process, the study introduces the “Learning Loop”—a dynamic cycle in which users develop detection strategies, AI systems adapt to those strategies, and users must recalibrate once again. As synthetic deception becomes more seamless, the findings underscore the instability of visual trust and the need to understand how users adapt (or fail to adapt) to rapidly evolving visual technologies.
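To make the reported analysis pattern concrete, the following minimal sketch (not the authors' code) shows how below-chance accuracy on AI images and the gender split between hits and false alarms on real photos might be computed; the column names and toy data are assumed for illustration.

    # Illustrative sketch only; column names and data are hypothetical.
    import pandas as pd
    from scipy.stats import binomtest

    responses = pd.DataFrame({
        "participant": [1, 1, 2, 2, 3, 3],
        "gender":      ["F", "F", "M", "M", "F", "F"],
        "image_is_ai": [True, False, True, False, True, False],
        "judged_ai":   [False, True, True, False, True, True],
    })

    # Accuracy on AI-generated images, tested against 50% chance.
    ai_trials = responses[responses["image_is_ai"]]
    hits = int(ai_trials["judged_ai"].sum())
    print(binomtest(hits, n=len(ai_trials), p=0.5))

    # Hit rate (flagging AI images) and false-alarm rate (misclassifying
    # real photos as AI), split by gender.
    def rates(group):
        ai = group[group["image_is_ai"]]
        real = group[~group["image_is_ai"]]
        return pd.Series({
            "hit_rate": ai["judged_ai"].mean(),
            "false_alarm_rate": real["judged_ai"].mean(),
        })

    print(responses.groupby("gender").apply(rates))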
{"title":"Visual deception in online dating: How gender shapes AI-generated image detection","authors":"Lidor Ivan","doi":"10.1016/j.chbah.2025.100208","DOIUrl":"10.1016/j.chbah.2025.100208","url":null,"abstract":"<div><div>The rise of AI-generated images is reshaping online interactions, particularly in dating contexts where visual authenticity plays a central role. While prior research has focused on textual deception, less is known about users’ ability to detect synthetic images. Grounded in Truth-Default Theory and the notion of visual realism, this study explores how users evaluate authenticity in images that challenge conventional expectations of photographic trust.</div><div>An online experiment was conducted with 831 American heterosexual online daters. Participants were shown both real and AI-generated profile photos, rated their perceived origin, and provided open-ended justifications. Overall, AI-generated images detection accuracy was low, falling below chance. Women outperformed men in identifying AI-generated images, but were also more likely to misclassify real ones—suggesting heightened, but sometimes misplaced, skepticism. Participants relied on three main strategies: identifying <em>visual inconsistencies</em>, signs of <em>perfection</em>, and <em>technical flaws</em>. These heuristics often failed to keep pace with improving AI realism. To conceptualize this process, the study introduces the “<em>Learning Loop</em>”—a dynamic cycle in which users develop detection strategies, AI systems adapt to those strategies, and users must recalibrate once again. As synthetic deception becomes more seamless, the findings underscore the instability of visual trust and the need to understand how users adapt (or fail to adapt) to rapidly evolving visual technologies.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100208"},"PeriodicalIF":0.0,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Factors influencing users' intention to adopt ChatGPT based on the extended technology acceptance model
Md Nazmus Sakib, Muhaiminul Islam, Mochammad Fahlevi, Md Siddikur Rahman, Mohammad Younus, Md Mizanur Rahman
Pub Date: 2025-09-11 | DOI: 10.1016/j.chbah.2025.100204
ChatGPT, a transformative conversational agent, has exhibited significant impact across diverse domains, particularly in revolutionizing customer service within the e-commerce sector and aiding content development professionals. Despite its broad applications, comprehensive studies of user attitudes and behaviors regarding ChatGPT adoption remain scarce. This study addresses this gap by investigating the key factors influencing ChatGPT usage through the conceptual lens of the Technology Acceptance Model (TAM). Employing PLS-SEM on data collected from 313 ChatGPT users worldwide, spanning various professions and all with consistent platform use for a minimum of six months, the research identifies perceived cost, perceived enjoyment, perceived usefulness, facilitating conditions, and social influence as pivotal factors determining ChatGPT usage. Notably, perceived ease of use, perceived trust, and perceived compatibility emerge as negligible direct determinants. However, trust and compatibility exert an indirect influence on usage via social influence, while ease of use indirectly affects ChatGPT usage through facilitating conditions. Thus, this study extends TAM research, identifying critical factors for ChatGPT adoption and providing actionable insights for organizations seeking to strategically enhance AI utilization in customer service and content development across industries.
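The indirect paths reported above (e.g., trust influencing usage via social influence) were estimated with PLS-SEM; as a rough illustration of the underlying idea only, the sketch below bootstraps a single a×b indirect effect on simulated data. Variable names and values are placeholders, not the study's measures.

    # Illustration only: percentile-bootstrap test of one indirect path
    # (trust -> social influence -> usage) on simulated placeholder data.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 313
    trust = rng.normal(size=n)
    social_influence = 0.4 * trust + rng.normal(size=n)                 # path a
    usage = 0.5 * social_influence + 0.0 * trust + rng.normal(size=n)   # path b, near-zero direct effect

    def indirect_effect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                      # slope of m ~ x
        X = np.column_stack([np.ones_like(x), m, x])    # y ~ m + x
        b = np.linalg.lstsq(X, y, rcond=None)[0][1]     # slope of y on m, controlling for x
        return a * b

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(trust[idx], social_influence[idx], usage[idx]))

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"bootstrapped indirect effect, 95% CI: [{lo:.3f}, {hi:.3f}]")
    # A confidence interval excluding zero is the usual evidence for mediation.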
{"title":"Factors influencing users' intention to adopt ChatGPT based on the extended technology acceptance model","authors":"Md Nazmus Sakib , Muhaiminul Islam , Mochammad Fahlevi , Md Siddikur Rahman , Mohammad Younus , Md Mizanur Rahman","doi":"10.1016/j.chbah.2025.100204","DOIUrl":"10.1016/j.chbah.2025.100204","url":null,"abstract":"<div><div>ChatGPT, a transformative conversational agent, has exhibited significant impact across diverse domains, particularly in revolutionizing customer service within the e-commerce sector and aiding content development professionals. Despite its broad applications, a dearth of comprehensive studies exists on user attitudes and actions regarding ChatGPT adoption. This study addresses this gap by investigating the key factors influencing ChatGPT usage through the conceptual lens of the Technology Acceptance Model (TAM). Employing PLS-SEM modeling on data collected from 313 ChatGPT users globally, spanning various professions and consistent platform use for a minimum of six months, the research identifies perceived cost, perceived enjoyment, perceived usefulness, facilitating conditions, and social influence as pivotal factors determining ChatGPT usage. Notably, perceived ease of use, perceived trust, and perceived compatibility emerge as negligible determinants. However, trust and compatibility exert an indirect influence on usage via social influence, while ease of use indirectly affects ChatGPT usage through facilitating conditions. Thus, this study revolutionizes TAM research, identifying critical factors for ChatGPT adoption and providing actionable insights for organizations to strategically enhance AI utilization, transforming customer service and content development across industries.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100204"},"PeriodicalIF":0.0,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Primatology as an integrative framework to study social robots
Miquel Llorente, Matthieu J. Guitton, Thomas Castelain
Pub Date: 2025-09-05 | DOI: 10.1016/j.chbah.2025.100206
{"title":"Primatology as an integrative framework to study social robots","authors":"Miquel Llorente , Matthieu J. Guitton , Thomas Castelain","doi":"10.1016/j.chbah.2025.100206","DOIUrl":"10.1016/j.chbah.2025.100206","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100206"},"PeriodicalIF":0.0,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The threat of synthetic harmony: The effects of AI vs. human origin beliefs on listeners' cognitive, emotional, and physiological responses to music
Rohan L. Dunham, Gerben A. van Kleef, Eftychia Stamkou
Pub Date: 2025-09-05 | DOI: 10.1016/j.chbah.2025.100205
People generally evaluate music less favourably if they believe it was created by artificial intelligence (AI) rather than by humans, but the psychological mechanisms underlying this tendency remain unclear. Prior research has relied entirely on self-reports, which are vulnerable to bias. This leaves open the question of whether negative reactions reflect motivated reasoning – a controlled, cognitive process in which people justify their scepticism about AI's creative capacity – or whether they stem from deeper, embodied feelings of threat to human creative uniqueness that manifest physiologically. We address this question across two lab-in-field studies, measuring participants' self-reported and physiological responses to the same piece of music framed as having either AI or human origins. Study 1 (N = 50) revealed that individuals in the AI condition appreciated the music less, reported less intense emotions, and showed decreased parasympathetic nervous system activity compared to those in the human condition. Study 2 (N = 372) showed that these effects were more pronounced among individuals who more strongly endorsed the belief that creativity is uniquely human, and that this could largely be explained by the perceived threat posed by AI. Together, these findings suggest that unfavourable responses to AI-generated music are not driven solely by controlled cognitive justifications but also by automatic, embodied threat reactions to creative AI. They suggest that strategies addressing perceived threats posed by AI may be key to fostering more harmonious human-AI collaboration and acceptance.
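The abstract does not name the physiological index used, but parasympathetic activity is commonly summarised with heart-rate-variability measures such as RMSSD; the sketch below shows that computation on illustrative inter-beat intervals and is not the authors' pipeline.

    # Illustration only: RMSSD (a common parasympathetic index) computed
    # from inter-beat intervals in milliseconds; the numbers are made up.
    import numpy as np

    def rmssd(ibi_ms):
        """Root mean square of successive differences of inter-beat intervals."""
        diffs = np.diff(np.asarray(ibi_ms, dtype=float))
        return float(np.sqrt(np.mean(diffs ** 2)))

    ibi_human_framing = [812, 845, 790, 860, 805, 838]
    ibi_ai_framing    = [801, 808, 797, 805, 800, 803]

    print("RMSSD, human-origin framing:", round(rmssd(ibi_human_framing), 1))
    print("RMSSD, AI-origin framing:   ", round(rmssd(ibi_ai_framing), 1))
    # Lower RMSSD under the AI framing would be consistent with the reduced
    # parasympathetic activity reported for Study 1.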
{"title":"The threat of synthetic harmony: The effects of AI vs. human origin beliefs on listeners' cognitive, emotional, and physiological responses to music","authors":"Rohan L. Dunham, Gerben A. van Kleef, Eftychia Stamkou","doi":"10.1016/j.chbah.2025.100205","DOIUrl":"10.1016/j.chbah.2025.100205","url":null,"abstract":"<div><div>People generally evaluate music less favourably if they believe it is created by artificial intelligence (AI) rather than humans. But the psychological mechanisms underlying this tendency remain unclear. Prior research has relied entirely on self-reports that are vulnerable to bias. This leaves open the question as to whether negative reactions are a reflection of motivated reasoning – a controlled, cognitive process in which people justify their scepticism about AI's creative capacity – or whether they stem from deeper, embodied feelings of threat to human creative uniqueness manifested physiologically. We address this question across two lab-in-field studies, measuring participants' self-reported and physiological responses to the same piece of music framed either as having AI or human origins. Study 1 (<em>N</em> = 50) revealed that individuals in the AI condition appreciated music less, reported less intense emotions, and experienced decreased parasympathetic nervous system activity as compared to those in the human condition. Study 2 (<em>N</em> = 372) showed that these effects were more pronounced among individuals who more strongly endorsed the belief that creativity is uniquely human, and that this could largely be explained by the perceived threat posed by AI. Together, these findings suggest that unfavourable responses to AI-generated music are not driven solely by controlled cognitive justifications but also by automatic, embodied threat reactions in response to creative AI. They suggest that strategies addressing perceived threats posed by AI may be key to fostering more harmonious human-AI collaboration and acceptance.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100205"},"PeriodicalIF":0.0,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145020444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The influence of persuasive techniques on large language models: A scenario-based study
Sonali Uttam Singh, Akbar Siami Namin
Pub Date: 2025-09-02 | DOI: 10.1016/j.chbah.2025.100197
Large Language Models (LLMs), such as ChatGPT-4, offer powerful capabilities for generating human-like text. However, they also raise significant ethical concerns due to their potential to produce misleading or manipulative content. This paper investigates the intersection of LLM functionalities and Cialdini’s six principles of persuasion: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity. We explore how these principles can be exploited to deceive LLMs, particularly in scenarios designed to manipulate these models into providing misleading or harmful outputs. Using a scenario-based approach, over 30 prompts were crafted to test the susceptibility of LLMs to the various persuasion principles. The study analyzes the success or failure of these prompts using interaction analysis, identifying stages of deception ranging from spontaneous deception to more advanced, socially complex forms.
Results indicate that LLMs are highly susceptible to manipulation, with 15 scenarios achieving advanced, socially aware deceptions (Stage 3), particularly through principles like liking and scarcity. Early-stage manipulations (Stage 1) were also common, driven by reciprocity and authority, while intermediate efforts (Stage 2) highlighted tactics such as social proof. These findings underscore the urgent need for robust mitigation strategies, including resistance mechanisms at the lower stages and training LLMs with counter-persuasion strategies to prevent their exploitation. Beyond the technical details, this work raises important concerns about how AI might be used to mislead people. From online scams to the spread of misinformation, persuasive content generated by LLMs has the potential to affect both individual safety and public trust. These tools can shape how people think, what they believe, and even how they act, often without users realizing it. With this work, we hope to open a broader conversation across disciplines about these risks and encourage the development of practical, ethical safeguards that keep language models helpful, transparent, and trustworthy. This research contributes to the broader discourse on AI ethics, highlighting the vulnerabilities of LLMs and advocating stronger accountability measures to prevent their misuse in producing deceptive content. The results underscore the importance of developing secure, transparent AI technologies that maintain integrity in human–machine interactions.
{"title":"The influence of persuasive techniques on large language models: A scenario-based study","authors":"Sonali Uttam Singh, Akbar Siami Namin","doi":"10.1016/j.chbah.2025.100197","DOIUrl":"10.1016/j.chbah.2025.100197","url":null,"abstract":"<div><div>Large Language Models (LLMs), such as CHATGPT-4, have introduced comprehensive capabilities in generating human-like text. However, they also raise significant ethical concerns due to their potential to produce misleading or manipulative content. This paper investigates the intersection of LLM functionalities and Cialdini’s six principles of persuasion: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity. We explore how these principles can be exploited to deceive LLMs, particularly in scenarios designed to manipulate these models into providing misleading or harmful outputs. Through a scenario-based approach, over 30 prompts were crafted to test the susceptibility of LLMs to various persuasion principles. The study analyzes the success or failure of these prompts using interaction analysis, identifying different stages of deception ranging from spontaneous deception to more advanced, socially complex forms.</div><div>Results indicate that LLMs are highly susceptible to manipulation, with 15 scenarios achieving advanced, socially aware deceptions (Stage 3), particularly through principles like liking and scarcity. Early stage manipulations (Stage 1) were also common, driven by reciprocity and authority, while intermediate efforts (Stage 2) highlighted in-stage tactics such as social proof. These findings underscore the urgent need for robust mitigation strategies, including resistance mechanisms at lower stages and training LLMs with counter persuasive strategies to prevent their exploitation. More than technical details, it raises important concerns about how AI might be used to mislead people. From online scams to the spread of misinformation, persuasive content generated by LLMs has the potential to impact both individual safety and public trust. These tools can shape how people think, what they believe, and even how they act often without users realizing it. With this work, we hope to open up a broader conversation across disciplines about these risks and encourage the development of practical, ethical safeguards that ensure language models remain helpful, transparent, and trustworthy. This research contributes to the broader discourse on AI ethics, highlighting the vulnerabilities of LLMs and advocating for stronger responsibility measures to prevent their misuse in producing deceptive content. The results describe the importance of developing secure, transparent AI technologies that maintain integrity in human–machine interactions.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100197"},"PeriodicalIF":0.0,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145010733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collaborative human-AI trust (CHAI-T): A process framework for active management of trust in human-AI collaboration
Melanie J. McGrath, Andreas Duenser, Justine Lacey, Cécile Paris
Pub Date: 2025-08-26 | DOI: 10.1016/j.chbah.2025.100200
Collaborative human-AI (HAI) teaming combines the unique skills and capabilities of humans and machines in sustained teaming interactions leveraging the strengths of each. In tasks involving regular exposure to novelty and uncertainty, collaboration between adaptive, creative humans and powerful, precise artificial intelligence (AI) promises new solutions and efficiencies. User trust is essential to creating and maintaining these collaborative relationships. Established models of trust in traditional forms of AI typically recognize the contribution of three primary categories of trust antecedents: characteristics of the human user, characteristics of the technology, and environmental factors. The emergence of HAI teams, however, requires an understanding of human trust that accounts for the specificity of task contexts and goals, integrates processes of interaction, and captures how trust evolves in a teaming environment over time. Drawing on both the psychological and computer science literature, the process framework of trust in collaborative HAI teams (CHAI-T) presented in this paper adopts the tripartite structure of antecedents established by earlier models, while incorporating team processes and performance phases to capture the dynamism inherent to trust in teaming contexts. These features enable active management of trust in collaborative AI systems, with practical implications for the design and deployment of collaborative HAI teams.
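As a rough illustration of the framework's shape, not a definition taken from the paper, the tripartite antecedents, team processes, and performance phases could be encoded as a simple data structure; all field names, phase labels, and the toy update rule below are assumptions made for the sketch.

    # Hypothetical encoding of the CHAI-T structure described above; field
    # names, phase labels, and the update rule are illustrative only.
    from dataclasses import dataclass, field
    from enum import Enum

    class PerformancePhase(Enum):
        FORMATION = "formation"
        TASK_EXECUTION = "task_execution"
        REFLECTION = "reflection"

    @dataclass
    class TrustAntecedents:
        human: list = field(default_factory=lambda: ["propensity to trust", "domain expertise"])
        technology: list = field(default_factory=lambda: ["reliability", "transparency"])
        environment: list = field(default_factory=lambda: ["task risk", "organisational norms"])

    @dataclass
    class TrustState:
        phase: PerformancePhase
        antecedents: TrustAntecedents
        team_processes: list          # e.g. communication, mutual monitoring
        trust_level: float            # 0..1, updated as the team interacts

        def recalibrate(self, outcome_quality: float) -> None:
            """Toy update: nudge trust toward observed teaming performance."""
            self.trust_level += 0.2 * (outcome_quality - self.trust_level)

    state = TrustState(
        phase=PerformancePhase.TASK_EXECUTION,
        antecedents=TrustAntecedents(),
        team_processes=["communication", "mutual performance monitoring"],
        trust_level=0.6,
    )
    state.recalibrate(outcome_quality=0.9)
    print(round(state.trust_level, 2))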
{"title":"Collaborative human-AI trust (CHAI-T): A process framework for active management of trust in human-AI collaboration","authors":"Melanie J. McGrath , Andreas Duenser , Justine Lacey , Cécile Paris","doi":"10.1016/j.chbah.2025.100200","DOIUrl":"10.1016/j.chbah.2025.100200","url":null,"abstract":"<div><div>Collaborative human-AI (HAI) teaming combines the unique skills and capabilities of humans and machines in sustained teaming interactions leveraging the strengths of each. In tasks involving regular exposure to novelty and uncertainty, collaboration between adaptive, creative humans and powerful, precise artificial intelligence (AI) promises new solutions and efficiencies. User trust is essential to creating and maintaining these collaborative relationships. Established models of trust in traditional forms of AI typically recognize the contribution of three primary categories of trust antecedents: characteristics of the human user, characteristics of the technology, and environmental factors. The emergence of HAI teams, however, requires an understanding of human trust that accounts for the specificity of task contexts and goals, integrates processes of interaction, and captures how trust evolves in a teaming environment over time. Drawing on both the psychological and computer science literature, the process framework of trust in collaborative HAI teams (CHAI-T) presented in this paper adopts the tripartite structure of antecedents established by earlier models, while incorporating team processes and performance phases to capture the dynamism inherent to trust in teaming contexts. These features enable active management of trust in collaborative AI systems, with practical implications for the design and deployment of collaborative HAI teams.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100200"},"PeriodicalIF":0.0,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trusting the machine: Exploring participant perceptions of AI-driven summaries in virtual focus groups with and without human oversight
Ye Wang, Huan Chen, Xiaofan Wei, Cheng Chang, Xinyi Zuo
Pub Date: 2025-08-26 | DOI: 10.1016/j.chbah.2025.100198
This study explores the use of AI-assisted summarization as part of a proposed AI moderation assistant for virtual focus group (VFG) settings, focusing on the calibration of trust through human oversight and transparency. To understand participant perspectives, this study employed a mixed-method approach: Study 1 conducted a focus group to gather initial data for the stimulus design of Study 2, and Study 2 was an online experiment that collected both quantitative and qualitative measures of perceptions of AI summarization across three groups—a control group, and two treatment groups (with vs. without human oversight). ANOVA and AI-assisted thematic analyses were performed. The findings indicate that AI summaries, with or without human oversight, were positively received by participants. However, no notable differences were observed in participants' satisfaction with the VFG application attributable to AI summaries. Qualitative findings reveal that participants appreciate AI's efficiency in summarization but express concerns about accuracy, authenticity, and the potential for AI to lack genuine human understanding. The findings contribute to the literature on trust in AI by demonstrating that trust can be achieved through transparency. By revealing the coexistence of AI appreciation and aversion, the study offers nuanced insights into trust calibration within socially and emotionally sensitive communication contexts. These results also inform the integration of AI summarization into qualitative research workflows.
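The between-group comparison described above can be illustrated with a one-way ANOVA on simulated satisfaction ratings for the three conditions; the means, sample sizes, and scale below are placeholders, not the study's data.

    # Illustration only: one-way ANOVA across the three conditions
    # (control, AI summary without oversight, AI summary with oversight)
    # on simulated placeholder ratings.
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(0)
    control           = rng.normal(5.0, 1.0, 60)
    ai_no_oversight   = rng.normal(5.1, 1.0, 60)
    ai_with_oversight = rng.normal(5.2, 1.0, 60)

    f_stat, p_value = f_oneway(control, ai_no_oversight, ai_with_oversight)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
    # A non-significant result here would mirror the reported absence of
    # satisfaction differences attributable to the AI summaries.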
{"title":"Trusting the machine: Exploring participant perceptions of AI-driven summaries in virtual focus groups with and without human oversight","authors":"Ye Wang , Huan Chen , Xiaofan Wei , Cheng Chang , Xinyi Zuo","doi":"10.1016/j.chbah.2025.100198","DOIUrl":"10.1016/j.chbah.2025.100198","url":null,"abstract":"<div><div>This study explores the use of AI-assisted summarization as part of a proposed AI moderation assistant for virtual focus group (VFG) settings, focusing on the calibration of trust through human oversight and transparency. To understand participant perspectives, this study employed a mixed-method approach: Study 1 conducted a focus group to gather initial data for the stimulus design of Study 2, and Study 2 was an online experiment that collected both quantitative and qualitative measures of perceptions of AI summarization across three groups—a control group, and two treatment groups (with vs. without human oversight). ANOVA and AI-assisted thematic analyses were performed. The findings indicate that AI summaries, with or without human oversight, were positively received by participants. However, no notable differences were observed in participants' satisfaction with the VFG application attributable to AI summaries. Qualitative findings reveal that participants appreciate AI's efficiency in summarization but express concerns about accuracy, authenticity, and the potential for AI to lack genuine human understanding. The findings contribute to the literature on trust in AI by demonstrating that <strong>trust can be achieved through transparency</strong>. By revealing the <strong>coexistence of AI appreciation and aversion</strong>, the study offers nuanced insights into <strong>trust calibration</strong> within <strong>socially and emotionally sensitive communication contexts</strong>. These results also inform the <strong>integration of AI summarization into qualitative research workflows</strong>.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100198"},"PeriodicalIF":0.0,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of sensory reactivity and haptic interaction on children's anthropomorphism of a haptic robot
Hikaru Nozawa, Masaharu Kato
Pub Date: 2025-08-01 | DOI: 10.1016/j.chbah.2025.100186
Social touch is vital for developing stable attachments and social skills, and haptic robots could provide children with opportunities to develop those attachments and skills. However, haptic robots are not guaranteed to suit every child, and individual differences exist in how readily children accept these robots. In this study, we proposed that screening children's sensory reactivity can predict the attributes that make these robots suitable or challenging to accept. Additionally, we investigated how sensory reactivity influences the tendency to anthropomorphize a haptic robot, as anthropomorphizing a robot is considered an indicator of accepting it. Sixty-seven preschool children aged 5–6 years participated. Results showed that the initial anthropomorphic tendency toward the robot was more likely to decrease with increasing atypicality in sensory reactivity, and haptic interaction with the robot tended to promote anthropomorphic tendency. A detailed analysis focusing on children's sensory insensitivity revealed polarized results: those actively seeking sensory information (i.e., sensory seeking) showed a lower anthropomorphic tendency toward the robot, whereas those who were passive (i.e., low registration) showed a higher anthropomorphic tendency. Importantly, haptic interaction with the robot mitigated the lower anthropomorphic tendency observed in sensory seekers. Finally, we found that the degree of anthropomorphizing the robot positively influenced physiological arousal level. These results indicate that children with atypical sensory reactivity may come to accept robots through haptic interaction. This extends previous research by demonstrating how individual sensory reactivity profiles modulate children's robot acceptance through physical interaction rather than visual observation alone. Future robots should be designed to interact in ways tailored to each child's sensory reactivity so that children can develop stable attachment and social skills.
{"title":"Effects of sensory reactivity and haptic interaction on children's anthropomorphism of a haptic robot","authors":"Hikaru Nozawa, Masaharu Kato","doi":"10.1016/j.chbah.2025.100186","DOIUrl":"10.1016/j.chbah.2025.100186","url":null,"abstract":"<div><div>Social touch is vital for developing stable attachments and social skills, and haptic robots could provide children opportunities to develop those attachments and skills. However, haptic robots are not guaranteed suitable for every child, and individual differences exist in accepting these robots. In this study, we proposed that screening children's sensory reactivity can predict the suitable and challenging attributes for accepting these robots. Additionally, we investigated how sensory reactivity influences the tendency to anthropomorphize a haptic robot, as anthropomorphizing a robot is considered an indicator of accepting the robot. Sixty-seven preschool children aged 5–6 years participated. Results showed that the initial anthropomorphic tendency toward the robot was more likely to decrease with increasing atypicality in sensory reactivity, and haptic interaction with the robot tended to promote anthropomorphic tendency. A detailed analysis focusing on children's sensory insensitivity revealed polarized results: those actively seeking sensory information (i.e., <em>sensory seeking</em>) showed a lower anthropomorphic tendency toward the robot, whereas those who were passive (i.e., <em>low registration</em>) showed a higher anthropomorphic tendency. Importantly, haptic interaction with the robot mitigated the lower anthropomorphic tendency observed in sensory seekers. Finally, we found that the degree of anthropomorphizing the robot. positively influenced physiological arousal level. These results indicate that children with atypical sensory reactivity may accept robots through haptic interaction This extends previous research by demonstrating how individual sensory reactivity profiles modulate children's robot acceptance through physical interaction rather than visual observation alone. Future robots must be designed to interact in ways tailored to each child's sensory reactivity to develop stable attachment and social skills.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100186"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144828727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring dimensions of perceived anthropomorphism in conversational AI: Implications for human identity threat and dehumanization
Yejin Lee, Sang-Hwan Kim
Pub Date: 2025-08-01 | DOI: 10.1016/j.chbah.2025.100192
This study aims to identify humanlike traits in conversational AI (CAI) that influence human identity threat and dehumanization, and to propose design guidelines that mitigate these effects. An online survey was conducted with 323 participants. Factor analysis revealed four key dimensions of perceived anthropomorphism in CAI: Self-likeness, Communication & Memory, Social Adaptability, and Agency. Structural equation modeling showed that Self-likeness heightened both perceived human identity threat and dehumanization, whereas Agency significantly moderated these effects while also directly mitigating dehumanization. Social Adaptability generally reduced perceived human identity threat but amplified it when combined with high Self-likeness. Furthermore, younger individuals were more likely to experience perceived human identity threat and dehumanization, underscoring the importance of considering user age. By elucidating the psychological structure underlying users’ perceptions of CAI anthropomorphism, this study deepens understanding of its psychosocial implications and provides practical guidance for the ethical design of CAI systems.
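The moderation findings reported above were obtained with structural equation modeling; as a simpler illustration of the same logic only, the sketch below tests an interaction term in ordinary least squares on simulated data, with all variable names and coefficients assumed.

    # Illustration only (not the authors' SEM): does Agency moderate the
    # effect of Self-likeness on perceived identity threat? Simulated data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 323
    df = pd.DataFrame({
        "self_likeness": rng.normal(size=n),
        "agency": rng.normal(size=n),
    })
    # Threat rises with self-likeness, but less so when agency is high.
    df["identity_threat"] = (0.5 * df["self_likeness"]
                             - 0.3 * df["self_likeness"] * df["agency"]
                             + rng.normal(scale=0.8, size=n))

    model = smf.ols("identity_threat ~ self_likeness * agency", data=df).fit()
    print(model.params)   # the self_likeness:agency term captures the moderation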
{"title":"Exploring dimensions of perceived anthropomorphism in conversational AI: Implications for human identity threat and dehumanization","authors":"Yejin Lee , Sang-Hwan Kim","doi":"10.1016/j.chbah.2025.100192","DOIUrl":"10.1016/j.chbah.2025.100192","url":null,"abstract":"<div><div>This study aims to identify humanlike traits in conversational AI (CAI) that influence human identity threat and dehumanization, and to propose design guidelines that mitigate these effects. An online survey was conducted with 323 participants. Factor analysis revealed four key dimensions of perceived anthropomorphism in CAI: Self-likeness, Communication & Memory, Social Adaptability, and Agency. Structural equation modeling showed that Self-likeness heightened both perceived human identity threat and dehumanization, whereas Agency significantly moderated these effects while also directly mitigating dehumanization. Social Adaptability generally reduced perceived human identity threat but amplified it when combined with high Self-likeness. Furthermore, younger individuals were more likely to experience perceived human identity threat and dehumanization, underscoring the importance of considering user age. By elucidating the psychological structure underlying users’ perceptions of CAI anthropomorphism, this study deepens understanding of its psychosocial implications and provides practical guidance for the ethical design of CAI systems.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100192"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144841791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trusting emotional support from generative artificial intelligence: a conceptual review
Riccardo Volpato, Lisa DeBruine, Simone Stumpf
Pub Date: 2025-08-01 | DOI: 10.1016/j.chbah.2025.100195
People are increasingly using generative artificial intelligence (AI) for emotional support, creating trust-based interactions with limited predictability and transparency. We address the fragmented nature of research on trust in AI through a multidisciplinary conceptual review, examining theoretical foundations for understanding trust in the emerging context of emotional support from generative AI. Through an in-depth literature search across human-computer interaction, computer-mediated communication, social psychology, mental health, economics, sociology, philosophy, and science and technology studies, we developed two principal contributions. First, we summarise relevant definitions of trust across disciplines. Second, based on our first contribution, we define trust in the context of emotional support provided by AI and present a categorisation of relevant concepts that recur across well-established research areas. Our work equips researchers with a map for navigating the literature and formulating hypotheses about AI-based mental health support, as well as important theoretical, methodological, and practical implications for advancing research in this area.
{"title":"Trusting emotional support from generative artificial intelligence: a conceptual review","authors":"Riccardo Volpato , Lisa DeBruine , Simone Stumpf","doi":"10.1016/j.chbah.2025.100195","DOIUrl":"10.1016/j.chbah.2025.100195","url":null,"abstract":"<div><div>People are increasingly using generative artificial intelligence (AI) for emotional support, creating trust-based interactions with limited predictability and transparency. We address the fragmented nature of research on trust in AI through a multidisciplinary conceptual review, examining theoretical foundations for understanding trust in the emerging context of emotional support from generative AI. Through an in-depth literature search across human-computer interaction, computer-mediated communication, social psychology, mental health, economics, sociology, philosophy, and science and technology studies, we developed two principal contributions. First, we summarise relevant definitions of trust across disciplines. Second, based on our first contribution, we define trust in the context of emotional support provided by AI and present a categorisation of relevant concepts that recur across well-established research areas. Our work equips researchers with a map for navigating the literature and formulating hypotheses about AI-based mental health support, as well as important theoretical, methodological, and practical implications for advancing research in this area.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100195"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144841789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}