
Computers in Human Behavior: Artificial Humans — latest publications

Assessing intercultural sensitivity in large language models: A comparative study of GPT-3.5 and GPT-4 across eight languages
Pub Date : 2025-12-01 DOI: 10.1016/j.chbah.2025.100241
Yiwen Jin , Lies Sercu , Feng Guo
As large language models (LLMs) such as ChatGPT are increasingly used across cultures and languages, concerns have arisen about their ability to respond in culturally sensitive ways. This study evaluated the intercultural sensitivity of GPT-3.5 and GPT-4 using the Intercultural Sensitivity Scale (ISS) translated into eight languages. Each model completed ten randomized iterations of the 24-item ISS per language, and the results were analyzed using descriptive statistics and three-way ANOVA. GPT-4 achieved significantly higher intercultural sensitivity scores than GPT-3.5 across all dimensions, with “respect for cultural differences” scoring highest and “interaction confidence” lowest. Significant interactions were found between model version and language, and between model version and ISS dimensions, indicating that GPT-4's improvements vary by linguistic context. However, the interaction between language and dimensions was not significant. Future research should focus on increasing the amount of training data for less widely spoken languages, as well as adding rich emotional and cultural background data to improve the models' understanding of cultural norms and nuances.
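The design described in this abstract can be illustrated with a minimal sketch: model version (2) × language (8) × ISS dimension (5), with ten administrations per model-language cell. The language list and the synthetic scoring rule below are assumptions for illustration only; the five dimension names follow the published ISS, not data from the paper.

```python
# Illustrative sketch (not the authors' code) of the 2 x 8 x 5 design:
# ten randomized administrations of a 5-point scale per cell.
import random

MODELS = ["GPT-3.5", "GPT-4"]
LANGUAGES = ["en", "zh", "es", "fr", "de", "ja", "ar", "hi"]  # assumed set
DIMENSIONS = [  # the five ISS dimensions from the published scale
    "interaction engagement", "respect for cultural differences",
    "interaction confidence", "interaction enjoyment",
    "interaction attentiveness",
]

random.seed(0)

def simulate_item_score(model):
    """Synthetic 5-point Likert response; GPT-4 biased slightly higher,
    mirroring the direction (not the size) of the reported effect."""
    base = 4 if model == "GPT-4" else 3
    return max(1, min(5, base + random.choice([-1, 0, 0, 1])))

# cell (model, language, dimension) -> ten iteration scores
cells = {
    (m, lang, dim): [simulate_item_score(m) for _ in range(10)]
    for m in MODELS for lang in LANGUAGES for dim in DIMENSIONS
}

def marginal_mean(level, index):
    """Mean over all cells whose factor at position `index` equals `level`;
    a three-way ANOVA would test such marginal and interaction effects."""
    vals = [v for key, scores in cells.items() if key[index] == level
            for v in scores]
    return sum(vals) / len(vals)

print({m: round(marginal_mean(m, 0), 2) for m in MODELS})
```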
Computers in Human Behavior: Artificial Humans, Volume 6, Article 100241.
Citations: 0
Aesthetic Integrity Index (AII) for human–AI hybrid epistemology: Reconfiguring the Beholder’s Share through Xie He’s Six Principles
Pub Date : 2025-12-01 DOI: 10.1016/j.chbah.2025.100242
Rong Chang
Computers in Human Behavior: Artificial Humans, Volume 6, Article 100242.
Citations: 0
Quantitative fairness—A framework for the design of equitable cybernetic societies
Pub Date : 2025-11-19 DOI: 10.1016/j.chbah.2025.100236
Kevin Riehl, Anastasios Kouvelas, Michail A. Makridis
Advancements in computer science, artificial intelligence, and control systems have catalyzed the emergence of cybernetic societies, where algorithms play a pivotal role in decision-making processes shaping nearly every aspect of human life. Automated decision-making for resource allocation has expanded into industry, government processes, and critical infrastructures, and even determines the very fabric of social interaction and communication. While these systems promise greater efficiency and reduced corruption, misspecified cybernetic mechanisms harbor the threat of reinforcing inequities, discrimination, and even dystopian or totalitarian structures. Fairness thus becomes a crucial component in the design of cybernetic systems: to promote cooperation between selfish individuals, to achieve better outcomes at the system level, to confront public resistance, to gain trust and acceptance for rules and institutions, to perforate self-reinforcing cycles of poverty through social mobility, to incentivize motivation, contribution, and satisfaction through inclusion, to increase social cohesion in groups, and ultimately to improve quality of life. Quantitative descriptions of fairness are crucial for reflecting equity in algorithms, but only a few works in the fairness literature offer such measures; the existing quantitative measures are either too application-specific, suffer from undesirable characteristics, or are not ideology-agnostic. This study proposes a quantitative, transactional, and distributive fairness framework based on an interdisciplinary foundation that supports the systematic design of socially feasible decision-making systems. Moreover, it emphasizes the importance of fairness and transparency when designing algorithms for equitable cybernetic societies, and establishes a connection between the fairness literature and resource-allocating systems.
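The paper's own framework is not reproduced in this listing. For orientation only, a classic application-agnostic quantitative measure of distributive fairness of the kind the authors survey is Jain's fairness index (not the authors' proposal):

```python
def jains_index(allocations):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Equals 1.0 for a perfectly equal allocation and approaches 1/n
    when a single agent receives everything."""
    n = len(allocations)
    s = sum(allocations)
    sq = sum(x * x for x in allocations)
    return (s * s) / (n * sq)

print(jains_index([1, 1, 1, 1]))  # equal split -> 1.0
print(jains_index([4, 0, 0, 0]))  # one agent takes all -> 0.25
```

Measures like this are "too application-specific" in exactly the sense the abstract criticizes: they score a single distribution of one divisible resource and say nothing about transactional or procedural fairness.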
Computers in Human Behavior: Artificial Humans, Volume 6, Article 100236.
Citations: 0
Why human mistakes hurt more? Emotional responses in human-AI errors
Pub Date : 2025-11-19 DOI: 10.1016/j.chbah.2025.100238
Ying Qin, Wanhui Zhou, Bu Zhong
Understanding user responses to AI versus human errors is crucial, as they shape trust, acceptance, and interaction outcomes. This study investigates the emotional dynamics of human-AI interactions by examining how agent identity (human vs. AI) and error severity (low vs. high) influence negative emotional reactions. Using a 2 × 2 factorial design (N = 250), the findings reveal that human agents consistently elicit stronger negative emotions than AI agents, regardless of error severity. Moreover, perceived experience moderates this relationship under specific conditions: individuals who view AI as less experienced than humans exhibit stronger negative emotions toward human errors, while this effect diminishes when AI is perceived as having higher experience. However, perceived agency does not significantly influence emotional responses. These findings highlight the critical role of agent identity and perceived experience in shaping emotional reactions to errors, adding insights into the dynamics of human-AI interactions. This research shows that developing effective AI systems requires managing user emotional responses and trust, in which perceived experience and competency play pivotal roles in adoption. The findings can guide the design of AI systems that adjust user expectations and emotional responses in accordance with the AI's perceived level of experience.
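As a minimal sketch of how a 2 × 2 between-subjects design like this one is summarized, the cell means below are invented for illustration (the paper's actual values are not given in this listing); the interaction contrast compares the severity effect across agent types.

```python
# Hypothetical cell means (negative-emotion ratings) for a
# 2 x 2 agent-identity x error-severity design; numbers are invented.
means = {
    ("human", "low"): 3.4, ("human", "high"): 4.1,
    ("ai", "low"): 2.9, ("ai", "high"): 3.3,
}

def main_effect(level, index):
    """Marginal mean over cells whose factor at `index` equals `level`."""
    vals = [v for key, v in means.items() if key[index] == level]
    return sum(vals) / len(vals)

# Interaction contrast: does error severity raise negative emotion
# more for human agents than for AI agents?
severity_human = means[("human", "high")] - means[("human", "low")]
severity_ai = means[("ai", "high")] - means[("ai", "low")]
interaction = severity_human - severity_ai

print(round(main_effect("human", 0), 2),
      round(main_effect("ai", 0), 2),
      round(interaction, 2))
```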
Computers in Human Behavior: Artificial Humans, Volume 6, Article 100238.
Citations: 0
Mapping user gratifications in the age of LLM-based chatbots: An affordance perspective
Pub Date : 2025-11-19 DOI: 10.1016/j.chbah.2025.100240
Eun Go , Taeyoung Kim
Despite the widespread use of large language model (LLM)-based chatbots, little is known about what specific gratifications users obtain from the unique affordances of these systems and how these affordance-driven gratifications shape user evaluations. To address this gap, the present study maps the gratification structure of LLM chatbot use and examines whether users’ primary purpose of chatbot use (information-, conversation-, or task-oriented) influences the gratifications they derive. A survey of 249 LLM chatbot users revealed nine distinct gratifications aligned with four affordance types: modality, agency, interactivity, and navigability. Purpose of use meaningfully shaped which gratifications were most salient. For example, conversational use heightened Immersive Realism and Fun, whereas information- and task-oriented use elevated Adaptive Responsiveness. In turn, these affordance-driven gratifications predicted key outcomes, including perceived expertise, perceived friendliness, satisfaction, attitudes, and behavioral intentions to continued use. Across outcomes, Adaptive Responsiveness consistently emerged as the strongest predictor, underscoring the pivotal role of contingent, high-quality dialogue in LLM-based human–AI interaction. These findings extend uses and gratifications theory and offer design implications for developing more engaging, responsive, and purpose-tailored chatbot experiences.
Computers in Human Behavior: Artificial Humans, Volume 7, Article 100240.
Citations: 0
Avatar or human, who is experiencing it? Impact of social interaction in virtual gaming worlds on personal space
Pub Date : 2025-11-17 DOI: 10.1016/j.chbah.2025.100237
Ruoyu Niu, Mengzhu Huang, Rixin Tang
Virtual gaming worlds support rich social interaction in which players use avatars to collaborate, compete, and communicate across distance. Motivated by the increasing reliance on mediated social contact, this research examined whether virtual shared space and avatar properties shape personal space regulation in ways that parallel face-to-face encounters. Three experiments tested how virtual shared space, avatar agency, and avatar anthropomorphism influence interpersonal distance. Across studies, virtual comfort distance and psychological distance were used as complementary indicators of changes in personal space, and physical comfort distance was additionally assessed in a subset of conditions with a physically present human partner. Experiment 1 showed that, when interacting with a human-driven partner in the laboratory, occupying a shared virtual space reliably reduced comfort distance and increased psychological closeness compared with interacting in separate virtual spaces, even after controlling for physical shared space. Experiment 2 replicated the virtual shared space effect with computer-driven partners in an online virtual gaming world setting, indicating that reduced interpersonal distance does not depend on human agency alone. Experiment 3 revealed that anthropomorphic avatars increased comfort toward computer-driven partners, whereas avatar form had little impact when the partner was known to be human. Together, the findings indicate that virtual shared space, perceived agency, and avatar appearance jointly shape personal space regulation in digital environments and offer actionable guidance for designing avatars and virtual spaces that foster approach-oriented, prosocial interaction.
Computers in Human Behavior: Artificial Humans, Volume 6, Article 100237.
Citations: 0
A scoping review of nonverbal mimicry in human-virtual human interaction
Pub Date : 2025-11-17 DOI: 10.1016/j.chbah.2025.100230
Kyana H.J. van Eijndhoven , Ethel Pruss , Pieter Spronck
A growing body of research has focused on examining the role of nonverbal mimicry, the spontaneous imitation of others’ physical behavior during social interactions, in human-virtual human interaction. The increasing deployment of virtual humans, and growing advancements in technology vital to virtual human development, emphasize the necessity to review studies incorporating such state-of-the-art technologies. To this end, we conducted a scoping review of empirical work studying nonverbal mimicry in human-virtual human interaction. This review focused on outlining (1) the contexts in which such interactions occurred, (2) implementations of nonverbal mimicry, (3) individual and situational factors that can lead one to mimic more (facilitators) or less (inhibitors), and (4) individual and social consequences. By creating this comprehensive outline, we were able to capture the current state of nonverbal mimicry research, and identify methodological, evidence, and empirical research gaps, that may serve as future guidelines to drive the field of virtual human research forward.
Computers in Human Behavior: Artificial Humans, Volume 6, Article 100230.
Citations: 0
Less human, less positive? How AI involvement in leadership shapes employees’ affective well-being across different supervisor decisions
Pub Date : 2025-11-17 DOI: 10.1016/j.chbah.2025.100239
Emily Lochner , René Schmoll , Stephan Kaiser
As artificial intelligence (AI) becomes increasingly integrated into organizational leadership, it is critical to understand how algorithmic decision-making affects employee well-being. This study investigates how varying levels of AI involvement in leadership – ranging from fully human to hybrid (human-AI collaboration) to fully automated – influence employees' emotional responses at work. It also examines whether the emotional impact of leader type depends on the outcome of a managerial decision (positive vs. negative). To investigate these questions, we conducted a vignette-based online experiment using a 3x2 between-subjects design. Participants (N = 153 workers) were randomly assigned to one of six short, standardized leadership scenarios that varied by leader type (human, hybrid, or AI) and decision outcome (positive or negative). The vignettes described a realistic workplace situation in which a leader communicates a decision about a project's continuation. Subsequently, emotional responses were measured using validated affective scales.
The results showed that higher AI involvement led to lower positive affect, particularly following favorable decisions, while negative affect remained largely unaffected. These results suggest that, while AI leadership is not emotionally harmful, it also fails to generate positive engagement. Positive affect was strongest when positive decisions were delivered by a human leader and weakest when delivered by an AI.
These findings contribute to leadership and human-AI interaction research by highlighting an emotional asymmetry in AI-led leadership. Practically speaking, these results imply that while AI offers efficiency, it lacks the interpersonal resonance necessary for emotionally meaningful interactions. Therefore, organizations should consider maintaining human involvement in contexts where recognition, trust, or relational sensitivity are important.
{"title":"Less human, less positive? How AI involvement in leadership shapes employees’ affective well-being across different supervisor decisions","authors":"Emily Lochner ,&nbsp;René Schmoll ,&nbsp;Stephan Kaiser","doi":"10.1016/j.chbah.2025.100239","DOIUrl":"10.1016/j.chbah.2025.100239","url":null,"abstract":"<div><div>As artificial intelligence (AI) becomes increasingly integrated into organizational leadership, it is critical to understand how algorithmic decision-making affects employee well-being. This study investigates how varying levels of AI involvement in leadership – ranging from fully human to hybrid (human-AI collaboration) to fully automated – influence employees' emotional responses at work. It also examines whether the emotional impact of leader type depends on the outcome of a managerial decision (positive vs. negative). To investigate these questions, we conducted a vignette-based online experiment using a 3x2 between-subjects design. Participants (N = 153 workers) were randomly assigned to one of six short, standardized leadership scenarios that varied by leader type (human, hybrid, or AI) and decision outcome (positive or negative). The vignettes described a realistic workplace situation in which a leader communicates a decision about a project's continuation. Subsequently, emotional responses were measured using validated affective scales.</div><div>The results showed that higher AI involvement led to lower positive affect, particularly following favorable decisions, while negative affect remained largely unaffected. These results suggest that, while AI leadership is not emotionally harmful, it also fails to generate positive engagement. Positive affect was strongest when positive decisions were delivered by a human leader and weakest when delivered by an AI.</div><div>These findings contribute to leadership and human-AI interaction research by highlighting an emotional asymmetry in AI-led leadership. 
Practically speaking, these results imply that while AI offers efficiency, it lacks the interpersonal resonance necessary for emotionally meaningful interactions. Therefore, organizations should consider maintaining human involvement in contexts where recognition, trust, or relational sensitivity are important.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100239"},"PeriodicalIF":0.0,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
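The 3x2 between-subjects comparison described in the abstract above can be sketched in a few lines: simulate one affect rating per participant in each leader-type x decision-outcome cell and compare cell means. The effect directions and magnitudes below are invented for illustration only; they are not taken from the study.

```python
import random
import statistics
from itertools import product

LEADER_TYPES = ["human", "hybrid", "ai"]
OUTCOMES = ["positive", "negative"]

def simulate_rating(leader, outcome, rng):
    """Simulate one participant's positive-affect rating on a 1-5 scale.

    The additive effects are hypothetical placeholders, not study estimates.
    """
    base = 3.0
    base += {"human": 0.6, "hybrid": 0.2, "ai": -0.4}[leader]  # assumed leader effect
    base += 0.5 if outcome == "positive" else -0.5             # assumed outcome effect
    # Add individual noise and clamp to the scale endpoints.
    return min(5.0, max(1.0, base + rng.gauss(0, 0.5)))

def cell_means(n_per_cell=25, seed=42):
    """Return the mean simulated rating for each of the six design cells."""
    rng = random.Random(seed)
    means = {}
    for leader, outcome in product(LEADER_TYPES, OUTCOMES):
        ratings = [simulate_rating(leader, outcome, rng) for _ in range(n_per_cell)]
        means[(leader, outcome)] = statistics.mean(ratings)
    return means

for cell, m in sorted(cell_means().items()):
    print(cell, round(m, 2))
```

With these assumed effects, the simulated pattern mirrors the reported asymmetry: positive affect is highest for human leaders delivering positive decisions and lowest for AI leaders.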
Citations: 0
Can large language models exhibit cognitive and affective empathy as humans?
Pub Date : 2025-11-13 DOI: 10.1016/j.chbah.2025.100233
Tengfei Yu , Siyu Pan , Caoyun Fan , Siyang Luo , Yaohui Jin , Binglei Zhao
Empathy, a key component of human social interaction, has become a core concern in human-computer interaction. This study examines whether current large language models (LLMs) can exhibit empathy in both cognitive and affective dimensions as humans do. In our study, we used standardized questionnaires to assess LLMs' empathy ability, and a novel paradigm was developed for LLM evaluation. Four main experiments on LLMs' empathy abilities were reported, using the Interpersonal Reactivity Index (IRI) and the Basic Empathy Scale (BES) on GPT-4 and Llama3 respectively. Two levels of evaluation were conducted to investigate whether the structural validity of the questionnaires in LLMs was aligned with humans and to compare the LLMs' empathy abilities with humans'. We found that GPT-4 shows an empathy dimension structure identical to humans' while exhibiting significantly lower empathy abilities than humans. Moreover, a systematic difference in empathy ability was evident in Llama3, showing its failure to exhibit the same empathy dimensions as humans. All these findings indicate that though GPT-4 kept the same structure of human empathy (cognitive and affective), current LLMs cannot simulate empathy as humans do, as indexed by their responses to the questionnaires. This highlights the urgent requirement to further improve LLMs' empathy abilities for more user-friendly human-LLM interactions.
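The evaluation paradigm described above, administering a standardized questionnaire to a model over randomized iterations and aggregating per-subscale scores, can be sketched as follows. The two-item "questionnaire" and the stub model are illustrative stand-ins, not the actual IRI/BES items or a real LLM client.

```python
import random
import statistics

# Hypothetical Likert-scale items tagged with their subscale.
QUESTIONNAIRE = [
    {"id": "cog1", "subscale": "cognitive",
     "text": "I try to see both sides of a disagreement."},
    {"id": "aff1", "subscale": "affective",
     "text": "I feel moved when I see someone upset."},
]

def stub_model(item_text):
    """Stand-in for an LLM call; returns a 1-5 Likert rating."""
    return 3 + (1 if "feel" in item_text else 0)

def administer(model_fn, items, n_iterations=10, seed=0):
    """Present the items in a fresh random order each iteration and
    return the mean rating per subscale across all iterations."""
    rng = random.Random(seed)
    scores = {}  # subscale -> list of ratings
    for _ in range(n_iterations):
        order = items[:]
        rng.shuffle(order)  # randomize item order, as in the paradigm above
        for item in order:
            scores.setdefault(item["subscale"], []).append(model_fn(item["text"]))
    return {sub: statistics.mean(vals) for sub, vals in scores.items()}

print(administer(stub_model, QUESTIONNAIRE))
```

Replacing `stub_model` with a real API call (and the item list with a full instrument such as the IRI or BES) would reproduce the basic shape of the study's pipeline.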
Citations: 0
“What is the latest news, Avatar Pavel?” - AI assistants in transformation processes of metaverse
Pub Date : 2025-11-01 DOI: 10.1016/j.chbah.2025.100225
Vaclav Moravec , Beata Gavurova , Martin Rigelsky
The main goal of the study was to examine and evaluate the relationships between public attitudes towards AI avatars, selected socio-demographic characteristics, fields of media consumption, and ideological attitudes, in order to reveal as-yet-unexplored adoption perspectives on AI avatars and their strong economic and social potential in the metaverse. Data were collected from a sample of 1250 respondents aged 18 and over between 2 April 2025 and 9 April 2025. The research used an AI avatar experimentally developed by the start-up company The MAMA AI.
The outcomes of the descriptive analysis confirmed that the AI news avatar Pavel was perceived neutrally to slightly positively, but as impersonal, with respondents demonstrating a low willingness to accept him as a guide across the media. Respondents also evaluated the use of AI assistants most favorably in technical-service fields, but significantly more negatively in sensitive domains such as psychology or politics. The differences between these groups were most noticeable in the perception of the AI avatar as more or less human and intimate, especially between men and women. Conversely, media habits played a much larger role. The study confirmed the importance of investigating specific adoption factors related to media consumption, media habits, and ideological attitudes alongside socio-demographic factors, and thus allowed us to understand the new adoption potential of AI avatars and the possibilities of its expansion.
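The domain-level comparison reported above is a simple descriptive aggregation: mean favorability per application domain, sorted from most to least favorable. The ratings below are invented placeholders, not the survey's actual data.

```python
import statistics

# Hypothetical 1-5 favorability ratings of AI assistants by domain.
ratings = {
    "technical-service": [4, 5, 4, 4, 3, 5],
    "psychology":        [2, 3, 2, 1, 3, 2],
    "politics":          [2, 2, 3, 1, 2, 2],
}

means = {domain: statistics.mean(vals) for domain, vals in ratings.items()}
# Sort domains from most to least favorable, as in the reported pattern.
for domain, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{domain}: {m:.2f}")
```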
Citations: 0
Journal: Computers in Human Behavior: Artificial Humans