
Latest publications from Computers in Human Behavior: Artificial Humans

The reasoning-like capabilities of large language models across different languages: Insights from representational similarity analysis
Pub Date : 2026-03-01 Epub Date: 2026-01-20 DOI: 10.1016/j.chbah.2026.100250
Chris M. Stolle , Rongjun Yu , Yi Huang
Recent research shows that Large Language Models (LLMs) demonstrate human-comparable performance on various cognitive tasks, suggesting reasoning-like capabilities. However, the language dependency of these capabilities and the contribution of their neural network states remain underexplored. This study investigates how different prompts and languages influence the reasoning performance of LLMs compared to humans, while exploring the internal cognitive-like processes of LLMs through representational similarity analysis (RSA). Using scenario-based and mathematical Cognitive Reflection Test (CRT) questions across four languages, we evaluated the reasoning capabilities of LLM Qwen 2.5 (including Gemma 2.9 and Llama 3.1 replications). Results showed that language significantly impacts performance in scenario-based CRT that requires nuanced semantic processing. However, RSA of the inner state activations revealed that the LLM processed identical questions similarly across languages, suggesting that the model encodes semantics in a language-independent latent space. Additionally, the LLM's performance improved when it verbalised its reasoning, and this verbalisation increased similarity in activations. Layer-wise analyses revealed a U-shaped similarity pattern across early to late layers in Qwen and Gemma but not Llama. Furthermore, scenario-based and equivalent mathematical CRT versions elicited similar activation patterns for the paired questions, even after controlling for input and output confounds, pointing to format-agnostic reasoning mechanisms. These results highlight that while LLMs exhibit language-invariant semantic representations and format-agnostic reasoning, their performance remains sensitive to linguistic nuances and self-generated verbalisations, offering insights into both the strengths and limitations of their cognitive-like processing.
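The representational similarity analysis (RSA) described in this abstract compares the geometry of activation patterns rather than the activations themselves: a representational dissimilarity matrix (RDM) is built per condition set, and RDMs are correlated across conditions (here, languages). A minimal sketch on synthetic activations; the distance metrics, layer choices, and data are illustrative, not the paper's actual pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    """Representational dissimilarity matrix: pairwise correlation
    distance (1 - Pearson r) between per-question activation vectors,
    returned in condensed (upper-triangle) form."""
    return pdist(activations, metric="correlation")

def rsa_similarity(acts_a, acts_b):
    """Second-order similarity: Spearman correlation between two RDMs."""
    rho, _ = spearmanr(rdm(acts_a), rdm(acts_b))
    return rho

# Toy data: 4 "questions" x 8 hidden units, presented in two "languages"
# that share the same underlying representational geometry plus noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))
lang_a = base + 0.05 * rng.normal(size=(4, 8))
lang_b = base + 0.05 * rng.normal(size=(4, 8))
print(rsa_similarity(lang_a, lang_b))  # expected to be high for shared geometry
```

A high second-order correlation for the same questions across languages is the kind of evidence the abstract interprets as a language-independent latent space.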
Volume 7, Article 100250.
Citations: 0
AI-supported problem-based learning for enhancing computational thinking skills in STEM education
Pub Date : 2026-03-01 Epub Date: 2026-02-09 DOI: 10.1016/j.chbah.2026.100263
Musa Adekunle Ayanwale , Christian Basil Omeh
By 2030, over 80% of skilled jobs will require higher-order thinking, such as computational thinking and problem-solving skills, yet most university students, especially in developing countries, remain underprepared for these demands. This skills gap is particularly evident in computer robotics programming education, where traditional pedagogies fall short in fostering computational thinking (CT) and academic achievement. This study investigates the effectiveness of Artificial Intelligence-supported Problem-Based Learning (AI-PBL) in enhancing students' CT and performance in computer robotics programming within a Nigerian university context. Grounded in Vygotsky's Social Constructivist Theory, the study positions AI tools as “more capable peers,” offering adaptive scaffolding through intelligent systems embedded in inquiry-driven instruction. A quasi-experimental pre-test–post-test non-equivalent-groups design was employed, with 103 students randomly assigned to experimental (AI-PBL) and control (conventional PBL) groups. Data were analyzed using ANCOVA and simple main effects tests. Results showed that, controlling for baseline scores, students in the AI-PBL group significantly outperformed their peers in both post-test CT and academic achievement. While gender did not significantly moderate the overall effects, both male and female students benefited from the AI-PBL approach. These results affirm the pedagogical potential of AI-enhanced PBL in STEM education, particularly in under-resourced contexts. The integration of intelligent systems not only improves student learning outcomes but also aligns with future workforce needs. The study calls for institutional and policy-level adoption of AI-PBL frameworks, investment in teacher training, and further research to ensure scalability. AI-supported pedagogy is not just innovative; it is essential for equitable skill acquisition and for making students ready for the future world of work.
Volume 7, Article 100263.
Citations: 0
Making sense of nonsense: A qualitative investigation of how brainrot content serves generation Z's media needs
Pub Date : 2026-03-01 Epub Date: 2026-02-07 DOI: 10.1016/j.chbah.2026.100255
Anna Götzfried , Maxi Heitmayer
Brainrot refers to low-quality content saturating digital spaces and the cognitive deterioration resulting from consuming it. Through semi-structured interviews with 24 participants aged 13-26, this study examines the functions that brainrot serves for Gen Z. Users report that brainrot facilitates aesthetic experience through deliberate absurdity, enables resistance to attention economy exploitation, supports generational in-group formation, and provides escapism from digital oversaturation. Importantly, the findings suggest that social media infrastructure, coupled with the spread of GenAI tools, induce mental states of brainrot which precede and shape content creation, rather than consumption of brainrot content causing cognitive decline. Gen Z uses brainrot as a subversive strategy to reclaim agency within oversaturated media environments, challenging deficit-based framings of digital youth culture. This study therefore introduces the concept of anti-gratification - a previously untheorized media need where users actively seek content that rejects productivity and meaning-making.
Volume 7, Article 100255.
Citations: 0
Examining academicians' views on using generative artificial intelligence applications: Case for using ChatGPT from a perspective of opportunities and threats
Pub Date : 2026-03-01 Epub Date: 2026-02-07 DOI: 10.1016/j.chbah.2026.100252
Nurgün Gençel , Fatma Gizem Karaoğlan Yılmaz , Ramazan Yılmaz
One innovation that rapidly developing artificial intelligence (AI) technology has added to our lives in recent years is the AI-powered chatbot. In the past year, ChatGPT, a new AI chatbot that is becoming more widespread every day, has been launched. ChatGPT is an AI with the ability to analyze, synthesize, and interpret the questions it is asked, and its reflections in education quickly began to be seen. The presented work focuses on academicians' positive and negative opinions about ChatGPT, both in their own academic studies and with respect to their students, and proposes a solution to the problem of students having ChatGPT prepare their homework, which is widely seen as a negative aspect. For this purpose, 50 academicians from Türkiye, recruited through snowball sampling, participated in the study. A questionnaire developed as a semi-structured interview form was used to collect qualitative data in addition to quantitative data from the academicians. Quantitative data were analyzed using descriptive statistics; qualitative data were examined through content analysis. According to the findings, the majority of academicians found ChatGPT useful for their own studies and for their students. Academicians cited reinforcement, repetition, and material diversity as useful features, along with providing motivation and increasing imagination and creativity. Beyond ethical problems, the negative aspects cited include addictive use, accustoming students to ease, and reduced motivation. Regarding the homework problem, the academicians state that assignments based on applications, commentary, and analyses, in which ChatGPT is not enabled, should be given, and that plagiarism-detection programs should be used when necessary. In the last part of the study, an exemplary model for integrating ChatGPT into education is given, and suggestions are made for educators and researchers.
Volume 7, Article 100252.
Citations: 0
How mindsets shape trust in AI: A dual path mechanism of self-efficacy and risk perception
Pub Date : 2026-03-01 Epub Date: 2026-02-12 DOI: 10.1016/j.chbah.2026.100270
Sicen Shen , Ping Zhang , Yuchang Jin , Junxiu An
With the rapid advancement of artificial intelligence (AI) technologies, their applications in daily life have become increasingly pervasive. In human-robot interaction, trust is considered a critical predictor for the adoption of new technologies. Drawing on the lens of risk, this study integrates two cognitive pathways—internal self-assessment and external risk evaluation—within the framework of protection motivation theory (PMT). A parallel mediation model is proposed to examine how a growth mindset influences human-robot trust through the dual mechanisms of risk perception and self-efficacy. In Study 1, data were collected through a survey, revealing that a growth mindset positively predicts human-robot trust, with self-efficacy and risk perception serving as parallel mediators. To further clarify the causal relationships, Study 2 employed an experimental design, manipulating participants' growth mindset and assessing their trust behavior in a trust game paradigm to validate the model. The results demonstrated that different mindsets led to significant variations in human-robot trust investment, reinforcing the parallel mediating roles of risk perception and self-efficacy. The findings elucidate the underlying mechanisms through which a growth mindset affects human-robot trust, extend the application of PMT to the domain of human-robot interaction, and provide new insights into the trust formation process in such interactions.
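A parallel mediation model of the kind described above is typically tested by bootstrapping the two indirect effects (a1·b1 via self-efficacy, a2·b2 via risk perception). A minimal sketch on simulated data with the hypothesized signs; the variable names, effect sizes, and bootstrap settings are illustrative, not the study's:

```python
import numpy as np

def ols_beta(X, y):
    """OLS slope coefficients (intercept added internally, then dropped)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

def parallel_mediation(x, m1, m2, y, n_boot=2000, seed=0):
    """Bootstrap 95% CIs for the two indirect effects of a parallel
    mediation model: x -> m1 -> y and x -> m2 -> y (x also enters the
    outcome equation as the direct path)."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(x))
    boots = np.empty((n_boot, 2))
    for i in range(n_boot):
        s = rng.choice(idx, size=len(idx), replace=True)
        a1 = ols_beta(x[s][:, None], m1[s])[0]          # x -> m1
        a2 = ols_beta(x[s][:, None], m2[s])[0]          # x -> m2
        b = ols_beta(np.column_stack([m1[s], m2[s], x[s]]), y[s])
        boots[i] = [a1 * b[0], a2 * b[1]]               # indirect effects
    return np.percentile(boots, [2.5, 97.5], axis=0)

# Simulated data: mindset raises self-efficacy, lowers risk perception;
# both mediators feed trust with the hypothesized signs.
rng = np.random.default_rng(1)
n = 300
mindset = rng.normal(size=n)
efficacy = 0.5 * mindset + rng.normal(size=n)
risk = -0.4 * mindset + rng.normal(size=n)
trust = 0.6 * efficacy - 0.5 * risk + rng.normal(size=n)
ci = parallel_mediation(mindset, efficacy, risk, trust)
```

An indirect effect is deemed significant when its bootstrap CI excludes zero; here both indirect effects are positive (the risk path because two negative signs multiply).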
Volume 7, Article 100270.
Citations: 0
Phenomenologically human: Fine-tuning LLMs to simulate online group identity
Pub Date : 2026-03-01 Epub Date: 2026-02-10 DOI: 10.1016/j.chbah.2026.100272
Andrés Martínez Torres, Davide Morselli
The study of online group identity can be constrained by methodological and ethical challenges, particularly in communities that are inaccessible, uncooperative, or promote harmful ideologies. This paper introduces a novel methodology that employs Large Language Models (LLMs) as experimental substitutes for community members. By fine-tuning Llama 3.2 (3B) and Mistral 0.3 (7B) on data from r/AskTRP, a banned forum within the “Manosphere”, we assess whether models can assimilate community language and function as group members. Using Term Frequency–Inverse Document Frequency (TF-IDF), qualitative analysis, and a Turing test with human evaluators, we find that fine-tuned models adopt the forum's linguistic and identity signals, producing outputs that are difficult to distinguish from those of real group members. We conceptualise such models as phenomenologically human: they can be studied before and after exposure to community discourse, enabling reproducible and ethically viable experiments on identity formation and language. Beyond descriptive text analysis, this approach advances a generative paradigm for social research, allowing scholars to probe community dynamics and ideologies through simulated members in ways that are not possible with human participants.
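The TF-IDF analysis mentioned above weights a term by its frequency within a document, discounted by how common it is across the corpus, so community-specific vocabulary stands out. A minimal, dependency-free sketch of the classic weighting; the exact variant and preprocessing used in the study are not specified here, and the mini-corpus is invented:

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document term scores: tf(t, d) * log(N / df(t)), where
    tf is the relative frequency of t in d and df(t) counts the
    documents containing t."""
    N = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    return [
        {t: (c / len(doc)) * math.log(N / df[t]) for t, c in Counter(doc).items()}
        for doc in tokenized
    ]

# Invented mini-corpus: two in-group posts, one generic post.
docs = [
    "frame control and holding frame in every interaction",
    "never lose frame abundance mindset and frame control",
    "the weather was pleasant and the park was quiet",
]
scores = tfidf(docs)
# "frame" is frequent in the first two posts but absent from the third,
# so it outscores a corpus-wide word like "and", whose idf is log(3/3) = 0.
```

Comparing such term profiles before and after fine-tuning is one way to check whether a model has absorbed a community's distinctive vocabulary.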
Volume 7, Article 100272.
Citations: 0
Trusting in the competence of humans and artificially intelligent agents varying in generosity
Pub Date : 2026-03-01 Epub Date: 2026-02-07 DOI: 10.1016/j.chbah.2026.100265
Richa Gautam , Nina Lauharatanahirun , Jasmin Cloutier , Jennifer T. Kubota
Around the world and with increasing frequency, humans now interact with artificial intelligence (AI) in various domains and collaborative settings. Because of this rapid and growing integration, investigating human-AI interactions can identify effective ways for humans and AI to work together. In two preregistered experiments, we examined how competency (high or low), benevolence reputation (malevolent, neutral, or generous reciprocity), and recent feedback (meeting expectations for generous reciprocity) impact trust towards human and AI partners during iterative Trust Games. In Experiment 1, participants played with human or AI partners; in Experiment 2, participants played with both. Positive behavior from partners (generosity or reciprocity) was consistently rewarded with trust, regardless of context. However, the impacts of agent type and competence were context-dependent. In a comparative context, people initially trusted humans more, but as they learned about their partner from their behavior (reputation and recent actions, i.e., feedback), differentiation between humans and AI generally decreased. However, when interacting with only a human or AI, they were treated similarly, and trust depended on the partner's behavior (reputation and feedback). Competency had a more nuanced, context-dependent effect: individuals initially trusted highly competent partners more when interacting with humans or AI, but competency shaped trust towards AI more than towards humans in a comparative context. Overall, these findings indicate that human-human and human-AI trust are context-dependent. Importantly, trust in AI can match trust in humans when AI exhibits benevolence.
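The iterative Trust Game used in these experiments follows the standard investment-game structure: the investor sends some portion of an endowment, the sent amount is multiplied in transit, and the trustee decides how much of the pot to return. A sketch of one round's payoffs, assuming the conventional multiplier of 3; the experiments' exact endowments and parameters are not stated in the abstract:

```python
def trust_round(endowment, sent, returned_frac, multiplier=3):
    """Payoffs for one round of the investment (trust) game.

    The investor keeps (endowment - sent); `sent` is multiplied into a
    pot, and the trustee returns `returned_frac` of that pot.
    Returns (investor_payoff, trustee_payoff).
    """
    pot = sent * multiplier
    back = pot * returned_frac
    return endowment - sent + back, pot - back

# A generous trustee returning half the pot rewards the investor's trust:
print(trust_round(10, 5, 0.5))   # (12.5, 7.5)
# A malevolent trustee returning nothing leaves the investor worse off:
print(trust_round(10, 5, 0.0))   # (5.0, 15.0)
```

The amount sent per round is the behavioral trust measure; reputation and feedback manipulations change what `returned_frac` the participant expects.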
Volume 7, Article 100265.
引用次数: 0
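The iterative Trust Game used in this line of research has a simple payoff structure that can be sketched in code. The sketch below shows how a partner's reciprocity type (mirroring the reputation manipulation of malevolent, neutral, or generous partners) maps onto returned amounts; the multiplier and return fractions are illustrative assumptions, as the abstract does not report the game's actual parameters.

```python
# One round of an iterative Trust Game (illustrative parameters).

MULTIPLIER = 3  # the invested amount is multiplied before the partner splits it

RETURN_FRACTION = {
    "malevolent": 0.10,  # returns far less than was invested
    "neutral": 1 / 3,    # returns roughly the amount invested
    "generous": 0.50,    # returns half of the multiplied pot
}

def play_round(investment, partner_type):
    """Investor sends `investment`; it is multiplied, and the partner
    returns a type-dependent share of the resulting pot."""
    pot = investment * MULTIPLIER
    returned = pot * RETURN_FRACTION[partner_type]
    return {"investor_payoff": returned - investment,
            "partner_payoff": pot - returned}

print(play_round(10, "generous"))  # investor nets 15 - 10 = 5
```

Trust is then operationalized as how much the investor chooses to send on each round, which is why generous reciprocity can build trust over repeated play.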
Adoption of AI-enabled mental health wearables in India: The roles of psychological assurance and algorithmic credibility
Pub Date : 2026-03-01 Epub Date: 2026-02-09 DOI: 10.1016/j.chbah.2026.100259
Sweeta Agrawal, Abayomi O. Agbeyangi
This study examines the willingness to adopt AI-based wearable devices for mental health diagnosis, with a focus on the importance of psychological safety and algorithmic trust. It highlights the need for early and ongoing diagnosis of mental health issues, as AI-enabled wearable devices with advanced digital biomarkers and machine learning can help achieve this goal. Unlike studies on trust and TAM in digital health that focus on the perceived usefulness of the system, this study is the first to show that psychological safety is the dominant factor, with stronger effects on adoption, illustrating a unique trust-emotion mechanism in AI-assisted wearables for mental health. Developing and validating a multidimensional adoption model, this study is grounded in the Technology Acceptance Model (TAM), extended with trust-based constructs and the Information Systems Success Model. Data were collected through a cross-sectional online survey of 763 respondents from urban and semi-urban areas in the Indian states of Odisha, Maharashtra, Delhi, and Tamil Nadu, representing a range of educational qualifications and occupations. Using SmartPLS 4, the findings suggest that psychological assurance is the dominant predictor of behavioral intention (β = 0.659, p < 0.001, 95% CI [0.608, 0.709]). While algorithmic credibility had no direct effect on behavioral intention (β = −0.023, p = 0.452), a significant indirect pathway through trust was observed. Algorithmic credibility positively influenced psychological assurance (β = 0.276, p < 0.001) and perceived diagnostic accuracy (β = 0.555, p < 0.001), suggesting an indirect effect. Psychological assurance was positively influenced by perceived usefulness (β = 0.301, p < 0.001), and perceived diagnostic accuracy had a small positive effect on behavioural intention (β = 0.062, p = 0.029). Finally, the authors concluded that a range of influencing factors affects perceived diagnostic accuracy, with a moderate effect (β = 0.102, p = 0.003) also being found. These results underscore the importance of examining both the emotional trust aspect and the technical accuracy aspect of AI-enabled mental health wearables. For practitioners and policymakers, the findings highlight the value of focusing on explainable AI, clinician endorsement, and reassurance feedback loops to enhance the adoption of mental health technologies and improve mental health care outcomes.
Citations: 0
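The abstract reports no direct effect of algorithmic credibility on behavioral intention but a positive path through psychological assurance, which implies mediation. A minimal product-of-coefficients sketch of that indirect path, using the betas quoted in the abstract (the study itself would estimate and test such a path with bootstrapped confidence intervals in SmartPLS):

```python
# Indirect effect of algorithmic credibility on behavioral intention
# via psychological assurance, computed from the path coefficients
# reported in the abstract (product-of-coefficients method).
beta_credibility_to_assurance = 0.276  # algorithmic credibility -> psychological assurance
beta_assurance_to_intention = 0.659    # psychological assurance -> behavioral intention

indirect_effect = beta_credibility_to_assurance * beta_assurance_to_intention
print(round(indirect_effect, 3))  # 0.182
```

Because the direct path is near zero (β = −0.023, n.s.), this indirect term is essentially the total effect of credibility on intention in the model.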
Multimodal robotic storytelling integrating sound effects and background music
Pub Date : 2026-03-01 Epub Date: 2025-12-16 DOI: 10.1016/j.chbah.2025.100248
Sophia C. Steinhaeusser, Sophia Maier, Birgit Lugrin
Music can induce emotions and is often used to enhance emotional experiences of storytelling media, while sound effects can convey information on a story’s environmental setting. While these non-speech sounds are well-integrated into traditional media, their use in newer forms such as robotic storytelling is still developing. To address this gap, we developed guidelines for emotion-inducing music based on theoretical knowledge from music theory, psychology, and media studies, and validated them in an online perception study. Subsequently, a laboratory prestudy compared the effects of the music’s source during robotic storytelling, finding no significant differences between the robotic storyteller and an external loudspeaker. Building on these results, our main study compared storytelling with added background music, sound effects, a combination of both, and a control condition without non-speech sounds. Results showed that while subjective evaluations of presentation liking and qualitative feedback did not significantly differ, background music alone yielded the best outcomes on standardized measures, enhancing transportation, cognitive absorption, emotion induction, and objectively assessed attention-related affects. These findings support incorporating emotion-inducing background music into robotic storytelling to enhance its immersive and emotional effects.
Citations: 0
Implicit neural measures of trust in artificial intelligence
Pub Date : 2026-03-01 Epub Date: 2026-02-12 DOI: 10.1016/j.chbah.2026.100274
Tobias Feldmann-Wüstefeld, Eva Wiese
Trust in AI systems is critical for effective collaboration, yet traditional measures—such as self-reports and behavioral proxies—are limited in capturing its dynamic and latent nature. This study introduces the contralateral delay activity (CDA), a neural marker of visual working memory load, as a novel, implicit index of trust. While the CDA has been widely used in change detection tasks to track memory load, we repurpose it here to measure how many working memory resources users offload to an AI-framed automated partner. Participants performed a lateralized memory task under low and high working memory load, collaborating with an AI whose reliability was experimentally manipulated across three phases: trust formation, violation, and restoration. The system was a rule-based automated agent framed to participants as an AI collaborator, as the psychological effects of AI framing are central to trust and offloading behavior. In dyadic trials, where the AI was responsible for one hemifield, the CDA amplitude served as an index of how much information participants chose to maintain themselves versus offload to the AI. As AI reliability increased, CDA amplitude rose, indicating greater trust and reliance. When AI reliability dropped during the violation phase, participants encoded more from both hemifields, and CDA amplitude declined. During trust restoration, CDA amplitude returned to pre-violation levels, indicating renewed reliance—though it did not match the high amplitude of solo trials, suggesting lingering mistrust. Behavioral measures (e.g., reliance, compliance, response time) tracked the CDA dynamics but lacked the resolution and specificity of the CDA. Together, these results establish the CDA as a powerful neural index of dynamic cognitive offloading that closely tracks behavioral and reliability-based indicators of trust. It captures trial-by-trial fluctuations in offloading behavior that reflect users’ evolving confidence in AI assistance, offering a continuous, covert, and cognitively grounded measure of trust in interactive settings.
Citations: 0
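The CDA itself is conventionally computed as the contralateral-minus-ipsilateral voltage difference at posterior electrodes, averaged over the retention interval. A minimal numpy sketch of that computation; the sampling rate and delay window below are illustrative assumptions, not the study's actual recording parameters.

```python
import numpy as np

FS = 250                   # sampling rate in Hz (assumed)
DELAY_WINDOW = (0.4, 1.0)  # seconds after memory-array onset (assumed)

def cda_amplitude(contra_erp, ipsi_erp, fs=FS, window=DELAY_WINDOW):
    """Contralateral-minus-ipsilateral ERP difference (microvolts),
    averaged over the delay window. Inputs are 1-D time courses
    time-locked to memory-array onset."""
    start, stop = (int(t * fs) for t in window)
    diff = np.asarray(contra_erp)[start:stop] - np.asarray(ipsi_erp)[start:stop]
    return float(np.mean(diff))

# A larger (more negative) CDA indicates more items held in one's own
# working memory -- here, less offloading to the AI partner.
```

On this reading, the study's dyadic trials treat a shrinking CDA as behavioral offloading to the AI, i.e., an implicit signature of trust.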