
Latest publications in Computers in Human Behavior: Artificial Humans

Corrigendum to “From speaking like a person to being personal: The effects of personalized, regular interactions with conversational agents” [Comput. Hum. Behav.: Artificial Humans (2024) 100030]
Pub Date : 2025-08-01 DOI: 10.1016/j.chbah.2025.100178
Theo Araujo , Nadine Bol
As human-AI interactions become more pervasive, conversational agents are increasingly relevant in our communication environment. While a rich body of research investigates the consequences of one-shot, single interactions with these agents, knowledge is still scarce on how these consequences evolve across regular, repeated interactions in which these agents make use of AI-enabled techniques to deliver increasingly personalized conversations and recommendations. By means of a longitudinal experiment (N = 179) with an agent able to personalize a conversation, this study sheds light on how perceptions – about the agent (anthropomorphism and trust), the interaction (dialogue quality and privacy risks), and the information (relevance and credibility) – and behavior (self-disclosure and recommendation adherence) evolve across interactions. The findings highlight the role of the interplay between system-initiated personalization and repeated exposure in this process, suggesting the importance of considering the role of AI in communication processes in a dynamic manner.
Citations: 0
Do truthfulness notifications influence perceptions of AI-generated political images? A cognitive investigation with EEG
Pub Date : 2025-07-22 DOI: 10.1016/j.chbah.2025.100185
Colin Conrad , Anika Nissen , Kya Masoumi , Mayank Ramchandani , Rafael Fecury Braga , Aaron J. Newman
Political misinformation is a growing problem for democracies, partly due to the rise of widely accessible artificial intelligence-generated content (AIGC). In response, social media platforms are increasingly considering explicit AI content labeling, though the evidence to support the effectiveness of this approach has been mixed. In this paper, we discuss two studies which shed light on antecedent cognitive processes that help explain why and how AIGC labeling impacts user evaluations in the specific context of AI-generated political images. In the first study, we conducted a neurophysiological experiment with 26 participants using EEG event-related potentials (ERPs) and self-report measures to gain deeper insights into the brain processes associated with the evaluations of artificially generated political images and AIGC labels. In the second study, we embedded some of the stimuli from the EEG study into replica YouTube recommendations and administered them to 276 participants online. The results from the two studies suggest that AI-generated political images are associated with heightened attentional and emotional processing. These responses are linked to perceptions of humanness and trustworthiness. Importantly, trustworthiness perceptions can be impacted by effective AIGC labels. We found effects traceable to the brain’s late-stage executive network activity, as reflected by patterns of the P300 and late positive potential (LPP) components. Our findings suggest that AIGC labeling can be an effective approach for addressing online misinformation when the design is carefully considered. Future research could extend these results by pairing more photorealistic stimuli with ecologically valid social-media tasks and multimodal observation techniques to refine label design and personalize interventions across demographic segments.
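As a rough illustration of the ERP measures the abstract mentions (P300 and late positive potential), the sketch below averages amplitude within each component's time window per condition. It is not taken from the paper: the array shapes, window boundaries, and condition labels are assumptions made purely for illustration.

```python
# Illustrative sketch (not from the paper): mean ERP amplitude in P300 and
# LPP time windows from already-epoched EEG data, averaged per condition.
import numpy as np

def mean_amplitude(epochs, times, window):
    """epochs: (n_trials, n_channels, n_samples) in microvolts;
    times: (n_samples,) in seconds; window: (start, end) in seconds."""
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, :, mask].mean(axis=(1, 2))  # one value per trial

# Hypothetical epoched data: 40 trials, 32 channels, 700 samples (-0.2 to 1.2 s)
rng = np.random.default_rng(0)
times = np.linspace(-0.2, 1.2, 700)
labeled = rng.normal(0, 5, (40, 32, 700))    # trials shown with an AIGC label
unlabeled = rng.normal(0, 5, (40, 32, 700))  # trials shown without a label

for name, window in {"P300": (0.30, 0.50), "LPP": (0.50, 0.90)}.items():
    diff = mean_amplitude(labeled, times, window).mean() - \
           mean_amplitude(unlabeled, times, window).mean()
    print(f"{name} labeled-minus-unlabeled difference: {diff:.2f} µV")
```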
Citations: 0
The emotional cost of AI chatbots in education: Who benefits and who struggles?
Pub Date : 2025-07-11 DOI: 10.1016/j.chbah.2025.100181
Justin W. Carter, Justin T. Scott, John D. Barrett
Recent advancements in large language models have enabled the development of advanced chatbots, offering new opportunities for personalized learning and academic support that could transform the way students learn. Despite their growing popularity and promising benefits, there is limited understanding of their psychological impact. Accordingly, this study examined the effects of chatbot usage on students' positive and negative affect and considered the moderating role of familiarity. Using a pre-post control group design, undergraduate students were divided into two groups to complete an assignment. Both groups received the same task and differed only in whether they were instructed to use an AI chatbot. Students who used a chatbot reported significantly lower positive affect, with no significant difference in negative affect. Importantly, familiarity with chatbots moderated changes in positive affect such that students with more familiarity with chatbots reported smaller declines. These findings showcase chatbots’ two-sided effects. While the tools may prove empowering when used effectively, they can also diminish the positive aspects of completing assignments for those with less familiarity. These findings underscore the behavioral complexity of AI integration by highlighting how familiarity moderates affective outcomes and how chatbot use may reduce positive emotional engagement without increasing negative affect. Integrating AI tools in education requires not just access and training, but a nuanced understanding of how student behavior and emotional well-being are shaped by their interaction with intelligent systems.
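A minimal sketch of the kind of pre-post moderation analysis the abstract describes, in which familiarity moderates the effect of chatbot use on positive affect. It is not the study's analysis code: the column names, effect sizes, and simulated data are assumptions.

```python
# Illustrative sketch (not from the study's materials): a pre-post moderation
# model in which chatbot familiarity moderates the effect of condition on
# post-test positive affect, controlling for pre-test positive affect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "condition": rng.integers(0, 2, n),      # 1 = used AI chatbot, 0 = control
    "familiarity": rng.normal(3.0, 1.0, n),  # self-reported familiarity (1-5)
    "pre_pa": rng.normal(3.5, 0.6, n),       # pre-test positive affect
})
# Simulate a decline in positive affect for chatbot users that shrinks with familiarity
df["post_pa"] = (df["pre_pa"]
                 - 0.4 * df["condition"]
                 + 0.1 * df["condition"] * df["familiarity"]
                 + rng.normal(0, 0.3, n))

model = smf.ols("post_pa ~ condition * familiarity + pre_pa", data=df).fit()
print(model.summary().tables[1])  # the condition:familiarity term is the moderation effect
```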
Citations: 0
RVBench: Role values benchmark for role-playing LLMs
Pub Date : 2025-07-09 DOI: 10.1016/j.chbah.2025.100184
Ye Wang , Tong Li , Meixuan Li , Ziyue Cheng , Ge Wang , Hanyue Kang , Yaling Deng , Hongjiang Xiao , Yuan Zhang
With the explosive development of Large Language Models (LLMs), the demand for role-playing agents has greatly increased to promote applications such as personalized digital companions and artificial society simulation. In LLM-driven role-playing, the values of agents lay the foundation for their attitudes and behaviors, thus alignment of values is crucial in enhancing the realism of interactions and enriching the user experience. However, a benchmark for evaluating values in role-playing LLMs is absent. In this study, we built a Role Values Dataset (RVD) containing 25 roles as the ground truth. Additionally, inspired by psychological tests in humans, we proposed a Role Values Benchmark (RVBench) including values rating and values ranking methods to evaluate the values of role-playing LLMs from subjective questionnaires and observed behavior. The values rating method tests the values orientation through the revised Portrait Values Questionnaire (PVQ-RR), which provides a direct and quantitative comparison of the roles to be played. The values ranking method assesses whether the behaviors of agents are consistent with their values’ hierarchical organization when encountering dilemmatic scenarios. Subsequent testing on a selection of both open-source and closed-source LLMs revealed that GLM-4 exhibited values most closely mirroring the roles in the RVD. However, compared to preset roles, there is still a certain gap in the role-playing ability of LLMs, including the consistency, stability and flexibility in value dimensions. These findings point to a pressing need for further research aimed at refining the role-playing capacities of LLMs from a value alignment perspective. The RVD is available at: https://github.com/northwang/RVD.
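A minimal sketch of how a values-ranking comparison of the kind RVBench describes could be scored, using Spearman rank correlation between a role's reference value hierarchy and the ranking recovered from a model's behavior. The value names and rankings below are hypothetical; this is not the RVBench implementation (see the linked repository for the actual dataset).

```python
# Illustrative sketch (not the RVBench implementation): comparing a role's
# reference value ranking with the ranking implied by a model's choices.
from scipy.stats import spearmanr

values = ["benevolence", "achievement", "security", "hedonism", "power"]

# Rank 1 = most important. Reference profile for the role vs. the ranking
# recovered from the role-playing model's behaviour in dilemma scenarios.
role_reference_rank = {"benevolence": 1, "achievement": 2, "security": 3,
                       "hedonism": 4, "power": 5}
model_observed_rank = {"benevolence": 1, "achievement": 3, "security": 2,
                       "hedonism": 5, "power": 4}

ref = [role_reference_rank[v] for v in values]
obs = [model_observed_rank[v] for v in values]
rho, p = spearmanr(ref, obs)
print(f"Rank agreement with the role profile: rho = {rho:.2f} (p = {p:.3f})")
```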
Citations: 0
Self-disclosure to AI: People provide personal information to AI and humans equivalently
Pub Date : 2025-07-09 DOI: 10.1016/j.chbah.2025.100180
Elizabeth R. Merwin , Allen C. Hagen , Joseph R. Keebler , Chad Forbes
As Artificial Intelligence (AI) increasingly emerges as a tool in therapeutic settings, understanding individuals' willingness to disclose personal information to AI versus humans is critical. This study examined how participants chose between self-disclosure-based and fact-based statements when responses were thought to be analyzed by an AI, a human researcher, or kept private. Participants completed forced-choice trials where they selected a self-disclosure-based or fact-based statement for one of the three agent conditions. Results showed that participants were significantly more likely to select self-disclosure-based than fact-based statements. Rates of choosing self-disclosure were similar for the AI and the human researcher, but significantly lower when responses were kept private. Multiple regression analyses revealed that individuals with a higher score on the negative attitude toward AI scale were less likely to choose self-disclosure-based statements across the three agent conditions. Overall, individuals were just as likely to choose to self-disclose to an AI as to a human researcher, and more likely to choose either agent over keeping self-disclosure information private. In addition, personality traits and attitudes toward AI significantly influenced disclosure choices. These findings provide insights into how individual differences impact the willingness to self-disclose information in human-AI interactions and offer a foundation for exploring the feasibility of AI as a clinical and social tool. Future research should expand on these results to further understand self-disclosure behaviors and evaluate AI's role in therapeutic settings.
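A minimal sketch of a regression of the kind the abstract reports, predicting trial-level choice of a self-disclosure-based statement from agent condition and negative attitudes toward AI. It is not the study's analysis: the variable names, logistic specification, and simulated data are assumptions.

```python
# Illustrative sketch (not the study's analysis code): logistic regression on
# whether a participant chose the self-disclosure-based statement on a trial.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_participants, n_trials = 90, 30
attitude = np.repeat(rng.normal(0, 1, n_participants), n_trials)  # standardized score
agent = rng.choice(["ai", "human", "private"], n_participants * n_trials)
# Higher negative attitude lowers the odds of choosing self-disclosure;
# the "private" condition lowers them further.
logit = 0.6 - 0.5 * attitude - 0.7 * (agent == "private")
chose_disclosure = rng.random(n_participants * n_trials) < 1 / (1 + np.exp(-logit))

df = pd.DataFrame({"chose_disclosure": chose_disclosure.astype(int),
                   "attitude": attitude, "agent": agent})
model = smf.logit("chose_disclosure ~ attitude + C(agent)", data=df).fit()
print(model.summary())
```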
Citations: 0
A critical discussion of strategies and ramifications of implementing conversational agents in mental healthcare
Pub Date : 2025-07-08 DOI: 10.1016/j.chbah.2025.100182
Arthur Bran Herbener , Michał Klincewicz , Lily Frank , Malene Flensborg Damholdt
In recent years, there has been growing optimism about the potential of conversational agents, such as chatbots and social robots, in mental healthcare. Their scalability offers a promising solution to some of the key limitations of the dominant model of treatment in Western countries. However, while recent experimental research provides grounds for cautious optimism, the integration of conversational agents into mental healthcare raises significant clinical and ethical challenges, particularly concerning the partial or full replacement of human practitioners. Overall, this theoretical paper examines the clinical and ethical implications of deploying conversational agents in mental health services as partial and full replacement of human practitioners. On the one hand, we outline how these agents can circumvent core treatment barriers through stepped care, blended care, and a personalized medicine approach. On the other hand, we argue that the partial and full substitution of human practitioners can have profound consequences for the ethical landscape of mental healthcare, potentially undermining patients’ rights and safety. By making this argument, this work extends prior literature by specifically considering how different levels of implementation of conversational agents in healthcare present both opportunities and risks. We argue for the urgent need to establish regulatory frameworks to ensure that the integration of conversational agents into mental healthcare is both safe and ethically sound.
Citations: 0
Navigating relationships with GenAI chatbots: User attitudes, acceptability, and potential
Pub Date : 2025-07-08 DOI: 10.1016/j.chbah.2025.100183
Laura M. Vowels , Rachel R.R. Francois-Walcott , Maëlle Grandjean , Joëlle Darwiche , Matthew J. Vowels
Despite the growing adoption of GenAI chatbots in health and well-being contexts, little is known about public attitudes toward their use for relationship support or the factors shaping acceptance and effectiveness. This study aims to address the research gap across three studies. Study 1 involved five focus groups with 30 young people to gauge general attitudes toward GenAI chatbots in relationship contexts. Study 2 evaluated user experiences during a single relationship intervention session with 20 participants. Study 3 quantitatively measured changes in attitudes toward GenAI chatbots and online interventions among 260 participants, assessed before, immediately after, and two weeks following their interaction with a GenAI chatbot or a writing task. Three main themes emerged in Studies 1 and 2: Accessible First-Line Treatment, Artificial Advice for Human Connection, and Internet Archive. Additionally, Study 1 revealed themes of Privacy vs. Openness and Are We in a Black Mirror Episode?, while Study 2 uncovered themes of Exceeding Expectations and Supporting Neurodivergence. The Study 3 results indicated that GenAI chatbot interactions reduced effort expectancy and, in the short term, increased acceptance of and decreased objections to GenAI chatbots, though these effects were not sustained at the two-week follow-up. Both intervention types improved general attitudes toward online interventions, suggesting that exposure can enhance the uptake of digital health tools. This research underscores the evolving role of GenAI chatbots in augmenting therapeutic practices, highlighting their potential for personalized, accessible, and effective relationship interventions in the digital age.
Citations: 0
Knowledge cues to human origins facilitate self-disclosure during interactions with chatbots
Pub Date : 2025-06-20 DOI: 10.1016/j.chbah.2025.100174
Gabriella Warren-Smith , Guy Laban , Emily-Marie Pacheco , Emily S. Cross
Chatbots are emerging as a self-management tool for supporting mental health, appearing across commercial and healthcare settings. Whilst chatbots are valued for their perceived lack of judgement, they lack the emotional intelligence and empathy to build trust and rapport with users. A resulting debate questions whether chatbots facilitate or hinder self-disclosure. This study presents a within-subjects experimental design investigating the parameters of self-disclosure in social interactions with chatbots in an open domain. Participants engaged in two short social interactions with two chatbots: one with the knowledge they were conversing with a chatbot and one with the false belief they were conversing with a human. A significant difference was found between the two treatments: participants disclosed more to the chatbot that was introduced as a human, perceived themselves to do so, found this chatbot more comforting, and attributed to it higher levels of agency and experience than to the chatbot that was introduced as a chatbot. However, participants’ disclosures to the chatbot that was introduced as a chatbot were more sentimental, and they found it friendlier than the chatbot that was introduced as a human. These results indicate that whilst cues to a chatbot’s human origins enhance self-disclosure and perceptions of mind, when the artificial agent is perceived against one’s social expectations, it may be viewed negatively on social factors that require higher cognitive processing.
Citations: 0
Evaluating the Intelligence of large language models: A comparative study using verbal and visual IQ tests
Pub Date : 2025-06-18 DOI: 10.1016/j.chbah.2025.100170
Sherif Abdelkarim , David Lu , Dora-Luz Flores , Susanne Jaeggi , Pierre Baldi
Large language models (LLMs) excel on many specialized benchmarks, yet their general-reasoning ability remains opaque. We therefore test 18 models – including GPT-4, Claude 3 and Gemini Pro – on a 14-section IQ suite spanning verbal, numerical and visual puzzles and add a “multi-agent reflection” variant in which one model answers while others critique and revise. Results replicate known patterns: a strong bias towards verbal vs numerical reasoning (GPT-4: 79% vs 53% accuracy), a pronounced modality gap (text-IQ ≈ 125 vs visual-IQ ≈ 103), and persistent failure on abstract arithmetic (≤ 20% on missing-number tasks). Scaling lifts mean IQ from 89 (tiny models) to 131 (large models), but gains are non-uniform, and reflection yields only modest extra points for frontier systems. Our contributions include: (1) proposing an evaluation framework for LLM “intelligence” using both verbal and visual IQ tasks; (2) analyzing how multi-agent setups with varying actor and critic sizes affect problem-solving performance; (3) analyzing how model size and multi-modality affect performance across diverse reasoning tasks; and (4) highlighting the value of IQ tests as a standardized, human-referenced benchmark that enables longitudinal comparison of LLMs’ cognitive abilities relative to human norms. We further discuss the limitations of IQ tests as an AI benchmark and outline directions for more comprehensive evaluation of LLM reasoning capabilities.
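A minimal sketch of the conventional IQ scaling (IQ = 100 + 15z) applied to per-section accuracies, which is one way scores like those quoted above can be put on an IQ-style scale. The human norm means and standard deviations below are invented; this is not the authors' scoring procedure.

```python
# Illustrative sketch (not the authors' scoring code): converting raw section
# accuracies into IQ-style standard scores with IQ = 100 + 15 * z, where z is
# computed against hypothetical human norms.
import numpy as np

human_norm = {"verbal": (0.62, 0.12),     # (mean accuracy, SD) per section;
              "numerical": (0.55, 0.15),  # values are assumed for illustration
              "visual": (0.58, 0.14)}

model_accuracy = {"verbal": 0.79, "numerical": 0.53, "visual": 0.60}

iq_scores = {}
for section, (mu, sd) in human_norm.items():
    z = (model_accuracy[section] - mu) / sd
    iq_scores[section] = 100 + 15 * z

overall = float(np.mean(list(iq_scores.values())))
print({k: round(v, 1) for k, v in iq_scores.items()}, "overall:", round(overall, 1))
```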
Citations: 0
Development and validation of a short AI literacy test (AILIT-S) for university students
Pub Date : 2025-06-16 DOI: 10.1016/j.chbah.2025.100176
Marie Hornberger , Arne Bewersdorff , Daniel S. Schiff , Claudia Nerdel
Fostering AI literacy is an important goal in higher education in many disciplines. Assessing AI literacy can inform researchers and educators on current AI literacy levels and provide insights into the effectiveness of learning and teaching in the field of AI. It can also inform decision-makers and policymakers about the successes and gaps with respect to AI literacy within certain institutions, populations, or countries, for example. However, most of the available AI literacy tests are quite long and time-consuming. A short test of AI literacy would instead enable efficient measurement and facilitate better research and understanding. In this study, we develop and validate a short version of an existing validated AI literacy test. Based on a sample of 1,465 university students across three Western countries (Germany, UK, US), we select a subset of items according to content validity, coverage of different difficulty levels, and ability to discriminate between participants. The resulting short version, AILIT-S, consists of 10 items and can be used to assess AI literacy in under 5 minutes. While the shortened test is less reliable than the long version, it maintains high construct validity and has high congruent validity. We offer recommendations for researchers and practitioners on when to use the long or short version.
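A minimal sketch of short-form item selection based on classical item statistics, in the spirit of the criteria the abstract lists (coverage of difficulty levels and ability to discriminate). The simulated responses, band cut-offs, and quotas are assumptions, not the AILIT-S procedure.

```python
# Illustrative sketch (not the AILIT-S selection procedure): shortlisting items
# by difficulty (proportion correct) and discrimination (corrected item-total
# correlation), while keeping coverage across difficulty bands. Data simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_persons, n_items = 500, 30
ability = rng.normal(0, 1, n_persons)
item_difficulty = np.linspace(-1.5, 1.5, n_items)
# Simulated 0/1 responses from a simple logistic (Rasch-like) model
prob = 1 / (1 + np.exp(-(ability[:, None] - item_difficulty[None, :])))
responses = pd.DataFrame((rng.random((n_persons, n_items)) < prob).astype(int),
                         columns=[f"item_{i}" for i in range(n_items)])

stats = pd.DataFrame({"difficulty": responses.mean()})  # proportion correct per item
rest_score = responses.sum(axis=1).values[:, None] - responses.values
stats["discrimination"] = [np.corrcoef(responses.iloc[:, i], rest_score[:, i])[0, 1]
                           for i in range(n_items)]
stats["band"] = pd.cut(stats["difficulty"], [0, 0.4, 0.7, 1.0],
                       labels=["hard", "medium", "easy"])

# Keep the most discriminating items per band (here: 4 hard, 3 medium, 3 easy)
quota = {"hard": 4, "medium": 3, "easy": 3}
short_form = pd.concat(
    stats[stats["band"] == b].nlargest(k, "discrimination") for b, k in quota.items()
)
print(short_form)
```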
Citations: 0