
Latest publications in Computers in Human Behavior: Artificial Humans

Fine for others but not for me: The role of perspective in patients’ perception of artificial intelligence in online medical platforms
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100046
Matthias F.C. Hudecek , Eva Lermer , Susanne Gaube , Julia Cecil , Silke F. Heiss , Falk Batz

In the near future, online medical platforms enabled by artificial intelligence (AI) technology will become increasingly prevalent, allowing patients to use them directly without having to consult a human doctor. However, there is still little research on such AI-enabled tools from the patient's perspective. We therefore conducted a preregistered 2x3 between-subjects experiment (N = 266) to examine the influence of perspective (oneself vs. average person) and source of advice (AI vs. male physician vs. female physician) on the perception of a medical diagnosis and corresponding treatment recommendations. Robust ANOVAs showed a statistically significant interaction between source of advice and perspective for all three dependent variables (evaluation of the diagnosis, evaluation of the treatment recommendation, and risk perception): people preferred the advice of human doctors over an AI when it concerned their own situation, whereas participants made no distinction between the sources of medical advice when assessing the situation of an average person. Our study contributes to a better understanding of the patient's perspective on modern digital health technology. As our findings suggest that people judge AI-enabled diagnostic tools more critically when their own health is concerned, future research should examine the factors that influence this perception.
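For readers who want to see the shape of this analysis, below is a minimal sketch of a 2x3 between-subjects ANOVA on simulated data. It is not the authors' code: the paper reports robust ANOVAs (typically trimmed-means procedures such as those in R's WRS2 package), so the classical statsmodels ANOVA here only illustrates the design and the source-by-perspective interaction term; variable names, cell sizes, and effect sizes are assumptions.

```python
# Illustrative sketch only: classical two-way ANOVA on simulated ratings.
# The study itself used robust ANOVAs; this shows the 2x3 design and the
# perspective-by-source interaction that carried the key result.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_cell = 45  # roughly N = 266 spread over six cells (assumed)
rows = []
for perspective in ("self", "average_person"):
    for source in ("AI", "male_physician", "female_physician"):
        # Assumed pattern: human advice rated higher only for oneself
        shift = 0.8 if (perspective == "self" and source != "AI") else 0.0
        for rating in rng.normal(4.0 + shift, 1.0, n_per_cell):
            rows.append((perspective, source, rating))
df = pd.DataFrame(rows, columns=["perspective", "source", "diagnosis_eval"])

model = smf.ols("diagnosis_eval ~ C(perspective) * C(source)", data=df).fit()
print(anova_lm(model, typ=2))  # interaction row tests perspective x source
```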

Citations: 0
User-driven prioritization of ethical principles for artificial intelligence systems
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100055
Yannick Fernholz , Tatiana Ermakova , B. Fabian , P. Buxmann

Despite the progress of Artificial Intelligence (AI) and its contribution to the advancement of human society, the prioritization of ethical principles from the viewpoint of AI's users has received little attention and empirical investigation. Such prioritization is important for developing appropriate safeguards and for increasing the acceptance of AI-mediated technologies among all members of society.

In this research, we collected, integrated, and prioritized ethical principles for AI systems with respect to their relevance in different real-life application scenarios.

First, an overview of ethical principles for AI was systematically derived from various academic and non-academic sources. Our results clearly show that transparency, justice and fairness, non-maleficence, responsibility, and privacy are most frequently mentioned in this corpus of documents.

Next, an empirical survey to systematically identify users’ priorities was designed and conducted in the context of selected scenarios: AI-mediated recruitment (human resources), predictive policing, autonomous vehicles, and hospital robots.

We anticipate that the resulting ranking can serve as a valuable basis for formulating requirements for AI-mediated solutions and for creating AI algorithms that prioritize users' needs. Our target audience includes everyone who will be affected by AI systems, e.g., policy makers, algorithm developers, and system managers, as our ranking clearly depicts users' awareness regarding AI ethics.
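As an illustration of the corpus step described above, the sketch below counts how many documents mention each principle. The keyword lists and the two toy documents are assumptions for illustration only; the authors' actual coding scheme is not reproduced here.

```python
# Toy corpus-frequency count: in how many documents does each ethical
# principle appear? Keyword lists are invented for illustration.
from collections import Counter

PRINCIPLE_KEYWORDS = {
    "transparency": ["transparency", "transparent", "explainab"],
    "justice and fairness": ["justice", "fairness", "bias"],
    "non-maleficence": ["non-maleficence", "harm", "safety"],
    "responsibility": ["responsibility", "accountab"],
    "privacy": ["privacy", "data protection"],
}

def principles_in(document: str) -> set[str]:
    text = document.lower()
    return {p for p, kws in PRINCIPLE_KEYWORDS.items()
            if any(kw in text for kw in kws)}

corpus = [  # stand-ins for academic and non-academic guideline documents
    "Guideline A stresses transparency and accountability of AI systems.",
    "Report B discusses privacy, algorithmic bias, and harm mitigation.",
]
counts = Counter(p for doc in corpus for p in principles_in(doc))
for principle, n in counts.most_common():
    print(f"{principle}: mentioned in {n} document(s)")
```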

Citations: 0
Artificial empathy in healthcare chatbots: Does it feel authentic?
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100067
Lennart Seitz

Implementing empathy in healthcare chatbots is considered a promising way to create a sense of human warmth. However, existing research frequently overlooks the multidimensionality of empathy, leaving it insufficiently understood whether artificial empathy is perceived in the same way as interpersonal empathy. This paper argues that implementing experiential expressions of empathy may have unintended negative consequences because they can feel inauthentic. Instead, providing instrumental support could be more suitable for modeling artificial empathy, as it aligns better with computer-like schemas towards chatbots. Two experimental studies using healthcare chatbots examine the effect of empathetic (feeling with), sympathetic (feeling for), and behavioral-empathetic (empathetic helping) vs. non-empathetic responses on perceived warmth, perceived authenticity, and their consequences for trust and usage intentions. Results reveal that any kind of empathy (vs. no empathy) enhances perceived warmth, resulting in higher trust and usage intentions. As hypothesized, empathetic and sympathetic responses reduce the chatbot's perceived authenticity, suppressing this positive effect in both studies. A third study does not replicate this backfiring effect in human-human interactions. This research thus highlights that empathy does not apply equally to human-bot interactions. It further introduces the concept of ‘perceived authenticity’ and demonstrates that distinctively human attributes might backfire by feeling inauthentic in interactions with chatbots.
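The suppression mechanism described above can be made concrete with a small simulation: empathy raises trust directly but lowers perceived authenticity, which itself feeds trust. The bootstrap mediation test from the pingouin package is an illustrative stand-in for the paper's actual analysis, and every coefficient below is an assumption.

```python
# Simulated suppression effect: the indirect path through perceived
# authenticity works against the positive direct effect of empathy on trust.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
n = 200
empathetic = rng.integers(0, 2, n)  # 0 = non-empathetic, 1 = empathetic reply
authenticity = 5.0 - 0.8 * empathetic + rng.normal(0, 1, n)  # assumed drop
trust = 3.0 + 0.5 * empathetic + 0.6 * authenticity + rng.normal(0, 1, n)

df = pd.DataFrame({"empathetic": empathetic,
                   "authenticity": authenticity,
                   "trust": trust})
# The 'Indirect' row should come out negative: a suppression pattern
print(pg.mediation_analysis(data=df, x="empathetic", m="authenticity",
                            y="trust", n_boot=1000, seed=42))
```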

Citations: 0
How chatbots perceive sexting by adolescents
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100068
Tsameret Ricon

This study compares the perceptions and attitudes of two AI chatbots – Claude and ChatGPT – towards sexting by adolescents. Sexting, defined as sharing sexually explicit messages or images, is increasingly common among teenagers and has sparked ethical debates on consent, privacy, and potential harm. The study employs qualitative content analysis to investigate how AI systems address the complex issues related to sexting.

The chatbots were queried in December 2023 about the legitimacy of sexting in adolescent relationships, the non-consensual sharing of sexts, and privacy risks. Their responses were analyzed for themes related to appropriateness, potential harm, and the specificity of the recommendations the chatbots offered.

Key differences emerged in their ethical stances. Claude declined to render definitive value judgments, instead emphasizing consent, evaluating risks versus rewards, and seeking to prevent harm by providing concrete advice. ChatGPT was more abstract, stating that appropriateness depends on societal norms. While Claude provided a harm-centric framing of potential emotional, reputational, and legal consequences of activities such as nonconsensual “revenge porn,” ChatGPT used more tentative language. Finally, Claude offered actionable guidance aligned with research insights, while ChatGPT reiterated the need to respect consent without clearly outlining the next steps.

Overall, Claude demonstrated greater nuance in reasoning about ethical sexting issues, while ChatGPT showed greater subjectivity tied to societal standards.
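A minimal sketch of such a query procedure appears below, written against today's Python SDKs for both vendors; the study's December 2023 queries were presumably run through the chat interfaces of the time, and the prompt wording and model names here are illustrative assumptions rather than the author's materials.

```python
# Illustrative query harness: send one prompt to both assistants and print
# the replies so they can be coded qualitatively. Requires ANTHROPIC_API_KEY
# and OPENAI_API_KEY in the environment; model names are assumptions.
import anthropic
from openai import OpenAI

PROMPT = ("Is sexting a legitimate part of adolescent romantic "
          "relationships? Please discuss consent and privacy risks.")

def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()
    msg = client.messages.create(model="claude-3-opus-20240229",
                                 max_tokens=500,
                                 messages=[{"role": "user", "content": prompt}])
    return msg.content[0].text

def ask_chatgpt(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

for name, ask in (("Claude", ask_claude), ("ChatGPT", ask_chatgpt)):
    print(f"--- {name} ---\n{ask(PROMPT)}\n")  # transcripts go to the coders
```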

Citations: 0
Atypical responses of job candidates in chatbot job interviews and their possible triggers
Pub Date : 2023-12-12 DOI: 10.1016/j.chbah.2023.100038
Helena Řepová, Pavel Král, Jan Zouhar

Recruiters have observed increased verbal abuse and other non-standard behavior in chatbot job interviews. However, current knowledge about such behavior, which we term atypical responses, is limited. The purpose of this research is to explore and classify the atypical responses of job candidates and to explain what triggers them, across two studies. Study 1 identified atypical candidate responses in chatbot job interviews by applying content analysis to transcripts of authentic job interviews (N = 6583). A multi-stage process classified atypical responses into six categories: testing the chatbot's capabilities, verbal abuse, testing the chatbot's reactions, further conversation, sex offers, and reverse discrimination. Study 2 tested the triggers of atypical reactions in fictitious chatbot job interviews. Several triggers proved to induce atypical reactions: for example, lower company attractiveness led to testing of the chatbot's capabilities and reactions, and additional stress and negative well-being induced responses containing insults.
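To make the six-category scheme concrete, the sketch below tags candidate responses with simple keyword heuristics. This is only an illustration: the study relied on a multi-stage human content analysis, and every cue list here is invented.

```python
# Toy tagger for the six atypical-response categories; cue lists are
# assumptions, not the authors' coding manual.
CATEGORY_CUES = {
    "testing the chatbot's capabilities": ["are you a robot", "what can you do"],
    "verbal abuse": ["stupid", "idiot", "useless"],
    "testing the chatbot's reactions": ["what if i lie", "just kidding"],
    "further conversation": ["how is your day", "tell me about yourself"],
    "sex offers": ["sexy", "date me"],
    "reverse discrimination": ["i won't talk to a bot", "give me a human"],
}

def tag_response(response: str) -> list[str]:
    text = response.lower()
    return [cat for cat, cues in CATEGORY_CUES.items()
            if any(cue in text for cue in cues)]

print(tag_response("You are useless. Are you a robot? Give me a human!"))
# ["testing the chatbot's capabilities", 'verbal abuse', 'reverse discrimination']
```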

Citations: 0
How humanlike is enough?: Uncover the underlying mechanism of virtual influencer endorsement
Pub Date : 2023-12-11 DOI: 10.1016/j.chbah.2023.100037
Yanni Ma , Jingren Li

Social media and computer-mediated communication technologies have given rise to the emergence of virtual influencers and created a new digital landscape for online interactions. Although an increasing number of virtual influencers (computer-generated agents) are developing partnerships with organizations and brands to connect with social media users, there is a paucity of research exploring the mechanism underlying virtual influencer endorsement. With an online experiment (N = 320), this study investigated the effects of using virtual influencers in branding. In particular, we examined how variations in humanlike appearance affect two-dimensional anthropomorphism and para-social interaction in the communication process. Overall, results showed that respondents perceived higher levels of mindful anthropomorphism and stronger para-social interactions with virtual influencers that had a more humanlike appearance, leading to more favorable brand attitudes and higher purchase intentions. No significant difference in branding effects was found between a highly humanlike virtual influencer and a real human. In addition, branding effects did not differ between a moderately humanlike virtual influencer and either a highly humanlike one or a real human endorser in terms of mindless anthropomorphism. Findings provide both theoretical and practical insights into using virtual influencers in branding.

Citations: 0
Can robots do therapy?: Examining the efficacy of a CBT bot in comparison with other behavioral intervention technologies in alleviating mental health symptoms
Pub Date : 2023-12-08 DOI: 10.1016/j.chbah.2023.100035
Laura Eltahawy , Todd Essig , Nils Myszkowski , Leora Trub

Artificial intelligence therapy bots are gaining traction in the psychotherapy marketplace. Yet the only existing study examining the efficacy of a therapy bot lacks meaningful controls for comparison in claiming its effectiveness for treating depression. The current study examines the efficacy of Woebot against three control conditions: ELIZA, a basic (non-“smart”) conversational bot; a journaling app; and a passive psychoeducation control group. In a sample of 65 young adults, a repeated-measures ANOVA failed to detect differences in symptom reduction between active and passive groups. In follow-up analyses using paired-samples t-tests, ELIZA users experienced mental health improvements with the largest effect sizes across all mental health outcomes, followed by daily journaling, then Woebot, and finally psychoeducation. Findings reveal that Woebot does not offer benefits above and beyond other self-help behavioral intervention technologies. They underscore that using a no-treatment control group study design to market clinical services should no longer be acceptable, nor should it serve as a precursor to marketing a chatbot as functionally equivalent to psychotherapy. Doing so creates unnecessary risk for consumers of psychotherapy and undermines the clinical value of robotic therapeutics that rigorous research could prove effective at addressing mental health problems.
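The follow-up pattern (paired-samples t-tests with an effect size per condition) looks roughly like the sketch below on synthetic pre/post scores. Cohen's dz is a common effect-size choice for paired designs, though the paper does not state which variant it computed; all numbers here are simulated.

```python
# Paired-samples t-test plus Cohen's dz on synthetic symptom scores for one
# condition; in the study this would be repeated per group and outcome.
import numpy as np
from scipy import stats

def paired_t_with_dz(pre: np.ndarray, post: np.ndarray):
    t, p = stats.ttest_rel(pre, post)
    diff = pre - post                    # positive diff = symptom reduction
    dz = diff.mean() / diff.std(ddof=1)  # Cohen's dz for paired designs
    return t, p, dz

rng = np.random.default_rng(7)
pre = rng.normal(20, 5, 16)        # assumed baseline scores, ~65/4 per group
post = pre - rng.normal(3, 4, 16)  # assumed improvement
t, p, dz = paired_t_with_dz(pre, post)
print(f"t = {t:.2f}, p = {p:.3f}, dz = {dz:.2f}")
```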

Citations: 0
The role of anthropomorphic, x̂enocentric, intentional, and social (AX̂IS) robotics in human-robot interaction
Pub Date : 2023-12-07 DOI: 10.1016/j.chbah.2023.100036
Anshu Saxena Arora , Amit Arora , K. Sivakumar , Vasyl Taras

This research explores the socio-cognitive mechanisms of human intelligence through the lens of anthropomorphic, x̂enocentric, intentional, and social (AX̂IS) robotics. After delving into three pivotal AX̂IS concepts – robotic anthropomorphism, intentionality, and sociality – the study examines their impact on robot likeability and successful human-robot interaction (HRI) implementation. The research introduces the concept of robotic x̂enocentrism (represented by perceived inferiority and social aggrandizement) as a new global dimension in the social robotics literature, positioning it as a higher-order concept that moderates the impact of the pivotal independent variables on robot likeability. Analyzing a sample of 308 respondents in global cross-cultural teams, the study confirms that the pivotal AX̂IS robotics concepts foster robot likeability and successful HRI implementation for both industrial and social robots. Perceived inferiority negatively moderated the relationship between anthropomorphism and robot likeability but positively moderated the relationship between intentionality and robot likeability. However, social aggrandizement did not act as a significant boundary condition. Sociality remained unaffected by the moderating influence of x̂enocentrism. The study concludes by outlining future research directions for AX̂IS robotics.
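Moderation of this kind is commonly tested as an interaction term in a regression model. The sketch below simulates the negative-moderation pattern for perceived inferiority and fits it with statsmodels; the data-generating coefficients are assumptions, and this is not the authors' model.

```python
# Simulated moderation: perceived inferiority weakens the positive
# anthropomorphism -> likeability link, captured by the interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 308  # matches the study's sample size; everything else is assumed
anthro = rng.normal(0, 1, n)
inferiority = rng.normal(0, 1, n)
likeability = (0.5 * anthro - 0.3 * anthro * inferiority
               + rng.normal(0, 1, n))

df = pd.DataFrame({"anthro": anthro, "inferiority": inferiority,
                   "likeability": likeability})
fit = smf.ols("likeability ~ anthro * inferiority", data=df).fit()
print(fit.summary().tables[1])  # anthro:inferiority row is the moderation test
```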

Citations: 0
“There Is something Rotten in Denmark”: Investigating the Deepfake persona perceptions and their Implications for human-centered AI
Pub Date : 2023-12-01 DOI: 10.1016/j.chbah.2023.100031
Ilkka Kaate , Joni Salminen , João M. Santos , Soon-Gyo Jung , Hind Almerekhi , Bernard J. Jansen

Although deepfakes often have a negative connotation due to their social risks, they have the potential to improve HCI, human-centered AI, and user experience (UX). To investigate the impact of deepfakes on persona UX, we conducted an experimental study in which 46 users used a deepfake persona and a human persona to carry out a design task. We collected think-aloud data, observation notes, and survey data. The results of our mixed-method analysis indicate that when users notice glitches in the deepfake personas, these glitches have a detrimental effect on persona UX and task performance; however, not all users identify glitches. Our quantitative analysis of the survey data shows that there are differences in how (a) users perceive deepfakes, (b) users detect deepfake glitches, (c) deepfake glitches affect information comprehension, and (d) deepfake glitches affect task completion. Glitches have the most significant impact on authenticity, persona perception, and task perception variables but less impact on behavioral variables. The results imply that organizations implementing deepfake personas need to address these perceptual challenges before the full potential of deepfake technology can be realized for persona creation.

Citations: 0
ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions
Pub Date : 2023-11-20 DOI: 10.1016/j.chbah.2023.100027
Reza Hadi Mogavi , Chao Deng , Justin Juho Kim , Pengyuan Zhou , Young D. Kwon , Ahmed Hosny Saleh Metwally , Ahmed Tlili , Simone Bassanelli , Antonio Bucchiarone , Sujit Gujar , Lennart E. Nacke , Pan Hui

To foster the development of pedagogically potent and ethically sound AI-integrated learning landscapes, it is pivotal to critically explore the perceptions and experiences of the users immersed in these contexts. In this study, we perform a thorough qualitative content analysis across four key social media platforms. Our goal is to understand the user experience (UX) and views of early adopters of ChatGPT across different educational sectors. The results of our research show that ChatGPT is most commonly used in the domains of higher education, K-12 education, and practical skills training. In social media dialogues, the topics most frequently associated with ChatGPT are productivity, efficiency, and ethics. Early adopters' attitudes towards ChatGPT are multifaceted. On one hand, some users view it as a transformative tool capable of amplifying student self-efficacy and learning motivation. On the other hand, there is a degree of apprehension among concerned users. They worry about a potential overdependence on the AI system, which they fear might encourage superficial learning habits and erode students’ social and critical thinking skills. This dichotomy of opinions underscores the complexity of Human-AI Interaction in educational contexts. Our investigation adds depth to this ongoing discourse, providing crowd-sourced insights for educators and learners who are considering incorporating ChatGPT or similar generative AI tools into their pedagogical strategies.

Citations: 0