
Latest publications in Computers in Human Behavior: Artificial Humans

Artificial empathy in healthcare chatbots: Does it feel authentic?
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100067
Lennart Seitz

Implementing empathy in healthcare chatbots is considered a promising way to create a sense of human warmth. However, existing research frequently overlooks the multidimensionality of empathy, leaving it insufficiently understood whether artificial empathy is perceived similarly to interpersonal empathy. This paper argues that implementing experiential expressions of empathy may have unintended negative consequences because they can feel inauthentic. Instead, providing instrumental support could be more suitable for modeling artificial empathy, as it aligns better with the computer-like schemas people hold toward chatbots. Two experimental studies using healthcare chatbots examine the effects of empathetic (feeling with), sympathetic (feeling for), and behavioral-empathetic (empathetic helping) vs. non-empathetic responses on perceived warmth and perceived authenticity, and their consequences for trust and usage intentions. Results reveal that any kind of empathy (vs. no empathy) enhances perceived warmth, resulting in higher trust and usage intentions. As hypothesized, empathetic and sympathetic responses reduce the chatbot's perceived authenticity, suppressing this positive effect in both studies. A third study does not replicate this backfiring effect in human-human interactions. This research thus highlights that empathy does not apply equally to human-bot interactions. It further introduces the concept of ‘perceived authenticity’ and demonstrates that distinctively human attributes might backfire by feeling inauthentic in interactions with chatbots.

Citations: 0
How chatbots perceive sexting by adolescents
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100068
Tsameret Ricon

This study compares the perceptions and attitudes of two AI chatbots – Claude and ChatGPT – towards sexting by adolescents. Sexting, defined as sharing sexually explicit messages or images, is increasingly common among teenagers and has sparked ethical debates on consent, privacy, and potential harm. The study employs qualitative content analysis to investigate how AI systems address the complex issues related to sexting.

The chatbots were queried in December 2023 about the legitimacy of sexting in adolescent relationships, the non-consensual sharing of sexts, and privacy risks. Their responses were analyzed for themes related to appropriateness, potential harm, and the specificity of the recommendations the chatbots offered.

Key differences emerged in their ethical stances. Claude declined to render definitive value judgments, instead emphasizing consent, evaluating risks versus rewards, and seeking to prevent harm by providing concrete advice. ChatGPT was more abstract, stating that appropriateness depends on societal norms. While Claude provided a harm-centric framing of potential emotional, reputational, and legal consequences of activities such as nonconsensual “revenge porn,” ChatGPT used more tentative language. Finally, Claude offered actionable guidance aligned with research insights, while ChatGPT reiterated the need to respect consent without clearly outlining the next steps.

Overall, Claude demonstrated greater nuance in reasoning about ethical sexting issues, while ChatGPT showed greater subjectivity tied to societal standards.

Citations: 0
Atypical responses of job candidates in chatbot job interviews and their possible triggers
Pub Date : 2023-12-12 DOI: 10.1016/j.chbah.2023.100038
Helena Řepová, Pavel Král, Jan Zouhar

Recruiters have observed increased verbal abuse and other non-standard behavior in chatbot job interviews. However, current knowledge about such behavior, which we term atypical responses, is limited. Across two studies, this research explores and classifies the atypical responses of job candidates and explains what triggers them. Study 1 identified atypical candidate responses in chatbot job interviews by applying content analysis to transcripts of authentic job interviews (N = 6583). A multi-stage process classified atypical responses into six categories: testing the chatbot's capabilities, verbal abuse, testing the chatbot's reactions, further conversation, sex offers, and reverse discrimination. Study 2 tested the triggers of atypical responses in fictitious chatbot job interviews. Several triggers proved to induce atypical responses; for example, lower company attractiveness leads to testing of the chatbot's capabilities and reactions, and additional stress and negative well-being induce responses containing insults.

Citations: 0
How humanlike is enough?: Uncover the underlying mechanism of virtual influencer endorsement
Pub Date : 2023-12-11 DOI: 10.1016/j.chbah.2023.100037
Yanni Ma , Jingren Li

Social media and computer-mediated communication technologies have given rise to the emergence of virtual influencers and created a new digital landscape for online interactions. Although an increasing number of virtual influencers (computer-generated agents) are developing partnerships with organizations and brands to connect with social media users, there is a paucity of research exploring the mechanism underlying virtual influencer endorsement. Using an online experiment (N = 320), this study investigated the effects of using virtual influencers in branding. In particular, we examined how variations in humanlike appearance affect two-dimensional anthropomorphism and para-social interaction in the communication process. In general, results showed that respondents perceived higher levels of mindful anthropomorphism and stronger para-social interactions with virtual influencers that had a more humanlike appearance, leading to more favorable brand attitudes and higher purchase intentions. No significant difference in branding effects was found between a highly humanlike virtual influencer and a real human. Additionally, via mindless anthropomorphism, branding effects did not differ between a moderately humanlike virtual influencer and either a highly humanlike one or a real human endorser. Findings provide both theoretical and practical insights into using virtual influencers in branding.

Citations: 0
Can robots do therapy?: Examining the efficacy of a CBT bot in comparison with other behavioral intervention technologies in alleviating mental health symptoms
Pub Date : 2023-12-08 DOI: 10.1016/j.chbah.2023.100035
Laura Eltahawy , Todd Essig , Nils Myszkowski , Leora Trub

Artificial intelligence therapy bots are gaining traction in the psychotherapy marketplace. Yet, the only existing study examining the efficacy of a therapy bot lacks any meaningful controls for comparison in claiming its effectiveness in treating depression. The current study aims to examine the efficacy of Woebot against three control conditions: ELIZA, a basic (non-“smart”) conversational bot; a journaling app; and a passive psychoeducation control group. In a sample of 65 young adults, a repeated measures ANOVA failed to detect differences in symptom reduction between active and passive groups. In follow-up analyses using paired samples t-tests, ELIZA users experienced mental health improvements with the largest effect sizes across all mental health outcomes, followed by daily journaling, then Woebot, and finally psychoeducation. Findings reveal that Woebot does not offer benefit above and beyond other self-help behavioral intervention technologies. They underscore that using a no-treatment control group study design to market clinical services should no longer be acceptable, nor serve as an acceptable precursor to marketing a chatbot as functionally equivalent to psychotherapy. Doing so creates unnecessary risk for consumers of psychotherapy and undermines the clinical value of robotic therapeutics that rigorous research could prove effective at addressing mental health problems.

Citations: 0
The role of anthropomorphic, xˆenocentric, intentional, and social (AXˆIS) robotics in human-robot interaction
Pub Date : 2023-12-07 DOI: 10.1016/j.chbah.2023.100036
Anshu Saxena Arora , Amit Arora , K. Sivakumar , Vasyl Taras

This research explores the socio-cognitive mechanisms of human intelligence through the lens of anthropomorphic, xˆenocentric, intentional, and social (AXˆIS) robotics. After delving into three pivotal AXˆIS concepts – robotic anthropomorphism, intentionality, and sociality – the study examines their impact on robot likeability and successful human-robot interaction (HRI) implementation. The research introduces the concept of robotic xˆenocentrism (represented by perceived inferiority and social aggrandizement) as a new global dimension in the social robotics literature, positioning it as a higher-order concept that moderates the impact of the pivotal independent variables on robot likeability. Analyzing a sample of 308 respondents in global cross-cultural teams, the study confirms that the pivotal AXˆIS robotics concepts foster positive robot likeability and successful HRI implementation for both industrial and social robots. Perceived inferiority negatively moderated the relationship between anthropomorphism and robot likeability, but it was a positive moderator between intentionality and robot likeability. However, social aggrandizement did not act as a significant boundary condition. Sociality remains unaffected by the moderating influence of xˆenocentrism. The study concludes by outlining future research directions for AXˆIS robotics.

Citations: 0
“There Is something Rotten in Denmark”: Investigating the Deepfake persona perceptions and their Implications for human-centered AI
Pub Date : 2023-12-01 DOI: 10.1016/j.chbah.2023.100031
Ilkka Kaate , Joni Salminen , João M. Santos , Soon-Gyo Jung , Hind Almerekhi , Bernard J. Jansen

Although they often have a negative connotation due to their social risks, deepfakes have the potential to improve HCI, human-centered AI, and user experience (UX). To investigate the impact of deepfakes on persona UX, we conducted an experimental study with 46 users who used a deepfake persona and a human persona to carry out a design task. We collected think-aloud data, observational notes, and survey data. The results of our mixed-method analysis indicate that if users observe glitches in the deepfake personas, these glitches have a detrimental effect on persona UX and task performance; however, not all users identify glitches. Our quantitative analysis of the survey data shows that there are differences in how (a) users perceive deepfakes, (b) users detect deepfake glitches, (c) deepfake glitches affect information comprehension, and (d) deepfake glitches affect task completion. Glitches have the most significant impact on authenticity, persona perception, and task perception variables but less impact on behavioral variables. The results imply that organizations implementing deepfake personas need to address perceptual challenges before the full potential of deepfake technology can be realized for persona creation.

Citations: 0
ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions
Pub Date : 2023-11-20 DOI: 10.1016/j.chbah.2023.100027
Reza Hadi Mogavi , Chao Deng , Justin Juho Kim , Pengyuan Zhou , Young D. Kwon , Ahmed Hosny Saleh Metwally , Ahmed Tlili , Simone Bassanelli , Antonio Bucchiarone , Sujit Gujar , Lennart E. Nacke , Pan Hui

To foster the development of pedagogically potent and ethically sound AI-integrated learning landscapes, it is pivotal to critically explore the perceptions and experiences of the users immersed in these contexts. In this study, we perform a thorough qualitative content analysis across four key social media platforms. Our goal is to understand the user experience (UX) and views of early adopters of ChatGPT across different educational sectors. The results of our research show that ChatGPT is most commonly used in the domains of higher education, K-12 education, and practical skills training. In social media dialogues, the topics most frequently associated with ChatGPT are productivity, efficiency, and ethics. Early adopters' attitudes towards ChatGPT are multifaceted. On one hand, some users view it as a transformative tool capable of amplifying student self-efficacy and learning motivation. On the other hand, there is a degree of apprehension among concerned users. They worry about a potential overdependence on the AI system, which they fear might encourage superficial learning habits and erode students’ social and critical thinking skills. This dichotomy of opinions underscores the complexity of Human-AI Interaction in educational contexts. Our investigation adds depth to this ongoing discourse, providing crowd-sourced insights for educators and learners who are considering incorporating ChatGPT or similar generative AI tools into their pedagogical strategies.

{"title":"ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions","authors":"Reza Hadi Mogavi ,&nbsp;Chao Deng ,&nbsp;Justin Juho Kim ,&nbsp;Pengyuan Zhou ,&nbsp;Young D. Kwon ,&nbsp;Ahmed Hosny Saleh Metwally ,&nbsp;Ahmed Tlili ,&nbsp;Simone Bassanelli ,&nbsp;Antonio Bucchiarone ,&nbsp;Sujit Gujar ,&nbsp;Lennart E. Nacke ,&nbsp;Pan Hui","doi":"10.1016/j.chbah.2023.100027","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100027","url":null,"abstract":"<div><p>To foster the development of pedagogically potent and ethically sound AI-integrated learning landscapes, it is pivotal to critically explore the perceptions and experiences of the users immersed in these contexts. In this study, we perform a thorough qualitative content analysis across four key social media platforms. Our goal is to understand the user experience (UX) and views of early adopters of ChatGPT across different educational sectors. The results of our research show that ChatGPT is most commonly used in the domains of higher education, K-12 education, and practical skills training. In social media dialogues, the topics most frequently associated with ChatGPT are <em>productivity</em>, <em>efficiency</em>, and <em>ethics</em>. Early adopters' attitudes towards ChatGPT are multifaceted. On one hand, some users view it as a transformative tool capable of amplifying student self-efficacy and learning motivation. On the other hand, there is a degree of apprehension among concerned users. They worry about a potential overdependence on the AI system, which they fear might encourage superficial learning habits and erode students’ social and critical thinking skills. This dichotomy of opinions underscores the complexity of Human-AI Interaction in educational contexts. 
Our investigation adds depth to this ongoing discourse, providing crowd-sourced insights for educators and learners who are considering incorporating ChatGPT or similar generative AI tools into their pedagogical strategies.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000270/pdfft?md5=e16714ccddd9036b5ccd2fd32a44df5f&pid=1-s2.0-S2949882123000270-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138448597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From speaking like a person to being personal: The effects of personalized, regular interactions with conversational agents
Pub Date : 2023-11-20 DOI: 10.1016/j.chbah.2023.100030
Theo Araujo , Nadine Bol

As human-AI interactions become more pervasive, conversational agents are increasingly relevant in our communication environment. While a rich body of research investigates the consequences of one-shot, single interactions with these agents, knowledge is still scarce on how these consequences evolve across regular, repeated interactions in which these agents make use of AI-enabled techniques to enable increasingly personalized conversations and recommendations. By means of a longitudinal experiment (N = 179) with an agent able to personalize a conversation, this study sheds light on how perceptions – about the agent (anthropomorphism and trust), the interaction (dialogue quality and privacy risks), and the information (relevance and credibility) – and behavior (self-disclosure and recommendation adherence) evolve across interactions. The findings highlight the role of interplay between system-initiated personalization and repeated exposure in this process, suggesting the importance of considering the role of AI in communication processes in a dynamic manner.

{"title":"From speaking like a person to being personal: The effects of personalized, regular interactions with conversational agents","authors":"Theo Araujo ,&nbsp;Nadine Bol","doi":"10.1016/j.chbah.2023.100030","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100030","url":null,"abstract":"<div><p>As human-AI interactions become more pervasive, conversational agents are increasingly relevant in our communication environment. While a rich body of research investigates the consequences of one-shot, single interactions with these agents, knowledge is still scarce on how these consequences evolve across regular, repeated interactions in which these agents make use of AI-enabled techniques to enable increasingly personalized conversations and recommendations. By means of a longitudinal experiment (<em>N</em> = 179) with an agent able to personalize a conversation, this study sheds light on how perceptions – about the agent (anthropomorphism and trust), the interaction (dialogue quality and privacy risks), and the information (relevance and credibility) – and behavior (self-disclosure and recommendation adherence) evolve across interactions. 
The findings highlight the role of interplay between system-initiated personalization and repeated exposure in this process, suggesting the importance of considering the role of AI in communication processes in a dynamic manner.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000300/pdfft?md5=0e32c4980a0e73c074f1b9a6eb531c3f&pid=1-s2.0-S2949882123000300-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138448225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Social intimacy and skewed love: A study of the attachment relationship between internet group young users and a digital human
Pub Date : 2023-08-01 DOI: 10.1016/j.chbah.2023.100019
Hanzhong Zhang , Ziwei Xiang , Jibin Yin

Interactions between human beings and digital humans have become a new network phenomenon, and these relationships have gradually become a topic of research. There is still a lack of sufficient research on whether and what kind of attachment relationship exists in these situations. Based on this problem, in this study, a digital human was designed that was oriented to social software and put into chat groups for interaction and research. A questionnaire survey, case analysis, and netnography analysis were used to collect and examine relevant data. The study found a correlation between the type of attachment of users and the degree of attachment to the digital human. In addition, users who were heavily dependent on the network were more likely to try to complete their attachment with the digital human. Attachment with the digital human was able to calm the users’ emotional intensity. This attachment was considered as close to a skewed desire projection. Through the intermediary of a digital human, Internet users have been better able to fulfill some of their own desires.

{"title":"Social intimacy and skewed love: A study of the attachment relationship between internet group young users and a digital human","authors":"Hanzhong Zhang ,&nbsp;Ziwei Xiang ,&nbsp;Jibin Yin","doi":"10.1016/j.chbah.2023.100019","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100019","url":null,"abstract":"<div><p>Interactions between human beings and digital humans have become a new network phenomenon, and these relationships have gradually become a topic of research. There is still a lack of sufficient research on whether and what kind of attachment relationship exists in these situations. Based on this problem, in this study, a digital human was designed that was oriented to social software and put into chat groups for interaction and research. A questionnaire survey, case analysis, and netnography analysis were used to collect and examine relevant data. The study found a correlation between the type of attachment of users and the degree of attachment to the digital human. In addition, users who were heavily dependent on the network were more likely to try to complete their attachment with the digital human. Attachment with the digital human was able to calm the users’ emotional intensity. This attachment was considered as close to a skewed desire projection. 
Through the intermediary of a digital human, Internet users have been better able to fulfill some of their own desires.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49713559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal: Computers in Human Behavior: Artificial Humans