
Latest Publications in Computers in Human Behavior: Artificial Humans

Understanding AI Chatbot adoption in education: PLS-SEM analysis of user behavior factors
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100098
Md Rabiul Hasan , Nahian Ismail Chowdhury , Md Hadisur Rahman , Md Asif Bin Syed , JuHyeong Ryu
The integration of Artificial Intelligence (AI) into education is a recent development, with chatbots emerging as a noteworthy addition to this transformative landscape. As online learning platforms rapidly advance, students need to adapt swiftly to excel in this dynamic environment. Consequently, understanding the acceptance of chatbots, particularly those employing Large Language Models (LLM) such as Chat Generative Pretrained Transformer (ChatGPT), Google Bard, and other interactive AI technologies, is of paramount importance. Investigating how students accept and view chatbots is essential to directing their incorporation into Industry 4.0 and enabling a smooth transition to Industry 5.0's customized and human-centered methodology. However, existing research on chatbots in education has overlooked key behavior-related aspects, such as Optimism, Innovativeness, Discomfort, Insecurity, Transparency, Ethics, Interaction, Engagement, and Accuracy, creating a significant literature gap. To address this gap, this study employs Partial Least Squares Structural Equation Modeling (PLS-SEM) to investigate the determinants of chatbot adoption in education among students, drawing on the Technology Readiness Index and the Technology Acceptance Model. Utilizing a five-point Likert scale for data collection, we gathered a total of 185 responses, which were analyzed using RStudio software. We established 12 hypotheses to achieve the study's objectives. The results showed that Optimism and Innovativeness are positively associated with Perceived Ease of Use and Perceived Usefulness. Conversely, Discomfort and Insecurity negatively impact Perceived Ease of Use, with only Insecurity negatively affecting Perceived Usefulness. Furthermore, Perceived Ease of Use, Perceived Usefulness, Interaction and Engagement, Accuracy, and Responsiveness all significantly contribute to the Intention to Use, whereas Transparency and Ethics have a negative impact on Intention to Use. Finally, Intention to Use mediates the relationships between Interaction, Engagement, Accuracy, Responsiveness, Transparency, Ethics, and Perception of Decision Making. These findings provide insights for future technology designers, elucidating critical user behavior factors influencing chatbot adoption and utilization in educational contexts.
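The study itself ran PLS-SEM in RStudio; as a rough Python illustration of the structural logic only, the sketch below proxies each construct with the mean composite score of its Likert items and estimates two structural paths with OLS. All file, item, and construct names are hypothetical.

```python
# Minimal sketch in the spirit of the study's structural model, not the
# authors' pipeline (they ran PLS-SEM in RStudio). Constructs are proxied
# by mean composite scores of their Likert items; structural paths are
# estimated with OLS. File, item, and construct names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical 185-response file

constructs = {
    "OPT": ["opt1", "opt2", "opt3"],      # Optimism
    "INN": ["inn1", "inn2", "inn3"],      # Innovativeness
    "PEOU": ["peou1", "peou2", "peou3"],  # Perceived Ease of Use
    "PU": ["pu1", "pu2", "pu3"],          # Perceived Usefulness
    "ITU": ["itu1", "itu2", "itu3"],      # Intention to Use
}
for name, items in constructs.items():
    df[name] = df[items].mean(axis=1)     # composite score per construct

# Two structural paths mirroring the hypothesized relationships.
print(smf.ols("PEOU ~ OPT + INN", data=df).fit().summary())
print(smf.ols("ITU ~ PEOU + PU", data=df).fit().summary())
```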
Citations: 0
Making moral decisions with artificial agents as advisors. A fNIRS study
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100096
Eve Florianne Fabre , Damien Mouratille , Vincent Bonnemains , Grazia Pia Palmiotti , Mickael Causse
Artificial Intelligence (AI) is on the verge of impacting every domain of our lives. It is increasingly being used as an advisor to assist in making decisions. The present study investigated the influence of moral arguments provided by AI-advisors (i.e., a decision aid tool) on human moral decision-making and the associated neural correlates. Participants were presented with sacrificial moral dilemmas and had to make moral decisions either by themselves (i.e., baseline run) or with AI-advisors that provided utilitarian or deontological arguments (i.e., AI-advised run), while their brain activity was measured using an fNIRS device. Overall, AI-advisors significantly influenced participants. Longer response times and a decrease in right dorsolateral prefrontal cortex activity were observed in response to deontological arguments compared with utilitarian arguments. Being provided with deontological arguments by machines appears to have led to a decreased appraisal of the affective response to the dilemmas. This resulted in a reduced level of utilitarianism, supposedly in an attempt to avoid behaving in a less cold-blooded way than machines and to preserve their (self-)image. Taken together, these results suggest that motivational power can lead to a voluntary up- and down-regulation of affective processes during moral decision-making.
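A hedged sketch of the behavioral contrast only (longer response times under AI advice): a paired t-test on per-participant mean response times across the baseline and AI-advised runs. The per-trial file layout is an assumption, and the paper's fNIRS analyses go well beyond this.

```python
# Hedged sketch of the behavioral contrast only: paired t-test on mean
# response times per participant, baseline run vs. AI-advised run. The
# per-trial file layout is an assumption; the fNIRS analyses are not shown.
import pandas as pd
from scipy import stats

trials = pd.read_csv("dilemma_trials.csv")  # hypothetical per-trial log

rt = trials.pivot_table(index="participant", columns="run",
                        values="response_time_s", aggfunc="mean")

t, p = stats.ttest_rel(rt["ai_advised"], rt["baseline"])
print(f"AI-advised vs. baseline RT: t = {t:.2f}, p = {p:.4f}")
```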
Citations: 0
Aversion against machines with complex mental abilities: The role of individual differences
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100087
Andrea Grundke , Markus Appel , Jan-Philipp Stein

Theory suggests that robots with human-like mental capabilities (i.e., high agency and experience) evoke stronger aversion than robots without these capabilities. Yet, while several studies support this prediction, there is also evidence that the mental prowess of robots can be evaluated positively, at least by some individuals. To help resolve this ambivalence, we focused on rather stable individual differences that may shape users' responses to machines with different levels of (perceived) mental ability. Specifically, we explored four key variables as potential moderators: monotheistic religiosity, the tendency to anthropomorphize, prior attitudes towards robots, and general affinity for complex technology. Two pre-registered online experiments (N1 = 391, N2 = 617) were conducted, using text vignettes to introduce participants to a robot with or without complex, human-like capabilities. Results showed that negative attitudes towards robots increased the relative aversion against machines with (vs. without) complex minds, whereas technology affinity weakened the difference between conditions. Results for monotheistic religiosity were mixed, while the tendency to anthropomorphize had no significant impact on the evoked aversion. Overall, we conclude that certain individual differences play an important role in perceptions of machines with complex minds and should be considered in future research.
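As a sketch of the moderation logic described above, a regression with an interaction term tests whether a stable individual difference (here, prior negative attitudes towards robots) changes the effect of the mind-complexity condition on aversion. Variable and file names are hypothetical, not taken from the paper's materials.

```python
# Sketch of the moderation logic: does a stable individual difference
# (prior negative attitudes towards robots) change the effect of the
# vignette condition (complex vs. simple mind) on aversion? Variable and
# file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment1.csv")  # hypothetical: one row per participant

# The interaction term carries the moderation test.
model = smf.ols("aversion ~ C(condition) * neg_attitudes", data=df).fit()
print(model.summary())  # a significant interaction indicates moderation
```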

Citations: 0
Unleashing ChatGPT's impact in higher education: Student and faculty perspectives
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100090
Parsa Rajabi , Parnian Taghipour , Diana Cukierman , Tenzin Doleck
As Chat Generative Pre-trained Transformer (ChatGPT) gains traction, its impact on post-secondary education is increasingly being debated. This qualitative study explores the perceptions of students and faculty members at a research university in Canada regarding ChatGPT's use in a post-secondary setting, focusing on how it could be incorporated and how instructors can respond to this technology. We present a summary of a discussion that took place in a 2-hour focus group session with 40 participants from the computer science and engineering departments, and highlight issues surrounding plagiarism, assessment methods, and the appropriate use of ChatGPT. Findings suggest that students are likely to use ChatGPT, but there is a need for specific guidelines, more classroom assessments, and mandatory reporting of ChatGPT use. The study contributes to the emergent research on ChatGPT in higher education and emphasizes the importance of proactively addressing the challenges and opportunities associated with ChatGPT adoption and use. The novelty of the study lies in capturing the perspectives of both students and faculty members. This paper aims to provide a more refined understanding of the complex interplay between AI chatbots and higher education that will help educators navigate the rapidly evolving landscape of AI-driven education.
Citations: 0
News bylines and perceived AI authorship: Effects on source and message credibility
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100093
Haiyan Jia , Alyssa Appelman , Mu Wu , Steve Bien-Aimé
With emerging abilities to generate content, artificial intelligence (AI) poses a challenge to identifying the authorship of news content. This study focuses on source and message credibility evaluation as AI becomes incorporated into journalistic practices. An experiment (N = 269) explored the effects of news bylines and AI authorship on readers' perceptions. The findings showed that perceived AI contribution, rather than the labeling of the AI role, predicted readers' perceptions of the source and the content. When readers thought AI had contributed more to a news article, they reported lower message credibility and source credibility perceptions. Humanness perceptions fully mediated the relationships between perceived AI contribution and perceived message credibility and source credibility. This study yields theoretical implications for understanding readers' mental model of machine sourceness and practical implications for newsrooms pursuing ethical AI in news automation and production.
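A minimal illustration of the reported mediation pattern (perceived AI contribution, via humanness perceptions, to message credibility) using the product-of-coefficients approach with two OLS regressions. This is not the authors' analysis; all names are hypothetical.

```python
# Illustration of the reported mediation (perceived AI contribution ->
# humanness perceptions -> message credibility) via the product-of-
# coefficients approach. Not the authors' analysis; names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("byline_experiment.csv")  # hypothetical, N = 269 rows

a = smf.ols("humanness ~ ai_contribution", data=df).fit()                # path a
b = smf.ols("credibility ~ humanness + ai_contribution", data=df).fit()  # paths b, c'

indirect = a.params["ai_contribution"] * b.params["humanness"]
print(f"indirect effect (a*b) = {indirect:.3f}, "
      f"direct effect (c') = {b.params['ai_contribution']:.3f}")
```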
Citations: 0
The efficiency-accountability tradeoff in AI integration: Effects on human performance and over-reliance
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100099
Nicolas Spatola
As artificial intelligence proliferates across various sectors, it is crucial to explore the psychological impacts of over-reliance on these systems. This study examines how different formats of chatbot assistance (instruction-only, answer-only, and combined instruction and answer) influence user performance and reliance over time. In two experiments, participants completed reasoning tests with the aid of a chatbot, "Cogbot," offering varying levels of explanatory detail and direct answers. In Experiment 1, participants receiving direct answers showed higher reliance on the chatbot compared to those receiving instructions, aligning with the practical hypothesis that prioritizes efficiency over explainability. Experiment 2 introduced transfer problems with incorrect AI guidance, revealing that initial reliance on direct answers impaired performance on subsequent tasks when the AI erred, supporting concerns about automation complacency. Findings indicate that while efficiency-focused AI solutions enhance immediate performance, they risk over-assimilation and reduced vigilance, leading to significant performance drops when AI accuracy falters. Conversely, explanatory guidance did not significantly improve outcomes absent of direct answers. These results highlight the complex dynamics between AI efficiency and accountability, suggesting that responsible AI adoption requires balancing streamlined functionality with safeguards against over-reliance.
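One plausible way to operationalize the reliance construct discussed above is the share of trials on which a participant's final answer matched the chatbot's answer, split by assistance format, with a second check on accuracy for transfer trials where the chatbot erred. The per-trial data layout below is an assumption, not the authors' published materials.

```python
# One plausible operationalization of reliance: the share of trials on
# which the participant's final answer matched the chatbot's answer, by
# assistance format; plus accuracy on transfer trials where the chatbot
# erred. The per-trial data layout is an assumption.
import pandas as pd

trials = pd.read_csv("cogbot_trials.csv")  # hypothetical per-trial log
trials["followed_ai"] = trials["participant_answer"] == trials["chatbot_answer"]

# Reliance per format: instruction-only, answer-only, combined.
print(trials.groupby("format")["followed_ai"].mean())

# Performance when the AI was wrong (transfer problems, Experiment 2).
wrong = trials[trials["chatbot_correct"] == 0]
print(wrong.groupby("format")["participant_correct"].mean())
```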
Citations: 0
Can you repeat that again? Investigating the mediating effects of perceived accommodation appropriateness for accommodative voice-based assistants
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100102
Matthew J.A. Craig , Xialing Lin , Chad Edwards , Autumn Edwards
The widespread use of Voice-Based Assistants (VBAs) in various applications has introduced a new dimension to human-machine communication. This study explores how users assess VBAs exhibiting either excessive or insufficient communication accommodation in imagined initial interactions. Drawing on Communication Accommodation Theory (CAT) and the Stereotype Content Model (SCM), the present research investigates the mediating effect of perceived accommodation appropriateness on the relationship between the SCM dimensions of warmth and competence and evaluations of the VBA as a communicator and as a speaker. Participants evaluated the underaccommodative VBA significantly lower as a communicator and as a speaker, and these evaluations were indirectly predicted by the warmth and competence dimensions via the perceived appropriateness of the communication. The implications of our findings and directions for future research are discussed.
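A common way to test an indirect effect like the one reported above is to bootstrap the a*b product (warmth, via perceived appropriateness, to speaker evaluation). The sketch below makes that concrete under hypothetical variable names; it is not the authors' pipeline.

```python
# Sketch of a bootstrap test for the indirect effect (warmth -> perceived
# appropriateness -> speaker evaluation). Bootstrapping the a*b product is
# a standard mediation test; all variable names here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vba_study.csv")  # hypothetical

def indirect(sample):
    a = smf.ols("appropriateness ~ warmth", data=sample).fit()
    b = smf.ols("speaker_eval ~ appropriateness + warmth", data=sample).fit()
    return a.params["warmth"] * b.params["appropriateness"]

boot = [indirect(df.sample(len(df), replace=True, random_state=s))
        for s in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```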
Citations: 0
Can ChatGPT read who you are?
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100088
Erik Derner , Dalibor Kučera , Nuria Oliver , Jan Zahálka

The interplay between artificial intelligence (AI) and psychology, particularly in personality assessment, represents an important emerging area of research. Accurate personality trait estimation is crucial not only for enhancing personalization in human-computer interaction but also for a wide variety of applications ranging from mental health to education. This paper analyzes the capability of a generic chatbot, ChatGPT, to effectively infer personality traits from short texts. We report the results of a comprehensive user study featuring texts written in Czech by a representative population sample of 155 participants. Their self-assessments based on the Big Five Inventory (BFI) questionnaire serve as the ground truth. We compare the personality trait estimations made by ChatGPT against those by human raters and report ChatGPT's competitive performance in inferring personality traits from text. We also uncover a ‘positivity bias’ in ChatGPT's assessments across all personality dimensions and explore the impact of prompt composition on accuracy. This work contributes to the understanding of AI capabilities in psychological assessment, highlighting both the potential and limitations of using large language models for personality inference. Our research underscores the importance of responsible AI development, considering ethical implications such as privacy, consent, autonomy, and bias in AI applications.
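The core comparison described above, ChatGPT's trait estimates against BFI self-reports, comes down to per-trait agreement plus a signed-error check for the positivity bias. A minimal sketch, assuming a flat file with one row per text and columns for both rating sources:

```python
# Per-trait agreement between ChatGPT's estimates and BFI self-reports,
# plus a mean signed error to expose a positivity bias. Assumes a flat
# file with one row per text and columns for both rating sources.
import pandas as pd
from scipy import stats

df = pd.read_csv("personality_ratings.csv")  # hypothetical, 155 rows

for trait in ["openness", "conscientiousness", "extraversion",
              "agreeableness", "neuroticism"]:
    r, p = stats.pearsonr(df[f"gpt_{trait}"], df[f"bfi_{trait}"])
    bias = (df[f"gpt_{trait}"] - df[f"bfi_{trait}"]).mean()
    print(f"{trait:17s} r = {r:.2f} (p = {p:.3f}), mean bias = {bias:+.2f}")
```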

Citations: 0
Understanding young adults' attitudes towards using AI chatbots for psychotherapy: The role of self-stigma
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100086
Benjamin David Hoffman, Michelle Leanne Oppert, Mikaela Owen

Mental disorders impact a large proportion of individuals worldwide, with young adults being particularly susceptible to poor mental health. Past research shows that help-seeking self-stigma plays a vital role in deterring help-seeking among young adults; however, this relationship has primarily been examined in the context of human-delivered psychotherapy. The present study aimed to understand how young adults’ perceptions of help-seeking self-stigma associated with different modes of psychotherapy, specifically human-delivered and artificial intelligence (AI)-delivered, influence attitudes towards using AI chatbots for psychotherapy. This study employed a cross-sectional survey design to measure perceived help-seeking self-stigma and attitudes towards both human- and AI-delivered psychotherapy. The results demonstrated that high help-seeking self-stigma associated with human-delivered psychotherapy was linked to more negative attitudes towards human-delivered psychotherapy but more positive attitudes towards AI-delivered psychotherapy. Moreover, high help-seeking self-stigma associated with AI-delivered psychotherapy was linked to more negative attitudes towards AI-delivered psychotherapy but more positive attitudes towards human-delivered psychotherapy. These findings have important real-world implications for future clinical practice and mental health service delivery. The results indicate that young adults who are reluctant to engage with human-delivered psychotherapy due to help-seeking self-stigma may be more inclined to seek help through alternative modes of psychotherapy, such as AI chatbots. Limitations and future directions are discussed.

Citations: 0
Perils, power and promises: Latent profile analysis on the attitudes towards artificial intelligence (AI) among middle-aged and older adults in Hong Kong
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100091
Ngai-Yin Eric Shum, Hi-Po Bobo Lau

With the increasing influence of artificial intelligence (AI) on various aspects of society, understanding public attitudes towards AI becomes crucial. This study investigated attitudes towards AI among Hong Kong middle-aged and older adults. In June 2023, an online survey was conducted among a sample of 740 smartphone users aged 45 years or older (Max = 78) in Hong Kong. Using exploratory factor analysis, we found three factors in the General Attitude to Artificial Intelligence Scale (GAAIS): Perils, Power, and Promises. Subsequent latent profile analysis revealed three latent profiles: (i) Enthusiasts (18.4%; high on Promises and Power but low on Perils); (ii) Skeptics (12.3%; high on Perils but low on Promises and Power); and (iii) Indecisive (69.3%; moderate on all three factors). Compared with the Indecisive, and in turn the Skeptics, the Enthusiasts were more likely to be male, with higher socio-economic status, better self-rated health, greater mobile device proficiency, optimism, and innovativeness, and less insecurity with technology. Our findings suggest that most middle-aged and older adults in Hong Kong hold an ambivalent view towards AI, appreciating its power and potential while also cognizant of the perils it may entail. Our findings are timely given the recent debates on the ethical use of AI evoked by smartphone applications such as ChatGPT, and they will be valuable for practitioners and scholars developing inclusive AI-facilitated services and applications.
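The two analysis steps named above can be sketched as follows: an exploratory factor analysis of the GAAIS items, then a latent-profile-style clustering of the factor scores with the number of profiles chosen by BIC. GaussianMixture stands in for dedicated LPA software (LPA is a Gaussian mixture over continuous indicators); all names are hypothetical.

```python
# Sketch of the two analysis steps: exploratory factor analysis of the
# GAAIS items, then latent-profile-style clustering of the factor scores.
# GaussianMixture stands in for dedicated LPA software; names hypothetical.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import GaussianMixture

items = pd.read_csv("gaais_items.csv")  # hypothetical: 740 x GAAIS items

fa = FactorAnalysis(n_components=3, rotation="varimax")
scores = fa.fit_transform(items)  # Perils, Power, Promises factor scores

# Choose the number of profiles by BIC, as is typical for LPA.
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(scores)
    print(k, gm.bic(scores))

profiles = GaussianMixture(n_components=3, random_state=0).fit_predict(scores)
print(pd.Series(profiles).value_counts(normalize=True))  # profile shares
```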

Citations: 0