Influencing human–AI interaction by priming beliefs about AI can increase perceived trustworthiness, empathy and effectiveness

Nature Machine Intelligence · Impact Factor 18.8 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Published: 2023-10-02 · DOI: 10.1038/s42256-023-00720-7
Pat Pataranutaporn, Ruby Liu, Ed Finn, Pattie Maes
{"title":"Influencing human–AI interaction by priming beliefs about AI can increase perceived trustworthiness, empathy and effectiveness","authors":"Pat Pataranutaporn, Ruby Liu, Ed Finn, Pattie Maes","doi":"10.1038/s42256-023-00720-7","DOIUrl":null,"url":null,"abstract":"As conversational agents powered by large language models become more human-like, users are starting to view them as companions rather than mere assistants. Our study explores how changes to a person’s mental model of an AI system affects their interaction with the system. Participants interacted with the same conversational AI, but were influenced by different priming statements regarding the AI’s inner motives: caring, manipulative or no motives. Here we show that those who perceived a caring motive for the AI also perceived it as more trustworthy, empathetic and better-performing, and that the effects of priming and initial mental models were stronger for a more sophisticated AI model. Our work also indicates a feedback loop in which the user and AI reinforce the user’s mental model over a short time; further work should investigate long-term effects. The research highlights the importance of how AI systems are introduced can notably affect the interaction and how the AI is experienced. The recent accessibility of large language models brought them into contact with a large number of users and, due to the social nature of language, it is hard to avoid prescribing human characteristics such as intentions to a chatbot. Pataranutaporn and colleagues investigated how framing a bot as helpful or manipulative can influence this perception and the behaviour of the humans that interact with it.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"5 10","pages":"1076-1086"},"PeriodicalIF":18.8000,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature Machine Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.nature.com/articles/s42256-023-00720-7","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

As conversational agents powered by large language models become more human-like, users are starting to view them as companions rather than mere assistants. Our study explores how changes to a person’s mental model of an AI system affect their interaction with the system. Participants interacted with the same conversational AI but were influenced by different priming statements regarding the AI’s inner motives: caring, manipulative or no motives. Here we show that those who perceived a caring motive in the AI also perceived it as more trustworthy, empathetic and better-performing, and that the effects of priming and initial mental models were stronger for a more sophisticated AI model. Our work also indicates a feedback loop in which the user and the AI reinforce the user’s mental model over a short time; further work should investigate long-term effects. The research highlights that how an AI system is introduced can notably affect the interaction and how the AI is experienced. The recent accessibility of large language models has brought them into contact with a large number of users and, owing to the social nature of language, it is hard to avoid ascribing human characteristics such as intentions to a chatbot. Pataranutaporn and colleagues investigated how framing a bot as helpful or manipulative can influence this perception and the behaviour of the humans who interact with it.
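To make the between-subjects manipulation concrete, here is a minimal, hypothetical Python sketch of how such a priming setup could be wired up. The condition names, the priming wording, and the assign_condition and query_model helpers are illustrative assumptions for this sketch, not the authors' materials or code.

```python
# Hypothetical sketch of a priming manipulation like the one the abstract
# describes (not the authors' code). Each participant is randomly assigned
# one of three priming statements about the AI's "inner motives" before
# chatting with the same underlying model.
import random

PRIMES = {
    "caring": "The AI you are about to talk to cares about your well-being "
              "and wants to help you.",
    "manipulative": "The AI you are about to talk to has manipulative motives "
                    "and will try to deceive you.",
    "neutral": "The AI you are about to talk to has no motives of its own.",
}


def assign_condition(participant_id: int, seed: int = 42) -> str:
    """Deterministically assign a participant to one priming condition."""
    rng = random.Random(seed + participant_id)
    return rng.choice(list(PRIMES))


def query_model(prompt: str) -> str:
    # Placeholder standing in for a real conversational-model call.
    return f"[model reply to: {prompt}]"


def run_session(participant_id: int, user_turns: list[str]) -> list[str]:
    """Show the assigned prime, then run the chat turns."""
    condition = assign_condition(participant_id)
    print(f"Participant {participant_id} ({condition}): {PRIMES[condition]}")
    # Crucially, the model call is identical in every condition; only the
    # pre-interaction text shown to the participant varies.
    return [query_model(turn) for turn in user_turns]


if __name__ == "__main__":
    replies = run_session(participant_id=7, user_turns=["I feel stressed today."])
    print(replies[0])
```

The point the sketch emphasizes is the study's core design choice: the conversational AI itself is held constant across conditions, so any differences in perceived trustworthiness, empathy or performance are attributable to the priming statement alone.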


Source journal metrics
CiteScore: 36.90
Self-citation rate: 2.10%
Articles published: 127
Journal introduction: Nature Machine Intelligence is a distinguished publication that presents original research and reviews on various topics in machine learning, robotics, and AI. Our focus extends beyond these fields, exploring their profound impact on other scientific disciplines, as well as societal and industrial aspects. We recognize limitless possibilities wherein machine intelligence can augment human capabilities and knowledge in domains like scientific exploration, healthcare, medical diagnostics, and the creation of safe and sustainable cities, transportation, and agriculture. Simultaneously, we acknowledge the emergence of ethical, social, and legal concerns due to the rapid pace of advancements. To foster interdisciplinary discussions on these far-reaching implications, Nature Machine Intelligence serves as a platform for dialogue facilitated through Comments, News Features, News & Views articles, and Correspondence. Our goal is to encourage a comprehensive examination of these subjects. Similar to all Nature-branded journals, Nature Machine Intelligence operates under the guidance of a team of skilled editors. We adhere to a fair and rigorous peer-review process, ensuring high standards of copy-editing and production, swift publication, and editorial independence.
Latest articles from this journal
Moving towards genome-wide data integration for patient stratification with Integrate Any Omics
What large language models know and what people think they know
A quantitative analysis of knowledge-learning preferences in large language models in molecular science
A unified evolution-driven deep learning framework for virus variation driver prediction
A machine learning approach to leveraging electronic health records for enhanced omics analysis