INVESTIGATING THE EFFECTS OF GENERATIVE-AI RESPONSES ON USER EXPERIENCE AFTER AI HALLUCINATION

Hayoen Kim
DOI: 10.20319/icssh.2024.92101
Journal: 2024: Proceedings of Social Science and Humanities Research Association (SSHRA)
Publication date: 2024-01-31
Publication type: Journal Article
Citations: 0

Abstract

The integration of generative artificial intelligence (GenAI) systems into our daily lives has led to the phenomenon of "AI hallucination," where AI produces convincing yet incorrect information, undermining both user experience and system credibility. This study investigates the impact of AI's responses, specifically appreciation and apology, on user perception and trust following AI errors. Utilizing attribution theory, we explore whether users prefer AI systems that attribute errors internally or externally, and how these attributions affect user satisfaction. A qualitative methodology was employed, featuring interviews with individuals aged 20 to 30 who have experience with conversational AI. Respondents preferred the AI to apologize in hallucination situations and to attribute responsibility for the error to the outside world. Results show that transparency in error communication, supported by detailed explanations, is essential for maintaining user trust. The research contributes to the understanding of how politeness and attribution strategies can influence user engagement with AI and has significant implications for AI development, emphasizing the need for error communication strategies that balance transparency and user experience.