{"title":"研究人工智能幻觉后生成式人工智能响应对用户体验的影响","authors":"Hayoen Kim","doi":"10.20319/icssh.2024.92101","DOIUrl":null,"url":null,"abstract":"The integration of generative artificial intelligence (GenAI) systems into our daily lives has led to the phenomenon of \"AI hallucination,\" where AI produces convincing yet incorrect information, undermining both user experience and system credibility. This study investigates the impact of AI's responses, specifically appreciation and apology, on user perception and trust following AI errors. Utilizing attribution theory, we explore whether users prefer AI systems that attribute errors internally or externally and how these attributions affect user satisfaction. A qualitative methodology, featuring interviews with individuals aged 20 to 30 who have experience with conversational AI, has been employed. Respondents preferred AI to apologize in hallucination situations and to attribute the responsibility for the error to the outside world. Results show that transparency in error communication is essential for maintaining user trust, with detailed explanations. The research contributes to the understanding of how politeness and attribution strategies can influence user engagement with AI and has significant implications for AI development, emphasizing the need for error communication strategies that balance transparency and user experience.","PeriodicalId":518079,"journal":{"name":"2024: Proceedings of Social Science and Humanities Research Association (SSHRA)","volume":"351 ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"INVESTIGATING THE EFFECTS OF GENERATIVE-AI RESPONSES ON USER EXPERIENCE AFTER AI HALLUCINATION\",\"authors\":\"Hayoen Kim\",\"doi\":\"10.20319/icssh.2024.92101\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The integration of generative artificial intelligence (GenAI) systems into our daily lives has led to the phenomenon of \\\"AI hallucination,\\\" where AI produces convincing yet incorrect information, undermining both user experience and system credibility. This study investigates the impact of AI's responses, specifically appreciation and apology, on user perception and trust following AI errors. Utilizing attribution theory, we explore whether users prefer AI systems that attribute errors internally or externally and how these attributions affect user satisfaction. A qualitative methodology, featuring interviews with individuals aged 20 to 30 who have experience with conversational AI, has been employed. Respondents preferred AI to apologize in hallucination situations and to attribute the responsibility for the error to the outside world. Results show that transparency in error communication is essential for maintaining user trust, with detailed explanations. 
The research contributes to the understanding of how politeness and attribution strategies can influence user engagement with AI and has significant implications for AI development, emphasizing the need for error communication strategies that balance transparency and user experience.\",\"PeriodicalId\":518079,\"journal\":{\"name\":\"2024: Proceedings of Social Science and Humanities Research Association (SSHRA)\",\"volume\":\"351 \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2024: Proceedings of Social Science and Humanities Research Association (SSHRA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.20319/icssh.2024.92101\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024: Proceedings of Social Science and Humanities Research Association (SSHRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20319/icssh.2024.92101","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
INVESTIGATING THE EFFECTS OF GENERATIVE-AI RESPONSES ON USER EXPERIENCE AFTER AI HALLUCINATION
The integration of generative artificial intelligence (GenAI) systems into daily life has given rise to the phenomenon of "AI hallucination," where an AI produces convincing yet incorrect information, undermining both user experience and system credibility. This study investigates the impact of an AI's responses, specifically appreciation and apology, on user perception and trust following AI errors. Drawing on attribution theory, we explore whether users prefer AI systems that attribute errors internally or externally, and how these attributions affect user satisfaction. A qualitative methodology was employed, featuring interviews with individuals aged 20 to 30 who have experience with conversational AI. Respondents preferred that the AI apologize in hallucination situations and attribute responsibility for the error externally. Results show that transparency in error communication, supported by detailed explanations, is essential for maintaining user trust. The research contributes to the understanding of how politeness and attribution strategies influence user engagement with AI, and it carries significant implications for AI development, emphasizing the need for error communication strategies that balance transparency and user experience.
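To make the recommended strategy concrete, the following is a minimal, hypothetical sketch (not from the paper) of how a conversational system might compose an error-communication message that reflects the respondents' stated preferences: an apology, an external attribution of the error, and a transparent correction. The function name `build_error_response` and its parameters are illustrative assumptions, not part of the study.

```python
# Hypothetical sketch: operationalizing the error-communication strategy
# preferred by the study's respondents (apology + external attribution
# + transparent, detailed explanation). Names and wording are assumptions.

def build_error_response(error_cause: str, correction: str) -> str:
    """Compose a post-hallucination message that apologizes, attributes
    the error externally, and offers a transparent correction."""
    return (
        "I'm sorry, my previous answer was incorrect. "       # apology
        f"The mistake came from {error_cause}. "              # external attribution
        f"Here is what I can verify instead: {correction}"    # transparent explanation
    )

if __name__ == "__main__":
    print(build_error_response(
        error_cause="outdated information in my source material",
        correction="the figure was revised in the most recent report.",
    ))
```

A sketch like this only illustrates message framing; whether such a template actually preserves user trust would depend on the balance of transparency and user experience that the paper emphasizes.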