Generative Artificial Intelligence and Misinformation Acceptance: An Experimental Test of the Effect of Forewarning About Artificial Intelligence Hallucination.
Authors: Yoori Hwang, Se-Hoon Jeong
DOI: 10.1089/cyber.2024.0407 (https://doi.org/10.1089/cyber.2024.0407)
Journal: Cyberpsychology, Behavior and Social Networking (Q1, Psychology, Social; Impact Factor 4.2)
Publication Date: 2025-02-24
Publication Type: Journal Article
Citations: 0
Abstract
Generative artificial intelligence (AI) tools can produce statements that are seemingly plausible but factually incorrect, a phenomenon referred to as AI hallucination, which can contribute to the generation and dissemination of misinformation. The present study therefore examines whether forewarning about AI hallucination can reduce individuals' acceptance of AI-generated misinformation. An online experiment with 208 Korean adults demonstrated that AI hallucination forewarning reduced misinformation acceptance (p = 0.001, Cohen's d = 0.45), whereas forewarning did not reduce acceptance of true information (p = 0.91). In addition, the effect of AI hallucination forewarning on misinformation acceptance was moderated by preference for effortful thinking (p < 0.01), such that forewarning decreased misinformation acceptance when preference for effortful thinking was high (vs. low).
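The abstract reports its effect size as Cohen's d, the standardized mean difference between two independent groups (mean difference divided by the pooled standard deviation). A minimal sketch of that computation, using made-up acceptance ratings purely for illustration (not the study's data):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (Bessel-corrected, n - 1 in the denominator)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical misinformation-acceptance ratings:
# a control group vs. a forewarned group.
control = [4.0, 3.5, 4.5, 3.8, 4.2]
forewarned = [3.2, 3.0, 3.6, 3.4, 2.9]
print(round(cohens_d(control, forewarned), 2))
```

By the conventional benchmarks, the study's d = 0.45 falls between a "small" (0.2) and "medium" (0.5) effect.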
About the Journal
Cyberpsychology, Behavior, and Social Networking is a leading peer-reviewed journal that is recognized for its authoritative research on the social, behavioral, and psychological impacts of contemporary social networking practices. The journal covers a wide range of platforms, including Twitter, Facebook, internet gaming, and e-commerce, and examines how these digital environments shape human interaction and societal norms.
For over two decades, this journal has been a pioneering voice in the exploration of social networking and virtual reality, establishing itself as an indispensable resource for professionals and academics in the field. It is particularly celebrated for its swift dissemination of findings through rapid communication articles, alongside comprehensive, in-depth studies that delve into the multifaceted effects of interactive technologies on both individual behavior and broader societal trends.
The journal's scope encompasses the full spectrum of impacts—highlighting not only the potential benefits but also the challenges that arise as a result of these technologies. By providing a platform for rigorous research and critical discussions, it fosters a deeper understanding of the complex interplay between technology and human behavior.