Do people trust humans more than ChatGPT?

IF 1.6 · JCR Q2 (Economics) · CAS Tier 3 · Journal of Behavioral and Experimental Economics · Pub Date: 2024-05-31 · DOI: 10.1016/j.socec.2024.102239
Joy Buchanan, William Hickman
Citations: 0

Abstract


We explore whether people trust the accuracy of statements produced by large language models (LLMs) versus those written by humans. While LLMs have showcased impressive capabilities in generating text, concerns have been raised regarding the potential for misinformation, bias, or false responses. In this experiment, participants rate the accuracy of statements under different information conditions. Participants who are not explicitly informed of authorship tend to trust statements they believe are human-written more than those attributed to ChatGPT. However, when informed about authorship, participants show equal skepticism towards both human and AI writers. Informed participants are, overall, more likely to choose costly fact-checking. These outcomes suggest that trust in AI-generated content is context-dependent.

Source journal metrics:
CiteScore: 2.60
Self-citation rate: 12.50%
Articles per year: 113
Review time: 83 days
About the journal: The Journal of Behavioral and Experimental Economics (formerly the Journal of Socio-Economics) welcomes submissions that deal with various economic topics but also involve issues that are related to other social sciences, especially psychology, or use experimental methods of inquiry. Thus, contributions in behavioral economics, experimental economics, economic psychology, and judgment and decision making are especially welcome. The journal is open to different research methodologies, as long as they are relevant to the topic and employed rigorously. Possible methodologies include, for example, experiments, surveys, empirical work, theoretical models, meta-analyses, case studies, and simulation-based analyses. Literature reviews that integrate findings from many studies are also welcome, but they should synthesize the literature in a useful manner and provide substantial contribution beyond what the reader could get by simply reading the abstracts of the cited papers. In empirical work, it is important that the results are not only statistically significant but also economically significant. A high contribution-to-length ratio is expected from published articles and therefore papers should not be unnecessarily long, and short articles are welcome. Articles should be written in a manner that is intelligible to our generalist readership. Book reviews are generally solicited but occasionally unsolicited reviews will also be published. Contact the Book Review Editor for related inquiries.
Latest articles in this journal:
Privacy during pandemics: Attitudes to public use of personal data
Understanding inconsistencies in risk attitude elicitation games: Evidence from smallholder farmers in five African countries
Inflation expectations in the wake of the war in Ukraine
Asking for a friend: Reminders and incentives for crowdfunding college savings
'Update Bias': Manipulating past information based on the existing circumstances