Corrupted by Algorithms? How AI-Generated and Human-Written Advice Shape (Dis)Honesty

IF 3.8 | Region 2 (Economics) | Q1 ECONOMICS | Economic Journal | Pub Date: 2023-09-11 | DOI: 10.1093/ej/uead056
Margarita Leib, Nils Köbis, Rainer Michael Rilke, Marloes Hagens, Bernd Irlenbusch
{"title":"被算法破坏了?人工智能和人类书面建议如何塑造诚实","authors":"Margarita Leib, Nils Köbis, Rainer Michael Rilke, Marloes Hagens, Bernd Irlenbusch","doi":"10.1093/ej/uead056","DOIUrl":null,"url":null,"abstract":"Abstract Artificial Intelligence (AI) increasingly becomes an indispensable advisor. New ethical concerns arise if AI persuades people to behave dishonestly. In an experiment, we study how AI advice (generated by a Natural-Language-Processing algorithm) affects (dis)honesty, compare it to equivalent human advice, and test whether transparency about advice source matters. We find that dishonesty-promoting advice increases dishonesty, whereas honesty-promoting advice does not increase honesty. This is the case for both AI- and human advice. Algorithmic transparency, a commonly proposed policy to mitigate AI risks, does not affect behaviour. The findings mark the first steps towards managing AI advice responsibly.","PeriodicalId":48448,"journal":{"name":"Economic Journal","volume":"20 1","pages":"0"},"PeriodicalIF":3.8000,"publicationDate":"2023-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Corrupted by Algorithms? How AI-Generated and Human-Written Advice Shape (DIS)Honesty\",\"authors\":\"Margarita Leib, Nils Köbis, Rainer Michael Rilke, Marloes Hagens, Bernd Irlenbusch\",\"doi\":\"10.1093/ej/uead056\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Artificial Intelligence (AI) increasingly becomes an indispensable advisor. New ethical concerns arise if AI persuades people to behave dishonestly. In an experiment, we study how AI advice (generated by a Natural-Language-Processing algorithm) affects (dis)honesty, compare it to equivalent human advice, and test whether transparency about advice source matters. We find that dishonesty-promoting advice increases dishonesty, whereas honesty-promoting advice does not increase honesty. This is the case for both AI- and human advice. Algorithmic transparency, a commonly proposed policy to mitigate AI risks, does not affect behaviour. The findings mark the first steps towards managing AI advice responsibly.\",\"PeriodicalId\":48448,\"journal\":{\"name\":\"Economic Journal\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2023-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Economic Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/ej/uead056\",\"RegionNum\":2,\"RegionCategory\":\"经济学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ECONOMICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Economic Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/ej/uead056","RegionNum":2,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ECONOMICS","Score":null,"Total":0}
Citations: 0

Abstract

Artificial Intelligence (AI) increasingly becomes an indispensable advisor. New ethical concerns arise if AI persuades people to behave dishonestly. In an experiment, we study how AI advice (generated by a Natural-Language-Processing algorithm) affects (dis)honesty, compare it to equivalent human advice, and test whether transparency about advice source matters. We find that dishonesty-promoting advice increases dishonesty, whereas honesty-promoting advice does not increase honesty. This is the case for both AI- and human advice. Algorithmic transparency, a commonly proposed policy to mitigate AI risks, does not affect behaviour. The findings mark the first steps towards managing AI advice responsibly.
Source journal
Economic Journal
CiteScore: 6.60
Self-citation rate: 3.10%
Articles published: 82
Journal description: The Economic Journal is the Royal Economic Society's flagship title, and is one of the founding journals of modern economics. Over the past 125 years the journal has provided a platform for high-quality and imaginative economic research, earning a worldwide reputation for excellence as a general journal publishing papers in all fields of economics for a broad international readership. It is invaluable to anyone with an active interest in economic issues and is a key source for professional economists in higher education, business, government and the financial sector who want to keep abreast of current thinking in economics.
Latest articles from this journal
Expectation Formation with Correlated Variables
Data-Driven Envelopment with Privacy-Policy Tying
Commuting for crime
Radicalisation
Macroevolutionary Origins of Comparative Development