Do citizens trust trustworthy artificial intelligence? Experimental evidence on the limits of ethical AI measures in government

IF 7.8 · Tier 1 (Management) · JCR Q1 · Information Science & Library Science · Government Information Quarterly · Pub Date: 2023-10-01 · DOI: 10.1016/j.giq.2023.101834
Bjorn Kleizen , Wouter Van Dooren , Koen Verhoest , Evrim Tan
Government Information Quarterly, Volume 40, Issue 4, Article 101834. Available at: https://www.sciencedirect.com/science/article/pii/S0740624X23000345
Citations: 0

Abstract

This study examines the impact of ethical AI information on citizens' trust in, and policy support for, governmental AI projects. Unlike previous work on direct users of AI, this study focuses on the general public. Two online survey experiments presented participants with information on six types of ethical AI measures: legal compliance, ethics-by-design measures, data-gathering limitations, human-in-the-loop, non-discrimination, and technical robustness. Results reveal that general ethical AI information has little to no effect on citizens' trust, perceived trustworthiness, or policy support. Prior attitudes and experiences, including privacy concerns, trust in government, and trust in AI, are instead strong predictors. These findings suggest that short-term communication efforts on ethical AI practices have minimal impact, and that a longer-term, more comprehensive approach, one that addresses citizens' underlying concerns and experiences, is necessary to build trust in governmental AI projects. As governments' use of AI becomes more ubiquitous, understanding citizen responses is crucial for fostering trust, perceived trustworthiness, and policy support for AI-based policies and initiatives.

Government Information Quarterly
CiteScore: 15.70 · Self-citation rate: 16.70% · Annual articles: 106
About the journal: Government Information Quarterly (GIQ) delves into the convergence of policy, information technology, government, and the public. It explores the impact of policies on government information flows, the role of technology in innovative government services, and the dynamic between citizens and governing bodies in the digital age. GIQ serves as a premier journal, disseminating high-quality research and insights that bridge the realms of policy, information technology, government, and public engagement.
Latest articles in this journal:
- Best practices in e-government communication: Lessons from the local governments' use of official Facebook pages
- The haves and the have nots: Civic technologies and the pathways to government responsiveness
- Unveiling civil servants' preferences: Human-machine matching vs. regulating algorithms in algorithmic decision-making: Insights from a survey experiment
- Which data should be publicly accessible? Dispatches from public managers
- Artificial intelligence governance: Understanding how public organizations implement it