The role of artificial intelligence in disinformation

Data & Policy · Impact Factor 1.8 · Q3 (Public Administration) · Published: 2021-11-25 · DOI: 10.1017/dap.2021.20
Noémi Bontridder, Y. Poullet
{"title":"The role of artificial intelligence in disinformation","authors":"Noémi Bontridder, Y. Poullet","doi":"10.1017/dap.2021.20","DOIUrl":null,"url":null,"abstract":"Abstract Artificial intelligence (AI) systems are playing an overarching role in the disinformation phenomenon our world is currently facing. Such systems boost the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to a targeted audience and at scale by malicious stakeholders. This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are developed to detect and moderate disinformation online. Such systems do not escape from ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending co-regulation, the European Union (EU) is now heading toward descending co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audit for very large online platforms’ recommender systems and content moderation. While with this proposal, the Commission focusses on the regulation of content considered as problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the business model of the web that is based on advertising revenues, and that adapting this model would reduce the problem considerably. We also observe that while AI systems are inappropriate to moderate disinformation content online, and even to detect such content, they may be more appropriate to counter the manipulation of the digital ecosystem.","PeriodicalId":93427,"journal":{"name":"Data & policy","volume":"3 1","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2021-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data & policy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/dap.2021.20","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PUBLIC ADMINISTRATION","Score":null,"Total":0}
Citations: 9

Abstract

Artificial intelligence (AI) systems are playing an overarching role in the disinformation phenomenon our world is currently facing. Such systems boost the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to a targeted audience and at scale by malicious stakeholders. This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are developed to detect and moderate disinformation online. Such systems do not escape from ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending co-regulation, the European Union (EU) is now heading toward descending co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audit for very large online platforms’ recommender systems and content moderation. While with this proposal, the Commission focusses on the regulation of content considered as problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the business model of the web that is based on advertising revenues, and that adapting this model would reduce the problem considerably. We also observe that while AI systems are inappropriate to moderate disinformation content online, and even to detect such content, they may be more appropriate to counter the manipulation of the digital ecosystem.
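
For readers unfamiliar with what the abstract's "AI systems developed to detect disinformation" can look like in practice, the sketch below is a deliberately minimal, hypothetical illustration of a text classifier of that kind. The toy corpus, labels, and model choice are invented for this example and are not taken from the paper; the sketch also hints at the limitation the authors stress, namely that a model trained on surface textual features does not assess truthfulness.

```python
# Purely illustrative sketch of an AI-based disinformation detector of the
# general kind the abstract alludes to. Data, labels, and model choice are
# hypothetical and NOT drawn from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: 1 = flagged as disinformation, 0 = not flagged (invented examples).
texts = [
    "Miracle cure suppressed by governments, share before it is deleted!",
    "The central bank raised interest rates by 0.25 percentage points today.",
    "Secret lab leak confirmed by anonymous insider, mainstream media silent.",
    "The city council approved the new public transport budget on Tuesday.",
]
labels = [1, 0, 1, 0]

# TF-IDF bag-of-words features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new headline. The classifier only reacts to wording patterns, not to
# whether the claim is true, which is precisely why such systems raise the
# freedom-of-expression concerns discussed in the paper.
print(model.predict_proba(["Shocking truth they don't want you to know"])[0])
```
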
Source journal: Data & Policy — CiteScore: 3.10 · Self-citation rate: 0.00% · Articles published: 0 · Review time: 12 weeks