Governing Artificial Intelligence to benefit the UN Sustainable Development Goals

J. Truby
{"title":"治理人工智能,助力联合国可持续发展目标","authors":"J. Truby","doi":"10.1002/sd.2048","DOIUrl":null,"url":null,"abstract":"Big Tech's unregulated roll-out out of experimental AI poses risks to the achievement of the UN Sustainable Development Goals (SDGs), with particular vulnerability for developing countries. The goal of financial inclusion is threatened by the imperfect and ungoverned design and implementation of AI decision-making software making important financial decisions affecting customers. Automated decision-making algorithms have displayed evidence of bias, lack ethical governance, and limit transparency in the basis for their decisions, causing unfair outcomes and amplify unequal access to finance. Poverty reduction and sustainable development targets are risked by Big Tech's potential exploitation of developing countries by using AI to harvest data and profits. Stakeholder progress toward preventing financial crime and corruption is further threatened by potential misuse of AI. In the light of such risks, Big Tech's unscrupulous history means it cannot be trusted to operate without regulatory oversight. The article proposes effective pre-emptive regulatory options to minimize scenarios of AI damaging the SDGs. It explores internationally accepted principles of AI governance, and argues for their implementation as regulatory requirements governing AI developers and coders, with compliance verified through algorithmic auditing. Furthermore, it argues that AI governance frameworks must require a benefit to the SDGs. The article argues that proactively predicting such problems can enable continued AI innovation through well-designed regulations adhering to international principles. It highlights risks of unregulated AI causing harm to human interests, where a public and regulatory backlash may result in over-regulation that could damage the otherwise beneficial development of AI.","PeriodicalId":11797,"journal":{"name":"ERN: Regulation (IO) (Topic)","volume":"114 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"56","resultStr":"{\"title\":\"Governing Artificial Intelligence to benefit the UN Sustainable Development Goals\",\"authors\":\"J. Truby\",\"doi\":\"10.1002/sd.2048\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Big Tech's unregulated roll-out out of experimental AI poses risks to the achievement of the UN Sustainable Development Goals (SDGs), with particular vulnerability for developing countries. The goal of financial inclusion is threatened by the imperfect and ungoverned design and implementation of AI decision-making software making important financial decisions affecting customers. Automated decision-making algorithms have displayed evidence of bias, lack ethical governance, and limit transparency in the basis for their decisions, causing unfair outcomes and amplify unequal access to finance. Poverty reduction and sustainable development targets are risked by Big Tech's potential exploitation of developing countries by using AI to harvest data and profits. Stakeholder progress toward preventing financial crime and corruption is further threatened by potential misuse of AI. In the light of such risks, Big Tech's unscrupulous history means it cannot be trusted to operate without regulatory oversight. The article proposes effective pre-emptive regulatory options to minimize scenarios of AI damaging the SDGs. 
It explores internationally accepted principles of AI governance, and argues for their implementation as regulatory requirements governing AI developers and coders, with compliance verified through algorithmic auditing. Furthermore, it argues that AI governance frameworks must require a benefit to the SDGs. The article argues that proactively predicting such problems can enable continued AI innovation through well-designed regulations adhering to international principles. It highlights risks of unregulated AI causing harm to human interests, where a public and regulatory backlash may result in over-regulation that could damage the otherwise beneficial development of AI.\",\"PeriodicalId\":11797,\"journal\":{\"name\":\"ERN: Regulation (IO) (Topic)\",\"volume\":\"114 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"56\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ERN: Regulation (IO) (Topic)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/sd.2048\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ERN: Regulation (IO) (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/sd.2048","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 56

Abstract

Big Tech's unregulated roll-out of experimental AI poses risks to the achievement of the UN Sustainable Development Goals (SDGs), with developing countries particularly vulnerable. The goal of financial inclusion is threatened by the imperfect and ungoverned design and implementation of AI decision-making software that makes important financial decisions affecting customers. Automated decision-making algorithms have displayed evidence of bias, lack ethical governance, and offer limited transparency about the basis for their decisions, causing unfair outcomes and amplifying unequal access to finance. Poverty reduction and sustainable development targets are put at risk by Big Tech's potential exploitation of developing countries through the use of AI to harvest data and profits. Stakeholder progress toward preventing financial crime and corruption is further threatened by potential misuse of AI. In the light of such risks, Big Tech's unscrupulous history means it cannot be trusted to operate without regulatory oversight. The article proposes effective pre-emptive regulatory options to minimize scenarios of AI damaging the SDGs. It explores internationally accepted principles of AI governance and argues for their implementation as regulatory requirements governing AI developers and coders, with compliance verified through algorithmic auditing. Furthermore, it argues that AI governance frameworks must require a benefit to the SDGs. The article argues that proactively anticipating such problems can enable continued AI innovation through well-designed regulations adhering to international principles. It highlights the risk of unregulated AI causing harm to human interests, where a public and regulatory backlash may result in over-regulation that damages the otherwise beneficial development of AI.
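
The abstract refers to verifying compliance through algorithmic auditing without specifying a mechanism. As a minimal, hypothetical sketch of one such audit check (the loan-approval data, group labels, and the 0.8 "four-fifths" threshold are illustrative assumptions, not drawn from the article), an auditor could compare approval rates across applicant groups and flag large disparities for review:

```python
# Illustrative sketch only: one possible fairness check an algorithmic audit
# might run. The dataset, group names, and 0.8 threshold are hypothetical.
from collections import defaultdict

# Hypothetical audit log of automated lending decisions: (applicant_group, approved).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Return the approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate."""
    return min(rates.values()) / max(rates.values())

rates = approval_rates(decisions)
ratio = disparate_impact(rates)
print(f"approval rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
print("flag for regulatory review" if ratio < 0.8 else "within assumed threshold")
```

A check like this captures only one narrow notion of fairness; an audit regime of the kind the article advocates would combine several such metrics with documentation and transparency requirements.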