Administration by algorithm: A risk management framework

Inf. Polity · Pub Date: 2020-12-04 · DOI: 10.3233/ip-200249
F. Bannister, R. Connolly
{"title":"Administration by algorithm: A risk management framework","authors":"F. Bannister, R. Connolly","doi":"10.3233/ip-200249","DOIUrl":null,"url":null,"abstract":"Algorithmic decision-making is neither a recent phenomenon nor one necessarily associated with artificial intelligence (AI), though advances in AI are increasingly resulting in what were heretofore human decisions being taken over by, or becoming dependent on, algorithms and technologies like machine learning. Such developments promise many potential benefits, but are not without certain risks. These risks are not always well understood. It is not just a question of machines making mistakes; it is the embedding of values, biases and prejudices in software which can discriminate against both individuals and groups in society. Such biases are often hard either to detect or prove, particularly where there are problems with transparency and accountability and where such systems are outsourced to the private sector. Consequently, being able to detect and categorise these risks is essential in order to develop a systematic and calibrated response. This paper proposes a simple taxonomy of decision-making algorithms in the public sector and uses this to build a risk management framework with a number of components including an accountability structure and regulatory governance. This framework is designed to assist scholars and practitioners interested in ensuring structured accountability and legal regulation of AI in the public sphere.","PeriodicalId":418875,"journal":{"name":"Inf. Polity","volume":"C-19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Inf. Polity","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/ip-200249","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 18

Abstract

Algorithmic decision-making is neither a recent phenomenon nor one necessarily associated with artificial intelligence (AI), though advances in AI are increasingly resulting in what were heretofore human decisions being taken over by, or becoming dependent on, algorithms and technologies like machine learning. Such developments promise many potential benefits, but are not without certain risks. These risks are not always well understood. It is not just a question of machines making mistakes; it is the embedding of values, biases and prejudices in software which can discriminate against both individuals and groups in society. Such biases are often hard either to detect or prove, particularly where there are problems with transparency and accountability and where such systems are outsourced to the private sector. Consequently, being able to detect and categorise these risks is essential in order to develop a systematic and calibrated response. This paper proposes a simple taxonomy of decision-making algorithms in the public sector and uses this to build a risk management framework with a number of components including an accountability structure and regulatory governance. This framework is designed to assist scholars and practitioners interested in ensuring structured accountability and legal regulation of AI in the public sphere.
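
The abstract does not reproduce the paper's actual taxonomy or framework, but the general idea it describes, categorising public-sector decision algorithms and attaching calibrated controls to each risk tier, can be sketched in a few lines of Python. Everything below (the categories, risk tiers, and control lists) is a hypothetical placeholder for illustration, not the taxonomy proposed by Bannister and Connolly.

# Hypothetical illustration only: the categories, risk tiers and controls below
# are invented placeholders and do NOT reproduce the taxonomy in the paper.
from dataclasses import dataclass
from enum import Enum


class DecisionRole(Enum):
    """How the algorithm participates in the decision (hypothetical categories)."""
    ADVISORY = "advisory"    # human decides, algorithm merely informs
    ASSISTED = "assisted"    # algorithm recommends, human can override
    AUTOMATED = "automated"  # algorithm decides with no routine human review


@dataclass
class PublicSectorAlgorithm:
    name: str
    role: DecisionRole
    affects_individual_rights: bool   # e.g. benefits, sentencing, licensing
    outsourced_to_private_sector: bool


def risk_tier(system: PublicSectorAlgorithm) -> str:
    """Map a system to a coarse risk tier (illustrative rules, not the paper's)."""
    if system.role is DecisionRole.AUTOMATED and system.affects_individual_rights:
        return "high"
    if system.role is not DecisionRole.ADVISORY and (
        system.affects_individual_rights or system.outsourced_to_private_sector
    ):
        return "medium"
    return "low"


# Example controls per tier (accountability structure and governance placeholders).
REQUIRED_CONTROLS = {
    "high": ["named accountable official", "independent audit", "appeal route", "bias testing"],
    "medium": ["documented purpose", "periodic review", "transparency notice"],
    "low": ["register entry"],
}


if __name__ == "__main__":
    system = PublicSectorAlgorithm(
        name="benefit eligibility scorer",
        role=DecisionRole.AUTOMATED,
        affects_individual_rights=True,
        outsourced_to_private_sector=True,
    )
    tier = risk_tier(system)
    print(tier, REQUIRED_CONTROLS[tier])

The point of such a structure is that the response is calibrated: the more autonomous the system and the greater its impact on individual rights, the heavier the accountability and regulatory obligations attached to it.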