Integral system safety for machine learning in the public sector: An empirical account

Government Information Quarterly · Impact Factor 7.8 · JCR Q1 (Information Science & Library Science) · CAS Tier 1 (Management) · Published: 2024-08-23 · DOI: 10.1016/j.giq.2024.101963
J. Delfos (Jeroen), A.M.G. Zuiderwijk (Anneke), S. van Cranenburgh (Sander), C.G. Chorus (Caspar), R.I.J. Dobbe (Roel)
{"title":"Integral system safety for machine learning in the public sector: An empirical account","authors":"J. Delfos (Jeroen),&nbsp;A.M.G. Zuiderwijk (Anneke),&nbsp;S. van Cranenburgh (Sander),&nbsp;C.G. Chorus (Caspar),&nbsp;R.I.J. Dobbe (Roel)","doi":"10.1016/j.giq.2024.101963","DOIUrl":null,"url":null,"abstract":"<div><p>This paper introduces systems theory and system safety concepts to ongoing academic debates about the safety of Machine Learning (ML) systems in the public sector. In particular, we analyze the risk factors of ML systems and their respective institutional context, which impact the ability to control such systems. We use interview data to abductively show what risk factors of such systems are present in public professionals' perceptions and what factors are expected based on systems theory but are missing. Based on the hypothesis that ML systems are best addressed with a systems theory lens, we argue that the missing factors deserve greater attention in ongoing efforts to address ML systems safety. These factors include the explication of safety goals and constraints, the inclusion of systemic factors in system design, the development of safety control structures, and the tendency of ML systems to migrate towards higher risk. Our observations support the hypothesis that ML systems can be best regarded through a systems theory lens. Therefore, we conclude that system safety concepts can be useful aids for policymakers who aim to improve ML system safety.</p></div>","PeriodicalId":48258,"journal":{"name":"Government Information Quarterly","volume":"41 3","pages":"Article 101963"},"PeriodicalIF":7.8000,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0740624X24000558/pdfft?md5=535820313d99de364eb4196e987f032a&pid=1-s2.0-S0740624X24000558-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Government Information Quarterly","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0740624X24000558","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
引用次数: 0

Abstract

This paper introduces systems theory and system safety concepts to ongoing academic debates about the safety of Machine Learning (ML) systems in the public sector. In particular, we analyze the risk factors of ML systems and their respective institutional context, which affect the ability to control such systems. We use interview data to show abductively which risk factors of such systems are present in public professionals' perceptions and which factors are expected based on systems theory but are missing. Based on the hypothesis that ML systems are best addressed through a systems theory lens, we argue that the missing factors deserve greater attention in ongoing efforts to address ML system safety. These factors include the explication of safety goals and constraints, the inclusion of systemic factors in system design, the development of safety control structures, and the tendency of ML systems to migrate towards higher risk. Our observations support the hypothesis that ML systems are best regarded through a systems theory lens. Therefore, we conclude that system safety concepts can be useful aids for policymakers who aim to improve ML system safety.

Source journal
Government Information Quarterly (Information Science & Library Science)
CiteScore: 15.70
Self-citation rate: 16.70%
Annual publications: 106
Journal introduction: Government Information Quarterly (GIQ) delves into the convergence of policy, information technology, government, and the public. It explores the impact of policies on government information flows, the role of technology in innovative government services, and the dynamic between citizens and governing bodies in the digital age. GIQ serves as a premier journal, disseminating high-quality research and insights that bridge the realms of policy, information technology, government, and public engagement.
Latest articles in this journal

A more secure framework for open government data sharing based on federated learning
Does trust in government moderate the perception towards deepfakes? Comparative perspectives from Asia on the risks of AI and misinformation for democracy
Open government data and self-efficacy: The empirical evidence of micro foundation via survey experiments
Transforming towards inclusion-by-design: Information system design principles shaping data-driven financial inclusiveness
Bridging the gap: Towards an expanded toolkit for AI-driven decision-making in the public sector