J. Delfos (Jeroen), A.M.G. Zuiderwijk (Anneke), S. van Cranenburgh (Sander), C.G. Chorus (Caspar), R.I.J. Dobbe (Roel)
Journal: Government Information Quarterly, vol. 41, issue 3, Article 101963
DOI: 10.1016/j.giq.2024.101963
Published: 2024-08-23 (Journal Article)
PDF: https://www.sciencedirect.com/science/article/pii/S0740624X24000558/pdfft?md5=535820313d99de364eb4196e987f032a&pid=1-s2.0-S0740624X24000558-main.pdf
Impact factor: 7.8; JCR quartile: Q1 (Information Science & Library Science)
Citations: 0
Integral system safety for machine learning in the public sector: An empirical account
This paper introduces systems theory and system safety concepts to ongoing academic debates about the safety of Machine Learning (ML) systems in the public sector. In particular, we analyze the risk factors of ML systems and their respective institutional context, which impact the ability to control such systems. We use interview data to abductively show what risk factors of such systems are present in public professionals' perceptions and what factors are expected based on systems theory but are missing. Based on the hypothesis that ML systems are best addressed with a systems theory lens, we argue that the missing factors deserve greater attention in ongoing efforts to address ML systems safety. These factors include the explication of safety goals and constraints, the inclusion of systemic factors in system design, the development of safety control structures, and the tendency of ML systems to migrate towards higher risk. Our observations support the hypothesis that ML systems can be best regarded through a systems theory lens. Therefore, we conclude that system safety concepts can be useful aids for policymakers who aim to improve ML system safety.
Journal overview:
Government Information Quarterly (GIQ) delves into the convergence of policy, information technology, government, and the public. It explores the impact of policies on government information flows, the role of technology in innovative government services, and the dynamic between citizens and governing bodies in the digital age. GIQ serves as a premier journal, disseminating high-quality research and insights that bridge the realms of policy, information technology, government, and public engagement.