Executive-centered AI? Designing predictive systems for the public sector.

Social Studies of Science, 738-760 · Pub Date: 2023-10-01 · Epub Date: 2023-05-08 · DOI: 10.1177/03063127231163756 · IF 2.9 · Q1 (History & Philosophy of Science)
Anne Henriksen, Lasse Blond
Citations: 1

Abstract

Recent policies and research articles call for turning AI into a form of IA ('intelligence augmentation'), by envisioning systems that center on and enhance humans. Based on a field study at an AI company, this article studies how AI is performed as developers enact two predictive systems along with stakeholders in public sector accounting and public sector healthcare. Inspired by STS theories about values in design, we analyze our empirical data focusing especially on how objectives, structured performances, and divisions of labor are built into the two systems and at whose expense. Our findings reveal that the development of the two AI systems is informed by politically motivated managerial interests in cost-efficiency. This results in AI systems that are (1) designed as managerial tools meant to enable efficiency improvements and cost reductions, and (2) enforced on professionals on the 'shop floor' in a top-down manner. Based on our findings and a discussion drawing on literature on the original visions of human-centered systems design from the 1960s, we argue that turning AI into IA seems dubious, and ask what human-centered AI really means and whether it remains an ideal not easily realizable in practice. More work should be done to rethink human-machine relationships in the age of big data and AI, in this way making the call for ethical and responsible AI more genuine and trustworthy.

Source Journal

Social Studies of Science
Category: Management Science — History & Philosophy of Science
CiteScore: 5.70
Self-citation rate: 6.70%
Articles per year: 45
Review time: >12 weeks
About the journal: Social Studies of Science is an international peer-reviewed journal that encourages submissions of original research on science, technology, and medicine. The journal is multidisciplinary, publishing work from a range of fields including political science, sociology, economics, history, philosophy, psychology, social anthropology, and legal and educational disciplines. This journal is a member of the Committee on Publication Ethics (COPE).