{"title":"Executive-centered AI? Designing predictive systems for the public sector.","authors":"Anne Henriksen, Lasse Blond","doi":"10.1177/03063127231163756","DOIUrl":null,"url":null,"abstract":"<p><p>Recent policies and research articles call for turning AI into a form of IA ('intelligence augmentation'), by envisioning systems that center on and enhance humans. Based on a field study at an AI company, this article studies how AI is performed as developers enact two predictive systems along with stakeholders in public sector accounting and public sector healthcare. Inspired by STS theories about values in design, we analyze our empirical data focusing especially on how objectives, structured performances, and divisions of labor are built into the two systems and at whose expense. Our findings reveal that the development of the two AI systems is informed by politically motivated managerial interests in cost-efficiency. This results in AI systems that are (1) designed as managerial tools meant to enable efficiency improvements and cost reductions, and (2) enforced on professionals on the 'shop floor' in a top-down manner. Based on our findings and a discussion drawing on literature on the original visions of human-centered systems design from the 1960s, we argue that turning AI into IA seems dubious, and ask what human-centered AI really means and whether it remains an ideal not easily realizable in practice. More work should be done to rethink human-machine relationships in the age of big data and AI, in this way making the call for ethical and responsible AI more genuine and trustworthy.</p>","PeriodicalId":51152,"journal":{"name":"Social Studies of Science","volume":" ","pages":"738-760"},"PeriodicalIF":2.9000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Social Studies of Science","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/03063127231163756","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/5/8 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"HISTORY & PHILOSOPHY OF SCIENCE","Score":null,"Total":0}
Citations: 1
Abstract
Recent policies and research articles call for turning AI into a form of IA ('intelligence augmentation'), by envisioning systems that center on and enhance humans. Based on a field study at an AI company, this article studies how AI is performed as developers enact two predictive systems along with stakeholders in public sector accounting and public sector healthcare. Inspired by STS theories about values in design, we analyze our empirical data focusing especially on how objectives, structured performances, and divisions of labor are built into the two systems and at whose expense. Our findings reveal that the development of the two AI systems is informed by politically motivated managerial interests in cost-efficiency. This results in AI systems that are (1) designed as managerial tools meant to enable efficiency improvements and cost reductions, and (2) enforced on professionals on the 'shop floor' in a top-down manner. Based on our findings and a discussion drawing on literature on the original visions of human-centered systems design from the 1960s, we argue that turning AI into IA seems dubious, and ask what human-centered AI really means and whether it remains an ideal not easily realizable in practice. More work should be done to rethink human-machine relationships in the age of big data and AI, in this way making the call for ethical and responsible AI more genuine and trustworthy.
Journal Introduction:
Social Studies of Science is an international peer-reviewed journal that encourages submissions of original research on science, technology, and medicine. The journal is multidisciplinary, publishing work from a range of fields including political science, sociology, economics, history, philosophy, psychology, social anthropology, and legal and educational disciplines. This journal is a member of the Committee on Publication Ethics (COPE).