Title: AI Errors in Health?
Authors: Veronica Barassi, Rahi Patra
Journal: Morals & Machines
DOI: 10.5771/2747-5174-2022-1-34 (https://doi.org/10.5771/2747-5174-2022-1-34)
Abstract
The ever-greater use of AI-driven technologies in the health sector raises moral questions about what it means for algorithms to misunderstand and mismeasure human health, and about how we as a society are making sense of AI errors in health. This article argues that AI errors in health confront us with the problem that our AI technologies do not grasp the full pluriverse of human experience and rely on data and measures with a long history of scientific bias. Yet, as this paper shows, contemporary public debate on the issue is very limited. Drawing on a discourse analysis of 520 European news media articles reporting on AI errors, the article argues that the 'media frame' on AI errors in health is often defined by a techno-solutionist perspective, and only rarely does it shed light on the relationship between AI technologies and scientific bias. Public awareness of the issue is nonetheless of central importance, because it shows us that rather than 'fixing' or 'finding solutions' for AI errors, we need to learn to coexist with the fact that technologies, because they are human-made, will always be inevitably biased.