Title: Contesting border artificial intelligence: Applying the guidance-ethics approach as a responsible design lens
Authors: Karolina La Fors, F. Meissner
Journal: Data & Policy
DOI: 10.1017/dap.2022.28 (https://doi.org/10.1017/dap.2022.28)
Publication date: 2022-10-24
Citations: 0
Abstract
Border artificial intelligence (AI)—biometrics-based AI systems used in border control contexts—is proliferating as a common tool in border securitization projects. Such systems classify some migrants as posing risks such as identity fraud, other forms of criminality, or terrorism. From a human rights perspective, using such risk framings for algorithmically facilitated evaluations of migrants’ biometrics systematically calls into question whether these kinds of systems can be built to be trustworthy for migrants. This article offers a thought experiment: we use a bottom-up responsible design lens—the guidance-ethics approach—to evaluate whether responsible, trustworthy border AI might constitute an oxymoron. The proposed European AI Act only limits the use of border AI systems by classifying them as high risk. In parallel with these AI regulatory developments, large-scale civic movements have emerged throughout Europe to ban the use of facial recognition technologies in public spaces in defense of EU citizens’ privacy. The fact that such systems remain acceptable for state use in evaluating migrants, we argue, leaves migrants’ lives insufficiently protected. This is due in part to regulations and ethical frameworks being top-down and technology-driven, focusing more on the safety of AI systems than on the safety of migrants. We conclude that bordering technologies developed from a responsible design angle would entail the development of entirely different technologies: ones that refrain from harmful sorting based on biometric identification and instead start from the premise that migration is not a societal problem.