{"title":"Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems","authors":"B. Wagner","doi":"10.1002/POI3.198","DOIUrl":null,"url":null,"abstract":"Automated decision making is becoming the norm across large parts of society, which raises \ninteresting liability challenges when human control over technical systems becomes increasingly \nlimited. This article defines \"quasi-automation\" as inclusion of humans as a basic rubber-stamping \nmechanism in an otherwise completely automated decision-making system. Three cases of quasi- \nautomation are examined, where human agency in decision making is currently debatable: self- \ndriving cars, border searches based on passenger name records, and content moderation on social \nmedia. While there are specific regulatory mechanisms for purely automated decision making, these \nregulatory mechanisms do not apply if human beings are (rubber-stamping) automated decisions. \nMore broadly, most regulatory mechanisms follow a pattern of binary liability in attempting to \nregulate human or machine agency, rather than looking to regulate both. This results in regulatory \ngray areas where the regulatory mechanisms do not apply, harming human rights by preventing \nmeaningful liability for socio-technical decision making. The article concludes by proposing criteria \nto ensure meaningful agency when humans are included in automated decision-making systems, \nand relates this to the ongoing debate on enabling human rights in Internet infrastructure.","PeriodicalId":46894,"journal":{"name":"Policy and Internet","volume":" ","pages":""},"PeriodicalIF":4.1000,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/POI3.198","citationCount":"61","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Policy and Internet","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1002/POI3.198","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
Citations: 61
Abstract
Automated decision making is becoming the norm across large parts of society, which raises interesting liability challenges when human control over technical systems becomes increasingly limited. This article defines "quasi-automation" as the inclusion of humans as a basic rubber-stamping mechanism in an otherwise completely automated decision-making system. Three cases of quasi-automation, in which human agency in decision making is currently debatable, are examined: self-driving cars, border searches based on passenger name records, and content moderation on social media. While there are specific regulatory mechanisms for purely automated decision making, these regulatory mechanisms do not apply if human beings are merely rubber-stamping automated decisions. More broadly, most regulatory mechanisms follow a pattern of binary liability, attempting to regulate either human or machine agency rather than both. This results in regulatory gray areas where the regulatory mechanisms do not apply, harming human rights by preventing meaningful liability for socio-technical decision making. The article concludes by proposing criteria to ensure meaningful agency when humans are included in automated decision-making systems, and relates this to the ongoing debate on enabling human rights in Internet infrastructure.
About the journal:
Understanding public policy in the age of the Internet requires understanding how individuals, organizations, governments and networks behave, and what motivates them in this new environment. Technological innovation and Internet-mediated interaction raise both challenges and opportunities for public policy: whether in areas that have already received much attention (e.g. digital divides, digital government, and privacy) or in newer areas, like regulation of data-intensive technologies and platforms, the rise of precarious labour, and regulatory responses to misinformation and hate speech. We welcome innovative research in areas where the Internet already impacts public policy, where it raises new challenges or dilemmas, or where it provides opportunities for policy that is smart and equitable. While we welcome perspectives from any academic discipline, we look particularly for insight that can feed into social science disciplines like political science, public administration, economics, sociology, and communication. We welcome articles that introduce methodological innovation, theoretical development, or rigorous data analysis concerning a particular question or problem of public policy.