Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems

IF 4.1 · CAS Tier 1 (Literature) · JCR Q1 (COMMUNICATION) · Policy and Internet · Pub Date: 2019-03-01 · DOI: 10.1002/POI3.198
B. Wagner
{"title":"撒谎,但无法控制?确保自动化决策系统中有意义的人的能动性","authors":"B. Wagner","doi":"10.1002/POI3.198","DOIUrl":null,"url":null,"abstract":"Automated decision making is becoming the norm across large parts of society, which raises \ninteresting liability challenges when human control over technical systems becomes increasingly \nlimited. This article defines \"quasi-automation\" as inclusion of humans as a basic rubber-stamping \nmechanism in an otherwise completely automated decision-making system. Three cases of quasi- \nautomation are examined, where human agency in decision making is currently debatable: self- \ndriving cars, border searches based on passenger name records, and content moderation on social \nmedia. While there are specific regulatory mechanisms for purely automated decision making, these \nregulatory mechanisms do not apply if human beings are (rubber-stamping) automated decisions. \nMore broadly, most regulatory mechanisms follow a pattern of binary liability in attempting to \nregulate human or machine agency, rather than looking to regulate both. This results in regulatory \ngray areas where the regulatory mechanisms do not apply, harming human rights by preventing \nmeaningful liability for socio-technical decision making. The article concludes by proposing criteria \nto ensure meaningful agency when humans are included in automated decision-making systems, \nand relates this to the ongoing debate on enabling human rights in Internet infrastructure.","PeriodicalId":46894,"journal":{"name":"Policy and Internet","volume":" ","pages":""},"PeriodicalIF":4.1000,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/POI3.198","citationCount":"61","resultStr":"{\"title\":\"Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems\",\"authors\":\"B. Wagner\",\"doi\":\"10.1002/POI3.198\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automated decision making is becoming the norm across large parts of society, which raises \\ninteresting liability challenges when human control over technical systems becomes increasingly \\nlimited. This article defines \\\"quasi-automation\\\" as inclusion of humans as a basic rubber-stamping \\nmechanism in an otherwise completely automated decision-making system. Three cases of quasi- \\nautomation are examined, where human agency in decision making is currently debatable: self- \\ndriving cars, border searches based on passenger name records, and content moderation on social \\nmedia. While there are specific regulatory mechanisms for purely automated decision making, these \\nregulatory mechanisms do not apply if human beings are (rubber-stamping) automated decisions. \\nMore broadly, most regulatory mechanisms follow a pattern of binary liability in attempting to \\nregulate human or machine agency, rather than looking to regulate both. This results in regulatory \\ngray areas where the regulatory mechanisms do not apply, harming human rights by preventing \\nmeaningful liability for socio-technical decision making. 
The article concludes by proposing criteria \\nto ensure meaningful agency when humans are included in automated decision-making systems, \\nand relates this to the ongoing debate on enabling human rights in Internet infrastructure.\",\"PeriodicalId\":46894,\"journal\":{\"name\":\"Policy and Internet\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":4.1000,\"publicationDate\":\"2019-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1002/POI3.198\",\"citationCount\":\"61\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Policy and Internet\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1002/POI3.198\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMMUNICATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Policy and Internet","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1002/POI3.198","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
Citations: 61

Abstract

Automated decision making is becoming the norm across large parts of society, which raises interesting liability challenges when human control over technical systems becomes increasingly limited. This article defines "quasi-automation" as the inclusion of humans as a basic rubber-stamping mechanism in an otherwise completely automated decision-making system. Three cases of quasi-automation are examined, where human agency in decision making is currently debatable: self-driving cars, border searches based on passenger name records, and content moderation on social media. While there are specific regulatory mechanisms for purely automated decision making, these regulatory mechanisms do not apply if human beings are merely rubber-stamping automated decisions. More broadly, most regulatory mechanisms follow a pattern of binary liability, attempting to regulate either human or machine agency rather than both. This results in regulatory gray areas where the regulatory mechanisms do not apply, harming human rights by preventing meaningful liability for socio-technical decision making. The article concludes by proposing criteria to ensure meaningful agency when humans are included in automated decision-making systems, and relates this to the ongoing debate on enabling human rights in Internet infrastructure.
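The distinction the abstract draws between rubber-stamping and meaningful agency can be made concrete with a small sketch. The Python below is not from the paper; it is a purely illustrative model in which a human "review" conducted without access to the underlying inputs, or in near-zero time, is flagged as quasi-automation. All names and thresholds (Decision, is_rubber_stamp, the 30-second floor) are hypothetical assumptions, not Wagner's criteria.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    machine_recommendation: str  # e.g. "flag for search"
    inputs_shown: bool           # did the reviewer see the underlying evidence?
    seconds_spent: float         # time the reviewer spent on this case
    overridden: bool             # did the reviewer change the machine's output?
    reason: str = ""             # recorded justification, if any

def is_rubber_stamp(d: Decision, min_review_seconds: float = 30.0) -> bool:
    """Illustrative heuristic: a 'review' without access to the inputs,
    or completed in near-zero time, is quasi-automation, not agency."""
    return (not d.inputs_shown) or d.seconds_spent < min_review_seconds

# Quasi-automation: the human confirms the machine in two seconds, blind.
stamp = Decision("case-123", "flag for search", inputs_shown=False,
                 seconds_spent=2.0, overridden=False)

# Closer to meaningful agency: evidence inspected, time taken, reason logged.
agency = Decision("case-456", "flag for search", inputs_shown=True,
                  seconds_spent=95.0, overridden=True,
                  reason="name match only; no corroborating travel pattern")

print(is_rubber_stamp(stamp))   # True  -> human-in-the-loop in name only
print(is_rubber_stamp(agency))  # False -> the human plausibly decided
```

Any real criteria for meaningful agency would of course be legal and organizational rather than programmatic; the sketch only makes the rubber-stamping pattern, and why it evades liability rules written for either pure humans or pure machines, easier to see.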
Source journal: Policy and Internet
CiteScore: 8.40
Self-citation rate: 10.20%
Articles published: 51
Journal description: Understanding public policy in the age of the Internet requires understanding how individuals, organizations, governments and networks behave, and what motivates them in this new environment. Technological innovation and internet-mediated interaction raise both challenges and opportunities for public policy: whether in areas that have received much work already (e.g. digital divides, digital government, and privacy) or newer areas, like regulation of data-intensive technologies and platforms, the rise of precarious labour, and regulatory responses to misinformation and hate speech. We welcome innovative research in areas where the Internet already impacts public policy, where it raises new challenges or dilemmas, or provides opportunities for policy that is smart and equitable. While we welcome perspectives from any academic discipline, we look particularly for insight that can feed into social science disciplines like political science, public administration, economics, sociology, and communication. We welcome articles that introduce methodological innovation, theoretical development, or rigorous data analysis concerning a particular question or problem of public policy.
Latest articles in this journal:
Effects of online citizen participation on legitimacy beliefs in local government. Evidence from a comparative study of online participation platforms in three German municipalities
"Highly nuanced policy is very difficult to apply at scale": Examining researcher account and content takedowns online
Special issue: The (international) politics of content takedowns: Theory, practice, ethics
Countering online terrorist content: A social regulation approach
Content takedowns and activist organizing: Impact of social media content moderation on activists and organizing