The Ethics of the Algorithm: Autonomous Systems and the Wrapper of Human Control

Richard Warner, R. Sloan
{"title":"The Ethics of the Algorithm: Autonomous Systems and the Wrapper of Human Control","authors":"Richard Warner, R. Sloan","doi":"10.2139/SSRN.3004016","DOIUrl":null,"url":null,"abstract":"David Mindell notes in Our Robots, Ourselves, “For any apparently autonomous system, we can always find the wrapper of human control that makes it useful and returns meaningful data. In the words of a recent report by the Defense Science Board, ‘there are no fully autonomous systems just as there are no fully autonomous soldiers, sailors, airmen or Marines.’” \nDesigning and using “the wrapper of human control” means making moral decisions — decisions about what ought to happen. The point is not new as the “soldiers, sailors, airmen or Marines” references shows. What is new is the rise of predictive analytics, the process of using large data sets in order to make predictions. \nPredictive analytics greatly exacerbates the long-standing problem about how to balance the benefits of data collection and analysis against the value of privacy, and its pervasive and its ever-increasing use of gives the tradeoff problems an urgency that can no longer be ignore. In tackling the tradeoff issues, it is not enough merely to address obviously invidious uses like a recent photo-editing app for photos of faces from a company called FaceApp. When users asked the app to increase the “hotness” of the photo, the app made skin tones lighter. We focus on widely accepted — or at least currently tolerated — uses of predictive analytics in credit rating, targeted advertising, navigation apps, search engine page ranking, and a variety of other areas. These uses yield considerable benefits, but they also impose significant costs through misclassification in the form of a large number of false positives and false negatives. Predictive analytics not only looks into your private life to construct its profiles of you, it often misrepresents who you are. \nHow should society respond? Our working assumption is that predictive analytics has significant benefits and should not be eliminated, and moreover, that it is now utterly politically infeasible to eliminate it. Thus, we propose making tradeoffs between the benefits and the costs by constraining the use and distribution of information. The constraints would have to apply across a wide range of complex situations. Is there an existing system that makes relevant tradeoffs by constraining the distribution and use of information across a highly varied range of contexts? Indeed, there is: informational norms. Informational norms are social norms that constrain not only the collection, but also the use and distribution of information. We focus on the use and distribution constraints. Those constraints establish an appropriately selective flow of information in a wide range of cases. \nWe contend they provide an essential “wrapper of human control” for predictive analytics. The obvious objection is that the relevant norms do not exist. Technological-driven economic, social, and political developments have far outpaced the slow evolution of norms. New norms will nonetheless evolve and existing norms will adapt to condone surveillance. 
Reasonable public policy requires controlling the evolution and adaption of norms to reach desirable outcomes.","PeriodicalId":81052,"journal":{"name":"Cumberland law review","volume":"48 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2017-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2139/SSRN.3004016","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cumberland law review","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/SSRN.3004016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

David Mindell notes in Our Robots, Ourselves, “For any apparently autonomous system, we can always find the wrapper of human control that makes it useful and returns meaningful data. In the words of a recent report by the Defense Science Board, ‘there are no fully autonomous systems just as there are no fully autonomous soldiers, sailors, airmen or Marines.’”

Designing and using “the wrapper of human control” means making moral decisions — decisions about what ought to happen. The point is not new, as the “soldiers, sailors, airmen or Marines” reference shows. What is new is the rise of predictive analytics, the process of using large data sets to make predictions.

Predictive analytics greatly exacerbates the long-standing problem of how to balance the benefits of data collection and analysis against the value of privacy, and its pervasive and ever-increasing use gives the tradeoff problem an urgency that can no longer be ignored. In tackling the tradeoff issues, it is not enough merely to address obviously invidious uses, such as a recent photo-editing app for photos of faces from a company called FaceApp: when users asked the app to increase the “hotness” of a photo, the app made skin tones lighter. We focus on widely accepted — or at least currently tolerated — uses of predictive analytics in credit rating, targeted advertising, navigation apps, search engine page ranking, and a variety of other areas. These uses yield considerable benefits, but they also impose significant costs through misclassification in the form of a large number of false positives and false negatives. Predictive analytics not only looks into your private life to construct its profiles of you; it often misrepresents who you are.

How should society respond? Our working assumption is that predictive analytics has significant benefits and should not be eliminated, and moreover, that it is now utterly politically infeasible to eliminate it. Thus, we propose making tradeoffs between the benefits and the costs by constraining the use and distribution of information. The constraints would have to apply across a wide range of complex situations. Is there an existing system that makes relevant tradeoffs by constraining the distribution and use of information across a highly varied range of contexts? Indeed, there is: informational norms. Informational norms are social norms that constrain not only the collection but also the use and distribution of information. We focus on the use and distribution constraints. Those constraints establish an appropriately selective flow of information in a wide range of cases.

We contend they provide an essential “wrapper of human control” for predictive analytics. The obvious objection is that the relevant norms do not exist. Technology-driven economic, social, and political developments have far outpaced the slow evolution of norms. New norms will nonetheless evolve, and existing norms will adapt to condone surveillance. Reasonable public policy requires controlling the evolution and adaptation of norms to reach desirable outcomes.
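The misclassification costs the abstract mentions can be made concrete with a small numerical sketch (our illustration, not from the article): even a highly accurate predictive model flags many people wrongly when the trait it predicts is rare in the scored population. The prevalence, sensitivity, and specificity figures below are hypothetical.

```python
# Illustrative sketch (hypothetical numbers): why an accurate predictive
# model can still misclassify many people when the predicted trait is rare.

population = 1_000_000   # people scored by the model
prevalence = 0.01        # 1% actually have the predicted trait
sensitivity = 0.95       # true positive rate
specificity = 0.95       # true negative rate

positives = population * prevalence
negatives = population - positives

true_positives = sensitivity * positives
false_negatives = positives - true_positives
false_positives = (1 - specificity) * negatives

# Of everyone the model flags, what fraction is flagged wrongly?
flagged = true_positives + false_positives
print(f"False positives: {false_positives:,.0f}")
print(f"False negatives: {false_negatives:,.0f}")
print(f"Share of flagged people who are misclassified: "
      f"{false_positives / flagged:.1%}")
```

With these assumed numbers, roughly 49,500 of the 59,000 people flagged (about 84%) do not actually have the predicted trait, which is the sense in which predictive analytics can impose large misclassification costs despite high headline accuracy.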