Remedies for Robots

IF 1.9 | Zone 2 (Sociology) | Q1 (LAW) | University of Chicago Law Review | Pub Date: 2018-07-31 | DOI: 10.2139/SSRN.3223621
Mark A. Lemley, B. Casey
{"title":"机器人的补救措施","authors":"Mark A. Lemley, B. Casey","doi":"10.2139/SSRN.3223621","DOIUrl":null,"url":null,"abstract":"What happens when artificially intelligent robots misbehave? The question is not just hypothetical. As robotics and artificial intelligence (AI) systems increasingly integrate into our society, they will do bad things. They have already killed people. \n \nThese new technologies present a number of interesting substantive law questions, from predictability, to transparency, to liability for high stakes decision making in complex computational systems. Our focus here is different. We seek to explore what remedies the law can and should provide once a robot has caused harm. \n \nWhere substantive law defines who wins legal disputes, remedies law asks, “What do I get when I win?” Remedies are sometimes designed to make plaintiffs whole by restoring them to the condition they would have been in “but for” the wrong. But they can also contain elements of moral judgment, punishment, and deterrence. For instance, the law will often act to deprive a defendant of its gains even if the result is a windfall to the plaintiff, because we think it is unfair to let defendants keep those gains. In other instances, the law may order defendants to do (or stop doing) something unlawful or harmful. \n \nEach of these goals of remedies law, however, runs into difficulties when the bad actor in question is neither a person nor a corporation but a robot. We might order a robot—or, more realistically, the designer or owner of the robot—to pay for the damages it causes. (Though, as we will see, even that presents some surprisingly thorny problems.) But it turns out to be much harder for a judge to “order” a robot, rather than a human, to engage in or refrain from certain conduct . Robots can’t directly obey court orders not written in computer code. And bridging the translation gap between natural language and code is often harder than we might expect. This is particularly true of modern AI techniques that empower machines to learn and modify their decision making over time. If we don’t know how the robot “thinks,” we won’t know how to tell it to behave in a way likely to cause it to do what we actually want it to do. \n \nMoreover, if the ultimate goal of a legal remedy is to encourage good behavior or discourage bad behavior, punishing owners or designers for the behavior of their robots may not always make sense—if only for the simple reason that their owners didn’t act wrongfully in any meaningful way. The same problem affects injunctive relief. Courts are used to ordering people and companies to do (or stop doing) certain things, with a penalty of contempt of court for noncompliance. But ordering a robot to abstain from certain behavior won’t be trivial in many cases. And ordering it to take affirmative acts may prove even more problematic. \n \nIn this paper, we begin to think about how we might design a system of remedies for robots. It may, for example, make sense to focus less of our doctrinal attention on moral guilt and more of it on no-fault liability systems (or at least ones that define fault differently) to compensate plaintiffs. But addressing payments for injury solves only part of the problem. Often we want to compel defendants to do (or not do) something in order to prevent injury. Injunctions, punitive damages, and even remedies like disgorgement are all aimed, directly or indirectly, at modifying or deterring behavior. 
But deterring robot misbehavior too is going to look very different than deterring humans. Our existing doctrines often take advantage of “irrational” human behavior like cognitive biases and risk aversion. Courts, for instance, can rely on the fact that most of us don’t want to go to jail, so we tend to avoid conduct that might lead to that result. But robots will be deterred only to the extent that their algorithms are modified to include sanctions as part of the risk-reward calculus. These limitations may even require us to institute a “robot death penalty” as a sort of specific deterrence against certain bad behaviors. Today, speculation of this sort may sound far-fetched. But the field already includes examples of misbehaving robots being taken offline permanently—a trend which only appears likely to increase in the years ahead. \n \nFinally, remedies law also has an expressive component that will be complicated by robots. We sometimes grant punitive damages—or disgorge ill-gotten gains—to show our displeasure with you. If our goal is just to feel better about ourselves, perhaps we might also punish robots simply for the sake of punishing them. But if our goal is to send a slightly more nuanced signal than that through the threat of punishment, robots will require us to rethink many of our current doctrines. It also offers important insights into the law of remedies we already apply to people and corporations.","PeriodicalId":51436,"journal":{"name":"University of Chicago Law Review","volume":"76 1","pages":"3"},"PeriodicalIF":1.9000,"publicationDate":"2018-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"26","resultStr":"{\"title\":\"Remedies for Robots\",\"authors\":\"Mark A. Lemley, B. Casey\",\"doi\":\"10.2139/SSRN.3223621\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"What happens when artificially intelligent robots misbehave? The question is not just hypothetical. As robotics and artificial intelligence (AI) systems increasingly integrate into our society, they will do bad things. They have already killed people. \\n \\nThese new technologies present a number of interesting substantive law questions, from predictability, to transparency, to liability for high stakes decision making in complex computational systems. Our focus here is different. We seek to explore what remedies the law can and should provide once a robot has caused harm. \\n \\nWhere substantive law defines who wins legal disputes, remedies law asks, “What do I get when I win?” Remedies are sometimes designed to make plaintiffs whole by restoring them to the condition they would have been in “but for” the wrong. But they can also contain elements of moral judgment, punishment, and deterrence. For instance, the law will often act to deprive a defendant of its gains even if the result is a windfall to the plaintiff, because we think it is unfair to let defendants keep those gains. In other instances, the law may order defendants to do (or stop doing) something unlawful or harmful. \\n \\nEach of these goals of remedies law, however, runs into difficulties when the bad actor in question is neither a person nor a corporation but a robot. We might order a robot—or, more realistically, the designer or owner of the robot—to pay for the damages it causes. (Though, as we will see, even that presents some surprisingly thorny problems.) 
But it turns out to be much harder for a judge to “order” a robot, rather than a human, to engage in or refrain from certain conduct . Robots can’t directly obey court orders not written in computer code. And bridging the translation gap between natural language and code is often harder than we might expect. This is particularly true of modern AI techniques that empower machines to learn and modify their decision making over time. If we don’t know how the robot “thinks,” we won’t know how to tell it to behave in a way likely to cause it to do what we actually want it to do. \\n \\nMoreover, if the ultimate goal of a legal remedy is to encourage good behavior or discourage bad behavior, punishing owners or designers for the behavior of their robots may not always make sense—if only for the simple reason that their owners didn’t act wrongfully in any meaningful way. The same problem affects injunctive relief. Courts are used to ordering people and companies to do (or stop doing) certain things, with a penalty of contempt of court for noncompliance. But ordering a robot to abstain from certain behavior won’t be trivial in many cases. And ordering it to take affirmative acts may prove even more problematic. \\n \\nIn this paper, we begin to think about how we might design a system of remedies for robots. It may, for example, make sense to focus less of our doctrinal attention on moral guilt and more of it on no-fault liability systems (or at least ones that define fault differently) to compensate plaintiffs. But addressing payments for injury solves only part of the problem. Often we want to compel defendants to do (or not do) something in order to prevent injury. Injunctions, punitive damages, and even remedies like disgorgement are all aimed, directly or indirectly, at modifying or deterring behavior. But deterring robot misbehavior too is going to look very different than deterring humans. Our existing doctrines often take advantage of “irrational” human behavior like cognitive biases and risk aversion. Courts, for instance, can rely on the fact that most of us don’t want to go to jail, so we tend to avoid conduct that might lead to that result. But robots will be deterred only to the extent that their algorithms are modified to include sanctions as part of the risk-reward calculus. These limitations may even require us to institute a “robot death penalty” as a sort of specific deterrence against certain bad behaviors. Today, speculation of this sort may sound far-fetched. But the field already includes examples of misbehaving robots being taken offline permanently—a trend which only appears likely to increase in the years ahead. \\n \\nFinally, remedies law also has an expressive component that will be complicated by robots. We sometimes grant punitive damages—or disgorge ill-gotten gains—to show our displeasure with you. If our goal is just to feel better about ourselves, perhaps we might also punish robots simply for the sake of punishing them. But if our goal is to send a slightly more nuanced signal than that through the threat of punishment, robots will require us to rethink many of our current doctrines. 
It also offers important insights into the law of remedies we already apply to people and corporations.\",\"PeriodicalId\":51436,\"journal\":{\"name\":\"University of Chicago Law Review\",\"volume\":\"76 1\",\"pages\":\"3\"},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2018-07-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"26\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"University of Chicago Law Review\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.2139/SSRN.3223621\",\"RegionNum\":2,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"University of Chicago Law Review","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.2139/SSRN.3223621","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 26

Abstract

What happens when artificially intelligent robots misbehave? The question is not just hypothetical. As robotics and artificial intelligence (AI) systems increasingly integrate into our society, they will do bad things. They have already killed people.

These new technologies present a number of interesting substantive law questions, from predictability, to transparency, to liability for high stakes decision making in complex computational systems. Our focus here is different. We seek to explore what remedies the law can and should provide once a robot has caused harm.

Where substantive law defines who wins legal disputes, remedies law asks, “What do I get when I win?” Remedies are sometimes designed to make plaintiffs whole by restoring them to the condition they would have been in “but for” the wrong. But they can also contain elements of moral judgment, punishment, and deterrence. For instance, the law will often act to deprive a defendant of its gains even if the result is a windfall to the plaintiff, because we think it is unfair to let defendants keep those gains. In other instances, the law may order defendants to do (or stop doing) something unlawful or harmful.

Each of these goals of remedies law, however, runs into difficulties when the bad actor in question is neither a person nor a corporation but a robot. We might order a robot—or, more realistically, the designer or owner of the robot—to pay for the damages it causes. (Though, as we will see, even that presents some surprisingly thorny problems.) But it turns out to be much harder for a judge to “order” a robot, rather than a human, to engage in or refrain from certain conduct. Robots can’t directly obey court orders not written in computer code. And bridging the translation gap between natural language and code is often harder than we might expect. This is particularly true of modern AI techniques that empower machines to learn and modify their decision making over time. If we don’t know how the robot “thinks,” we won’t know how to tell it to behave in a way likely to cause it to do what we actually want it to do.

Moreover, if the ultimate goal of a legal remedy is to encourage good behavior or discourage bad behavior, punishing owners or designers for the behavior of their robots may not always make sense—if only for the simple reason that the owners didn’t act wrongfully in any meaningful way. The same problem affects injunctive relief. Courts are used to ordering people and companies to do (or stop doing) certain things, with a penalty of contempt of court for noncompliance. But ordering a robot to abstain from certain behavior won’t be trivial in many cases. And ordering it to take affirmative acts may prove even more problematic.

In this paper, we begin to think about how we might design a system of remedies for robots. It may, for example, make sense to focus less of our doctrinal attention on moral guilt and more of it on no-fault liability systems (or at least ones that define fault differently) to compensate plaintiffs. But addressing payments for injury solves only part of the problem. Often we want to compel defendants to do (or not do) something in order to prevent injury. Injunctions, punitive damages, and even remedies like disgorgement are all aimed, directly or indirectly, at modifying or deterring behavior. But deterring robot misbehavior is going to look very different than deterring humans. Our existing doctrines often take advantage of “irrational” human behavior like cognitive biases and risk aversion. Courts, for instance, can rely on the fact that most of us don’t want to go to jail, so we tend to avoid conduct that might lead to that result. But robots will be deterred only to the extent that their algorithms are modified to include sanctions as part of the risk-reward calculus. These limitations may even require us to institute a “robot death penalty” as a sort of specific deterrence against certain bad behaviors. Today, speculation of this sort may sound far-fetched. But the field already includes examples of misbehaving robots being taken offline permanently—a trend which only appears likely to increase in the years ahead.

Finally, remedies law also has an expressive component that will be complicated by robots. We sometimes grant punitive damages—or disgorge ill-gotten gains—to show our displeasure with you. If our goal is just to feel better about ourselves, perhaps we might also punish robots simply for the sake of punishing them. But if our goal is to send a slightly more nuanced signal than that through the threat of punishment, robots will require us to rethink many of our current doctrines. It also offers important insights into the law of remedies we already apply to people and corporations.
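One claim in the abstract is concrete enough to illustrate in code: robots "will be deterred only to the extent that their algorithms are modified to include sanctions as part of the risk-reward calculus." The Python sketch below is a minimal, hypothetical toy (the action names, probabilities, and penalty values are invented for illustration and do not come from the article); it shows how an expected sanction can change an automated agent's choice only once it is explicitly priced into the utility comparison.

```python
# Toy illustration (not from the article): an automated agent picks the action
# with the highest expected utility. A legal sanction deters the agent only if
# it is folded into that calculus as an expected cost.

# Hypothetical actions: expected gain, probability of causing harm, and the
# sanction a court might impose when harm occurs.
ACTIONS = {
    "cautious_route":   {"gain": 5.0, "p_harm": 0.01, "sanction": 100.0},
    "aggressive_route": {"gain": 9.0, "p_harm": 0.20, "sanction": 100.0},
}

def expected_utility(spec: dict, sanctions_priced_in: bool) -> float:
    """Expected gain minus the expected sanction cost (if modeled at all)."""
    penalty = spec["p_harm"] * spec["sanction"] if sanctions_priced_in else 0.0
    return spec["gain"] - penalty

def choose(sanctions_priced_in: bool) -> str:
    """Return the action name with the highest expected utility."""
    return max(ACTIONS, key=lambda a: expected_utility(ACTIONS[a], sanctions_priced_in))

if __name__ == "__main__":
    # Without sanctions in the calculus, the riskier action looks better (9 > 5).
    print(choose(sanctions_priced_in=False))  # -> aggressive_route
    # With sanctions priced in, the expected penalty flips the choice
    # (9 - 0.20 * 100 = -11 vs. 5 - 0.01 * 100 = 4).
    print(choose(sanctions_priced_in=True))   # -> cautious_route
```

The point of the sketch is simply that the deterrent effect lives entirely in the agent's objective function: a court order or penalty that never enters that calculation has no behavioral bite, which is the difficulty the authors highlight.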
Source Journal
CiteScore: 2.40
Self-citation rate: 5.00%
Articles published: 2
Journal description: The University of Chicago Law Review is a quarterly journal of legal scholarship. Often cited in Supreme Court and other court opinions, as well as in other scholarly works, it is among the most influential journals in the field. Students have full responsibility for editing and publishing the Law Review; they also contribute original scholarship of their own. The Law Review's editorial board selects all pieces for publication and, with the assistance of staff members, performs substantive and technical edits on each of these pieces prior to publication.