Justice by Algorithm: The Limits of AI in Criminal Sentencing

Criminal Justice Ethics (Q2, Social Sciences) · Pub Date: 2023-11-03 · DOI: 10.1080/0731129x.2023.2275967
Isaac Taylor
{"title":"算法正义:人工智能在刑事量刑中的局限性","authors":"Isaac Taylor","doi":"10.1080/0731129x.2023.2275967","DOIUrl":null,"url":null,"abstract":"AbstractCriminal justice systems have traditionally relied heavily on human decision-making, but new technologies are increasingly supplementing the human role in this sector. This paper considers what general limits need to be placed on the use of algorithms in sentencing decisions. It argues that, even once we can build algorithms that equal human decision-making capacities, strict constraints need to be placed on how they are designed and developed. The act of condemnation is a valuable element of criminal sentencing, and using algorithms in sentencing – even in an advisory role – threatens to undermine this value. The paper argues that a principle of “meaningful public control” should be met in all sentencing decisions if they are to retain their condemnatory status. This principle requires that agents who have standing to act on behalf of the wider political community retain moral responsibility for all sentencing decisions. While this principle does not rule out the use of algorithms, it does require limits on how they are constructed.Keywords: artificial intelligence (AI)criminal justiceFeinbergJoelpunishmentsentencing algorithms [I am very grateful to audiences at the Higher Seminar in Philosophy of Law at Uppsala University; the Political Theory Seminar at Stockholm University; and the workshop on “Ethics of AI in the Public Sector” at KTH Royal Institute of Technology for discussions on previous drafts of this paper; as well as to the anonymous reviewers from Criminal Justice Ethics for very helpful comments.][Disclosure Statement: No potential conflict of interest was reported by the author(s)].Notes1 Danziger, Levav and Avnaim-Pesso, “Extraneous Factors in Judicial Decisions.”2 Pamela McCroduck suggests that many members of disadvantaged groups may want to take their chances with an impartial computer over a (potentially biased) human judge. See McCorduck, Machines Who Think, 375.3 Yong, “A Popular Algorithm is No Better at Predicting Crimes Than Random People.”4 Angwin, Larson, Mattu, and Kirchner, “Machine Bias.” The question of whether algorithms can avoid objectionable forms of discrimination has been addressed in Davis and Douglas, “Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing.”5 Dressel and Farid, “The Accuracy, Fairness, and Limits of Predicting Recidivism.”6 One worry here is that there is no possible algorithm that can simultaneously meet various intuitively plausible criteria of fairness. See, for example, Chouldechova, “Fair Prediction with Disparate Impact.” I set this issue aside for the purposes of this paper, and assume that a fair algorithm is at least possible to construct. This might be because some of the purported criteria of fairness which cannot be met simultaneously are not, in fact, genuine moral requirements. Cf. Hedden, “On Statistical Criteria of Algorithmic Fairness;” Eva, “Algorithmic Fairness and Base Rate Tracking.”7 For the purposes of this paper, “sentencing decisions” will be taken to include not only the initial decision on severity and type of sentence given to criminals immediately after conviction, but also similar decisions made while punishment is being carried out (for example in parole hearings).8 A gap in research about how risk assessment scores are used in the criminal justice system more generally has been noted. 
See Law Society of England and Wales, “Algorithms in the Criminal Justice System.” 52.9 Skitka, Mosier and Burdick, “Does Automation Bias Decision-Making?”10 By “objective,” I mean that these factors do not consist of individuals' evaluations or psychological reactions. The contrast with subjective factors will be made in due course.11 The utilitarian Jeremy Bentham argues that the reduction of crime is the only legitimate end of criminal punishment. See Bentham, An Introduction to the Principles of Morals and Legislation, 170–203.12 Moore, Placing Blame.13 Ryberg, “Risk and Retribution.” Not all retributivists are resistant to risk-based sentencing. See Husak, “Why Legal Philosophers (Including Retributivists) Should be Less Resistant to Risk-Based Sentencing.”14 Cf. Chiao, “Predicting Proportionality,” 341–3.15 An algorithm that based its recommendations on previous judicial decisions may provide a useful proxy for these factors. An algorithm of this sort is imagined in ibid. Jesper Ryberg notes a dilemma for retributivists seeking to justify the use of machine-learning algorithms using existing cases as inputs. Either these algorithms will rely on a sample that is too small to give acceptable outcomes, or one that is too large to be easily constructed. See Ryberg, “Sentencing Disparity and Artificial Intelligence.”16 Abney, “Autonomous Robots and the Future of Just War Theory,” 347; Wallach and Vallor, “Moral Machines.”17 Duff, Punishment, Communication, and Community, 175–202; Lacey, “Socializing the Subject of Criminal Law;” Ristroph, “Desert, Democracy, and Sentencing Reform.”18 The retributive element might be thought to have this form. See Morris, The Future of Imprisonment.19 Up to the 1970s such wide discretion was given to judges in the US criminal justice system for instrumental reasons. See Berman, “Re-Balancing Fitness, Fairness, and Finality for Sentences,” 157–8.20 This might be based on the idea that desert is to some extent comparative, in the sense that what someone deserves (for example, the appropriate sentence on a retributivist account) might depend on what others have received. On this idea, See Miller, “Comparative and Noncomparative Desert.”21 For another notable example, see Wringe, An Expressive Theory of Punishment.22 Feinberg, “The Expressive Function of Punishment,” 400.23 Ibid., 397–8.24 Ibid., 404–8. We might add that condemnation might be welcomed by both instrumentalists (because the possibility of condemnation can serve as a useful disincentive) and retributivists (because the stigma that is produced by condemnation may form part of the harsh treatment that retributivists view as intrinsically valuable).25 Ibid., 406.26 Ibid.27 Honneth, The Struggle for Recognition.28 Ibid., 118–21.29 For the view that expressivists should support harsher sentencing for hate crimes, see Wellman, “A Defense of Stiffer Penalties for Hate Crimes,” 68.30 Nozick, Philosophical Explanations, 370.31 Cf. Shelby, Dark Ghettos, 240–241, where it is argued that it is the conviction (and not the sentencing) stage which involves an expressive element.32 Boonin, The Problem of Punishment, 176–9.33 For the view that censure is valuable, but not sufficient to justify punishment by itself, see Narayan, “Appropriate Responses and Preventive Benefits;” Hirsh, Censure and Sanctions, 6-19.34 It should be noted that certain theories of punishment that might be labelled “communicative” rather than expressive might also explain why there is a problem with certain uses of algorithms. 
These theories suggest that punishment should be a reciprocal act that requires some degree of rational engagement from those punished (see, for example, Duff, Punishment, Communication, and Community; Hampton, “The Moral Education Theory of Punishment”). Because certain algorithms may not be explicable to those who are sentenced, rational engagement might be impossible. While this idea warrants further attention, I cannot provide it within this paper.35 Fischer and Ravizza, Responsibility and Control, 12–4.36 For a helpful outline of these different forms of responsibility, see Jeppsson, “Accountability, Answerability and Attributibility.”37 Watson, “Two Faces of Responsibility,” 229.38 Sharkey, “Autonomous Weapons Systems, Killer Robots, and Human Dignity;” Sparrow, “Robots and Respect;” Sparrow, “Killer Robots,” 67; Taylor, “Who is Responsible for Killer Robots,” 232–3.39 This concept has emerged as the guiding principle for framing ongoing international negotiations on the regulation of LAWS. See United Nations Office at Geneva, “Report of the 2019 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems,” 25.40 See, for example, Sparrow, “Killer Robots.”41 Santoni de Sia and van den Hoven, “Meaningful Human Control over Autonomous Systems.”42 Taylor, “Who is Responsible for Killer Robots?,” 234.43 Explicability is a widely-recognized requirement of AI ethics, for various different sorts of reasons than the one outlined here. See Floridi and Cowls, “A Unified Framework of Five Principles for AI in Society,” 8. The importance of explicability more generally is discussed in Vredenburgh, “The Right to Explanation.” On the sort of explicability we want from algorithms, see Chiao, “Transparency at Sentencing;” Ryberg, “Sentencing and Algorithmic Transparency;” Ryberg and Petersen, “Sentencing and the Conflict between Algorithmic Accuracy and Transparency.”44 Roff and Moyes, “Meaningful Human Control, Artificial Intelligence and AutonomousWeapons.”45 Cf. Wellman, Rights Forfeiture and Punishment, 49.46 The complementary argument that punishment (rather than sentencing) should be undertaken by public agents for expressive reasons is given in Dorfman and Harel, “The Case Against Privatization,” 92–6.47 Cf. the distinction between direct and indirect delegation in Lawford-Smith, Not in Their Name, 117.48 Dorfman and Harel, “The Case Against Privatization,” 71–6.49 Ripstein, Force and Freedom, 190–8.50 Cordelli, The Privatized State, 159–71.51 This would be a “bottom-up” algorithm in the terms of Tasioulas, “AI and Robot Ethics,” 337.52 How strict these limits are will depend on the precise externalist principles we endorse. On Dorfman and Harel's account, for example, which requires that public agents defer to the public point of view, necessitates a community of practice – an institutional structure that allows the public point of view to be articulated and deferred to. See Dorfman and Harel, “The Case Against Privatization,” 82–3. It is unclear how this could genuinely be put in place when private companies are involved.53 Miller, 117.54 Ibid., 87.55 Ibid.56 This might be particularly true in the security sector. 
See Pattison, The Morality of Private War, 84–100.57 Shelby, Dark Ghettos, 238–48.","PeriodicalId":35931,"journal":{"name":"Criminal Justice Ethics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Justice by Algorithm: The Limits of AI in Criminal Sentencing\",\"authors\":\"Isaac Taylor\",\"doi\":\"10.1080/0731129x.2023.2275967\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"AbstractCriminal justice systems have traditionally relied heavily on human decision-making, but new technologies are increasingly supplementing the human role in this sector. This paper considers what general limits need to be placed on the use of algorithms in sentencing decisions. It argues that, even once we can build algorithms that equal human decision-making capacities, strict constraints need to be placed on how they are designed and developed. The act of condemnation is a valuable element of criminal sentencing, and using algorithms in sentencing – even in an advisory role – threatens to undermine this value. The paper argues that a principle of “meaningful public control” should be met in all sentencing decisions if they are to retain their condemnatory status. This principle requires that agents who have standing to act on behalf of the wider political community retain moral responsibility for all sentencing decisions. While this principle does not rule out the use of algorithms, it does require limits on how they are constructed.Keywords: artificial intelligence (AI)criminal justiceFeinbergJoelpunishmentsentencing algorithms [I am very grateful to audiences at the Higher Seminar in Philosophy of Law at Uppsala University; the Political Theory Seminar at Stockholm University; and the workshop on “Ethics of AI in the Public Sector” at KTH Royal Institute of Technology for discussions on previous drafts of this paper; as well as to the anonymous reviewers from Criminal Justice Ethics for very helpful comments.][Disclosure Statement: No potential conflict of interest was reported by the author(s)].Notes1 Danziger, Levav and Avnaim-Pesso, “Extraneous Factors in Judicial Decisions.”2 Pamela McCroduck suggests that many members of disadvantaged groups may want to take their chances with an impartial computer over a (potentially biased) human judge. See McCorduck, Machines Who Think, 375.3 Yong, “A Popular Algorithm is No Better at Predicting Crimes Than Random People.”4 Angwin, Larson, Mattu, and Kirchner, “Machine Bias.” The question of whether algorithms can avoid objectionable forms of discrimination has been addressed in Davis and Douglas, “Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing.”5 Dressel and Farid, “The Accuracy, Fairness, and Limits of Predicting Recidivism.”6 One worry here is that there is no possible algorithm that can simultaneously meet various intuitively plausible criteria of fairness. See, for example, Chouldechova, “Fair Prediction with Disparate Impact.” I set this issue aside for the purposes of this paper, and assume that a fair algorithm is at least possible to construct. This might be because some of the purported criteria of fairness which cannot be met simultaneously are not, in fact, genuine moral requirements. Cf. 
Hedden, “On Statistical Criteria of Algorithmic Fairness;” Eva, “Algorithmic Fairness and Base Rate Tracking.”7 For the purposes of this paper, “sentencing decisions” will be taken to include not only the initial decision on severity and type of sentence given to criminals immediately after conviction, but also similar decisions made while punishment is being carried out (for example in parole hearings).8 A gap in research about how risk assessment scores are used in the criminal justice system more generally has been noted. See Law Society of England and Wales, “Algorithms in the Criminal Justice System.” 52.9 Skitka, Mosier and Burdick, “Does Automation Bias Decision-Making?”10 By “objective,” I mean that these factors do not consist of individuals' evaluations or psychological reactions. The contrast with subjective factors will be made in due course.11 The utilitarian Jeremy Bentham argues that the reduction of crime is the only legitimate end of criminal punishment. See Bentham, An Introduction to the Principles of Morals and Legislation, 170–203.12 Moore, Placing Blame.13 Ryberg, “Risk and Retribution.” Not all retributivists are resistant to risk-based sentencing. See Husak, “Why Legal Philosophers (Including Retributivists) Should be Less Resistant to Risk-Based Sentencing.”14 Cf. Chiao, “Predicting Proportionality,” 341–3.15 An algorithm that based its recommendations on previous judicial decisions may provide a useful proxy for these factors. An algorithm of this sort is imagined in ibid. Jesper Ryberg notes a dilemma for retributivists seeking to justify the use of machine-learning algorithms using existing cases as inputs. Either these algorithms will rely on a sample that is too small to give acceptable outcomes, or one that is too large to be easily constructed. See Ryberg, “Sentencing Disparity and Artificial Intelligence.”16 Abney, “Autonomous Robots and the Future of Just War Theory,” 347; Wallach and Vallor, “Moral Machines.”17 Duff, Punishment, Communication, and Community, 175–202; Lacey, “Socializing the Subject of Criminal Law;” Ristroph, “Desert, Democracy, and Sentencing Reform.”18 The retributive element might be thought to have this form. See Morris, The Future of Imprisonment.19 Up to the 1970s such wide discretion was given to judges in the US criminal justice system for instrumental reasons. See Berman, “Re-Balancing Fitness, Fairness, and Finality for Sentences,” 157–8.20 This might be based on the idea that desert is to some extent comparative, in the sense that what someone deserves (for example, the appropriate sentence on a retributivist account) might depend on what others have received. On this idea, See Miller, “Comparative and Noncomparative Desert.”21 For another notable example, see Wringe, An Expressive Theory of Punishment.22 Feinberg, “The Expressive Function of Punishment,” 400.23 Ibid., 397–8.24 Ibid., 404–8. We might add that condemnation might be welcomed by both instrumentalists (because the possibility of condemnation can serve as a useful disincentive) and retributivists (because the stigma that is produced by condemnation may form part of the harsh treatment that retributivists view as intrinsically valuable).25 Ibid., 406.26 Ibid.27 Honneth, The Struggle for Recognition.28 Ibid., 118–21.29 For the view that expressivists should support harsher sentencing for hate crimes, see Wellman, “A Defense of Stiffer Penalties for Hate Crimes,” 68.30 Nozick, Philosophical Explanations, 370.31 Cf. 
Shelby, Dark Ghettos, 240–241, where it is argued that it is the conviction (and not the sentencing) stage which involves an expressive element.32 Boonin, The Problem of Punishment, 176–9.33 For the view that censure is valuable, but not sufficient to justify punishment by itself, see Narayan, “Appropriate Responses and Preventive Benefits;” Hirsh, Censure and Sanctions, 6-19.34 It should be noted that certain theories of punishment that might be labelled “communicative” rather than expressive might also explain why there is a problem with certain uses of algorithms. These theories suggest that punishment should be a reciprocal act that requires some degree of rational engagement from those punished (see, for example, Duff, Punishment, Communication, and Community; Hampton, “The Moral Education Theory of Punishment”). Because certain algorithms may not be explicable to those who are sentenced, rational engagement might be impossible. While this idea warrants further attention, I cannot provide it within this paper.35 Fischer and Ravizza, Responsibility and Control, 12–4.36 For a helpful outline of these different forms of responsibility, see Jeppsson, “Accountability, Answerability and Attributibility.”37 Watson, “Two Faces of Responsibility,” 229.38 Sharkey, “Autonomous Weapons Systems, Killer Robots, and Human Dignity;” Sparrow, “Robots and Respect;” Sparrow, “Killer Robots,” 67; Taylor, “Who is Responsible for Killer Robots,” 232–3.39 This concept has emerged as the guiding principle for framing ongoing international negotiations on the regulation of LAWS. See United Nations Office at Geneva, “Report of the 2019 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems,” 25.40 See, for example, Sparrow, “Killer Robots.”41 Santoni de Sia and van den Hoven, “Meaningful Human Control over Autonomous Systems.”42 Taylor, “Who is Responsible for Killer Robots?,” 234.43 Explicability is a widely-recognized requirement of AI ethics, for various different sorts of reasons than the one outlined here. See Floridi and Cowls, “A Unified Framework of Five Principles for AI in Society,” 8. The importance of explicability more generally is discussed in Vredenburgh, “The Right to Explanation.” On the sort of explicability we want from algorithms, see Chiao, “Transparency at Sentencing;” Ryberg, “Sentencing and Algorithmic Transparency;” Ryberg and Petersen, “Sentencing and the Conflict between Algorithmic Accuracy and Transparency.”44 Roff and Moyes, “Meaningful Human Control, Artificial Intelligence and AutonomousWeapons.”45 Cf. Wellman, Rights Forfeiture and Punishment, 49.46 The complementary argument that punishment (rather than sentencing) should be undertaken by public agents for expressive reasons is given in Dorfman and Harel, “The Case Against Privatization,” 92–6.47 Cf. the distinction between direct and indirect delegation in Lawford-Smith, Not in Their Name, 117.48 Dorfman and Harel, “The Case Against Privatization,” 71–6.49 Ripstein, Force and Freedom, 190–8.50 Cordelli, The Privatized State, 159–71.51 This would be a “bottom-up” algorithm in the terms of Tasioulas, “AI and Robot Ethics,” 337.52 How strict these limits are will depend on the precise externalist principles we endorse. On Dorfman and Harel's account, for example, which requires that public agents defer to the public point of view, necessitates a community of practice – an institutional structure that allows the public point of view to be articulated and deferred to. 
See Dorfman and Harel, “The Case Against Privatization,” 82–3. It is unclear how this could genuinely be put in place when private companies are involved.53 Miller, 117.54 Ibid., 87.55 Ibid.56 This might be particularly true in the security sector. See Pattison, The Morality of Private War, 84–100.57 Shelby, Dark Ghettos, 238–48.\",\"PeriodicalId\":35931,\"journal\":{\"name\":\"Criminal Justice Ethics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Criminal Justice Ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/0731129x.2023.2275967\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Criminal Justice Ethics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/0731129x.2023.2275967","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
Citations: 0

Abstract

Criminal justice systems have traditionally relied heavily on human decision-making, but new technologies are increasingly supplementing the human role in this sector. This paper considers what general limits need to be placed on the use of algorithms in sentencing decisions. It argues that, even once we can build algorithms that equal human decision-making capacities, strict constraints need to be placed on how they are designed and developed. The act of condemnation is a valuable element of criminal sentencing, and using algorithms in sentencing – even in an advisory role – threatens to undermine this value. The paper argues that a principle of “meaningful public control” should be met in all sentencing decisions if they are to retain their condemnatory status. This principle requires that agents who have standing to act on behalf of the wider political community retain moral responsibility for all sentencing decisions. While this principle does not rule out the use of algorithms, it does require limits on how they are constructed.

Keywords: artificial intelligence (AI); criminal justice; Feinberg, Joel; punishment; sentencing algorithms

[I am very grateful to audiences at the Higher Seminar in Philosophy of Law at Uppsala University; the Political Theory Seminar at Stockholm University; and the workshop on “Ethics of AI in the Public Sector” at KTH Royal Institute of Technology for discussions on previous drafts of this paper; as well as to the anonymous reviewers from Criminal Justice Ethics for very helpful comments.]

[Disclosure Statement: No potential conflict of interest was reported by the author(s).]

Notes

1. Danziger, Levav and Avnaim-Pesso, “Extraneous Factors in Judicial Decisions.”
2. Pamela McCorduck suggests that many members of disadvantaged groups may want to take their chances with an impartial computer over a (potentially biased) human judge. See McCorduck, Machines Who Think, 375.
3. Yong, “A Popular Algorithm is No Better at Predicting Crimes Than Random People.”
4. Angwin, Larson, Mattu, and Kirchner, “Machine Bias.” The question of whether algorithms can avoid objectionable forms of discrimination has been addressed in Davis and Douglas, “Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing.”
5. Dressel and Farid, “The Accuracy, Fairness, and Limits of Predicting Recidivism.”
6. One worry here is that no possible algorithm can simultaneously meet various intuitively plausible criteria of fairness. See, for example, Chouldechova, “Fair Prediction with Disparate Impact.” (A minimal numerical illustration of this tension appears after these notes.) I set this issue aside for the purposes of this paper, and assume that a fair algorithm is at least possible to construct. This might be because some of the purported criteria of fairness which cannot be met simultaneously are not, in fact, genuine moral requirements. Cf. Hedden, “On Statistical Criteria of Algorithmic Fairness;” Eva, “Algorithmic Fairness and Base Rate Tracking.”
7. For the purposes of this paper, “sentencing decisions” will be taken to include not only the initial decision on the severity and type of sentence given to criminals immediately after conviction, but also similar decisions made while punishment is being carried out (for example, in parole hearings).
8. A gap in research on how risk assessment scores are used in the criminal justice system more generally has been noted. See Law Society of England and Wales, “Algorithms in the Criminal Justice System,” 52.
9. Skitka, Mosier and Burdick, “Does Automation Bias Decision-Making?”
10. By “objective,” I mean that these factors do not consist of individuals' evaluations or psychological reactions. The contrast with subjective factors will be made in due course.
11. The utilitarian Jeremy Bentham argues that the reduction of crime is the only legitimate end of criminal punishment. See Bentham, An Introduction to the Principles of Morals and Legislation, 170–203.
12. Moore, Placing Blame.
13. Ryberg, “Risk and Retribution.” Not all retributivists are resistant to risk-based sentencing. See Husak, “Why Legal Philosophers (Including Retributivists) Should be Less Resistant to Risk-Based Sentencing.”
14. Cf. Chiao, “Predicting Proportionality,” 341–3.
15. An algorithm that based its recommendations on previous judicial decisions may provide a useful proxy for these factors. An algorithm of this sort is imagined in ibid. (A schematic sketch of such a system appears after these notes.) Jesper Ryberg notes a dilemma for retributivists seeking to justify the use of machine-learning algorithms using existing cases as inputs: either these algorithms will rely on a sample that is too small to give acceptable outcomes, or on one that is too large to be easily constructed. See Ryberg, “Sentencing Disparity and Artificial Intelligence.”
16. Abney, “Autonomous Robots and the Future of Just War Theory,” 347; Wallach and Vallor, “Moral Machines.”
17. Duff, Punishment, Communication, and Community, 175–202; Lacey, “Socializing the Subject of Criminal Law;” Ristroph, “Desert, Democracy, and Sentencing Reform.”
18. The retributive element might be thought to have this form. See Morris, The Future of Imprisonment.
19. Up to the 1970s, such wide discretion was given to judges in the US criminal justice system for instrumental reasons. See Berman, “Re-Balancing Fitness, Fairness, and Finality for Sentences,” 157–8.
20. This might be based on the idea that desert is to some extent comparative, in the sense that what someone deserves (for example, the appropriate sentence on a retributivist account) might depend on what others have received. On this idea, see Miller, “Comparative and Noncomparative Desert.”
21. For another notable example, see Wringe, An Expressive Theory of Punishment.
22. Feinberg, “The Expressive Function of Punishment,” 400.
23. Ibid., 397–8.
24. Ibid., 404–8. We might add that condemnation might be welcomed by both instrumentalists (because the possibility of condemnation can serve as a useful disincentive) and retributivists (because the stigma that is produced by condemnation may form part of the harsh treatment that retributivists view as intrinsically valuable).
25. Ibid., 406.
26. Ibid.
27. Honneth, The Struggle for Recognition.
28. Ibid., 118–21.
29. For the view that expressivists should support harsher sentencing for hate crimes, see Wellman, “A Defense of Stiffer Penalties for Hate Crimes,” 68.
30. Nozick, Philosophical Explanations, 370.
31. Cf. Shelby, Dark Ghettos, 240–241, where it is argued that it is the conviction (and not the sentencing) stage which involves an expressive element.
32. Boonin, The Problem of Punishment, 176–9.
33. For the view that censure is valuable, but not sufficient to justify punishment by itself, see Narayan, “Appropriate Responses and Preventive Benefits;” von Hirsch, Censure and Sanctions, 6–19.
34. It should be noted that certain theories of punishment that might be labelled “communicative” rather than expressive might also explain why there is a problem with certain uses of algorithms. These theories suggest that punishment should be a reciprocal act that requires some degree of rational engagement from those punished (see, for example, Duff, Punishment, Communication, and Community; Hampton, “The Moral Education Theory of Punishment”). Because certain algorithms may not be explicable to those who are sentenced, rational engagement might be impossible. While this idea warrants further attention, I cannot provide it within this paper.
35. Fischer and Ravizza, Responsibility and Control, 12–4.
36. For a helpful outline of these different forms of responsibility, see Jeppsson, “Accountability, Answerability and Attributability.”
37. Watson, “Two Faces of Responsibility,” 229.
38. Sharkey, “Autonomous Weapons Systems, Killer Robots, and Human Dignity;” Sparrow, “Robots and Respect;” Sparrow, “Killer Robots,” 67; Taylor, “Who is Responsible for Killer Robots?,” 232–3.
39. This concept has emerged as the guiding principle for framing ongoing international negotiations on the regulation of LAWS. See United Nations Office at Geneva, “Report of the 2019 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems,” 25.
40. See, for example, Sparrow, “Killer Robots.”
41. Santoni de Sio and van den Hoven, “Meaningful Human Control over Autonomous Systems.”
42. Taylor, “Who is Responsible for Killer Robots?,” 234.
43. Explicability is a widely recognized requirement of AI ethics, for various reasons beyond the one outlined here. See Floridi and Cowls, “A Unified Framework of Five Principles for AI in Society,” 8. The importance of explicability more generally is discussed in Vredenburgh, “The Right to Explanation.” On the sort of explicability we want from algorithms, see Chiao, “Transparency at Sentencing;” Ryberg, “Sentencing and Algorithmic Transparency;” Ryberg and Petersen, “Sentencing and the Conflict between Algorithmic Accuracy and Transparency.” (A sketch of one simple, decomposable form of explanation appears after these notes.)
44. Roff and Moyes, “Meaningful Human Control, Artificial Intelligence and Autonomous Weapons.”
45. Cf. Wellman, Rights Forfeiture and Punishment, 49.
46. The complementary argument that punishment (rather than sentencing) should be undertaken by public agents for expressive reasons is given in Dorfman and Harel, “The Case Against Privatization,” 92–6.
47. Cf. the distinction between direct and indirect delegation in Lawford-Smith, Not in Their Name, 117.
48. Dorfman and Harel, “The Case Against Privatization,” 71–6.
49. Ripstein, Force and Freedom, 190–8.
50. Cordelli, The Privatized State, 159–71.
51. This would be a “bottom-up” algorithm in the terms of Tasioulas, “AI and Robot Ethics,” 337.
52. How strict these limits are will depend on the precise externalist principles we endorse. Dorfman and Harel's account, for example, which requires that public agents defer to the public point of view, necessitates a community of practice – an institutional structure that allows the public point of view to be articulated and deferred to. See Dorfman and Harel, “The Case Against Privatization,” 82–3. It is unclear how this could genuinely be put in place when private companies are involved.
53. Miller, 117.
54. Ibid., 87.
55. Ibid.
56. This might be particularly true in the security sector. See Pattison, The Morality of Private War, 84–100.
57. Shelby, Dark Ghettos, 238–48.
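The impossibility result flagged in note 6 can be made concrete in a few lines of code. The Python sketch below is a minimal numerical illustration with entirely invented figures and a toy two-bin risk score, not data from COMPAS or any of the cited studies: a score that is perfectly calibrated within each of two groups still yields unequal false-positive rates once the groups' base rates differ.

```python
# A minimal, self-contained illustration of the tension flagged in note 6
# (Chouldechova's result): a risk score can be perfectly calibrated within
# two groups and still give unequal false-positive rates whenever the
# groups' base rates differ. All numbers are invented for illustration.

def rates(frac_high, n=6000):
    """Group where scores are 0.8 or 0.2 and, within each score bin,
    reoffense occurs at exactly the scored rate (perfect calibration)."""
    high = round(n * frac_high)   # people scored 0.8 ("high risk")
    low = n - high                # people scored 0.2 ("low risk")
    fp = high * 0.2               # labelled high risk, did not reoffend
    tn = low * 0.8                # labelled low risk, did not reoffend
    base_rate = (high * 0.8 + low * 0.2) / n
    fpr = fp / (fp + tn)          # false-positive rate among non-reoffenders
    return base_rate, fpr

for name, frac_high in [("Group A", 1 / 6), ("Group B", 2 / 3)]:
    base, fpr = rates(frac_high)
    print(f"{name}: base rate = {base:.2f}, false-positive rate = {fpr:.2f}")

# Group A: base rate = 0.30, false-positive rate = 0.05
# Group B: base rate = 0.60, false-positive rate = 0.33
# Both groups see perfectly calibrated scores, yet non-reoffenders in the
# higher-base-rate group are far more likely to be labelled high risk.
```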
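Notes 15 and 51 describe a “bottom-up” algorithm that grounds its recommendations in previously decided cases. The sketch below shows about the simplest possible version, under wholly hypothetical case records, features, and distances: a nearest-neighbour lookup that recommends the average sentence of the most similar past cases. It is an illustration of the idea, not a proposal.

```python
# A schematic "bottom-up" sentencing aid in the sense of notes 15 and 51:
# it recommends a sentence by averaging the most similar previously decided
# cases. Every case record, feature, and sentence below is hypothetical.

from math import dist

# (offence severity 0-10, prior convictions, harm caused 0-10) -> months
PAST_CASES = [
    ((7.0, 2.0, 6.0), 36),
    ((7.5, 3.0, 7.0), 42),
    ((3.0, 0.0, 2.0), 6),
    ((8.0, 5.0, 8.0), 60),
    ((2.5, 1.0, 2.0), 8),
    ((6.5, 2.0, 5.5), 30),
]

def recommend(case, k=3):
    """Average the sentences of the k past cases nearest in feature space."""
    nearest = sorted(PAST_CASES, key=lambda rec: dist(rec[0], case))[:k]
    return sum(sentence for _, sentence in nearest) / k

print(recommend((7.2, 2.0, 6.5)))  # 36.0 months, set by the three closest cases

# Ryberg's dilemma (note 15) is visible even here: six records make the
# output hostage to the idiosyncrasies of a tiny sample, while a sample
# large enough to smooth them out would be hard to assemble and curate.
# And since the system only extrapolates past decisions, it inherits
# whatever biases those decisions contain.
```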
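Notes 34 and 43 turn on whether an algorithm's recommendation can be explained to the person sentenced. One familiar contrast is between a transparent linear scoring rule, whose output decomposes into per-factor contributions, and an opaque learned model that admits no such decomposition (the accuracy/transparency conflict discussed by Ryberg and Petersen). The sketch below, with hypothetical factors and weights rather than any real sentencing instrument, shows the kind of explanation the transparent case makes available.

```python
# One simple form of the explicability discussed in notes 34 and 43: a
# scoring rule transparent enough that each factor's contribution to the
# final recommendation can be stated to the person being sentenced. The
# factors and weights are invented placeholders, not a real instrument.

WEIGHTS = {  # months added per unit of each (hypothetical) factor
    "offence_severity": 4.0,
    "prior_convictions": 2.5,
    "harm_caused": 1.5,
}
BASELINE = 2.0  # months

def explain(case):
    """Print the recommendation as a sum of per-factor contributions."""
    total = BASELINE
    print(f"baseline: {BASELINE:+.1f} months")
    for factor, weight in WEIGHTS.items():
        contribution = weight * case[factor]
        total += contribution
        print(f"{factor} = {case[factor]}: {contribution:+.1f} months")
    print(f"recommended sentence: {total:.1f} months")

explain({"offence_severity": 7, "prior_convictions": 2, "harm_caused": 6})
# baseline: +2.0 months
# offence_severity = 7: +28.0 months
# prior_convictions = 2: +5.0 months
# harm_caused = 6: +9.0 months
# recommended sentence: 44.0 months
```

A deep model trained on the same cases might predict more accurately, but its output would resist any such itemized account, which is one reason rational engagement by the person sentenced (note 34) could become impossible.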