Algorithmic evidence in U.S. criminal sentencing

Suzanne Kawamleh
{"title":"Algorithmic evidence in U.S criminal sentencing","authors":"Suzanne Kawamleh","doi":"10.1007/s43681-024-00473-y","DOIUrl":null,"url":null,"abstract":"<div><p>The use of automated risk assessment tools to predict a defendant’s risk of recidivism is necessarily unfair. There is a tradeoff between equal treatment and equal outcomes which constitutes the “impossibility of fairness” problem in machine learning. This article provides an account of algorithmic fairness that centers on equal treatment and requires the use of <i>equally confirmatory</i> algorithmic evidence. The analysis relies on a Bayesian account of evidence to assess AI predictions of recidivism risk as evidence for or against hypotheses about a black and white defendant’s probability of future rearrest. Such predictions are shown to provide weaker confirmatory evidence for a black defendant’s future recidivism risk than a white defendant. Thus, the use of such evidence is necessarily unfair to black defendants because such use violates equal treatment and thus cannot meet a necessary condition of algorithmic fairness. This proposed account of algorithmic fairness provides the theoretical resources to avoid the “impossibility of fairness” problem. On this view of algorithmic fairness, fairness is neither inevitable nor impossible. By requiring equally confirmatory scores, rather than simply the same scores, decision makers can both satisfy equal treatment and reduce racial disparities in criminal sentencing.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 2","pages":"1315 - 1328"},"PeriodicalIF":0.0000,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-024-00473-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The use of automated risk assessment tools to predict a defendant’s risk of recidivism is necessarily unfair. There is a tradeoff between equal treatment and equal outcomes, which constitutes the “impossibility of fairness” problem in machine learning. This article provides an account of algorithmic fairness that centers on equal treatment and requires the use of equally confirmatory algorithmic evidence. The analysis relies on a Bayesian account of evidence to assess AI predictions of recidivism risk as evidence for or against hypotheses about a black and a white defendant’s probability of future rearrest. Such predictions are shown to provide weaker confirmatory evidence of a black defendant’s future recidivism risk than of a white defendant’s. The use of such evidence is therefore necessarily unfair to black defendants: it violates equal treatment and so fails a necessary condition of algorithmic fairness. This proposed account of algorithmic fairness provides the theoretical resources to avoid the “impossibility of fairness” problem. On this view, fairness is neither inevitable nor impossible. By requiring equally confirmatory scores, rather than simply the same scores, decision makers can both satisfy equal treatment and reduce racial disparities in criminal sentencing.
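The abstract’s Bayesian framing can be made concrete with a short worked sketch. The Python snippet below is purely illustrative and is not the author’s model: the group labels, base rates, and error rates are hypothetical assumptions, not figures from the paper or from any real tool. It treats a “high risk” score as evidence for the hypothesis that the defendant will be rearrested and measures its confirmatory strength by the likelihood ratio P(score | rearrest) / P(score | no rearrest), showing how an identical label can carry different evidential weight for two groups when false-positive rates differ.

```python
# Minimal illustrative sketch (not the author's model) of the Bayesian idea:
# a "high risk" score S is evidence for the hypothesis H = "the defendant
# will be rearrested", and its confirmatory strength is the likelihood ratio
# P(S | H) / P(S | not-H). All group labels and rates are hypothetical.

def likelihood_ratio(tpr: float, fpr: float) -> float:
    """Evidential strength of a high-risk score: P(S|H) / P(S|~H)."""
    return tpr / fpr

def posterior(prior: float, tpr: float, fpr: float) -> float:
    """Bayes' theorem: P(H | S), probability of rearrest given a high score."""
    joint_h = tpr * prior              # P(S|H) * P(H)
    joint_not_h = fpr * (1.0 - prior)  # P(S|~H) * P(~H)
    return joint_h / (joint_h + joint_not_h)

# Hypothetical rates: the tool flags actual reoffenders equally often in both
# groups (same TPR), but falsely flags non-reoffenders in group A twice as
# often (higher FPR).
groups = {
    "group A": {"prior": 0.40, "tpr": 0.70, "fpr": 0.40},
    "group B": {"prior": 0.40, "tpr": 0.70, "fpr": 0.20},
}

for name, g in groups.items():
    lr = likelihood_ratio(g["tpr"], g["fpr"])
    post = posterior(g["prior"], g["tpr"], g["fpr"])
    print(f"{name}: likelihood ratio = {lr:.2f}, "
          f"P(rearrest | high risk) = {post:.2f}")
```

Under these assumed rates, the same “high risk” label is weaker evidence for group A (likelihood ratio 1.75, posterior 0.54) than for group B (likelihood ratio 3.50, posterior 0.70). A decision maker who treats the two labels as interchangeable therefore gives the groups unequal evidential treatment; the “equally confirmatory” standard in the abstract would require equalizing these ratios, not merely issuing the same scores.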

