On the Site of Predictive Justice

Noûs · Pub Date: 2023-08-27 · DOI: 10.1111/nous.12477
Seth Lazar, Jake Stone
{"title":"在预测性司法网站上","authors":"Seth Lazar, Jake Stone","doi":"10.1111/nous.12477","DOIUrl":null,"url":null,"abstract":"Abstract Optimism about our ability to enhance societal decision‐making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML‐based decision‐making, there can be moral grounds for the criticism of these predictions themselves. We introduce and defend a theory of predictive justice, according to which differential model performance for systematically disadvantaged groups can be grounds for moral criticism of the model, independently of its downstream effects. As well as helping resolve some urgent disputes around algorithmic fairness, this theory points the way to a novel dimension of epistemic ethics, related to the recently discussed category of doxastic wrong.","PeriodicalId":173366,"journal":{"name":"Noûs","volume":"140 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the Site of Predictive Justice\",\"authors\":\"Seth Lazar, Jake Stone\",\"doi\":\"10.1111/nous.12477\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Optimism about our ability to enhance societal decision‐making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML‐based decision‐making, there can be moral grounds for the criticism of these predictions themselves. We introduce and defend a theory of predictive justice, according to which differential model performance for systematically disadvantaged groups can be grounds for moral criticism of the model, independently of its downstream effects. As well as helping resolve some urgent disputes around algorithmic fairness, this theory points the way to a novel dimension of epistemic ethics, related to the recently discussed category of doxastic wrong.\",\"PeriodicalId\":173366,\"journal\":{\"name\":\"Noûs\",\"volume\":\"140 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Noûs\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1111/nous.12477\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Noûs","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1111/nous.12477","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Optimism about our ability to enhance societal decision‐making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these 'cheap' predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML‐based decision‐making, there can be moral grounds for the criticism of these predictions themselves. We introduce and defend a theory of predictive justice, according to which differential model performance for systematically disadvantaged groups can be grounds for moral criticism of the model, independently of its downstream effects. As well as helping resolve some urgent disputes around algorithmic fairness, this theory points the way to a novel dimension of epistemic ethics, related to the recently discussed category of doxastic wrong.
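The abstract's central technical notion, "differential model performance" for a disadvantaged group, can be made concrete with a small sketch. The following toy example is not from the paper: all data, group labels, noise levels, and thresholds are synthetic assumptions, used only to show how a single predictor can exhibit different accuracy and false positive rates across groups.

```python
# Illustrative sketch only: synthetic data, hypothetical groups.
# Group 1 stands in for a systematically disadvantaged group whose
# model scores are (by construction) noisier, degrading performance.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)          # hypothetical group membership
outcome = rng.random(n) < 0.3                # synthetic "true" outcomes

# Scores track outcomes, but with more noise for group 1.
noise = np.where(group == 1, 0.35, 0.15)
score = np.clip(outcome * 0.6 + 0.2 + rng.normal(0.0, noise, n), 0.0, 1.0)
pred = score >= 0.5                          # binary prediction, fixed threshold

for g in (0, 1):
    mask = group == g
    acc = np.mean(pred[mask] == outcome[mask])
    # False positive rate: P(pred = 1 | outcome = 0) within the group.
    fpr = np.mean(pred[mask] & ~outcome[mask]) / np.mean(~outcome[mask])
    print(f"group {g}: accuracy={acc:.3f}  false positive rate={fpr:.3f}")
```

On the paper's view, a gap like the one this sketch produces between the two groups could itself be grounds for moral criticism of the model, before any decision is made on the basis of its predictions.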