Making Artificial Intelligence Transparent: Fairness and the Problem of Proxy Variables

Criminal Justice Ethics (Q2, Social Sciences) · Pub Date: 2021-01-02 · DOI: 10.1080/0731129X.2021.1893932
Richard Warner, R. Sloan
Citations: 5

Abstract

AI-driven decisions can draw data from virtually any area of your life to make a decision about virtually any other area of your life. That creates fairness issues. Effective regulation to ensure fairness requires that AI systems be transparent. That is, regulators must have sufficient access to the factors that explain and justify the decisions. One approach to transparency is to require that systems be explainable, as that concept is understood in computer science. A system is explainable if one can provide a human-understandable explanation of why it makes any particular prediction. Explainability should not be equated with transparency. To address transparency and characterize its relation to explainability, we define transparency for a regulatory purpose. A system is transparent for a regulatory purpose (r-transparent) when and only when regulators have an explanation, adequate for that purpose, of why it yields the predictions it does. Explainability remains relevant to transparency but turns out to be neither necessary nor sufficient for it. The concepts of explainability and r-transparency combine to yield four possibilities: explainable and r-transparent; explainable but not r-transparent; not explainable but r-transparent; and neither explainable nor r-transparent. Combining r-transparency with ideas from the Harvard computer scientist Cynthia Dwork, we propose four requirements on AI systems.
Source journal
Criminal Justice Ethics (Social Sciences – Law)
CiteScore: 1.10
Self-citation rate: 0.00%
Articles per year: 11
Latest articles in this journal
Exposing, Reversing, and Inheriting Crimes as Traumas from the Neurosciences to Epigenetics: Why Criminal Law Cannot Yet Afford A(nother) Biology-induced Overhaul
Institutional Corruption, Institutional Corrosion and Collective Responsibility
Sentencing, Artificial Intelligence, and Condemnation: A Reply to Taylor
Double Jeopardy, Autrefois Acquit and the Legal Ethics of the Rule Against Unreasonably Splitting a Case
Ethical Resource Allocation in Policing: Why Policing Requires a Different Approach from Healthcare