
Latest Publications from Proceedings of the Conference on Fairness, Accountability, and Transparency

Algorithmic Transparency from the South: Examining the state of algorithmic transparency in Chile's public administration algorithms
Pub Date : 2023-01-01 DOI: 10.1145/3593013.3593991
José Pablo Lapostol Piderit, Romina Garrido Iglesias, María Paz Hermosilla Cornejo
{"title":"Algorithmic Transparency from the South: Examining the state of algorithmic transparency in Chile's public administration algorithms","authors":"José Pablo Lapostol Piderit, Romina Garrido Iglesias, María Paz Hermosilla Cornejo","doi":"10.1145/3593013.3593991","DOIUrl":"https://doi.org/10.1145/3593013.3593991","url":null,"abstract":"","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86082880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FAccT '21: 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event / Toronto, Canada, March 3-10, 2021
{"title":"FAccT '21: 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event / Toronto, Canada, March 3-10, 2021","authors":"","doi":"10.1145/3442188","DOIUrl":"https://doi.org/10.1145/3442188","url":null,"abstract":"","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89792355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Transparency universal
Pub Date : 2020-01-14 DOI: 10.4324/9780429340819-5
Rachel Adams
{"title":"Transparency universal","authors":"Rachel Adams","doi":"10.4324/9780429340819-5","DOIUrl":"https://doi.org/10.4324/9780429340819-5","url":null,"abstract":"","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80774647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Resisting transparency
Pub Date : 2020-01-14 DOI: 10.4324/9780429340819-10
Rachel Adams
{"title":"Resisting transparency","authors":"Rachel Adams","doi":"10.4324/9780429340819-10","DOIUrl":"https://doi.org/10.4324/9780429340819-10","url":null,"abstract":"","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85316296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Conclusion
Pub Date : 2020-01-14 DOI: 10.4324/9780429340819-11
Rachel Adams
{"title":"Conclusion","authors":"Rachel Adams","doi":"10.4324/9780429340819-11","DOIUrl":"https://doi.org/10.4324/9780429340819-11","url":null,"abstract":"","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88247144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People
Pub Date : 2019-01-29 DOI: 10.1145/3287560.3287593
Z. Obermeyer, S. Mullainathan
A single algorithm drives an important health care decision for over 70 million people in the US. When health systems anticipate that a patient will have especially complex and intensive future health care needs, she is enrolled in a 'care management' program, which provides considerable additional resources: greater attention from trained providers and help with coordination of her care. To determine which patients will have complex future health care needs, and thus benefit from program enrollment, many systems rely on an algorithmically generated commercial risk score. In this paper, we exploit a rich dataset to study racial bias in a commercial algorithm that is deployed nationwide today in many of the US's most prominent Accountable Care Organizations (ACOs). We document significant racial bias in this widely used algorithm, using data on primary care patients at a large hospital. Blacks and whites with the same algorithmic risk scores have very different realized health. For example, the highest-risk black patients (those at the threshold where patients are auto-enrolled in the program) have significantly more chronic illnesses than white enrollees with the same risk score. We use detailed physiological data to show the pervasiveness of the bias: across a range of biomarkers, from HbA1c levels for diabetics to blood pressure control for hypertensives, we find significant racial health gaps conditional on risk score. This bias has significant material consequences for patients: it effectively means that white patients with the same health as black patients are far more likely to be enrolled in the care management program and to benefit from its resources. If we simulated a world without this gap in predictions, blacks would be auto-enrolled into the program at more than double the current rate. An unusual aspect of our dataset is that we observe not just the risk scores but also the input data and objective function used to construct them. This provides a unique window into the mechanisms by which bias arises. The algorithm is given a data frame with (1) Y_{it} (label), total medical expenditures ('costs') in year t; and (2) X_{i,t-1} (features), fine-grained care utilization data in year t-1 (e.g., visits to cardiologists, number of x-rays, etc.). The algorithm's predicted risk of developing complex health needs is thus in fact predicted cost. And by this metric, one could easily call the algorithm unbiased: costs are very similar for black and white patients with the same risk scores. So far, this is inconsistent with algorithmic bias: conditional on risk score, predictions do not favor whites or blacks. The fundamental problem we uncover is that when thinking about 'health care needs,' hospitals and insurers focus on costs. They use an algorithm whose specific objective is cost prediction, and from this perspective, predictions are accurate and unbiased. Yet from the social perspective, actual health -- not just cost -- also matters. This is where the problem lies: cost is not the same as health. While cost is a reasonable proxy for health (sicker patients do, on average, cost more), it is an imperfect one: factors other than health, such as race, also drive costs. We find that blacks generate higher average costs than whites, but this gap decomposes into two countervailing effects. First, blacks bear a different and larger burden of disease, which makes them costlier. That difference in illness is offset by a second factor: conditional on their chronic conditions, blacks spend less, a force that substantially narrows the overall cost gap. Perversely, the fact that blacks cost less than whites conditional on health means that an algorithm which accurately predicts costs across racial groups must also produce biased predictions of health. The root of this bias lies not in the prediction procedure or the underlying data, but in the algorithm's objective function itself. The bias is akin to, but distinct from, the problem of mismeasured labels: it arises from the choice of label rather than its measurement, which in turn reflects the divergent objective functions of private actors in the health sector and of society. From the private perspective, the variable being optimized -- cost -- is optimized appropriately. But our results hint that algorithms may amplify a fundamental problem in health care: externalities arise when providers focus too narrowly on financial motives, optimizing costs to the detriment of health. In this sense, our findings suggest that a problem endemic to health care -- incentives that induce health systems to attend to money rather than health -- also shapes how algorithms are built and monitored.
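The calibration check described in this abstract can be sketched directly. The following Python fragment is a minimal illustration, not the authors' code: it assumes a hypothetical pandas DataFrame df with columns risk_score (the commercial score), race, cost (realized spending), and n_chronic (number of active chronic conditions), and compares realized cost against realized health across race within risk-score deciles.

# Minimal sketch; column names are assumed, not the paper's actual data schema.
import pandas as pd

def gaps_by_risk_decile(df: pd.DataFrame) -> pd.DataFrame:
    """Compare realized cost and realized health across race, conditional on risk-score decile."""
    df = df.copy()
    df["risk_decile"] = pd.qcut(df["risk_score"], 10, labels=False)
    out = (
        df.groupby(["risk_decile", "race"])
          .agg(mean_cost=("cost", "mean"),
               mean_chronic=("n_chronic", "mean"),
               n=("cost", "size"))
          .unstack("race")
    )
    # A cost-calibrated but health-biased score shows similar mean_cost across
    # race within each decile, while mean_chronic diverges.
    return out

If the pattern reported in the abstract holds, the mean_cost columns are close across groups within each decile while the mean_chronic columns are not.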
Citations: 97
Fairness-Aware Programming
Pub Date : 2019-01-29 DOI: 10.1145/3287560.3287588
Aws Albarghouthi, Samuel Vinitsky
Increasingly, programming tasks involve automating and deploying sensitive decision-making processes that may have adverse impacts on individuals or groups of people. The issue of fairness in automated decision-making has thus become a major problem, attracting interdisciplinary attention. In this work, we aim to make fairness a first-class concern in programming. Specifically, we propose fairness-aware programming, where programmers can state fairness expectations natively in their code, and have a runtime system monitor decision-making and report violations of fairness. We present a rich and general specification language that allows a programmer to specify a range of fairness definitions from the literature, as well as others. As the decision-making program executes, the runtime maintains statistics on the decisions made and incrementally checks whether the fairness definitions have been violated, reporting such violations to the developer. The advantages of this approach are twofold: (i) Enabling declarative mathematical specifications of fairness in the programming language simplifies the process of checking fairness, as the programmer does not have to write ad hoc code for maintaining statistics. (ii) Compared to existing techniques for checking and ensuring fairness, our approach monitors a decision-making program in the wild, which may be running on a distribution that is unlike the dataset on which a classifier was trained and tested. We describe an implementation of our proposed methodology as a library in the Python programming language and illustrate its use on case studies from the algorithmic fairness literature.
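The paper's own specification language and runtime are not reproduced here; the fragment below is only a hypothetical Python illustration of the general idea, with invented names (monitor_demographic_parity, approve_loan): a decorator records each decision together with a sensitive attribute and incrementally checks one fairness expectation, printing a report when it is violated.

# Hypothetical illustration of runtime fairness monitoring; not the authors' library API.
from collections import defaultdict
from functools import wraps

def monitor_demographic_parity(group_arg, threshold=0.8, min_samples=100):
    """Warn when the ratio of positive-decision rates between groups drops below threshold."""
    stats = defaultdict(lambda: {"n": 0, "pos": 0})

    def decorator(decide):
        @wraps(decide)
        def wrapper(*args, **kwargs):
            outcome = decide(*args, **kwargs)
            group = kwargs[group_arg]                # sensitive attribute, passed by keyword
            stats[group]["n"] += 1
            stats[group]["pos"] += int(bool(outcome))
            rates = {g: s["pos"] / s["n"] for g, s in stats.items() if s["n"] >= min_samples}
            if len(rates) >= 2:
                lo, hi = min(rates.values()), max(rates.values())
                if hi > 0 and lo / hi < threshold:
                    print(f"[fairness monitor] parity ratio {lo / hi:.2f} below {threshold}: {rates}")
            return outcome
        return wrapper
    return decorator

@monitor_demographic_parity(group_arg="sex")
def approve_loan(score, *, sex):
    return score > 0.5

The design choice mirrored here is the one the abstract emphasizes: the fairness expectation is declared once, next to the decision function, and the bookkeeping lives in the runtime rather than in ad hoc application code.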
Citations: 36
A Taxonomy of Ethical Tensions in Inferring Mental Health States from Social Media
Pub Date : 2019-01-29 DOI: 10.1145/3287560.3287587
Stevie Chancellor, M. Birnbaum, E. Caine, V. Silenzio, M. Choudhury
Powered by machine learning techniques, social media provides an unobtrusive lens into individual behaviors, emotions, and psychological states. Recent research has successfully employed social media data to predict mental health states of individuals, ranging from the presence and severity of mental disorders like depression to the risk of suicide. These algorithmic inferences hold great potential in supporting early detection and treatment of mental disorders and in the design of interventions. At the same time, the outcomes of this research can pose great risks to individuals, such as issues of incorrect, opaque algorithmic predictions, involvement of bad or unaccountable actors, and potential biases from intentional or inadvertent misuse of insights. Amplifying these tensions, there are also divergent and sometimes inconsistent methodological gaps and under-explored ethics and privacy dimensions. This paper presents a taxonomy of these concerns and ethical challenges, drawing from existing literature, and poses questions to be resolved as this research gains traction. We identify three areas of tension: ethics committees and the gap of social media research; questions of validity, data, and machine learning; and implications of this research for key stakeholders. We conclude with calls to action to begin resolving these interdisciplinary dilemmas.
Citations: 135
Clear Sanctions, Vague Rewards: How China's Social Credit System Currently Defines "Good" and "Bad" Behavior
Pub Date : 2019-01-29 DOI: 10.1145/3287560.3287585
Severin Engelmann, Mo Chen, Felix A. Fischer, Ching-yu Kao, Jens Grossklags
China's Social Credit System (SCS, 社会信用体系 or shehui xinyong tixi) is expected to become the first digitally-implemented nationwide scoring system with the purpose to rate the behavior of citizens, companies, and other entities. Thereby, in the SCS, "good" behavior can result in material rewards and reputational gain while "bad" behavior can lead to exclusion from material resources and reputational loss. Crucially, for the implementation of the SCS, society must be able to distinguish between behaviors that result in reward and those that lead to sanction. In this paper, we conduct the first transparency analysis of two central administrative information platforms of the SCS to understand how the SCS currently defines "good" and "bad" behavior. We analyze 194,829 behavioral records and 942 reports on citizens' behaviors published on the official Beijing SCS website and the national SCS platform "Credit China", respectively. By applying a mixed-method approach, we demonstrate that there is a considerable asymmetry between information provided by the so-called Redlist (information on "good" behavior) and the Blacklist (information on "bad" behavior). At the current stage of the SCS implementation, the majority of explanations on blacklisted behaviors includes a detailed description of the causal relation between inadequate behavior and its sanction. On the other hand, explanations on redlisted behavior, which comprise positive norms fostering value internalization and integration, are less transparent. Finally, this first SCS transparency analysis suggests that socio-technical systems applying a scoring mechanism might use different degrees of transparency to achieve particular behavioral engineering goals.
Citations: 46
Measuring the Biases that Matter: The Ethical and Causal Foundations for Measures of Fairness in Algorithms
Pub Date : 2019-01-29 DOI: 10.1145/3287560.3287573
Bruce Glymour, J. Herington
Measures of algorithmic bias can be roughly classified into four categories, distinguished by the conditional probabilistic dependencies to which they are sensitive. First, measures of "procedural bias" diagnose bias when the score returned by an algorithm is probabilistically dependent on a sensitive class variable (e.g. race or sex). Second, measures of "outcome bias" capture probabilistic dependence between class variables and the outcome for each subject (e.g. parole granted or loan denied). Third, measures of "behavior-relative error bias" capture probabilistic dependence between class variables and the algorithmic score, conditional on target behaviors (e.g. recidivism or loan default). Fourth, measures of "score-relative error bias" capture probabilistic dependence between class variables and behavior, conditional on score. Several recent discussions have demonstrated a tradeoff between these different measures of algorithmic bias, and at least one recent paper has suggested conditions under which tradeoffs may be minimized. In this paper we use the machinery of causal graphical models to show that, under standard assumptions, the underlying causal relations among variables force some tradeoffs. We delineate a number of normative considerations that are encoded in different measures of bias, with reference to the philosophical literature on the wrongfulness of disparate treatment and disparate impact. While both kinds of error bias are nominally motivated by concern to avoid disparate impact, we argue that consideration of causal structures shows that these measures are better understood as complicated and unreliable measures of procedural biases (i.e. disparate treatment). Moreover, while procedural bias is indicative of disparate treatment, we show that the measure of procedural bias one ought to adopt is dependent on the account of the wrongfulness of disparate treatment one endorses. Finally, given that neither score-relative nor behavior-relative measures of error bias capture the relevant normative considerations, we suggest that error bias proper is best measured by score-based measures of accuracy, such as the Brier score.
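The closing suggestion, that error bias proper is best measured by score-based measures of accuracy such as the Brier score, is straightforward to operationalize. The sketch below is illustrative only, with assumed inputs: arrays y (observed behavior, 0/1), p (predicted probability), and group (sensitive class label); it computes the Brier score separately for each group so that per-group accuracy can be compared.

# Per-group Brier score; the inputs below are hypothetical.
import numpy as np

def brier_by_group(y: np.ndarray, p: np.ndarray, group: np.ndarray) -> dict:
    """Mean squared error between predicted probabilities and outcomes, per sensitive group."""
    return {g: float(np.mean((p[group == g] - y[group == g]) ** 2))
            for g in np.unique(group)}

y = np.array([1, 0, 1, 1, 0, 0])
p = np.array([0.9, 0.2, 0.6, 0.8, 0.4, 0.1])
group = np.array(["a", "a", "a", "b", "b", "b"])
print(brier_by_group(y, p, group))   # a large gap suggests the score predicts behavior better for one group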
Citations: 48