
Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency: latest publications

AI explainability 360: hands-on tutorial
Vijay Arya, R. Bellamy, Pin-Yu Chen, Amit Dhurandhar, M. Hind, Samuel C. Hoffman, Stephanie Houde, Q. Liao, Ronny Luss, A. Mojsilovic, Sami Mourad, Pablo Pedemonte, R. Raghavendra, John T. Richards, P. Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
This tutorial will teach participants to use and contribute to a new open-source Python package named AI Explainability 360 (AIX360) (https://aix360.mybluemix.net), a comprehensive and extensible toolkit that supports interpretability and explainability of data and machine learning models.

Motivation for the toolkit. The AIX360 toolkit illustrates that there is no single approach to explainability that works best for all situations. There are many ways to explain: data vs. model, direct vs. post-hoc explanation, local vs. global, etc. The toolkit includes ten state-of-the-art algorithms that cover different dimensions of explanations, along with proxy explainability metrics. Moreover, one of our prime objectives is for AIX360 to serve as an educational tool even for non-machine-learning experts (viz. social scientists, healthcare experts). To this end, the toolkit has an interactive demonstration, highly descriptive Jupyter notebooks covering diverse real-world use cases, and guidance materials, all helping one navigate the complex explainability space. Compared to existing open-source efforts on AI explainability, AIX360 takes a step forward in focusing on a greater diversity of ways of explaining, usability in industry, and software engineering. By integrating these three aspects, we hope that AIX360 will attract researchers in AI explainability and help translate our collective research results for practicing data scientists and developers deploying solutions in a variety of industries. Regarding the first aspect of diversity, Table 1 in [1] compares AIX360 to existing toolkits in terms of the types of explainability methods offered. The table shows that AIX360 not only covers more types of methods but also has metrics which can act as proxies for judging the quality of explanations. Regarding the second aspect of industry usage, AIX360 illustrates how these explainability algorithms can be applied in specific contexts (please see Audience, goals, and outcomes below). In just a few months since its initial release, the AIX360 toolkit already has a vibrant Slack community with over 120 members and has been forked almost 80 times, accumulating over 400 stars. This response leads us to believe that there is significant interest in the community in learning more about the toolkit and explainability in general.

Audience, goals, and outcomes. The presentations in the tutorial will be aimed at an audience with different backgrounds and computer science expertise levels. For all audience members, and especially those unfamiliar with Python programming, AIX360 provides an interactive experience (http://aix360.mybluemix.net/data) centered around a credit approval scenario as a gentle and grounded introduction to the concepts and capabilities of the toolkit. We will also teach all participants which type of explainability algorithm is most appropriate for a given use case, not only for those in the toolkit but also from the broader explainability literature. Knowing which explainability algorithm applies in which context, and when to use it, can benefit most participants regardless of their technical background. The second part of the tutorial covers three use cases spanning different industry domains and explanation approaches. Data scientists and developers can gain hands-on experience with the toolkit by running and modifying the Jupyter notebooks, while others can follow along with rendered versions of the notebooks. The tutorial is organized as follows:
1) Prelude: a brief introduction to the field of explainability and commonly used terminology.
2) Interactive web experience: the AIX360 interactive web experience (http://aix360.mybluemix.net/data) is designed to show a non-computer-science audience how different explainability methods serve different stakeholders (data scientists, loan officers, and bank customers) in a credit approval scenario.
3) Taxonomy: we then present a taxonomy we created to organize the space of explanations and to guide practitioners toward appropriate choices for their applications.
4) Installation: we transition to a Python environment and ask participants to install the AIX360 package on their machines following the provided instructions.
5) Example use cases in finance, government, and healthcare: we walk participants through three use cases from different application domains in the form of Jupyter notebooks.
6) Metrics: we briefly present the two explainability metrics currently available through the toolkit.
7) Future directions: the final part discusses future directions and how participants can contribute to the toolkit.
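To make the direct-versus-post-hoc and local-versus-global distinctions above concrete, here is a minimal sketch on synthetic credit-style data. It is not code from the AIX360 notebooks; the feature names and data are invented, and it uses scikit-learn only. The toolkit itself is installed with `pip install aix360` and ships its own explainer classes (e.g., Protodash and CEM), whose exact APIs are described in its documentation and notebooks.

```python
# Illustrative sketch only (not AIX360 code): contrasts a direct, global
# explanation (an interpretable model's own coefficients) with a crude
# post-hoc, local probe (feature perturbation for one applicant).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "history_years"]  # invented features

# Synthetic credit-approval data.
X = rng.normal(size=(500, 3))
y = (1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Direct, global explanation: the fitted coefficients apply to every input.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"global weight for {name}: {coef:+.2f}")

# Post-hoc, local explanation for one applicant: bump each feature by one
# standard deviation and record how the approval probability moves.
applicant = X[:1]
base = model.predict_proba(applicant)[0, 1]
for i, name in enumerate(feature_names):
    bumped = applicant.copy()
    bumped[0, i] += 1.0
    delta = model.predict_proba(bumped)[0, 1] - base
    print(f"local effect of +1 sd {name}: {delta:+.3f}")
```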
{"title":"AI explainability 360: hands-on tutorial","authors":"Vijay Arya, R. Bellamy, Pin-Yu Chen, Amit Dhurandhar, M. Hind, Samuel C. Hoffman, Stephanie Houde, Q. Liao, Ronny Luss, A. Mojsilovic, Sami Mourad, Pablo Pedemonte, R. Raghavendra, John T. Richards, P. Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang","doi":"10.1145/3351095.3375667","DOIUrl":"https://doi.org/10.1145/3351095.3375667","url":null,"abstract":"This tutorial will teach participants to use and contribute to a new open-source Python package named AI Explainability 360 (AIX360) (https://aix360.mybluemix.net), a comprehensive and extensible toolkit that supports interpretability and explainability of data and machine learning models. Motivation for the toolkit. The AIX360 toolkit illustrates that there is no single approach to explainability that works best for all situations. There are many ways to explain: data vs. model, direct vs. post-hoc explanation, local vs. global, etc. The toolkit includes ten state of the art algorithms that cover different dimensions of explanations along with proxy explainability metrics. Moreover, one of our prime objectives is for AIX360 to serve as an educational tool even for non-machine learning experts (viz. social scientists, healthcare experts). To this end, the toolkit has an interactive demonstration, highly descriptive Jupyter notebooks covering diverse real-world use cases, and guidance materials, all helping one navigate the complex explainability space. Compared to existing open-source efforts on AI explainability, AIX360 takes a step forward in focusing on a greater diversity of ways of explaining, usability in industry, and software engineering. By integrating these three aspects, we hope that AIX360 will attract researchers in AI explainability and help translate our collective research results for practicing data scientists and developers deploying solutions in a variety of industries. Regarding the first aspect of diversity, Table 1 in [1] compares AIX360 to existing toolkits in terms of the types of explainability methods offered. The table shows that AIX360 not only covers more types of methods but also has metrics which can act as proxies for judging the quality of explanations. Regarding the second aspect of industry usage, AIX360 illustrates how these explainability algorithms can be applied in specific contexts (please see Audience, goals, and outcomes below). In just a few months since its initial release, the AIX360 toolkit already has a vibrant slack community with over 120 members and has been forked almost 80 times accumulating over 400 stars. This response leads us to believe that there is significant interest in the community in learning more about the toolkit and explainability in general. Audience, goals, and outcomes. The presentations in the tutorial will be aimed at an audience with different backgrounds and computer science expertise levels. For all audience members and especially those unfamiliar with Python programming, AIX360 provides an interactive experience (http://aix360.mybluemix.net/data) centered around a credit approval scenario as a gentle and grounded introduction to the concepts and capabilities of the toolkit. 
We will also teach all participants which type of explainability algorithm is most appropriate for a given use case, not only for those in the toolkit but also from the broader explainability literature","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125069309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 9
Manifesting the sociotechnical: experimenting with methods for social context and social justice
E. Goss, Lily Hu, Manuel Sabin, Stephanie Teeple
Critiques of 'algorithmic fairness' have counseled against a purely technical approach. Recent work from the FAT* conference has warned specifically about abstracting away the social context that these automated systems are operating within and has suggested that "[fairness work] require[s] technical researchers to learn new skills or partner with social scientists" [Fairness and abstraction in sociotechnical systems, Selbst et al. 2019, FAT* '19]. That "social context" includes groups outside the academy organizing for data and/or tech justice (e.g., Allied Media Projects, Stop LAPD Spying Coalition, data4blacklives, etc). These struggles have deep historical roots but have become prominent in the past several years alongside broader citizen-science efforts. In this CRAFT session we as STEM researchers hope to initiate conversation about methods used by community organizers to analyze power relations present in that social context. We will take this time to learn together and discuss if/how these and other methods, collaborations and efforts can be used to actualize oft-mentioned critiques of algorithmic fairness and move toward a data justice-oriented approach. Many scholars and activists have spoken on how to approach social context when discussing algorithmic fairness interventions. Community organizing and attendant methods for power analysis present one such approach: documenting all stakeholders and entities relevant to an issue and the nature of the power differentials between them. The facilitators for this session are not experts in community organizing theory or practice. Instead, we will share what we have learned from our readings of decades of rich work and writings from community organizers. This session is a collective, interdisciplinary learning experience, open to all who see their interests as relevant to the conversation. We will open with a discussion of community organizing practice: What is community organizing, what are its goals, methods, past and ongoing examples? What disciplines and intellectual lineages does it draw from? We will incorporate key sources we have found helpful for synthesizing this knowledge so that participants can continue exposing themselves to the field after the conference. We will also consider the concept of social power, including power that the algorithmic fairness community holds. Noting that there are many ways to theorize and understand power, we will share the framings that have been most useful to us. We plan to present different tools, models and procedures for doing power analysis in various organizing settings. We will propose to our group that we conduct a power analysis of our own. We have prepared a hypothetical but realistic scenario involving risk assessment in a hospital setting as an example. However, we encourage participants to bring their own experiences to the table, especially if they pertain in any way to data injustice. We also invite participants to bring examples of ongoing organizing efforts with which algorithmic fairness researchers can stand in solidarity. Participants will leave this session with: 1) an understanding of the key terms and resources needed to learn more about these topics; and 2) initial experience analyzing power in a realistic, grounded scenario.
{"title":"Manifesting the sociotechnical: experimenting with methods for social context and social justice","authors":"E. Goss, Lily Hu, Manuel Sabin, Stephanie Teeple","doi":"10.1145/3351095.3375682","DOIUrl":"https://doi.org/10.1145/3351095.3375682","url":null,"abstract":"Critiques of 'algorithmic fairness' have counseled against a purely technical approach. Recent work from the FAT* conference has warned specifically about abstracting away the social context that these automated systems are operating within and has suggested that \"[fairness work] require[s] technical researchers to learn new skills or partner with social scientists\" [Fairness and abstraction in sociotechnical systems, Selbst et al. 2019, FAT* '19]. That \"social context\" includes groups outside the academy organizing for data and/or tech justice (e.g., Allied Media Projects, Stop LAPD Spying Coalition, data4blacklives, etc). These struggles have deep historical roots but have become prominent in the past several years alongside broader citizen-science efforts. In this CRAFT session we as STEM researchers hope to initiate conversation about methods used by community organizers to analyze power relations present in that social context. We will take this time to learn together and discuss if/how these and other methods, collaborations and efforts can be used to actualize oft-mentioned critiques of algorithmic fairness and move toward a data justice-oriented approach. Many scholars and activists have spoken on how to approach social context when discussing algorithmic fairness interventions. Community organizing and attendant methods for power analysis present one such approach: documenting all stakeholders and entities relevant to an issue and the nature of the power differentials between them. The facilitators for this session are not experts in community organizing theory or practice. Instead, we will share what we have learned from our readings of decades of rich work and writings from community organizers. This session is a collective, interdisciplinary learning experience, open to all who see their interests as relevant to the conversation. We will open with a discussion of community organizing practice: What is community organizing, what are its goals, methods, past and ongoing examples? What disciplines and intellectual lineages does it draw from? We will incorporate key sources we have found helpful for synthesizing this knowledge so that participants can continue exposing themselves to the field after the conference. We will also consider the concept of social power, including power that the algorithmic fairness community holds. Noting that there are many ways to theorize and understand power, we will share the framings that have been most useful to us. We plan to present different tools, models and procedures for doing power analysis in various organizing settings. We will propose to our group that we conduct a power analysis of our own. We have prepared a hypothetical but realistic scenario involving risk assessment in a hospital setting as an example. However, we encourage participants to bring their own experiences to the table, especially if they pertain in any way to data injustice. 
We also invite participants to bring examples of ongo","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126491256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Probing ML models for fairness with the what-if tool and SHAP: hands-on tutorial
James Wexler, Mahima Pushkarna, Sara Robinson, Tolga Bolukbasi, Andrew Zaldivar
As more and more industries use machine learning, it's important to understand how these models make predictions, and where bias can be introduced in the process. In this tutorial we'll walk through two open source frameworks for analyzing your models from a fairness perspective. We'll start with the What-If Tool, a visualization tool that you can run inside a Python notebook to analyze an ML model. With the What-If Tool, you can identify dataset imbalances, see how individual features impact your model's prediction through partial dependence plots, and analyze human-centered ML models from a fairness perspective using various optimization strategies. Then we'll look at SHAP, a tool for interpreting the output of any machine learning model, and seeing how a model arrived at predictions for individual datapoints. We will then show how to use SHAP and the What-If Tool together. After the tutorial you'll have the skills to get started with both of these tools on your own datasets, and be better equipped to analyze your models from a fairness perspective.
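As a hedged sketch of the SHAP half of the workflow described above (the What-If Tool portion requires its notebook widget and is omitted), the snippet below fits a tree ensemble on synthetic, invented loan-style features and reads off both a global importance summary and a local attribution for a single row. It is not the tutorial's notebook, only the general pattern.

```python
# Minimal SHAP sketch under stated assumptions: synthetic data and a
# scikit-learn tree ensemble, explained with shap.TreeExplainer.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["age", "income", "loan_amount", "prior_defaults"]  # invented

X = rng.normal(size=(300, 4))
y = 2.0 * X[:, 1] - 1.0 * X[:, 3] + rng.normal(scale=0.3, size=300)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global view: mean absolute attribution per feature across the dataset.
for name, imp in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: mean |SHAP| = {imp:.3f}")

# Local view: how each feature pushed the first row's prediction away
# from the expected value over the background data.
print("expected value:", explainer.expected_value)
print(dict(zip(feature_names, np.round(shap_values[0], 3))))
```

In a notebook, shap.summary_plot(shap_values, X) renders the same global view graphically, which is closer to how the tutorial presents it.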
{"title":"Probing ML models for fairness with the what-if tool and SHAP: hands-on tutorial","authors":"James Wexler, Mahima Pushkarna, Sara Robinson, Tolga Bolukbasi, Andrew Zaldivar","doi":"10.1145/3351095.3375662","DOIUrl":"https://doi.org/10.1145/3351095.3375662","url":null,"abstract":"As more and more industries use machine learning, it's important to understand how these models make predictions, and where bias can be introduced in the process. In this tutorial we'll walk through two open source frameworks for analyzing your models from a fairness perspective. We'll start with the What-If Tool, a visualization tool that you can run inside a Python notebook to analyze an ML model. With the What-If Tool, you can identify dataset imbalances, see how individual features impact your model's prediction through partial dependence plots, and analyze human-centered ML models from a fairness perspective using various optimization strategies. Then we'll look at SHAP, a tool for interpreting the output of any machine learning model, and seeing how a model arrived at predictions for individual datapoints. We will then show how to use SHAP and the What-If Tool together. After the tutorial you'll have the skills to get started with both of these tools on your own datasets, and be better equipped to analyze your models from a fairness perspective.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128869894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
The effects of competition and regulation on error inequality in data-driven markets
Hadi Elzayn, Benjamin Fish
Recent work has documented instances of unfairness in deployed machine learning models, and significant researcher effort has been dedicated to creating algorithms that intrinsically consider fairness. In this work, we highlight another source of unfairness: market forces that drive differential investment in the data pipeline for differing groups. We develop a high-level model to study this question. First, we show that our model predicts unfairness in a monopoly setting. Then, we show that under all but the most extreme models, competition does not eliminate this tendency, and may even exacerbate it. Finally, we consider two avenues for regulating a machine-learning driven monopolist - relative error inequality and absolute error-bounds - and quantify the price of fairness (and who pays it). These models imply that mitigating fairness concerns may require policy-driven solutions, not only technological ones.
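As a toy illustration of the mechanism the abstract describes, and emphatically not the authors' model, the simulation below gives two groups different underlying signal but lets the modeled firm collect far more labeled data for group A than for group B; a single model trained on the pooled data then typically shows noticeably higher test error on the under-sampled group. All coefficients and sample sizes are invented.

```python
# Toy simulation (not the paper's model): unequal investment in labeled
# data across two groups leads to unequal test error for a pooled model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample_group(n, coef):
    """Draw n labeled examples whose labels follow the given coefficients."""
    X = rng.normal(size=(n, 5))
    y = (X @ coef + rng.normal(scale=1.0, size=n)) > 0
    return X, y

coef_a = np.array([1.5, -1.0, 0.5, 0.0, 0.0])   # group A's relationship
coef_b = np.array([0.0, -1.0, 1.5, 1.0, 0.0])   # group B's differs

# The firm "invests" in 5000 labeled examples for A but only 100 for B.
Xa, ya = sample_group(5000, coef_a)
Xb, yb = sample_group(100, coef_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh test samples from each group expose the error gap.
for name, coef in [("group A", coef_a), ("group B", coef_b)]:
    Xt, yt = sample_group(2000, coef)
    print(f"{name}: test error = {(model.predict(Xt) != yt).mean():.3f}")
```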
{"title":"The effects of competition and regulation on error inequality in data-driven markets","authors":"Hadi Elzayn, Benjamin Fish","doi":"10.1145/3351095.3372842","DOIUrl":"https://doi.org/10.1145/3351095.3372842","url":null,"abstract":"Recent work has documented instances of unfairness in deployed machine learning models, and significant researcher effort has been dedicated to creating algorithms that intrinsically consider fairness. In this work, we highlight another source of unfairness: market forces that drive differential investment in the data pipeline for differing groups. We develop a high-level model to study this question. First, we show that our model predicts unfairness in a monopoly setting. Then, we show that under all but the most extreme models, competition does not eliminate this tendency, and may even exacerbate it. Finally, we consider two avenues for regulating a machine-learning driven monopolist - relative error inequality and absolute error-bounds - and quantify the price of fairness (and who pays it). These models imply that mitigating fairness concerns may require policy-driven solutions, not only technological ones.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134490810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
Bridging the gap from AI ethics research to practice
K. Baxter, Yoav Schlesinger, Sarah E. Aerni, Lewis Baker, Julie Dawson, K. Kenthapadi, Isabel M. Kloumann, Hanna M. Wallach
The study of fairness in machine learning applications has seen significant academic inquiry, research and publication in recent years. Concurrently, technology companies have begun to instantiate nascent programs in AI ethics and product ethics more broadly. As a result of these efforts, AI ethics practitioners have piloted new processes to evaluate and ensure fairness in their machine learning applications. In this session, six industry practitioners, hailing from LinkedIn, Yoti, Microsoft, Pymetrics, Facebook, and Salesforce, share insights from the work they have undertaken in the area of fairness: what has worked and what has not, lessons learned, and best practices instituted as a result.
• Krishnaram Kenthapadi presents LinkedIn's fairness-aware reranking for talent search.
• Julie Dawson shares how Yoti applies ML fairness research to age estimation in their digital identity platform.
• Hanna Wallach contributes how Microsoft is applying fairness principles in practice.
• Lewis Baker presents Pymetrics' fairness mechanisms in their hiring algorithm.
• Isabel Kloumann presents Facebook's fairness assessment framework through a case study of fairness in a content moderation system.
• Sarah Aerni contributes how Salesforce is building fairness features into the Einstein AI platform.
Building on those insights, we discuss and brainstorm modalities through which to build upon the practitioners' work. Opportunities for further research or collaboration are identified, with the goal of developing a shared understanding of the experiences and needs of AI ethics practitioners. Ultimately, the aim is to develop a playbook for more ethical and fair AI product development and deployment.
{"title":"Bridging the gap from AI ethics research to practice","authors":"K. Baxter, Yoav Schlesinger, Sarah E. Aerni, Lewis Baker, Julie Dawson, K. Kenthapadi, Isabel M. Kloumann, Hanna M. Wallach","doi":"10.1145/3351095.3375680","DOIUrl":"https://doi.org/10.1145/3351095.3375680","url":null,"abstract":"The study of fairness in machine learning applications has seen significant academic inquiry, research and publication in recent years. Concurrently, technology companies have begun to instantiate nascent program in AI ethics and product ethics more broadly. As a result of these efforts, AI ethics practitioners have piloted new processes to evaluate and ensure fairness in their machine learning applications. In this session, six industry practitioners, hailing from LinkedIn, Yoti, Microsoft, Pymetrics, Facebook, and Salesforce share insights from the work they have undertaken in the area of fairness, what has worked and what has not, lessons learned and best practices instituted as a result. • Krishnaram Kenthapadi presents LinkedIn's fairness-aware reranking for talent search. • Julie Dawson shares how Yoti applies ML fairness research to age estimation in their digital identity platform. • Hanna Wallach contributes how Microsoft is applying fairness principles in practice. • Lewis Baker presents Pymetric's fairness mechanisms in their hiring algorithm. • Isabel Kloumann presents Facebook's fairness assessment framework through a case study of fairness in a content moderation system. • Sarah Aerni contributes how Salesforce is building fairness features into the Einstein AI platform. Building on those insights, we discuss insights and brainstorm modalities through which to build upon the practitioners' work. Opportunities for further research or collaboration are identified, with the goal of developing a shared understanding of experiences and needs of AI ethics practitioners. Ultimately, the aim is to develop a playbook for more ethical and fair AI product development and deployment.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129589611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Explainable AI in industry: practical challenges and lessons learned: implications tutorial
Krishna Gade, S. Geyik, K. Kenthapadi, Varun Mithal, Ankur Taly
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI have become far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability [2, 4]. Model explainability is considered a prerequisite for building trust and adoption of AI systems in high stakes domains such as lending and healthcare [1] requiring reliability, safety, and fairness. It is also critical to automated transportation, and other industrial applications with significant socio-economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling. As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale [5, 6, 8]. In fact, the field of explainability in AI/ML is at an inflexion point. There is a tremendous need from the societal, regulatory, commercial, end-user, and model developer perspectives. Consequently, practical and scalable explainability approaches are rapidly becoming available. The challenges for the research community include: (i) achieving consensus on the right notion of model explainability, (ii) identifying and formalizing explainability tasks from the perspectives of various stakeholders, and (iii) designing measures for evaluating explainability techniques. In this tutorial, we will first motivate the need for model interpretability and explainability in AI [3] from various perspectives. We will then provide a brief overview of several explainability techniques and tools. The rest of the tutorial will focus on the real-world application of explainability techniques in industry. We will present case studies spanning several domains such as:
• Search and Recommendation systems: Understanding of search and recommendations systems, as well as how retrieval and ranking decisions happen in real-time [7]. Example applications include explanation of decisions made by an AI system towards job recommendations, ranking of potential candidates for job posters, and content recommendations.
• Sales: Understanding of sales predictions in terms of customer up-sell/churn.
• Fraud Detection: Examining and explaining AI systems that determine whether a content or event is fraudulent.
• Lending: How to understand/interpret lending decisions made by an AI system.
We will focus on the sociotechnical dimensions, practical challenges, and lessons learned during development and deployment of these systems, which would be beneficial for researchers and practitioners interested in explainable AI. Finally, we will discuss open challenges and research directions for the community.
{"title":"Explainable AI in industry: practical challenges and lessons learned: implications tutorial","authors":"Krishna Gade, S. Geyik, K. Kenthapadi, Varun Mithal, Ankur Taly","doi":"10.1145/3351095.3375664","DOIUrl":"https://doi.org/10.1145/3351095.3375664","url":null,"abstract":"Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI have become far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability [2, 4]. Model explainability is considered a prerequisite for building trust and adoption of AI systems in high stakes domains such as lending and healthcare [1] requiring reliability, safety, and fairness. It is also critical to automated transportation, and other industrial applications with significant socio-economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling. As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale [5, 6, 8]. In fact, the field of explainability in AI/ML is at an inflexion point. There is a tremendous need from the societal, regulatory, commercial, end-user, and model developer perspectives. Consequently, practical and scalable explainability approaches are rapidly becoming available. The challenges for the research community include: (i) achieving consensus on the right notion of model explainability, (ii) identifying and formalizing explainability tasks from the perspectives of various stakeholders, and (iii) designing measures for evaluating explainability techniques. In this tutorial, we will first motivate the need for model interpretability and explainability in AI [3] from various perspectives. We will then provide a brief overview of several explainability techniques and tools. The rest of the tutorial will focus on the real-world application of explainability techniques in industry. We will present case studies spanning several domains such as: • Search and Recommendation systems: Understanding of search and recommendations systems, as well as how retrieval and ranking decisions happen in real-time [7]. Example applications include explanation of decisions made by an AI system towards job recommendations, ranking of potential candidates for job posters, and content recommendations. • Sales: Understanding of sales predictions in terms of customer up-sell/churn. • Fraud Detection: Examining and explaining AI systems that determine whether a content or event is fraudulent. • Lending: How to understand/interpret lending decisions made by an AI system. We will focus on the sociotechnical dimensions, practical challenges, and lessons learned during development and deployment of these systems, which would be beneficial for researchers and practitioners interested in explainable AI. 
Finally, we will discuss open challenges and research directions for the community.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129219869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
Leap of FATE: human rights as a complementary framework for AI policy and practice
Corinne Cath, Mark Latonero, Vidushi Marda, Roya Pakzad
The premise of this translation tutorial is that human rights serves as a complementary framework - in addition to Fairness, Accountability, Transparency, and Ethics - for guiding and governing artificial intelligence (AI) and machine learning research and development. Attendees will participate in a case study, which will demonstrate how a human rights framework, grounded in international law, fundamental values, and global systems of accountability, can offer the technical community a practical approach to addressing global AI risks and harms. This tutorial discusses how human rights frameworks can inform, guide and govern AI policy and practice in a manner that is complementary to Fairness, Accountability, Transparency, and Ethics (FATE) frameworks. Using the case study of researchers developing a facial recognition API at a tech company and its use by a law enforcement client, we will engage the audience to think through the benefits and challenges of applying human rights frameworks to AI system design and deployment. We will do so by providing a brief overview of international human rights law and various non-binding human rights frameworks in relation to our current discussions around FATE, and then apply them to contemporary debates and case studies.
{"title":"Leap of FATE: human rights as a complementary framework for AI policy and practice","authors":"Corinne Cath, Mark Latonero, Vidushi Marda, Roya Pakzad","doi":"10.1145/3351095.3375665","DOIUrl":"https://doi.org/10.1145/3351095.3375665","url":null,"abstract":"The premise of this translation tutorial is that human rights serves as a complementary framework - in addition to Fairness, Accountability, Transparency, and Ethics - for guiding and governing artificial intelligence (AI) and machine learning research and development. Attendees will participate in a case study, which will demonstrate show how a human rights framework, grounded in international law, fundamental values, and global systems of accountability, can offer the technical community a practical approach to addressing global AI risks and harms. This tutorial discusses how human rights frameworks can inform, guide and govern AI policy and practice in a manner that is complementary to Fairness, Accountability, Transparency, and Ethics (FATE) frameworks. Using the case study of researchers developing a facial recognition API at a tech company and its use by a law enforcement client, we will engage the audience to think through the benefits and challenges of applying human rights frameworks to AI system design and deployment. We will do so by providing a brief overview of the international human rights law, and various non-binding human rights frameworks in relation to our current discussions around FATE and then apply them to contemporary debates and case studies","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128549710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
Assessing the intersection of organizational structure and FAT* efforts within industry: implications tutorial
B. Rakova, Rumman Chowdhury, Jingying Yang
The work within the Fairness, Accountability, and Transparency of ML (fair-ML) community will positively benefit from appreciating the role of organizational culture and structure in the effective practice of fair-ML efforts of individuals, teams, and initiatives within industry. In this tutorial session we will explore various organizational structures and possible leverage points to effectively intervene in the process of design, development, and deployment of AI systems, towards contributing to positive fair-ML outcomes. We will begin by presenting the results of interviews conducted during an ethnographic study among practitioners working in industry, including themes related to: origination and evolution, common challenges, ethical tensions, and effective enablers. The study was designed through the lens of Industrial Organizational Psychology and aims to create a mapping of the current state of the fair-ML organizational structures inside major AI companies. We also look at the most-desired future state to enable effective work to increase algorithmic accountability, as well as the key elements in the transition from the current to that future state. We investigate drivers for change as well as the tensions between creating an 'ethical' system vs. one that is 'ethical' enough. After presenting our preliminary findings, the rest of the tutorial will be highly interactive. Starting with a facilitated activity in breakout groups, we will discuss the already identified challenges, best practices, and mitigation strategies. Finally, we hope to create space for productive discussion among AI practitioners in industry, academic researchers within various fields working directly on algorithmic accountability and transparency, advocates for various communities most impacted by technology, and others. Based on the interactive component of the tutorial, facilitators and interested participants will collaborate on further developing the discussed challenges into scenarios and guidelines that will be published as a follow-up report.
{"title":"Assessing the intersection of organizational structure and FAT* efforts within industry: implications tutorial","authors":"B. Rakova, Rumman Chowdhury, Jingying Yang","doi":"10.1145/3351095.3375672","DOIUrl":"https://doi.org/10.1145/3351095.3375672","url":null,"abstract":"The work within the Fairness, Accountability, and Transparency of ML (fair-ML) community will positively benefit from appreciating the role of organizational culture and structure in the effective practice of fair-ML efforts of individuals, teams, and initiatives within industry. In this tutorial session we will explore various organizational structures and possible leverage points to effectively intervene in the process of design, development, and deployment of AI systems, towards contributing to positive fair-ML outcomes. We will begin by presenting the results of interviews conducted during an ethnographic study among practitioners working in industry, including themes related to: origination and evolution, common challenges, ethical tensions, and effective enablers. The study was designed through the lens of Industrial Organizational Psychology and aims to create a mapping of the current state of the fair-ML organizational structures inside major AI companies. We also look at the most-desired future state to enable effective work to increase algorithmic accountability, as well as the key elements in the transition from the current to that future state. We investigate drivers for change as well as the tensions between creating an 'ethical' system vs one that is 'ethical' enough. After presenting our preliminary findings, the rest of the tutorial will be highly interactive. Starting with a facilitated activity in break out groups, we will discuss the already identified challenges, best practices, and mitigation strategies. Finally, we hope to create space for productive discussion among AI practitioners in industry, academic researchers within various fields working directly on algorithmic accountability and transparency, advocates for various communities most impacted by technology, and others. Based on the interactive component of the tutorial, facilitators and interested participants will collaborate on further developing the discussed challenges into scenarios and guidelines that will be published as a follow up report.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127440724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
Lost in translation: an interactive workshop mapping interdisciplinary translations for epistemic justice
Evelyn Wan, A. D. Groot, Shazade Jameson, M. Paun, Phillip Lücking, Goda Klumbytė, Danny Lämmerhirt
There are gaps in understanding in and between those who design systems of AI/ ML, those who critique them, and those positioned between these discourses. This gap can be defined in multiple ways - e.g. methodological, epistemological, linguistic, or cultural. To bridge this gap requires a set of translations: the generation of a collaborative space and a new set of shared sensibilities that traverse disciplinary boundaries. This workshop aims to explore translations across multiple fields, and translations between theory and practice, as well as how interdisciplinary work could generate new operationalizable approaches. We define 'knowledge' as a social product (L. Code) which requires fair and broad epistemic cooperation in its generation, development, and dissemination. As a "marker for truth" (B. Williams) and therefore a basis for action, knowledge circulation sustains the systems of power which produce it in the first place (M. Foucault). Enabled by epistemic credence, authority or knowledge, epistemic power can be an important driver of, but also result from, other (e.g. economic, political) powers. To produce reliable output, our standards and methods should serve us all and exclude no-one. Critical theorists have long revealed failings of epistemic practices, resulting in the marginalization and exclusion of some types of knowledge. How can we cultivate more reflexive epistemic practices in the interdisciplinary research setting of FAT*? We frame this ideal as 'epistemic justice' (M. Geuskens), the positive of 'epistemic injustice', defined by M. Fricker as injustice that exists when people are wronged as a knower or as an epistemic subject. Epistemic justice is the proper use and allocation of epistemic power; the inclusion and balancing of all epistemic sources. As S. Jasanoff reminds us, any authoritative way of seeing must be legitimized in discourse and practice, showing that practices can be developed to value and engage with other viewpoints and possibly reshape our ways of knowing. Our workshop aims to address the following questions: how could critical theory or higher level critiques be translated into and anchored in ML/AI design practices - and vice versa? What kind of cartographies and methodologies are needed in order to identify issues that can act as the basis of collaborative research and design? How can we (un)learn our established ways of thinking for such collaborative work to take place? During the workshop, participants will create, share and explode prototypical workflows of designing, researching and critiquing algorithmic systems. We will identify moments in which translations and interdisciplinary interventions could or should happen in order to build actionable steps and methodological frameworks that advance epistemic justice and are conducive to future interdisciplinary collaboration.
{"title":"Lost in translation: an interactive workshop mapping interdisciplinary translations for epistemic justice","authors":"Evelyn Wan, A. D. Groot, Shazade Jameson, M. Paun, Phillip Lücking, Goda Klumbytė, Danny Lämmerhirt","doi":"10.1145/3351095.3375685","DOIUrl":"https://doi.org/10.1145/3351095.3375685","url":null,"abstract":"There are gaps in understanding in and between those who design systems of AI/ ML, those who critique them, and those positioned between these discourses. This gap can be defined in multiple ways - e.g. methodological, epistemological, linguistic, or cultural. To bridge this gap requires a set of translations: the generation of a collaborative space and a new set of shared sensibilities that traverse disciplinary boundaries. This workshop aims to explore translations across multiple fields, and translations between theory and practice, as well as how interdisciplinary work could generate new operationalizable approaches. We define 'knowledge' as a social product (L. Code) which requires fair and broad epistemic cooperation in its generation, development, and dissemination. As a \"marker for truth\" (B. Williams) and therefore a basis for action, knowledge circulation sustains the systems of power which produce it in the first place (M. Foucault). Enabled by epistemic credence, authority or knowledge, epistemic power can be an important driver of, but also result from, other (e.g. economic, political) powers. To produce reliable output, our standards and methods should serve us all and exclude no-one. Critical theorists have long revealed failings of epistemic practices, resulting in the marginalization and exclusion of some types of knowledge. How can we cultivate more reflexive epistemic practices in the interdisciplinary research setting of FAT*? We frame this ideal as 'epistemic justice' (M. Geuskens), the positive of 'epistemic injustice', defined by M. Fricker as injustice that exists when people are wronged as a knower or as an epistemic subject. Epistemic justice is the proper use and allocation of epistemic power; the inclusion and balancing of all epistemic sources. As S. Jasanoff reminds us, any authoritative way of seeing must be legitimized in discourse and practice, showing that practices can be developed to value and engage with other viewpoints and possibly reshape our ways of knowing. Our workshop aims to address the following questions: how could critical theory or higher level critiques be translated into and anchored in ML/AI design practices - and vice versa? What kind of cartographies and methodologies are needed in order to identify issues that can act as the basis of collaborative research and design? How can we (un)learn our established ways of thinking for such collaborative work to take place? During the workshop, participants will create, share and explode prototypical workflows of designing, researching and critiquing algorithmic systems. 
We will identify moments in which translations and interdisciplinary interventions could or should happen in order to build actionable steps and methodological frameworks that advance epistemic justice and are conducive to future interdisciplinary collaboration.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128457816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Regulating transparency?: Facebook, Twitter and the German Network Enforcement Act
B. Wagner, Krisztina Rozgonyi, Marie-Theres Sekwenz, Jennifer Cobbe, Jatinder Singh
Regulatory regimes designed to ensure transparency often struggle to ensure that transparency is meaningful in practice. This challenge is particularly great when coupled with the widespread usage of dark patterns --- design techniques used to manipulate users. The following article analyses the implementation of the transparency provisions of the German Network Enforcement Act (NetzDG) by Facebook and Twitter, as well as the consequences of these implementations for the effective regulation of online platforms. This question of effective regulation is particularly salient, due to an enforcement action in 2019 by Germany's Federal Office of Justice (BfJ) against Facebook for what the BfJ claim were insufficient compliance with transparency requirements, under NetzDG. This article provides an overview of the transparency requirements of NetzDG and contrasts these with the transparency requirements of other relevant regulations. It will then discuss how transparency concerns not only providing data, but also how the visibility of the data that is made transparent is managed, by deciding how the data is provided and is framed. We will then provide an empirical analysis of the design choices made by Facebook and Twitter, to assess the ways in which their implementations differ. The consequences of these two divergent implementations on interface design and user behaviour are then discussed, through a comparison of the transparency reports and reporting mechanisms used by Facebook and Twitter. As a next step, we will discuss the BfJ's consideration of the design of Facebook's content reporting mechanisms, and what this reveals about their respective interpretations of NetzDG's scope. Finally, in recognising that this situation is one in which a regulator is considering design as part of their action - we develop a wider argument on the potential for regulatory enforcement around dark patterns, and design practices more generally, for which this case is an early, indicative example.
{"title":"Regulating transparency?: Facebook, Twitter and the German Network Enforcement Act","authors":"B. Wagner, Krisztina Rozgonyi, Marie-Theres Sekwenz, Jennifer Cobbe, Jatinder Singh","doi":"10.1145/3351095.3372856","DOIUrl":"https://doi.org/10.1145/3351095.3372856","url":null,"abstract":"Regulatory regimes designed to ensure transparency often struggle to ensure that transparency is meaningful in practice. This challenge is particularly great when coupled with the widespread usage of dark patterns --- design techniques used to manipulate users. The following article analyses the implementation of the transparency provisions of the German Network Enforcement Act (NetzDG) by Facebook and Twitter, as well as the consequences of these implementations for the effective regulation of online platforms. This question of effective regulation is particularly salient, due to an enforcement action in 2019 by Germany's Federal Office of Justice (BfJ) against Facebook for what the BfJ claim were insufficient compliance with transparency requirements, under NetzDG. This article provides an overview of the transparency requirements of NetzDG and contrasts these with the transparency requirements of other relevant regulations. It will then discuss how transparency concerns not only providing data, but also how the visibility of the data that is made transparent is managed, by deciding how the data is provided and is framed. We will then provide an empirical analysis of the design choices made by Facebook and Twitter, to assess the ways in which their implementations differ. The consequences of these two divergent implementations on interface design and user behaviour are then discussed, through a comparison of the transparency reports and reporting mechanisms used by Facebook and Twitter. As a next step, we will discuss the BfJ's consideration of the design of Facebook's content reporting mechanisms, and what this reveals about their respective interpretations of NetzDG's scope. Finally, in recognising that this situation is one in which a regulator is considering design as part of their action - we develop a wider argument on the potential for regulatory enforcement around dark patterns, and design practices more generally, for which this case is an early, indicative example.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116681927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 38