
Proceedings of the Conference on Fairness, Accountability, and Transparency: Latest Publications

Explaining Explanations in AI
Pub Date : 2018-11-04 DOI: 10.1145/3287560.3287574
B. Mittelstadt, Chris Russell, Sandra Wachter
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if questions" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
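The "do it yourself kit" idea can be made concrete with a small sketch (not from the paper; the black-box model, features, and query below are hypothetical): a shallow decision tree is fit to mimic a black-box classifier, and the surrogate is then used to answer a what-if question directly.

```python
# Sketch (not from the paper): fit a global surrogate to a black-box model
# and use it to answer a simple "what if" query. Model and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))                      # hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)  # hypothetical labels

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a depth-3 tree trained to mimic the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "What if" query: does the surrogate's answer change if feature 0 increases by 1?
x = np.array([[-0.2, 1.0, 0.3]])
x_alt = x.copy()
x_alt[0, 0] += 1.0
print("prediction now      :", surrogate.predict(x)[0])
print("prediction, what-if :", surrogate.predict(x_alt)[0])
print("fidelity to black box:", (surrogate.predict(X) == black_box.predict(X)).mean())
```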
{"title":"Explaining Explanations in AI","authors":"B. Mittelstadt, Chris Russell, Sandra Wachter","doi":"10.1145/3287560.3287574","DOIUrl":"https://doi.org/10.1145/3287560.3287574","url":null,"abstract":"Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that \"All models are wrong but some are useful.\" We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a \"do it yourself kit\" for explanations, allowing a practitioner to directly answer \"what if questions\" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77630728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 500
Model Cards for Model Reporting
Pub Date : 2018-10-05 DOI: 10.1145/3287560.3287596
Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, B. Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.
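As a rough illustration of one ingredient of a model card, the disaggregated evaluation, the sketch below (illustrative only; the fields, data, and metrics are not the paper's template) reports per-group metrics for a toy classifier alongside intended-use notes.

```python
# Sketch (illustrative, not the paper's template): disaggregated evaluation of a
# trained classifier across groups, the kind of table a model card would report
# alongside intended-use and caveat sections. Data and fields are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, size=n)                  # hypothetical group label
X = rng.normal(size=(n, 4)) + group[:, None] * 0.3
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

model_card = {
    "model_details": "logistic regression demo, v0.1",
    "intended_use": "illustration only",
    "metrics_by_group": {},
}
for g in (0, 1):
    mask = group == g
    model_card["metrics_by_group"][f"group_{g}"] = {
        "n": int(mask.sum()),
        "accuracy": round(float(accuracy_score(y[mask], pred[mask])), 3),
        "recall": round(float(recall_score(y[mask], pred[mask])), 3),
    }
print(model_card)
```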
{"title":"Model Cards for Model Reporting","authors":"Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, B. Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru","doi":"10.1145/3287560.3287596","DOIUrl":"https://doi.org/10.1145/3287560.3287596","url":null,"abstract":"Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90690620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1211
From Soft Classifiers to Hard Decisions: How fair can we be?
Pub Date : 2018-10-03 DOI: 10.1145/3287560.3287561
R. Canetti, A. Cohen, Nishanth Dikkala, Govind Ramnarayan, Sarah Scheffler, Adam D. Smith
A popular methodology for building binary decision-making classifiers in the presence of imperfect information is to first construct a calibrated non-binary "scoring" classifier, and then to post-process this score to obtain a binary decision. We study various fairness (or, error-balance) properties of this methodology, when the non-binary scores are calibrated over all protected groups, and with a variety of post-processing algorithms. Specifically, we show: First, there does not exist a general way to post-process a calibrated classifier to equalize protected groups' positive or negative predictive value (PPV or NPV). For certain "nice" calibrated classifiers, either PPV or NPV can be equalized when the post-processor uses different thresholds across protected groups. Still, when the post-processing consists of a single global threshold across all groups, natural fairness properties, such as equalizing PPV in a nontrivial way, do not hold even for "nice" classifiers. Second, when the post-processing stage is allowed to defer on some decisions (that is, to avoid making a decision by handing off some examples to a separate process), then for the non-deferred decisions, the resulting classifier can be made to equalize PPV, NPV, false positive rate (FPR) and false negative rate (FNR) across the protected groups. This suggests a way to partially evade the impossibility results of Chouldechova and Kleinberg et al., which preclude equalizing all of these measures simultaneously. We also present different deferring strategies and show how they affect the fairness properties of the overall system. We evaluate our post-processing techniques using the COMPAS data set from 2016.
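A minimal numeric sketch of the two post-processing ideas in the abstract, per-group thresholds and deferrals, appears below; the synthetic scores and thresholds are assumptions, not the authors' algorithm.

```python
# Sketch (synthetic data, not the authors' algorithm): post-process a calibrated
# score with (a) per-group thresholds and (b) a deferral band that hands
# borderline cases off to a separate process.
import numpy as np

rng = np.random.default_rng(2)

def sample_group(n, base_rate):
    # Score distributions differ by group; labels are drawn so that
    # P(y=1 | score) = score, i.e. scores are calibrated within each group.
    score = rng.beta(2, 2, size=n) * 0.6 + base_rate * 0.4
    label = (rng.random(n) < score).astype(int)
    return score, label

s_a, y_a = sample_group(20000, 0.5)
s_b, y_b = sample_group(20000, 0.3)

def ppv(score, y, threshold):
    accepted = score >= threshold
    return y[accepted].mean() if accepted.any() else float("nan")

# (a) Group-specific thresholds can move one group's PPV toward the other's.
print("PPV A@0.5:", round(ppv(s_a, y_a, 0.5), 3), " PPV B@0.6:", round(ppv(s_b, y_b, 0.6), 3))

# (b) Deferral: decide only outside the band [0.4, 0.6]; defer the rest.
def decide_with_deferral(score, lo=0.4, hi=0.6):
    decision = np.full(score.shape, -1)   # -1 means "defer"
    decision[score < lo] = 0
    decision[score >= hi] = 1
    return decision

d_a = decide_with_deferral(s_a)
print("deferred fraction, group A:", round((d_a == -1).mean(), 3))
```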
{"title":"From Soft Classifiers to Hard Decisions: How fair can we be?","authors":"R. Canetti, A. Cohen, Nishanth Dikkala, Govind Ramnarayan, Sarah Scheffler, Adam D. Smith","doi":"10.1145/3287560.3287561","DOIUrl":"https://doi.org/10.1145/3287560.3287561","url":null,"abstract":"A popular methodology for building binary decision-making classifiers in the presence of imperfect information is to first construct a calibrated non-binary \"scoring\" classifier, and then to post-process this score to obtain a binary decision. We study various fairness (or, error-balance) properties of this methodology, when the non-binary scores are calibrated over all protected groups, and with a variety of post-processing algorithms. Specifically, we show: First, there does not exist a general way to post-process a calibrated classifier to equalize protected groups' positive or negative predictive value (PPV or NPV). For certain \"nice\" calibrated classifiers, either PPV or NPV can be equalized when the post-processor uses different thresholds across protected groups. Still, when the post-processing consists of a single global threshold across all groups, natural fairness properties, such as equalizing PPV in a nontrivial way, do not hold even for \"nice\" classifiers. Second, when the post-processing stage is allowed to defer on some decisions (that is, to avoid making a decision by handing off some examples to a separate process), then for the non-deferred decisions, the resulting classifier can be made to equalize PPV, NPV, false positive rate (FPR) and false negative rate (FNR) across the protected groups. This suggests a way to partially evade the impossibility results of Chouldechova and Kleinberg et al., which preclude equalizing all of these measures simultaneously. We also present different deferring strategies and show how they affect the fairness properties of the overall system. We evaluate our post-processing techniques using the COMPAS data set from 2016.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82005712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 45
Actionable Recourse in Linear Classification
Pub Date : 2018-09-18 DOI: 10.1145/3287560.3287566
Berk Ustun, Alexander Spangher, Yang Liu
Classification models are often used to make decisions that affect humans: whether to approve a loan application, extend a job offer, or provide insurance. In such applications, individuals should have the ability to change the decision of the model. When a person is denied a loan by a credit scoring model, for example, they should be able to change the input variables of the model in a way that will guarantee approval. Otherwise, this person will be denied the loan so long as the model is deployed, and -- more importantly --will lack agency over a decision that affects their livelihood. In this paper, we propose to evaluate a linear classification model in terms of recourse, which we define as the ability of a person to change the decision of the model through actionable input variables (e.g., income vs. age or marital status). We present an integer programming toolkit to: (i) measure the feasibility and difficulty of recourse in a target population; and (ii) generate a list of actionable changes for a person to obtain a desired outcome. We discuss how our tools can inform different stakeholders by using them to audit recourse for credit scoring models built with real-world datasets. Our results illustrate how recourse can be significantly affected by common modeling practices, and motivate the need to evaluate recourse in algorithmic decision-making.
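To make the notion of recourse concrete, the sketch below substitutes a brute-force search over a discretized action grid for the paper's integer-programming toolkit; the feature names, weights, and allowed actions are hypothetical.

```python
# Sketch (not the authors' integer program): brute-force search for a lowest-cost
# change to *actionable* features that flips a linear classifier's decision.
# Feature names, weights, and action grids are hypothetical.
import itertools
import numpy as np

w = np.array([0.8, 0.3, -0.6])   # weights: income, savings, age (age held immutable: no action set)
b = -0.5
x = np.array([0.5, 0.2, 1.0])    # denied applicant: w @ x + b < 0

# Candidate changes per actionable feature (feature index: allowed increases).
actions = {0: [0.0, 0.25, 0.5, 1.0],   # income can only increase
           1: [0.0, 0.25, 0.5]}        # savings can only increase

best = None
for deltas in itertools.product(*actions.values()):
    a = np.zeros_like(x)
    for idx, d in zip(actions.keys(), deltas):
        a[idx] = d
    if w @ (x + a) + b >= 0:               # decision flips to "approve"
        cost = float(np.abs(a).sum())      # simple L1 effort cost
        if best is None or cost < best[0]:
            best = (cost, a.copy())

print("cheapest recourse action found:", best)   # None would mean no recourse within the grid
```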
{"title":"Actionable Recourse in Linear Classification","authors":"Berk Ustun, Alexander Spangher, Yang Liu","doi":"10.1145/3287560.3287566","DOIUrl":"https://doi.org/10.1145/3287560.3287566","url":null,"abstract":"Classification models are often used to make decisions that affect humans: whether to approve a loan application, extend a job offer, or provide insurance. In such applications, individuals should have the ability to change the decision of the model. When a person is denied a loan by a credit scoring model, for example, they should be able to change the input variables of the model in a way that will guarantee approval. Otherwise, this person will be denied the loan so long as the model is deployed, and -- more importantly --will lack agency over a decision that affects their livelihood. In this paper, we propose to evaluate a linear classification model in terms of recourse, which we define as the ability of a person to change the decision of the model through actionable input variables (e.g., income vs. age or marital status). We present an integer programming toolkit to: (i) measure the feasibility and difficulty of recourse in a target population; and (ii) generate a list of actionable changes for a person to obtain a desired outcome. We discuss how our tools can inform different stakeholders by using them to audit recourse for credit scoring models built with real-world datasets. Our results illustrate how recourse can be significantly affected by common modeling practices, and motivate the need to evaluate recourse in algorithmic decision-making.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83467159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 415
Access to Population-Level Signaling as a Source of Inequality
Pub Date : 2018-09-12 DOI: 10.1145/3287560.3287579
Nicole Immorlica, Katrina Ligett, Juba Ziani
We identify and explore differential access to population-level signaling (also known as information design) as a source of unequal access to opportunity. A population-level signaler has potentially noisy observations of a binary type for each member of a population and, based on this, produces a signal about each member. A decision-maker infers types from signals and accepts those individuals whose type is high in expectation. We assume the signaler of the disadvantaged population reveals her observations to the decision-maker, whereas the signaler of the advantaged population forms signals strategically. We study the expected utility of the populations as measured by the fraction of accepted members, as well as the false positive rates (FPR) and false negative rates (FNR). We first show the intuitive results that for a fixed environment, the advantaged population has higher expected utility, higher FPR, and lower FNR, than the disadvantaged one (despite having identical population quality), and that more accurate observations improve the expected utility of the advantaged population while harming that of the disadvantaged one. We next explore the introduction of a publicly-observable signal, such as a test score, as a potential intervention. Our main finding is that this natural intervention, intended to reduce the inequality between the populations' utilities, may actually exacerbate it in settings where observations and test scores are noisy.
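The gap between truthful and strategic signaling can be illustrated with a small calculation under assumed numbers (binary type, symmetric observation noise, an acceptance bar of 0.5); this is a simplified instance, not the paper's derivation.

```python
# Sketch (illustrative numbers, not the paper's derivations): acceptance rates
# for a truthful signaler vs. a strategic one, with a binary type, a noisy
# observation, and a decision-maker who accepts when P(type=1 | signal) >= bar.
p, eps, bar = 0.4, 0.2, 0.5            # prior on high type, observation noise, acceptance bar

p_o1 = p * (1 - eps) + (1 - p) * eps   # P(observe "high")
p_hi_and_o1 = p * (1 - eps)            # P(type high AND observe "high")
p_hi_and_o0 = p * eps                  # P(type high AND observe "low")

# Disadvantaged population: the signaler reveals its observation as-is. With these
# numbers the posterior on a "high" observation clears the bar, so the decision-maker
# accepts exactly the "high" observations.
truthful_accept = p_o1

# Advantaged population: the signaler recommends acceptance on every "high" observation
# plus a fraction z of "low" observations, with z chosen so the posterior conditioned
# on the recommendation sits exactly at the bar:
#   (p_hi_and_o1 + z * p_hi_and_o0) / (p_o1 + z * (1 - p_o1)) = bar
z = (p_hi_and_o1 - bar * p_o1) / (bar * (1 - p_o1) - p_hi_and_o0)
z = min(max(z, 0.0), 1.0)
strategic_accept = p_o1 + z * (1 - p_o1)

print("truthful acceptance rate :", round(truthful_accept, 3))   # 0.44
print("strategic acceptance rate:", round(strategic_accept, 3))  # 0.72
```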
{"title":"Access to Population-Level Signaling as a Source of Inequality","authors":"Nicole Immorlica, Katrina Ligett, Juba Ziani","doi":"10.1145/3287560.3287579","DOIUrl":"https://doi.org/10.1145/3287560.3287579","url":null,"abstract":"We identify and explore differential access to population-level signaling (also known as information design) as a source of unequal access to opportunity. A population-level signaler has potentially noisy observations of a binary type for each member of a population and, based on this, produces a signal about each member. A decision-maker infers types from signals and accepts those individuals whose type is high in expectation. We assume the signaler of the disadvantaged population reveals her observations to the decision-maker, whereas the signaler of the advantaged population forms signals strategically. We study the expected utility of the populations as measured by the fraction of accepted members, as well as the false positive rates (FPR) and false negative rates (FNR). We first show the intuitive results that for a fixed environment, the advantaged population has higher expected utility, higher FPR, and lower FNR, than the disadvantaged one (despite having identical population quality), and that more accurate observations improve the expected utility of the advantaged population while harming that of the disadvantaged one. We next explore the introduction of a publicly-observable signal, such as a test score, as a potential intervention. Our main finding is that this natural intervention, intended to reduce the inequality between the populations' utilities, may actually exacerbate it in settings where observations and test scores are noisy.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76388041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Fairness through Causal Awareness: Learning Causal Latent-Variable Models for Biased Data
Pub Date : 2018-09-07 DOI: 10.1145/3287560.3287564
David Madras, Elliot Creager, T. Pitassi, R. Zemel
How do we learn from biased data? Historical datasets often reflect historical prejudices; sensitive or protected attributes may affect the observed treatments and outcomes. Classification algorithms tasked with predicting outcomes accurately from these datasets tend to replicate these biases. We advocate a causal modeling approach to learning from biased data, exploring the relationship between fair classification and intervention. We propose a causal model in which the sensitive attribute confounds both the treatment and the outcome. Building on prior work in deep learning and generative modeling, we describe how to learn the parameters of this causal model from observational data alone, even in the presence of unobserved confounders. We show experimentally that fairness-aware causal modeling provides better estimates of the causal effects between the sensitive attribute, the treatment, and the outcome. We further present evidence that estimating these causal effects can help learn policies that are both more accurate and fair, when presented with a historically biased dataset.
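A much-simplified sketch of the confounding structure (here the sensitive attribute is fully observed rather than latent) shows why adjusting for the confounder matters; the data-generating process below is invented for illustration.

```python
# Sketch (synthetic data; a simplification of the paper's latent-variable setting):
# the sensitive attribute A confounds both treatment T and outcome Y. A naive
# difference in means is biased; adjusting for A recovers the true effect.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
true_effect = 1.0

A = rng.integers(0, 2, size=n)                         # sensitive attribute (observed here)
T = (rng.random(n) < 0.3 + 0.4 * A).astype(int)        # treatment depends on A
Y = true_effect * T + 2.0 * A + rng.normal(size=n)     # outcome depends on T and A

naive = Y[T == 1].mean() - Y[T == 0].mean()

# Backdoor adjustment: average the within-A contrasts, weighted by P(A = a).
adjusted = sum(
    (Y[(T == 1) & (A == a)].mean() - Y[(T == 0) & (A == a)].mean()) * (A == a).mean()
    for a in (0, 1)
)

print("naive estimate   :", round(naive, 3))     # biased upward by the confounder
print("adjusted estimate:", round(adjusted, 3))  # close to the true effect of 1.0
```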
{"title":"Fairness through Causal Awareness: Learning Causal Latent-Variable Models for Biased Data","authors":"David Madras, Elliot Creager, T. Pitassi, R. Zemel","doi":"10.1145/3287560.3287564","DOIUrl":"https://doi.org/10.1145/3287560.3287564","url":null,"abstract":"How do we learn from biased data? Historical datasets often reflect historical prejudices; sensitive or protected attributes may affect the observed treatments and outcomes. Classification algorithms tasked with predicting outcomes accurately from these datasets tend to replicate these biases. We advocate a causal modeling approach to learning from biased data, exploring the relationship between fair classification and intervention. We propose a causal model in which the sensitive attribute confounds both the treatment and the outcome. Building on prior work in deep learning and generative modeling, we describe how to learn the parameters of this causal model from observational data alone, even in the presence of unobserved confounders. We show experimentally that fairness-aware causal modeling provides better estimates of the causal effects between the sensitive attribute, the treatment, and the outcome. We further present evidence that estimating these causal effects can help learn policies that are both more accurate and fair, when presented with a historically biased dataset.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90832895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 119
Fair Algorithms for Learning in Allocation Problems
Pub Date : 2018-08-30 DOI: 10.1145/3287560.3287571
Hadi Elzayn, S. Jabbari, Christopher Jung, Michael Kearns, S. Neel, Aaron Roth, Zachary Schutzman
Settings such as lending and policing can be modeled by a centralized agent allocating a scarce resource (e.g. loans or police officers) amongst several groups, in order to maximize some objective (e.g. loans given that are repaid, or criminals that are apprehended). Often in such problems fairness is also a concern. One natural notion of fairness, based on general principles of equality of opportunity, asks that conditional on an individual being a candidate for the resource in question, the probability of actually receiving it is approximately independent of the individual's group. For example, in lending this would mean that equally creditworthy individuals in different racial groups have roughly equal chances of receiving a loan. In policing it would mean that two individuals committing the same crime in different districts would have roughly equal chances of being arrested. In this paper, we formalize this general notion of fairness for allocation problems and investigate its algorithmic consequences. Our main technical results include an efficient learning algorithm that converges to an optimal fair allocation even when the allocator does not know the frequency of candidates (i.e. creditworthy individuals or criminals) in each group. This algorithm operates in a censored feedback model in which only the number of candidates who received the resource in a given allocation can be observed, rather than the true number of candidates in each group. This models the fact that we do not learn the creditworthiness of individuals we do not give loans to and do not learn about crimes committed if the police presence in a district is low. As an application of our framework and algorithm, we consider the predictive policing problem, in which the resource being allocated to each group is the number of police officers assigned to each district. The learning algorithm is trained on arrest data gathered from its own deployments on previous days, resulting in a potential feedback loop that our algorithm provably overcomes. In this case, the fairness constraint asks that the probability that an individual who has committed a crime is arrested should be independent of the district in which they live. We investigate the performance of our learning algorithm on the Philadelphia Crime Incidents dataset.
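The censored-feedback dynamic can be seen in a toy simulation (assumed districts, rates, and a naive greedy allocator rather than the authors' algorithm): the district that receives no units generates no observations, so a poor initial estimate never corrects.

```python
# Toy simulation (assumed parameters, not the authors' algorithm): a greedy allocator
# sends all units to the district with the highest *estimated* candidate rate.
# Feedback is censored, so the other district's estimate never updates and an
# early underestimate can lock in.
import numpy as np

rng = np.random.default_rng(4)
true_rate = {"district_A": 0.30, "district_B": 0.35}   # B is actually higher
est = {"district_A": 0.20, "district_B": 0.10}         # unlucky initial estimates
seen = {"district_A": 0, "district_B": 0}
hits = {"district_A": 0, "district_B": 0}
units_per_round = 50

for _ in range(200):
    target = max(est, key=est.get)                     # greedy, no exploration
    found = rng.binomial(units_per_round, true_rate[target])
    seen[target] += units_per_round
    hits[target] += found
    est[target] = hits[target] / seen[target]

print("final estimates:", {k: round(v, 3) for k, v in est.items()})
print("inspections    :", seen)   # district_B may get none despite the higher true rate
```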
{"title":"Fair Algorithms for Learning in Allocation Problems","authors":"Hadi Elzayn, S. Jabbari, Christopher Jung, Michael Kearns, S. Neel, Aaron Roth, Zachary Schutzman","doi":"10.1145/3287560.3287571","DOIUrl":"https://doi.org/10.1145/3287560.3287571","url":null,"abstract":"Settings such as lending and policing can be modeled by a centralized agent allocating a scarce resource (e.g. loans or police officers) amongst several groups, in order to maximize some objective (e.g. loans given that are repaid, or criminals that are apprehended). Often in such problems fairness is also a concern. One natural notion of fairness, based on general principles of equality of opportunity, asks that conditional on an individual being a candidate for the resource in question, the probability of actually receiving it is approximately independent of the individual's group. For example, in lending this would mean that equally creditworthy individuals in different racial groups have roughly equal chances of receiving a loan. In policing it would mean that two individuals committing the same crime in different districts would have roughly equal chances of being arrested. In this paper, we formalize this general notion of fairness for allocation problems and investigate its algorithmic consequences. Our main technical results include an efficient learning algorithm that converges to an optimal fair allocation even when the allocator does not know the frequency of candidates (i.e. creditworthy individuals or criminals) in each group. This algorithm operates in a censored feedback model in which only the number of candidates who received the resource in a given allocation can be observed, rather than the true number of candidates in each group. This models the fact that we do not learn the creditworthiness of individuals we do not give loans to and do not learn about crimes committed if the police presence in a district is low. As an application of our framework and algorithm, we consider the predictive policing problem, in which the resource being allocated to each group is the number of police officers assigned to each district. The learning algorithm is trained on arrest data gathered from its own deployments on previous days, resulting in a potential feedback loop that our algorithm provably overcomes. In this case, the fairness constraint asks that the probability that an individual who has committed a crime is arrested should be independent of the district in which they live. We investigate the performance of our learning algorithm on the Philadelphia Crime Incidents dataset.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77314793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 86
On Microtargeting Socially Divisive Ads: A Case Study of Russia-Linked Ad Campaigns on Facebook
Pub Date : 2018-08-28 DOI: 10.1145/3287560.3287580
Filipe Nunes Ribeiro, Koustuv Saha, Mahmoudreza Babaei, Lucas Henrique, Johnnatan Messias, Fabrício Benevenuto, Oana Goga, K. Gummadi, Elissa M. Redmiles
Targeted advertising is meant to improve the efficiency of matching advertisers to their customers. However, targeted advertising can also be abused by malicious advertisers to efficiently reach people susceptible to false stories, stoke grievances, and incite social conflict. Since targeted ads are not seen by non-targeted and non-vulnerable people, malicious ads are likely to go unreported and their effects undetected. This work examines a specific case of malicious advertising, exploring the extent to which political ads1 from the Russian Intelligence Research Agency (IRA) run prior to 2016 U.S. elections exploited Facebook's targeted advertising infrastructure to efficiently target ads on divisive or polarizing topics (e.g., immigration, race-based policing) at vulnerable sub-populations. In particular, we do the following: (a) We conduct U.S. census-representative surveys to characterize how users with different political ideologies report, approve, and perceive truth in the content of the IRA ads. Our surveys show that many ads are "divisive": they elicit very different reactions from people belonging to different socially salient groups. (b) We characterize how these divisive ads are targeted to sub-populations that feel particularly aggrieved by the status quo. Our findings support existing calls for greater transparency of content and targeting of political ads. (c) We particularly focus on how the Facebook ad API facilitates such targeting. We show how the enormous amount of personal data Facebook aggregates about users and makes available to advertisers enables such malicious targeting.
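One plausible way to quantify "divisive", not necessarily the paper's own measure, is the largest gap in approval rates between survey groups; the sketch below uses made-up numbers purely for illustration.

```python
# Sketch (one plausible formalization; the paper's own measure is not given here):
# score an ad's "divisiveness" as the largest gap in approval rates between
# ideology groups in a survey. The numbers below are invented for illustration.
approval = {                       # fraction of each survey group approving the ad
    "ad_1": {"liberal": 0.72, "moderate": 0.55, "conservative": 0.18},
    "ad_2": {"liberal": 0.60, "moderate": 0.58, "conservative": 0.57},
}

def divisiveness(rates_by_group):
    rates = list(rates_by_group.values())
    return max(rates) - min(rates)

for ad, rates in approval.items():
    print(ad, "divisiveness:", round(divisiveness(rates), 2))   # ad_1 is far more divisive
```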
{"title":"On Microtargeting Socially Divisive Ads: A Case Study of Russia-Linked Ad Campaigns on Facebook","authors":"Filipe Nunes Ribeiro, Koustuv Saha, Mahmoudreza Babaei, Lucas Henrique, Johnnatan Messias, Fabrício Benevenuto, Oana Goga, K. Gummadi, Elissa M. Redmiles","doi":"10.1145/3287560.3287580","DOIUrl":"https://doi.org/10.1145/3287560.3287580","url":null,"abstract":"Targeted advertising is meant to improve the efficiency of matching advertisers to their customers. However, targeted advertising can also be abused by malicious advertisers to efficiently reach people susceptible to false stories, stoke grievances, and incite social conflict. Since targeted ads are not seen by non-targeted and non-vulnerable people, malicious ads are likely to go unreported and their effects undetected. This work examines a specific case of malicious advertising, exploring the extent to which political ads1 from the Russian Intelligence Research Agency (IRA) run prior to 2016 U.S. elections exploited Facebook's targeted advertising infrastructure to efficiently target ads on divisive or polarizing topics (e.g., immigration, race-based policing) at vulnerable sub-populations. In particular, we do the following: (a) We conduct U.S. census-representative surveys to characterize how users with different political ideologies report, approve, and perceive truth in the content of the IRA ads. Our surveys show that many ads are \"divisive\": they elicit very different reactions from people belonging to different socially salient groups. (b) We characterize how these divisive ads are targeted to sub-populations that feel particularly aggrieved by the status quo. Our findings support existing calls for greater transparency of content and targeting of political ads. (c) We particularly focus on how the Facebook ad API facilitates such targeting. We show how the enormous amount of personal data Facebook aggregates about users and makes available to advertisers enables such malicious targeting.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75456209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 82
Downstream Effects of Affirmative Action
Pub Date : 2018-08-27 DOI: 10.1145/3287560.3287578
Sampath Kannan, Aaron Roth, Juba Ziani
We study a two-stage model, in which students are 1) admitted to college on the basis of an entrance exam which is a noisy signal about their qualifications (type), and then 2) those students who were admitted to college can be hired by an employer as a function of their college grades, which are an independently drawn noisy signal of their type. Students are drawn from one of two populations, which might have different type distributions. We assume that the employer at the end of the pipeline is rational, in the sense that it computes a posterior distribution on student type conditional on all information that it has available (college admissions, grades, and group membership), and makes a decision based on posterior expectation. We then study what kinds of fairness goals can be achieved by the college by setting its admissions rule and grading policy. For example, the college might have the goal of guaranteeing equal opportunity across populations: that the probability of passing through the pipeline and being hired by the employer should be independent of group membership, conditioned on type. Alternately, the college might have the goal of incentivizing the employer to have a group blind hiring rule. We show that both goals can be achieved when the college does not report grades. On the other hand, we show that under reasonable conditions, these goals are impossible to achieve even in isolation when the college uses an (even minimally) informative grading policy.
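A toy simulation of the two-stage pipeline, with assumed Gaussian signals and an invented acceptance bar rather than the paper's parameters, is sketched below; it computes the employer's posterior from group, admission, and grade and reports the hiring rate among high-type students in each group.

```python
# Toy simulation (assumed Gaussian signals and parameters; not the paper's analysis):
# stage 1 admits on a noisy exam score; stage 2 an employer hires when the posterior
# probability of the high type (given group, admission, and a noisy grade) clears a bar.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
prior_high = {"group_1": 0.6, "group_2": 0.4}   # populations differ only in type distribution
exam_cut = {"group_1": 0.0, "group_2": 0.0}     # college admission thresholds
mu = {0: -1.0, 1: 1.0}                          # mean of exam/grade signals for low/high types
hire_bar = 0.5

def posterior_high(group, grade):
    """P(type = high | group, admitted, grade); exam and grade are independent given type."""
    p, cut = prior_high[group], exam_cut[group]
    like_hi = norm.sf(cut, loc=mu[1]) * norm.pdf(grade, loc=mu[1])  # P(admit | hi) * f(grade | hi)
    like_lo = norm.sf(cut, loc=mu[0]) * norm.pdf(grade, loc=mu[0])
    return p * like_hi / (p * like_hi + (1 - p) * like_lo)

for group in ("group_1", "group_2"):
    n = 100_000
    is_high = rng.random(n) < prior_high[group]
    exam = rng.normal(loc=np.where(is_high, mu[1], mu[0]))
    grade = rng.normal(loc=np.where(is_high, mu[1], mu[0]))
    admitted = exam >= exam_cut[group]
    hired = admitted & (posterior_high(group, grade) >= hire_bar)
    # Equal opportunity would ask that this probability not depend on the group.
    print(group, "P(hired | high type) =", round(hired[is_high].mean(), 3))
```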
{"title":"Downstream Effects of Affirmative Action","authors":"Sampath Kannan, Aaron Roth, Juba Ziani","doi":"10.1145/3287560.3287578","DOIUrl":"https://doi.org/10.1145/3287560.3287578","url":null,"abstract":"We study a two-stage model, in which students are 1) admitted to college on the basis of an entrance exam which is a noisy signal about their qualifications (type), and then 2) those students who were admitted to college can be hired by an employer as a function of their college grades, which are an independently drawn noisy signal of their type. Students are drawn from one of two populations, which might have different type distributions. We assume that the employer at the end of the pipeline is rational, in the sense that it computes a posterior distribution on student type conditional on all information that it has available (college admissions, grades, and group membership), and makes a decision based on posterior expectation. We then study what kinds of fairness goals can be achieved by the college by setting its admissions rule and grading policy. For example, the college might have the goal of guaranteeing equal opportunity across populations: that the probability of passing through the pipeline and being hired by the employer should be independent of group membership, conditioned on type. Alternately, the college might have the goal of incentivizing the employer to have a group blind hiring rule. We show that both goals can be achieved when the college does not report grades. On the other hand, we show that under reasonable conditions, these goals are impossible to achieve even in isolation when the college uses an (even minimally) informative grading policy.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81562169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 80
The Disparate Effects of Strategic Manipulation
Pub Date : 2018-08-27 DOI: 10.1145/3287560.3287597
Lily Hu, Nicole Immorlica, Jennifer Wortman Vaughan
When consequential decisions are informed by algorithmic input, individuals may feel compelled to alter their behavior in order to gain a system's approval. Models of agent responsiveness, termed "strategic manipulation," analyze the interaction between a learner and agents in a world where all agents are equally able to manipulate their features in an attempt to "trick" a published classifier. In cases of real world classification, however, an agent's ability to adapt to an algorithm is not simply a function of her personal interest in receiving a positive classification, but is bound up in a complex web of social factors that affect her ability to pursue certain action responses. In this paper, we adapt models of strategic manipulation to capture dynamics that may arise in a setting of social inequality wherein candidate groups face different costs to manipulation. We find that whenever one group's costs are higher than the other's, the learner's equilibrium strategy exhibits an inequality-reinforcing phenomenon wherein the learner erroneously admits some members of the advantaged group, while erroneously excluding some members of the disadvantaged group. We also consider the effects of interventions in which a learner subsidizes members of the disadvantaged group, lowering their costs in order to improve her own classification performance. Here we encounter a paradoxical result: there exist cases in which providing a subsidy improves only the learner's utility while actually making both candidate groups worse-off---even the group receiving the subsidy. Our results reveal the potentially adverse social ramifications of deploying tools that attempt to evaluate an individual's "quality" when agents' capacities to adaptively respond differ.
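The cost asymmetry can be illustrated with a toy one-dimensional sketch (assumed feature distribution, threshold, and linear costs; not the authors' equilibrium analysis): agents below the published threshold manipulate only when the cost of closing the gap is less than the benefit of acceptance.

```python
# Toy sketch (assumed 1-D features and linear costs; not the authors' equilibrium
# analysis): agents see a published threshold and manipulate their feature when the
# benefit of acceptance (normalized to 1) exceeds cost_per_unit * distance to the bar.
import numpy as np

rng = np.random.default_rng(6)
threshold = 1.0
cost_per_unit = {"advantaged": 0.8, "disadvantaged": 2.0}   # manipulation is pricier for one group

def acceptance_rate(cost, n=100_000):
    x = rng.normal(size=n)                      # true underlying feature
    gap = threshold - x
    # Manipulate up to the threshold iff below it and the cost of closing the gap is < 1.
    manipulates = (gap > 0) & (cost * gap < 1.0)
    x_reported = np.where(manipulates, threshold, x)
    return (x_reported >= threshold).mean()

for group, cost in cost_per_unit.items():
    print(group, "acceptance rate:", round(acceptance_rate(cost), 3))
```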
{"title":"The Disparate Effects of Strategic Manipulation","authors":"Lily Hu, Nicole Immorlica, Jennifer Wortman Vaughan","doi":"10.1145/3287560.3287597","DOIUrl":"https://doi.org/10.1145/3287560.3287597","url":null,"abstract":"When consequential decisions are informed by algorithmic input, individuals may feel compelled to alter their behavior in order to gain a system's approval. Models of agent responsiveness, termed \"strategic manipulation,\" analyze the interaction between a learner and agents in a world where all agents are equally able to manipulate their features in an attempt to \"trick\" a published classifier. In cases of real world classification, however, an agent's ability to adapt to an algorithm is not simply a function of her personal interest in receiving a positive classification, but is bound up in a complex web of social factors that affect her ability to pursue certain action responses. In this paper, we adapt models of strategic manipulation to capture dynamics that may arise in a setting of social inequality wherein candidate groups face different costs to manipulation. We find that whenever one group's costs are higher than the other's, the learner's equilibrium strategy exhibits an inequality-reinforcing phenomenon wherein the learner erroneously admits some members of the advantaged group, while erroneously excluding some members of the disadvantaged group. We also consider the effects of interventions in which a learner subsidizes members of the disadvantaged group, lowering their costs in order to improve her own classification performance. Here we encounter a paradoxical result: there exist cases in which providing a subsidy improves only the learner's utility while actually making both candidate groups worse-off---even the group receiving the subsidy. Our results reveal the potentially adverse social ramifications of deploying tools that attempt to evaluate an individual's \"quality\" when agents' capacities to adaptively respond differ.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80660983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 137