{"title":"Algorithmic Stereotypes: Implications for Fairness of Generalizing from Past Data","authors":"D. McNamara","doi":"10.1145/3306618.3314312","DOIUrl":null,"url":null,"abstract":"Background Algorithms are used to make or support decisions about people in a wide variety of contexts including the provision of financial credit, judicial risk assessments, applicant screening for employment, and online ad selection. Such algorithms often make predictions about the future behavior of individuals by generalizing from data recording the past behaviors of other individuals. Concerns have arisen about the fairness of these algorithms. Researchers have responded by developing definitions of fairness and algorithm designs that incorporate these definitions [2]. A common theme is the avoidance of discrimination on the basis of group membership, such as race or gender. This may be more complex than simply excluding the explicit consideration of an individual’s group membership, because other characteristics may be correlated with this group membership – a phenomenon known as redundant encoding [5]. Different definitions of fairness may be invoked by different stakeholders. The controversy associated with the COMPAS recidivism prediction system used in some parts of the United States showed this in practice. News organization ProPublica critiqued the system as unfair since among non-reoffenders, African-Americans were more likely to be marked high risk than whites, while among re-offenders, whites were more likely to be marked low risk than African-Americans [1]. COMPAS owner Equivant (formerly Northpointe) argued that the algorithm was not unfair since among those marked high risk, African-Americans were no less likely to reoffend than whites [4].","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3306618.3314312","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
Background
Algorithms are used to make or support decisions about people in a wide variety of contexts, including the provision of financial credit, judicial risk assessments, applicant screening for employment, and online ad selection. Such algorithms often make predictions about the future behavior of individuals by generalizing from data recording the past behaviors of other individuals. Concerns have arisen about the fairness of these algorithms. Researchers have responded by developing definitions of fairness and algorithm designs that incorporate these definitions [2]. A common theme is the avoidance of discrimination on the basis of group membership, such as race or gender. This may be more complex than simply excluding explicit consideration of an individual's group membership, because other characteristics may be correlated with that group membership, a phenomenon known as redundant encoding [5]. Different definitions of fairness may be invoked by different stakeholders. The controversy associated with the COMPAS recidivism prediction system used in some parts of the United States showed this in practice. News organization ProPublica critiqued the system as unfair because, among non-reoffenders, African-Americans were more likely than whites to be marked high risk, while among reoffenders, whites were more likely than African-Americans to be marked low risk [1]. COMPAS owner Equivant (formerly Northpointe) argued that the algorithm was not unfair because, among those marked high risk, African-Americans were no less likely to reoffend than whites [4].
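The two criteria in dispute can be made concrete with a short calculation. ProPublica's critique concerns error rates conditioned on the true outcome (e.g., the false positive rate within each group), while Equivant's defense concerns the positive predictive value conditioned on the prediction. The sketch below is not from the paper: the group names, base rates, and predictor are purely illustrative synthetic assumptions. It computes both quantities per group and shows that when base rates differ between groups, a predictor whose error rates are equal across groups will generally have unequal positive predictive values, which is the crux of the disagreement.

```python
# Illustrative sketch (not from the paper). All data below is synthetic and
# exists only to show how the two fairness criteria are computed and can diverge.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: group membership, true reoffense outcome, and a
# "high risk" prediction that depends only on the true outcome (so error rates
# are equal across groups by construction), with group base rates that differ.
group = rng.choice(["A", "B"], size=n)                          # protected attribute
reoffend = rng.random(n) < np.where(group == "A", 0.45, 0.35)   # differing base rates
high_risk = rng.random(n) < np.where(reoffend, 0.7, 0.3)        # noisy prediction

def false_positive_rate(g):
    """P(marked high risk | did not reoffend, group g) -- the rate ProPublica compared."""
    mask = (group == g) & ~reoffend
    return high_risk[mask].mean()

def positive_predictive_value(g):
    """P(reoffended | marked high risk, group g) -- the rate Equivant compared."""
    mask = (group == g) & high_risk
    return reoffend[mask].mean()

for g in ["A", "B"]:
    print(f"group {g}: FPR = {false_positive_rate(g):.3f}, "
          f"PPV = {positive_predictive_value(g):.3f}")
```

Running this prints roughly equal false positive rates for the two groups (about 0.30 each, by construction) but noticeably different positive predictive values, because the groups have different underlying reoffense rates. Under these assumptions, satisfying one stakeholder's fairness criterion does not satisfy the other's, which is why the COMPAS debate could not be settled by the data alone.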