Face Mis-ID: An Interactive Pedagogical Tool Demonstrating Disparate Accuracy Rates in Facial Recognition

Daniella Raz, Corinne Bintz, Vivian Guetler, Aaron Tam, Michael A. Katell, Dharma Dailey, Bernease Herman, P. Krafft, Meg Young

Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
Published: 2021-07-21
DOI: 10.1145/3461702.3462627 (https://doi.org/10.1145/3461702.3462627)
Citations: 8
Abstract
This paper reports on the making of an interactive demo that illustrates algorithmic bias in facial recognition. Facial recognition technology has been shown to misidentify women and minoritized people at higher rates. This risk, among others, has elevated facial recognition into policy discussions across the country, and many jurisdictions have already passed bans on its use. While scholarship on the disparate impacts of algorithmic systems is growing, general public awareness of these problems is limited in part by the illegibility of machine learning systems to non-specialists. Inspired by discussions with community organizers advocating on tech fairness issues, we created the Face Mis-ID Demo to reveal the algorithmic functions behind facial recognition technology and to demonstrate its risks to policymakers and members of the community. In this paper, we share the design process behind this interactive demo, its form and function, and the design decisions that honed its accessibility, so that it can be used to improve the legibility of algorithmic systems and awareness of the sources of their disparate impacts.
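To make "disparate accuracy rates" concrete, the sketch below shows one common way such disparities are measured in audits like Gender Shades (Buolamwini and Gebru, 2018): computing a per-group false-match rate. This is a hypothetical illustration, not code from the paper's demo; the function name and toy numbers are invented for exposition.

```python
# Hypothetical illustration of disparate accuracy rates: given a face
# recognition system's match decisions and the demographic group of each
# probe, compute the false-match (misidentification) rate per group.
# All names and data here are invented for illustration.
from collections import defaultdict

def misidentification_rate_by_group(records):
    """records: iterable of (group, predicted_match, true_match) tuples.

    Returns {group: false-match rate}, i.e. the fraction of genuinely
    non-matching pairs that the system nonetheless declared a match.
    """
    false_matches = defaultdict(int)
    non_match_pairs = defaultdict(int)
    for group, predicted_match, true_match in records:
        if not true_match:                  # only non-matching pairs count
            non_match_pairs[group] += 1
            if predicted_match:             # system wrongly said "match"
                false_matches[group] += 1
    return {g: false_matches[g] / n for g, n in non_match_pairs.items() if n}

# Toy data echoing the kind of disparity audits have reported:
# 1 false match in 100 non-matching pairs for one group, 8 in 100 for another.
records = [
    ("group A", True, False), *[("group A", False, False)] * 99,
    *[("group B", True, False)] * 8, *[("group B", False, False)] * 92,
]
for group, rate in misidentification_rate_by_group(records).items():
    print(f"{group}: false-match rate = {rate:.1%}")
```

Run on the toy data, this prints a 1.0% false-match rate for group A and 8.0% for group B; it is exactly this kind of gap, rather than a single aggregate accuracy number, that the Face Mis-ID Demo aims to make legible to non-specialists.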