"Ethical Obligations to Provide Novelty," Paige Golden and D. Danks (doi: 10.1145/3461702.3462555)
TikTok is a popular platform that enables users to see tailored content feeds, particularly short videos with novel content. In recent years, TikTok has been criticized at times for presenting users with overly homogeneous feeds, thereby reducing the diversity of content with which each user engages. In this paper, we consider whether TikTok has an ethical obligation to employ a novelty bias in its content recommendation engine. We explicate the principal morally relevant values and interests of key stakeholders, and observe that key empirical questions must be answered before a precise recommendation can be provided. We argue that TikTok's own values and interests mean that its actions should be largely driven by the values and interests of its users and creators. Unlike some other content platforms, TikTok's ethical obligations are not at odds with the values of its users, and so whether it is obligated to include a novelty bias depends on what will actually advance its users' interests.
{"title":"Ethical Obligations to Provide Novelty","authors":"Paige Golden, D. Danks","doi":"10.1145/3461702.3462555","DOIUrl":"https://doi.org/10.1145/3461702.3462555","url":null,"abstract":"TikTok is a popular platform that enables users to see tailored content feeds, particularly short videos with novel content. In recent years, TikTok has been criticized at times for presenting users with overly homogenous feeds, thereby reducing the diversity of content with which each user engages. In this paper, we consider whether TikTok has an ethical obligation to employ a novelty bias in its content recommendation engine. We explicate the principal morally relevant values and interests of key stakeholders, and observe that key empirical questions must be answered before a precise recommendation can be provided. We argue that TikTok's own values and interests mean that its actions should be largely driven by the values and interests of its users and creators. Unlike some other content platforms, TikTok's ethical obligations are not at odds with the values of its users, and so whether it is obligated to include a novelty bias depends on what will actually advance its users' interests.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"91 7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126028491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Designing Shapelets for Interpretable Data-Agnostic Classification," Riccardo Guidotti and A. Monreale (doi: 10.1145/3461702.3462553)
Time series shapelets are discriminatory subsequences that are representative of a class, and their similarity to a time series can be used to tackle the time series classification problem. The literature shows that Artificial Intelligence (AI) systems adopting classification models based on time series shapelets can be interpretable, more accurate, and significantly faster. Thus, in order to design a data-agnostic and interpretable classification approach, in this paper we first extend the notion of shapelets to other types of data, i.e., images, tabular data, and text. Then, based on this extended notion of shapelets, we propose an interpretable data-agnostic classification method. Since shapelet discovery can be time-consuming, especially for data types more complex than time series, we exploit a notion of prototypes to find candidate shapelets, reducing both the time required to find a solution and the variance of the shapelets. Extensive experiments on datasets of different types show that the data-agnostic, prototype-based shapelets returned by the proposed method enable interpretable classification that is also fast, accurate, and stable. In addition, we show and prove that shapelets can serve as the basis of explainable AI methods.
{"title":"Designing Shapelets for Interpretable Data-Agnostic Classification","authors":"Riccardo Guidotti, A. Monreale","doi":"10.1145/3461702.3462553","DOIUrl":"https://doi.org/10.1145/3461702.3462553","url":null,"abstract":"Time series shapelets are discriminatory subsequences which are representative of a class, and their similarity to a time series can be used for successfully tackling the time series classification problem. The literature shows that Artificial Intelligence (AI) systems adopting classification models based on time series shapelets can be interpretable, more accurate, and significantly fast. Thus, in order to design a data-agnostic and interpretable classification approach, in this paper we first extend the notion of shapelets to different types of data, i.e., images, tabular and textual data. Then, based on this extended notion of shapelets we propose an interpretable data-agnostic classification method. Since the shapelets discovery can be time consuming, especially for data types more complex than time series, we exploit a notion of prototypes for finding candidate shapelets, and reducing both the time required to find a solution and the variance of shapelets. A wide experimentation on datasets of different types shows that the data-agnostic prototype-based shapelets returned by the proposed method empower an interpretable classification which is also fast, accurate, and stable. In addition, we show and we prove that shapelets can be at the basis of explainable AI methods.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125630995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Person, Human, Neither: The Dehumanization Potential of Automated Image Tagging," Pinar Barlas, K. Kyriakou, S. Kleanthous, and Jahna Otterbacher (doi: 10.1145/3461702.3462567)
Following the literature on dehumanization via technology, we audit six proprietary image tagging algorithms (ITAs) for their potential to perpetuate dehumanization. We examine the ITAs' outputs on a controlled dataset of images depicting a diverse group of people for tags that indicate the presence of a human in the image. Through an analysis of the (mis)use of these tags, we find that there are some individuals whose 'humanness' is not recognized by an ITA, and that these individuals are often from marginalized social groups. Finally, we compare these findings with the use of the 'face' tag, which can be used for surveillance, revealing that people's faces are often recognized by an ITA even when their 'humanness' is not. Overall, we highlight the subtle ways in which ITAs may inflict widespread, disparate harm, and emphasize the importance of considering the social context of the resulting application.
{"title":"Person, Human, Neither: The Dehumanization Potential of Automated Image Tagging","authors":"Pinar Barlas, K. Kyriakou, S. Kleanthous, Jahna Otterbacher","doi":"10.1145/3461702.3462567","DOIUrl":"https://doi.org/10.1145/3461702.3462567","url":null,"abstract":"Following the literature on dehumanization via technology, we audit six proprietary image tagging algorithms (ITAs) for their potential to perpetuate dehumanization. We examine the ITAs' outputs on a controlled dataset of images depicting a diverse group of people for tags that indicate the presence of a human in the image. Through an analysis of the (mis)use of these tags, we find that there are some individuals whose 'humanness' is not recognized by an ITA, and that these individuals are often from marginalized social groups. Finally, we compare these findings with the use of the 'face' tag, which can be used for surveillance, revealing that people's faces are often recognized by an ITA even when their 'humanness' is not. Overall, we highlight the subtle ways in which ITAs may inflict widespread, disparate harm, and emphasize the importance of considering the social context of the resulting application.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134333341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Fairness and Data Protection Impact Assessments," A. Kasirzadeh and Damian Clifford (doi: 10.1145/3461702.3462528)
In this paper, we critically examine the effectiveness of the requirement to conduct a Data Protection Impact Assessment (DPIA) in Article 35 of the General Data Protection Regulation (GDPR) in light of fairness metrics. Through this analysis, we explore the role of the fairness principle as introduced in Article 5(1)(a) and its multifaceted interpretation in the obligation to conduct a DPIA. Our paper argues that although there is a significant theoretical role for considerations of fairness in the DPIA process, an analysis of the various guidance documents issued by data protection authorities on the obligation to conduct a DPIA reveals that they rarely mention the fairness principle in practice. Our analysis questions this omission and assesses the capacity of fairness metrics to be truly operationalized within DPIAs. We conclude by exploring the practical effectiveness of DPIAs with particular reference to (1) technical challenges that affect the usefulness of DPIAs irrespective of a controller's willingness to actively engage in the process, (2) the context-dependent nature of the fairness principle, and (3) the key role played by data controllers in determining what is fair.
{"title":"Fairness and Data Protection Impact Assessments","authors":"A. Kasirzadeh, Damian Clifford","doi":"10.1145/3461702.3462528","DOIUrl":"https://doi.org/10.1145/3461702.3462528","url":null,"abstract":"In this paper, we critically examine the effectiveness of the requirement to conduct a Data Protection Impact Assessment (DPIA) in Article 35 of the General Data Protection Regulation (GDPR) in light of fairness metrics. Through this analysis, we explore the role of the fairness principle as introduced in Article 5(1)(a) and its multifaceted interpretation in the obligation to conduct a DPIA. Our paper argues that although there is a significant theoretical role for the considerations of fairness in the DPIA process, an analysis of the various guidance documents issued by data protection authorities on the obligation to conduct a DPIA reveals that they rarely mention the fairness principle in practice. Our analysis questions this omission, and assesses the capacity of fairness metrics to be truly operationalized within DPIAs. We conclude by exploring the practical effectiveness of DPIA with particular reference to (1) technical challenges that have an impact on the usefulness of DPIAs irrespective of a controller's willingness to actively engage in the process, (2) the context dependent nature of the fairness principle, and (3) the key role played by data controllers in the determination of what is fair.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116334079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Governing Algorithmic Systems with Impact Assessments: Six Observations," E. A. Watkins, E. Moss, Jacob Metcalf, Ranjit Singh, and M. C. Elish (doi: 10.1145/3461702.3462580)
Algorithmic decision-making and decision-support systems (ADS) are gaining influence over how society distributes resources, administers justice, and provides access to opportunities. Yet collectively we do not adequately study how these systems affect people or document the actual or potential harms resulting from their integration with important social functions. This is a significant challenge for computational justice efforts to measure and govern AI systems. Impact assessments are often used as instruments to create accountability relationships and grant some measure of agency and voice to communities affected by projects with environmental, financial, and human rights ramifications. Applying these tools, through Algorithmic Impact Assessments (AIAs), is a plausible way to establish accountability relationships for ADS. At the same time, what an AIA would entail remains under-specified; they raise as many questions as they answer. Choices about the methods, scope, and purpose of AIAs structure the conditions of possibility for AI governance. In this paper, we present our research on the history of impact assessments across diverse domains, through a sociotechnical lens, and offer six observations on how they co-constitute accountability. Decisions about what types of effects count as an impact; when impacts are assessed; whose interests are considered; who is invited to participate; who conducts the assessment; how assessments are made publicly available; and what the outputs of the assessment might be all shape the forms of accountability that AIAs engender. Because AIAs are still an incipient governance strategy, approaching them as social constructions that do not require a single or universal approach offers a chance to produce interventions that emerge from careful deliberation.
{"title":"Governing Algorithmic Systems with Impact Assessments: Six Observations","authors":"E. A. Watkins, E. Moss, Jacob Metcalf, Ranjit Singh, M. C. Elish","doi":"10.1145/3461702.3462580","DOIUrl":"https://doi.org/10.1145/3461702.3462580","url":null,"abstract":"Algorithmic decision-making and decision-support systems (ADS) are gaining influence over how society distributes resources, administers justice, and provides access to opportunities. Yet collectively we do not adequately study how these systems affect people or document the actual or potential harms resulting from their integration with important social functions. This is a significant challenge for computational justice efforts of measuring and governing AI systems. Impact assessments are often used as instruments to create accountability relationships and grant some measure of agency and voice to communities affected by projects with environmental, financial, and human rights ramifications. Applying these tools-through Algorithmic Impact Assessments (AIA)-is a plausible way to establish accountability relationships for ADSs. At the same time, what an AIA would entail remains under-specified; they raise as many questions as they answer. Choices about the methods, scope, and purpose of AIAs structure the conditions of possibility for AI governance. In this paper, we present our research on the history of impact assessments across diverse domains, through a sociotechnical lens, to present six observations on how they co-constitute accountability. Decisions about what type of effects count as an impact; when impacts are assessed; whose interests are considered; who is invited to participate; who conducts the assessment; how assessments are made publicly available, and what the outputs of the assessment might be; all shape the forms of accountability that AIAs engender. Because AlAs are still an incipient governance strategy, approaching them as social constructions that do not require a single or universal approach offers a chance to produce interventions that emerge from careful deliberation.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124603851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Fairness and Machine Fairness," Clinton Castro, David R. O'Brien, and Ben Schwan (doi: 10.1145/3461702.3462577)
Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take "fairness" in this context to be a placeholder for a variety of normative egalitarian considerations. We explore a few fairness measures to suss out their egalitarian roots and evaluate them, both as formalizations of egalitarian ideas and as assertions of what fairness demands of predictive systems. We pay special attention to a recent and popular fairness measure, counterfactual fairness, which holds that a prediction about an individual is fair if it is the same in the actual world and any counterfactual world where the individual belongs to a different demographic group (cf. Kusner et al. 2018).
{"title":"Fairness and Machine Fairness","authors":"Clinton Castro, David R. O'Brien, Ben Schwan","doi":"10.1145/3461702.3462577","DOIUrl":"https://doi.org/10.1145/3461702.3462577","url":null,"abstract":"Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take \"fairness\" in this context to be a placeholder for a variety of normative egalitarian considerations. We explore a few fairness measures to suss out their egalitarian roots and evaluate them, both as formalizations of egalitarian ideas and as assertions of what fairness demands of predictive systems. We pay special attention to a recent and popular fairness measure, counterfactual fairness, which holds that a prediction about an individual is fair if it is the same in the actual world and any counterfactual world where the individual belongs to a different demographic group (cf. Kusner et al. 2018).","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127874079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Machine Learning and the Meaning of Equal Treatment," J. Simons, Sophia Adams Bhatti, and Adrian Weller (doi: 10.1145/3461702.3462556)
Approaches to non-discrimination are generally informed by two principles: striving for equality of treatment, and advancing various notions of equality of outcome. We consider when and why there are trade-offs in machine learning between respecting formalistic interpretations of equal treatment and advancing equality of outcome. Exploring a hypothetical discrimination suit against Facebook, we argue that interpretations of equal treatment which require blindness to difference may constrain how machine learning can be deployed to advance equality of outcome. When machine learning models predict outcomes that are unevenly distributed across racial groups, using those models to advance racial justice will often require deliberately taking race into account. We then explore the normative stakes of this tension. We describe three pragmatic policy options underpinned by distinct interpretations and applications of equal treatment. A status quo approach insists on blindness to difference, permitting the design of machine learning models that compound existing patterns of disadvantage. An industry-led approach would specify a narrow set of domains in which institutions were permitted to use protected characteristics to actively reduce inequalities of outcome. A government-led approach would impose positive duties that require institutions to consider how best to advance equality of outcomes and permit the use of protected characteristics to achieve that goal. We argue that while machine learning offers significant possibilities for advancing racial justice and outcome-based equality, harnessing those possibilities will require a shift in the normative commitments that underpin the interpretation and application of equal treatment in non-discrimination law and the governance of machine learning.
{"title":"Machine Learning and the Meaning of Equal Treatment","authors":"J. Simons, Sophia Adams Bhatti, Adrian Weller","doi":"10.1145/3461702.3462556","DOIUrl":"https://doi.org/10.1145/3461702.3462556","url":null,"abstract":"Approaches to non-discrimination are generally informed by two principles: striving for equality of treatment, and advancing various notions of equality of outcome. We consider when and why there are trade-offs in machine learning between respecting formalistic interpretations of equal treatment and advancing equality of outcome. Exploring a hypothetical discrimination suit against Facebook, we argue that interpretations of equal treatment which require blindness to difference may constrain how machine learning can be deployed to advance equality of outcome. When machine learning models predict outcomes that are unevenly distributed across racial groups, using those models to advance racial justice will often require deliberately taking race into account. We then explore the normative stakes of this tension. We describe three pragmatic policy options underpinned by distinct interpretations and applications of equal treatment. A status quo approach insists on blindness to difference, permitting the design of machine learning models that compound existing patterns of disadvantage. An industry-led approach would specify a narrow set of domains in which institutions were permitted to use protected characteristics to actively reduce inequalities of outcome. A government-led approach would impose positive duties that require institutions to consider how best to advance equality of outcomes and permit the use of protected characteristics to achieve that goal. We argue that while machine learning offers significant possibilities for advancing racial justice and outcome-based equality, harnessing those possibilities will require a shift in the normative commitments that underpin the interpretation and application of equal treatment in non-discrimination law and the governance of machine learning.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126750912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"The Ethics of Datasets: Moving Forward Requires Stepping Back," Arvind Narayanan (doi: 10.1145/3461702.3462643)
Machine learning research culture is driven by benchmark datasets to a greater degree than most other research fields. But the centrality of datasets also amplifies the harms associated with data, including privacy violation and underrepresentation or erasure of some populations. This has stirred a much-needed debate on the ethical responsibilities of dataset creators and users. I argue that clarity on this debate requires taking a step back to better understand the benefits of the dataset-driven approach. I show that benchmark datasets play at least six different roles and that the potential harms depend on the roles a dataset plays. By understanding this relationship, we can mitigate the harms while preserving what is scientifically valuable about the prevailing approach.
{"title":"The Ethics of Datasets: Moving Forward Requires Stepping Back","authors":"Arvind Narayanan","doi":"10.1145/3461702.3462643","DOIUrl":"https://doi.org/10.1145/3461702.3462643","url":null,"abstract":"Machine learning research culture is driven by benchmark datasets to a greater degree than most other research fields. But the centrality of datasets also amplifies the harms associated with data, including privacy violation and underrepresentation or erasure of some populations. This has stirred a much-needed debate on the ethical responsibilities of dataset creators and users. I argue that clarity on this debate requires taking a step back to better understand the benefits of the dataset-driven approach. I show that benchmark datasets play at least six different roles and that the potential harms depend on the roles a dataset plays. By understanding this relationship, we can mitigate the harms while preserving what is scientifically valuable about the prevailing approach.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"2014 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121563439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Monitoring AI Services for Misuse," S. A. Javadi, Chris Norval, Richard Cloete, and Jatinder Singh (doi: 10.1145/3461702.3462566)
Given the surge in interest in AI, we now see the emergence of Artificial Intelligence as a Service (AIaaS). AIaaS entails service providers offering remote access to ML models and capabilities 'at arm's length', through networked APIs. Such services will grow in popularity, as they enable access to state-of-the-art ML capabilities 'on demand' and 'out of the box', at low cost and without requiring training data or ML expertise. However, there is much public concern regarding AI, and AIaaS raises particular considerations, given the potential for such services to be used to underpin and drive problematic, inappropriate, undesirable, controversial, or possibly even illegal applications. A key way forward is for service providers to monitor their AI services to identify potential situations of problematic use. Towards this, we elaborate the potential for 'misuse indicators' as a mechanism for uncovering patterns of usage behaviour warranting consideration or further investigation. We introduce a taxonomy for describing these indicators and their contextual considerations, and use exemplars to demonstrate the feasibility of analysing AIaaS usage to highlight situations of possible concern. We also seek to draw more attention to AI services and the issues they raise, given AIaaS' increasing prominence and the general calls for more responsible and accountable use of AI.
{"title":"Monitoring AI Services for Misuse","authors":"S. A. Javadi, Chris Norval, Richard Cloete, Jatinder Singh","doi":"10.1145/3461702.3462566","DOIUrl":"https://doi.org/10.1145/3461702.3462566","url":null,"abstract":"Given the surge in interest in AI, we now see the emergence of Artificial Intelligence as a Service (AIaaS). AIaaS entails service providers offering remote access to ML models and capabilities at arms-length', through networked APIs. Such services will grow in popularity, as they enable access to state-of-the-art ML capabilities, 'on demand', 'out of the box', at low cost and without requiring training data or ML expertise. However, there is much public concern regarding AI. AIaaS raises particular considerations, given there is much potential for such services to be used to underpin and drive problematic, inappropriate, undesirable, controversial, or possibly even illegal applications. A key way forward is through service providers monitoring their AI services to identify potential situations of problematic use. Towards this, we elaborate the potential for 'misuse indicators' as a mechanism for uncovering patterns of usage behaviour warranting consideration or further investigation. We introduce a taxonomy for describing these indicators and their contextual considerations, and use exemplars to demonstrate the feasibility analysing AIaaS usage to highlight situations of possible concern. We also seek to draw more attention to AI services and the issues they raise, given AIaaS' increasing prominence, and the general calls for the more responsible and accountable use of AI.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"228 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134582703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Face Mis-ID: An Interactive Pedagogical Tool Demonstrating Disparate Accuracy Rates in Facial Recognition," Daniella Raz, Corinne Bintz, Vivian Guetler, Aaron Tam, Michael A. Katell, Dharma Dailey, Bernease Herman, P. Krafft, and Meg Young (doi: 10.1145/3461702.3462627)
This paper reports on the making of an interactive demo to illustrate algorithmic bias in facial recognition. Facial recognition technology has been demonstrated to be more likely to misidentify women and minoritized people. This risk, among others, has elevated facial recognition into policy discussions across the country, where many jurisdictions have already passed bans on its use. Whereas scholarship on the disparate impacts of algorithmic systems is growing, general public awareness of this set of problems is limited in part by the illegibility of machine learning systems to non-specialists. Inspired by discussions with community organizers advocating for tech fairness issues, we created the Face Mis-ID Demo to reveal the algorithmic functions behind facial recognition technology and to demonstrate its risks to policymakers and members of the community. In this paper, we share the design process behind this interactive demo, its form and function, and the design decisions that honed its accessibility, with the aim of improving the legibility of algorithmic systems and raising awareness of the sources of their disparate impacts.
{"title":"Face Mis-ID: An Interactive Pedagogical Tool Demonstrating Disparate Accuracy Rates in Facial Recognition","authors":"Daniella Raz, Corinne Bintz, Vivian Guetler, Aaron Tam, Michael A. Katell, Dharma Dailey, Bernease Herman, P. Krafft, Meg Young","doi":"10.1145/3461702.3462627","DOIUrl":"https://doi.org/10.1145/3461702.3462627","url":null,"abstract":"This paper reports on the making of an interactive demo to illustrate algorithmic bias in facial recognition. Facial recognition technology has been demonstrated to be more likely to misidentify women and minoritized people. This risk, among others, has elevated facial recognition into policy discussions across the country, where many jurisdictions have already passed bans on its use. Whereas scholarship on the disparate impacts of algorithmic systems is growing, general public awareness of this set of problems is limited in part by the illegibility of machine learning systems to non-specialists. Inspired by discussions with community organizers advocating for tech fairness issues, we created the Face Mis-ID Demo to reveal the algorithmic functions behind facial recognition technology and to demonstrate its risks to policymakers and members of the community. In this paper, we share the design process behind this interactive demo, its form and function, and the design decisions that honed its accessibility, toward its use for improving legibility of algorithmic systems and awareness of the sources of their disparate impacts.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117144854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}