Measuring Automated Influence: Between Empirical Evidence and Ethical Values
Daniel Susser and Vincent Grimaldi. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021. DOI: 10.1145/3461702.3462532

Abstract: Automated influence, delivered by digital targeting technologies such as targeted advertising, digital nudges, and recommender systems, has attracted significant interest from empirical researchers, on the one hand, and from critical scholars and policymakers, on the other. In this paper, we argue for closer integration of these efforts. Critical scholars and policymakers, who focus primarily on the social, ethical, and political effects of these technologies, need empirical evidence to substantiate and motivate their concerns. However, existing empirical research investigating the effectiveness of these technologies (or the lack thereof) neglects other morally relevant effects, which can be felt regardless of whether the technologies "work" in the sense of fulfilling the promises of their designers. Drawing on the ethics and policy literature, we enumerate a range of questions begging for empirical analysis (the outline of a research agenda bridging these fields) and issue a call to action for more empirical research that takes these urgent ethics and policy questions as its starting point.
Machine Learning Practices Outside Big Tech: How Resource Constraints Challenge Responsible Development
Aspen K. Hopkins and S. Booth. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021. DOI: 10.1145/3461702.3462527

Abstract: Practitioners from diverse occupations and backgrounds are increasingly using machine learning (ML) methods. Nonetheless, studies of ML practitioners typically draw their populations from Big Tech and academia, as researchers have easier access to these communities. Through this selection bias, past research often excludes the broader, lesser-resourced ML community: for example, practitioners working at startups, at non-tech companies, and in the public sector. These practitioners share many of the same ML development difficulties and ethical conundrums as their Big Tech counterparts; however, their experiences are subject to additional, under-studied challenges stemming from deploying ML with limited resources, increased existential risk, and a lack of access to in-house research teams. We contribute a qualitative analysis of 17 interviews with stakeholders from organizations that are less represented in prior studies. We uncover a number of tensions introduced or exacerbated by these organizations' resource constraints: tensions between privacy and ubiquity, resource management and performance optimization, and access and monopolization. Increased academic focus on these practitioners can facilitate a more holistic understanding of ML's limitations and can inform a research agenda for responsible ML development for all.
Gender Bias and Under-Representation in Natural Language Processing Across Human Languages
Abigail V. Matthews. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021. DOI: 10.1145/3461702.3462530

Abstract: Natural Language Processing (NLP) systems are at the heart of many critical automated decision-making systems making crucial recommendations about our future world. However, these systems reflect a wide range of biases, from gender bias to bias in which voices they represent. In this paper, a team including speakers of nine languages (Chinese, Spanish, English, Arabic, German, French, Farsi, Urdu, and Wolof) reports and analyzes measurements of gender bias in the Wikipedia corpora for these nine languages. In the process, we also document how our work exposes crucial gaps in the NLP pipeline for many languages. Despite substantial investments in multilingual support, the modern NLP pipeline still systematically and dramatically under-represents the majority of human voices in the NLP-guided decisions that are shaping our collective future. We develop extensions to profession-level and corpus-level gender bias metrics originally designed for English and apply them to the other eight languages, including languages such as Spanish, Arabic, German, French, and Urdu that have grammatically gendered nouns, with distinct feminine, masculine, and neuter profession words. We compare these gender bias measurements across the Wikipedia corpora in the different languages, as well as across some corpora of more traditional literature.
Artificial Intelligence and the Purpose of Social Systems
Sebastian Benthall and Jake Goldenfein. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021. DOI: 10.1145/3461702.3462526

Abstract: The law and ethics of Western democratic states have their basis in liberalism. This extends to the regulation and ethical discussion of technology and of businesses doing data processing. Liberalism relies on the privacy and autonomy of individuals, their ordering through a public market, and, more recently, a measure of equality guaranteed by the state. We argue that these forms of regulation and ethical analysis are largely incompatible with the techno-political and techno-economic dimensions of artificial intelligence. By analyzing liberal regulatory solutions in the form of privacy and data protection, regulation of public markets, and fairness in AI, we expose how the data economy and artificial intelligence have transcended liberal legal imagination. Organizations use artificial intelligence to exceed the bounded rationality of individuals and of each other. This has led to the private consolidation of markets and an unequal hierarchy of control operating mainly for the purpose of shareholder value. An artificial intelligence will be only as ethical as the purpose of the social system that operates it. Inspired by the science of artificial life as an alternative to artificial intelligence, we consider data intermediaries: sociotechnical systems composed of individuals associated around collectively pursued purposes. An attention cooperative, which prioritizes its incoming and outgoing data flows, is one model of a social system that could form and maintain its own autonomous purpose.
Alienation in the AI-Driven Workplace
Kate Vredenburgh. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021. DOI: 10.1145/3461702.3462520

Abstract: This paper asks whether explanations of one's workplace and economic institutions are valuable in and of themselves. In doing so, it departs from much of the explainability literature in law, computer science, philosophy, and the social sciences, which examines the instrumental value of explainable AI: explainable systems increase accountability and user trust, or reduce the risk of harm through increased robustness. Think, however, of how you might feel if you went to your local administrative agency to apply for some benefit, or were handed down a decision by a judge in a court. Let's stipulate that you know the decision was just, even though neither the civil servant nor the judge explains to you why it was made, and you don't know the relevant rules; you just brought all the information you had about yourself and hoped for the best. Is such a decision process defective? I argue that it is, because it prevents individuals from accessing the normative explanations that are necessary to form an appropriate practical orientation towards their social world. A practical orientation is a reflective stance towards one's social world, which is expressed in one's actions and draws on the cognitive architecture that allows one to navigate the various social practices and institutions. A practical orientation can range from rejection to silent endorsement, and it is the sort of attitude for which there are the right kind of reasons, based in the world's normative character. It also determines how one fills out one's role obligations and, more broadly, guides one's actions in the relevant institution: a teacher in the American South during the time of enforced racial segregation, for example, might choose where to teach on the basis of her rejection of the segregation of education. To form an appropriate practical orientation, one must have an understanding of the social world's normative character, which requires a normative explanation. And since we spend so much of our lives at work and are constrained by economic institutions, we must understand their structure and how they function.
Can We Obtain Fairness For Free?
Rashidul Islam, Shimei Pan, and James R. Foulds. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021. DOI: 10.1145/3461702.3462614

Abstract: There is growing awareness that AI and machine learning systems can in some cases learn to behave in unfair and discriminatory ways with harmful consequences. However, despite an enormous amount of research, techniques for ensuring AI fairness have yet to see widespread deployment in real systems. One of the main barriers is the conventional wisdom that fairness brings a cost in predictive performance metrics such as accuracy, which could affect an organization's bottom line. In this paper, we take a closer look at this concern. Clearly, fairness/performance trade-offs exist, but are they inevitable? In contrast to the conventional wisdom, we find that it is frequently possible, indeed straightforward, to improve on a trained model's fairness without sacrificing predictive performance. We systematically study the behavior of fair learning algorithms on a range of benchmark datasets, showing that it is possible to improve fairness to some degree with no loss (or even an improvement) in predictive performance via a sensible hyper-parameter selection strategy. Our results reveal a pathway toward increasing the deployment of fair AI methods, with potentially substantial positive real-world impacts.
Trustworthy AI for the People?
Clàudia Figueras, H. Verhagen, and Teresa Cerratto Pargman. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021. DOI: 10.1145/3461702.3462470

Abstract: As AI systems become more pervasive, their social impact is increasingly hard to measure. To help mitigate possible risks and guide practitioners toward more responsible design, diverse organizations have released AI ethics frameworks. However, it remains unclear how ethical issues are dealt with in the everyday practices of AI developers. To this end, we carried out an exploratory empirical study, interviewing AI developers working for Swedish public organizations to understand how ethics are enacted in practice. Our analysis found that several AI ethics issues are not consistently tackled and that AI systems are not fully recognized as part of a broader sociotechnical system.
Computing Plans that Signal Normative Compliance
Alban Grastien, Claire Benn, and S. Thiébaux. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021. DOI: 10.1145/3461702.3462607

Abstract: There has been increasing acceptance that agents must act in a way that is sensitive to ethical considerations. These considerations have been cashed out as constraints, such that some actions are permissible, while others are impermissible. In this paper, we claim that, in addition to only performing those actions that are permissible, agents should only perform those courses of action that are _unambiguously_ permissible. By doing so they signal normative compliance: they communicate their understanding of, and commitment to abiding by, the normative constraints in play. Those courses of action (or plans) that succeed in signalling compliance in this sense, we term 'acceptable'. The problem this paper addresses is how to compute plans that signal compliance, that is, how to find plans that are acceptable as well as permissible. We do this by identifying those plans such that, were an observer to see only part of a plan's execution, that observer would infer that the plan enacted was permissible. This paper provides a formal definition of compliance signalling within the domain of AI planning, describes an algorithm for computing compliance signalling plans, provides preliminary experimental results, and discusses possible improvements. The signalling of compliance is vital for communication, coordination, and cooperation in situations where the agent is partially observed. It is equally vital, therefore, to solve the computational problem of finding those plans that signal compliance. This is what this paper does.
Automating Procedurally Fair Feature Selection in Machine Learning
Clara Belitz, Lan Jiang, and Nigel Bosch. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021. DOI: 10.1145/3461702.3462585

Abstract: In recent years, machine learning has become more common in everyday applications. Consequently, numerous studies have explored issues of unfairness against specific groups or individuals in the context of these applications. Much of the previous work on unfairness in machine learning has focused on the fairness of outcomes rather than process. We propose a feature selection method inspired by fair process (procedural fairness) in addition to fair outcome. Specifically, we introduce the notion of unfairness weight, which indicates how heavily to weight unfairness versus accuracy when measuring the marginal benefit of adding a new feature to a model. Our goal is to maintain accuracy while reducing unfairness, as defined by six common statistical definitions. We show that this approach demonstrably decreases unfairness as the unfairness weight is increased, for most combinations of metrics and classifiers used. A small subset of all the combinations of datasets (4), unfairness metrics (6), and classifiers (3), however, demonstrated relatively low unfairness initially. For these specific combinations, neither unfairness nor accuracy were affected as unfairness weight changed, demonstrating that this method does not reduce accuracy unless there is also an equivalent decrease in unfairness. We also show that this approach selects unfair features and sensitive features for the model less frequently as the unfairness weight increases. As such, this procedure is an effective approach to constructing classifiers that both reduce unfairness and are less likely to include unfair features in the modeling process.
Co-design and Ethical Artificial Intelligence for Health: Myths and Misconceptions
J. Donia and J. Shaw. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021. DOI: 10.1145/3461702.3462537

Abstract: Applications of artificial intelligence / machine learning (AI/ML) are dynamic and rapidly growing, and although multi-purpose, they are particularly consequential in health care. One strategy for anticipating and addressing ethical challenges related to AI/ML for health care is co-design, or the involvement of end users in design. Co-design has a diverse intellectual and practical history, however, and has been conceptualized in many different ways. Moreover, the unique features of AI/ML introduce challenges to co-design that are often underappreciated. This review summarizes the research literature on involvement in health care and design and, informed by critical data studies, examines the extent to which co-design as commonly conceptualized is capable of addressing the range of normative issues raised by AI/ML for health. We suggest that AI/ML technologies have amplified existing challenges related to co-design and created entirely new ones. We outline five co-design 'myths and misconceptions' related to AI/ML for health that form the basis for future research and practice. We conclude by suggesting that the normative strength of a co-design approach to AI/ML for health can be considered at three levels: technological, health care system, and societal. We also suggest research directions for a 'new era' of co-design capable of addressing these challenges. Link to full text: https://bit.ly/3yZrb3y