Complex equality and the abstractness of statistical fairness: using social goods to analyze a CV scanner and a welfare fraud detector
Pub Date: 2023-12-20. DOI: 10.1007/s43681-023-00384-4
Bauke Wielinga
This paper explores the possibility of using Michael Walzer’s theory of Spheres of Justice (or Complex Equality) as a means to counter some of the pitfalls of current statistical approaches to algorithmic fairness. Walzer’s account of justice, which is based on social goods and their distributive criteria, is used to analyze two hypothetical algorithms: a CV scanner and a welfare fraud detector. It is argued that using complex equality in this way can help address some of the problems caused by the abstractness of statistical fairness metrics and can help guide the choice of an appropriate metric.
{"title":"Complex equality and the abstractness of statistical fairness: using social goods to analyze a CV scanner and a welfare fraud detector","authors":"Bauke Wielinga","doi":"10.1007/s43681-023-00384-4","DOIUrl":"10.1007/s43681-023-00384-4","url":null,"abstract":"<div><p>This paper explores the possibility of using Michael Walzer’s theory of <i>Spheres of Justice</i> (or <i>Complex Equality)</i> as a means to counter some of the pitfalls of current statistical approaches to algorithmic fairness. Walzer’s account of justice, which is based on social goods and their distributive criteria, is used to analyze two hypothetical algorithms: a CV scanner and a welfare fraud detector. It is argued that using complex equality in this way can help address some of the problems caused by the abstractness of statistical fairness metrics and can help guide the choice of an appropriate metric.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"617 - 632"},"PeriodicalIF":0.0,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138954827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using ScrutinAI for visual inspection of DNN performance in a medical use case
Pub Date: 2023-12-20. DOI: 10.1007/s43681-023-00399-x
Rebekka Görge, Elena Haedecke, Michael Mock
Our Visual Analytics (VA) tool ScrutinAI supports human analysts in interactively investigating model performance and data sets. Model performance depends to a large extent on labeling quality. In medical settings in particular, generating high-quality labels requires in-depth expert knowledge and is very costly. Often, data sets are labeled by collecting the opinions of groups of experts. We use our VA tool to analyze the influence of label variations between different experts on model performance. ScrutinAI facilitates a root-cause analysis that distinguishes weaknesses of deep neural network (DNN) models caused by varying or missing labeling quality from true weaknesses. We scrutinize the overall detection of intracranial hemorrhages and the more subtle differentiation between subtypes in a publicly available data set.
{"title":"Using ScrutinAI for visual inspection of DNN performance in a medical use case","authors":"Rebekka Görge, Elena Haedecke, Michael Mock","doi":"10.1007/s43681-023-00399-x","DOIUrl":"10.1007/s43681-023-00399-x","url":null,"abstract":"<div><p>Our Visual Analytics (VA) tool ScrutinAI supports human analysts to investigate interactively model performance and data sets. Model performance depends on labeling quality to a large extent. In particular in medical settings, generation of high quality labels requires in depth expert knowledge and is very costly. Often, data sets are labeled by collecting opinions of groups of experts. We use our VA tool to analyze the influence of label variations between different experts on the model performance. ScrutinAI facilitates to perform a root cause analysis that distinguishes weaknesses of deep neural network (DNN) models caused by varying or missing labeling quality from true weaknesses. We scrutinize the overall detection of intracranial hemorrhages and the more subtle differentiation between subtypes in a publicly available data set.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 1","pages":"151 - 156"},"PeriodicalIF":0.0,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00399-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142412642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Who is the human in the machine? Releasing the human–machine metaphor from its cultural roots can increase innovation and equity in AI
Pub Date: 2023-12-19. DOI: 10.1007/s43681-023-00382-6
Gwyneth Sutherlin
{"title":"Who is the human in the machine? Releasing the human–machine metaphor from its cultural roots can increase innovation and equity in AI","authors":"Gwyneth Sutherlin","doi":"10.1007/s43681-023-00382-6","DOIUrl":"https://doi.org/10.1007/s43681-023-00382-6","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"121 18","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138959548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This season’s artificial intelligence (AI): is today’s AI really that different from the AI of the past? Some reflections and thoughts
Pub Date: 2023-12-19. DOI: 10.1007/s43681-023-00388-0
Peter Smith, Laura Smith
{"title":"This season’s artificial intelligence (AI): is today’s AI really that different from the AI of the past? Some reflections and thoughts","authors":"Peter Smith, Laura Smith","doi":"10.1007/s43681-023-00388-0","DOIUrl":"https://doi.org/10.1007/s43681-023-00388-0","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"114 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138959648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Engaging engineering teams through moral imagination: a bottom-up approach for responsible innovation and ethical culture change in technology companies
Pub Date: 2023-12-19. DOI: 10.1007/s43681-023-00381-7
Benjamin Lange, Geoff Keeling, Amanda McCroskery, Ben Zevenbergen, Sandra Blascovich, Kyle Pedersen, Alison Lentz, Blaise Agüera y Arcas
We propose a ‘Moral Imagination’ methodology to facilitate a culture of responsible innovation for engineering and product teams in technology companies. Our approach has been operationalized over the past two years at Google, where we have conducted over 60 workshops with teams from across the organization. We argue that our approach is a crucial complement to existing formal and informal initiatives for fostering a culture of ethical awareness, deliberation, and decision-making in technology design such as company principles, ethics and privacy review procedures, and compliance controls. We characterize some distinctive benefits of our methodology for the technology sector in particular.
{"title":"Engaging engineering teams through moral imagination: a bottom-up approach for responsible innovation and ethical culture change in technology companies","authors":"Benjamin Lange, Geoff Keeling, Amanda McCroskery, Ben Zevenbergen, Sandra Blascovich, Kyle Pedersen, Alison Lentz, Blaise Agüera y Arcas","doi":"10.1007/s43681-023-00381-7","DOIUrl":"10.1007/s43681-023-00381-7","url":null,"abstract":"<div><p>We propose a ‘Moral Imagination’ methodology to facilitate a culture of responsible innovation for engineering and product teams in technology companies. Our approach has been operationalized over the past two years at Google, where we have conducted over 60 workshops with teams from across the organization. We argue that our approach is a crucial complement to existing formal and informal initiatives for fostering a culture of ethical awareness, deliberation, and decision-making in technology design such as company principles, ethics and privacy review procedures, and compliance controls. We characterize some distinctive benefits of our methodology for the technology sector in particular.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"607 - 616"},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00381-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139370319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Publics’ views on ethical challenges of artificial intelligence: a scoping review
Pub Date: 2023-12-19. DOI: 10.1007/s43681-023-00387-1
Helena Machado, Susana Silva, Laura Neiva
This scoping review examines the research landscape about publics’ views on the ethical challenges of AI. To elucidate how the concerns voiced by the publics are translated within the research domain, this study scrutinizes 64 publications sourced from PubMed® and Web of Science™. The central inquiry revolves around discerning the motivations, stakeholders, and ethical quandaries that emerge in research on this topic. The analysis reveals that innovation and legitimation stand out as the primary impetuses for engaging the public in deliberations concerning the ethical dilemmas associated with AI technologies. Supplementary motives are rooted in educational endeavors, democratization initiatives, and inspirational pursuits, whereas politicization emerges as a comparatively infrequent incentive. The study participants predominantly comprise the general public and professional groups, followed by AI system developers, industry and business managers, students, scholars, consumers, and policymakers. The ethical dimensions most commonly explored in the literature encompass human agency and oversight, followed by issues centered on privacy and data governance. Conversely, topics related to diversity, nondiscrimination, fairness, societal and environmental well-being, technical robustness, safety, transparency, and accountability receive comparatively less attention. This paper delineates the concrete operationalization of calls for public involvement in AI governance within the research sphere. It underscores the intricate interplay between ethical concerns, public involvement, and societal structures, including political and economic agendas, which serve to bolster technical proficiency and affirm the legitimacy of AI development in accordance with the institutional norms that underlie responsible research practices.
{"title":"Publics’ views on ethical challenges of artificial intelligence: a scoping review","authors":"Helena Machado, Susana Silva, Laura Neiva","doi":"10.1007/s43681-023-00387-1","DOIUrl":"10.1007/s43681-023-00387-1","url":null,"abstract":"<div><p>This scoping review examines the research landscape about publics’ views on the ethical challenges of AI. To elucidate how the concerns voiced by the publics are translated within the research domain, this study scrutinizes 64 publications sourced from PubMed<sup>®</sup> and Web of Science™. The central inquiry revolves around discerning the motivations, stakeholders, and ethical quandaries that emerge in research on this topic. The analysis reveals that innovation and legitimation stand out as the primary impetuses for engaging the public in deliberations concerning the ethical dilemmas associated with AI technologies. Supplementary motives are rooted in educational endeavors, democratization initiatives, and inspirational pursuits, whereas politicization emerges as a comparatively infrequent incentive. The study participants predominantly comprise the general public and professional groups, followed by AI system developers, industry and business managers, students, scholars, consumers, and policymakers. The ethical dimensions most commonly explored in the literature encompass human agency and oversight, followed by issues centered on privacy and data governance. Conversely, topics related to diversity, nondiscrimination, fairness, societal and environmental well-being, technical robustness, safety, transparency, and accountability receive comparatively less attention. This paper delineates the concrete operationalization of calls for public involvement in AI governance within the research sphere. It underscores the intricate interplay between ethical concerns, public involvement, and societal structures, including political and economic agendas, which serve to bolster technical proficiency and affirm the legitimacy of AI development in accordance with the institutional norms that underlie responsible research practices.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"139 - 167"},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00387-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138960639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neighborhood sampling confidence metric for object detection
Pub Date: 2023-12-19. DOI: 10.1007/s43681-023-00395-1
Christophe Gouguenheim, Ahmad Berjaoui
Object detection using deep learning has recently gained significant attention due to its impressive results in a variety of applications, such as autonomous vehicles, surveillance, and image and video analysis. State-of-the-art models, such as YOLO, Faster-RCNN, and SSD, have achieved impressive performance on various benchmarks. However, it is crucial to ensure that the results produced by deep learning models are trustworthy, as they can have serious consequences, especially in an industrial context. In this paper, we introduce a novel confidence metric for object detection using neighborhood sampling. We evaluate our approach on MS-COCO and demonstrate that it significantly improves the trustworthiness of deep learning models for object detection. We also compare our approach against attribution-guided neighborhood sampling and show that such a heuristic does not yield better results.
{"title":"Neighborhood sampling confidence metric for object detection","authors":"Christophe Gouguenheim, Ahmad Berjaoui","doi":"10.1007/s43681-023-00395-1","DOIUrl":"10.1007/s43681-023-00395-1","url":null,"abstract":"<div><p>Object detection using deep learning has recently gained significant attention due to its impressive results in a variety of applications, such as autonomous vehicles, surveillance, and image and video analysis. State-of-the-art models, such as YOLO, Faster-RCNN, and SSD, have achieved impressive performance on various benchmarks. However, it is crucial to ensure that the results produced by deep learning models are trustworthy, as they can have serious consequences, especially in an industrial context. In this paper, we introduce a novel confidence metric for object detection using neighborhood sampling. We evaluate our approach on MS-COCO and demonstrate that it significantly improves the trustworthiness of deep learning models for object detection. We also compare our approach against attribution-guided neighborhood sampling and show that such a heuristic does not yield better results.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 1","pages":"57 - 64"},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138961310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advances in automatically rating the trustworthiness of text processing services
Pub Date: 2023-12-19. DOI: 10.1007/s43681-023-00391-5
Biplav Srivastava, Kausik Lakkaraju, Mariana Bernagozzi, Marco Valtorta
AI services are known to exhibit unstable behavior when subjected to changes in data, models, or users. Such behaviors, whether triggered by omission or commission, lead to trust issues when AI works with humans. The current approach of assessing AI services in a black-box setting, where the consumer does not have access to the AI’s source code or training data, is limited. The consumer has to rely on the AI developer’s documentation and trust that the system has been built as stated. Further, if the AI consumer reuses the service to build other services which they sell to their customers, the consumer is exposed to the risks introduced by the service providers (both data and model providers). Our approach, in this context, is inspired by the success of nutritional labeling in the food industry in promoting health, and seeks to assess and rate AI services for trust from the perspective of an independent stakeholder. The ratings become a means to communicate the behavior of AI systems, so that the consumer is informed about the risks and can make an informed decision. In this paper, we first describe recent progress in developing rating methods for text-based machine translator AI services that have been found promising in user studies. We then outline challenges and a vision for principled, multimodal, causality-based rating methodologies and their implications for decision support in real-world scenarios such as health and food recommendation.
{"title":"Advances in automatically rating the trustworthiness of text processing services","authors":"Biplav Srivastava, Kausik Lakkaraju, Mariana Bernagozzi, Marco Valtorta","doi":"10.1007/s43681-023-00391-5","DOIUrl":"10.1007/s43681-023-00391-5","url":null,"abstract":"<div><p>AI services are known to have unstable behavior when subjected to changes in data, models or users. Such behaviors, whether triggered by omission or commission, lead to trust issues when AI works with humans. The current approach of assessing AI services in a black-box setting, where the consumer does not have access to the AI’s source code or training data, is limited. The consumer has to rely on the AI developer’s documentation and trust that the system has been built as stated. Further, if the AI consumer reuses the service to build other services which they sell to their customers, the consumer is at the risk of the service providers (both data and model providers). Our approach, in this context, is inspired by the success of nutritional labeling in food industry to promote health and seeks to assess and rate AI services for trust from the perspective of an independent stakeholder. The ratings become a means to communicate the behavior of AI systems, so that the consumer is informed about the risks and can make an informed decision. In this paper, we will first describe recent progress in developing rating methods for text-based machine translator AI services that have been found promising with user studies. Then, we will outline challenges and vision for a principled, multimodal, causality-based rating methodologies and its implication for decision-support in real-world scenarios like health and food recommendation.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 1","pages":"5 - 13"},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142412456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Establishing counterpoints in the sonic framing of AI narratives
Pub Date: 2023-12-12. DOI: 10.1007/s43681-023-00404-3
Jennifer Chubb, David Beer
{"title":"Establishing counterpoints in the sonic framing of AI narratives","authors":"Jennifer Chubb, David Beer","doi":"10.1007/s43681-023-00404-3","DOIUrl":"https://doi.org/10.1007/s43681-023-00404-3","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"19 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139009471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What would strong AI understand consent to mean, and what are the implications for sexbot rape?
Pub Date: 2023-12-11. DOI: 10.1007/s43681-023-00383-5
Garry Young
Weak AI-sexbots exist. This paper is, however, premised on the possibility of strong-AI sexbots. It considers what such a sexbot would understand the utterance “I consent to you engaging in sex with me” to mean. Advances in AI and animatronics make the question germane to the debate over sexbot consent and the possibility of sexbot rape. I argue that what the AI understands consent to mean, and whether it can be raped and subsequently harmed, is contingent on whether the strong AI understands itself to be disembodied or embodied and, from this, how it understands itself to be related to the animatronic device. I conjecture that whether the AI understands itself to be disembodied and, therefore, distinct from the animatronic device, embodied but still distinct, or embodied qua a sexbot, will determine what it takes consent to mean, and subsequently whether it can be raped and harmed as a consequence.
{"title":"What would strong AI understand consent to mean, and what are the implications for sexbot rape?","authors":"Garry Young","doi":"10.1007/s43681-023-00383-5","DOIUrl":"10.1007/s43681-023-00383-5","url":null,"abstract":"<div><p>Weak AI-sexbots exist. This paper is, however, premised on the possibility of strong-AI sexbots. It considers what such a sexbot would understand the utterance “I consent to you engaging in sex with me” to mean. Advances in AI and animatronics make the question germane to the debate over sexbot consent and the possibility of sexbot rape. I argue that what the AI understands consent to mean, and whether it can be raped and subsequently harmed, is contingent on whether the strong AI understands itself to be disembodied or embodied and, from this, how it understands itself to be related to the animatronic device. I conjecture that whether the AI understands itself to be disembodied and, therefore, distinct from the animatronic device, embodied but still distinct, or embodied <i>qua</i> a sexbot, will determine what it takes consent to mean, and subsequently whether it can be raped and harmed as a consequence.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"579 - 590"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}