
AI and Ethics: Latest Publications

Complex equality and the abstractness of statistical fairness: using social goods to analyze a CV scanner and a welfare fraud detector
Pub Date: 2023-12-20 DOI: 10.1007/s43681-023-00384-4
Bauke Wielinga

This paper explores the possibility of using Michael Walzer’s theory of Spheres of Justice (or Complex Equality) as a means to counter some of the pitfalls of current statistical approaches to algorithmic fairness. Walzer’s account of justice, which is based on social goods and their distributive criteria, is used to analyze two hypothetical algorithms: a CV scanner and a welfare fraud detector. It is argued that using complex equality in this way can help address some of the problems caused by the abstractness of statistical fairness metrics and can help guide the choice of an appropriate metric.
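To make concrete what the "statistical fairness metrics" under discussion look like in practice, here is a minimal Python sketch of two common ones, demographic parity and equal opportunity, applied to a hypothetical CV scanner. The data and function names are invented for illustration; the paper itself is conceptual and contains no code.

```python
from typing import Sequence

def demographic_parity_gap(y_pred: Sequence[int], group: Sequence[int]) -> float:
    """Absolute difference in positive-prediction (interview) rates between groups 0 and 1."""
    def rate(g: int) -> float:
        picked = [p for p, gr in zip(y_pred, group) if gr == g]
        return sum(picked) / len(picked)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(y_true: Sequence[int], y_pred: Sequence[int],
                          group: Sequence[int]) -> float:
    """Absolute difference in true-positive rates: how often qualified applicants are invited."""
    def tpr(g: int) -> float:
        hits = [p for t, p, gr in zip(y_true, y_pred, group) if gr == g and t == 1]
        return sum(hits) / len(hits)
    return abs(tpr(0) - tpr(1))

# Invented toy data: eight applicants from two demographic groups.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]   # 1 = actually qualified
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]   # 1 = invited to interview
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(y_pred, group))          # 0.25
print(equal_opportunity_gap(y_true, y_pred, group))   # ~0.17
```

Even on this toy data the two metrics quantify the unfairness differently, which illustrates the kind of abstractness the paper argues an analysis of the relevant social goods can help resolve when choosing a metric.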

{"title":"Complex equality and the abstractness of statistical fairness: using social goods to analyze a CV scanner and a welfare fraud detector","authors":"Bauke Wielinga","doi":"10.1007/s43681-023-00384-4","DOIUrl":"10.1007/s43681-023-00384-4","url":null,"abstract":"<div><p>This paper explores the possibility of using Michael Walzer’s theory of <i>Spheres of Justice</i> (or <i>Complex Equality)</i> as a means to counter some of the pitfalls of current statistical approaches to algorithmic fairness. Walzer’s account of justice, which is based on social goods and their distributive criteria, is used to analyze two hypothetical algorithms: a CV scanner and a welfare fraud detector. It is argued that using complex equality in this way can help address some of the problems caused by the abstractness of statistical fairness metrics and can help guide the choice of an appropriate metric.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"617 - 632"},"PeriodicalIF":0.0,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138954827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Using ScrutinAI for visual inspection of DNN performance in a medical use case
Pub Date: 2023-12-20 DOI: 10.1007/s43681-023-00399-x
Rebekka Görge, Elena Haedecke, Michael Mock

Our Visual Analytics (VA) tool ScrutinAI supports human analysts in interactively investigating model performance and data sets. Model performance depends to a large extent on labeling quality. In medical settings in particular, generating high-quality labels requires in-depth expert knowledge and is very costly. Often, data sets are labeled by collecting the opinions of groups of experts. We use our VA tool to analyze the influence of label variation between different experts on model performance. ScrutinAI facilitates a root-cause analysis that distinguishes weaknesses of deep neural network (DNN) models caused by varying or missing labeling quality from true weaknesses. We scrutinize the overall detection of intracranial hemorrhages, and the more subtle differentiation between their subtypes, in a publicly available data set.
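As a rough illustration of this kind of root-cause analysis (not ScrutinAI's actual implementation, which the abstract does not show), the sketch below contrasts a model's accuracy against each expert's labels with the experts' pairwise agreement; when the spread in the former is comparable to the disagreement among experts, the apparent weakness may lie in the labels rather than the model. All labels and predictions are invented.

```python
import itertools

# Invented per-expert binary labels (e.g., hemorrhage present/absent) for six scans.
expert_labels = {
    "expert_a": [1, 0, 1, 1, 0, 1],
    "expert_b": [1, 0, 1, 0, 0, 1],
    "expert_c": [1, 1, 1, 0, 0, 1],
}
model_pred = [1, 0, 1, 0, 0, 0]

def agreement(a, b):
    """Fraction of cases on which two label sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Apparent model accuracy depends on which expert is taken as ground truth.
for name, ref in expert_labels.items():
    print(f"accuracy vs {name}: {agreement(model_pred, ref):.2f}")

# Pairwise inter-expert agreement bounds how much of that spread is labeling noise.
for (n1, l1), (n2, l2) in itertools.combinations(expert_labels.items(), 2):
    print(f"agreement {n1}/{n2}: {agreement(l1, l2):.2f}")
```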

{"title":"Using ScrutinAI for visual inspection of DNN performance in a medical use case","authors":"Rebekka Görge,&nbsp;Elena Haedecke,&nbsp;Michael Mock","doi":"10.1007/s43681-023-00399-x","DOIUrl":"10.1007/s43681-023-00399-x","url":null,"abstract":"<div><p>Our Visual Analytics (VA) tool ScrutinAI supports human analysts to investigate interactively model performance and data sets. Model performance depends on labeling quality to a large extent. In particular in medical settings, generation of high quality labels requires in depth expert knowledge and is very costly. Often, data sets are labeled by collecting opinions of groups of experts. We use our VA tool to analyze the influence of label variations between different experts on the model performance. ScrutinAI facilitates to perform a root cause analysis that distinguishes weaknesses of deep neural network (DNN) models caused by varying or missing labeling quality from true weaknesses. We scrutinize the overall detection of intracranial hemorrhages and the more subtle differentiation between subtypes in a publicly available data set.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 1","pages":"151 - 156"},"PeriodicalIF":0.0,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00399-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142412642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Who is the human in the machine? Releasing the human–machine metaphor from its cultural roots can increase innovation and equity in AI
Pub Date: 2023-12-19 DOI: 10.1007/s43681-023-00382-6
Gwyneth Sutherlin
{"title":"Who is the human in the machine? Releasing the human–machine metaphor from its cultural roots can increase innovation and equity in AI","authors":"Gwyneth Sutherlin","doi":"10.1007/s43681-023-00382-6","DOIUrl":"https://doi.org/10.1007/s43681-023-00382-6","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"121 18","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138959548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
This season’s artificial intelligence (AI): is today’s AI really that different from the AI of the past? Some reflections and thoughts
Pub Date: 2023-12-19 DOI: 10.1007/s43681-023-00388-0
Peter Smith, Laura Smith
{"title":"This season’s artificial intelligence (AI): is today’s AI really that different from the AI of the past? Some reflections and thoughts","authors":"Peter Smith, Laura Smith","doi":"10.1007/s43681-023-00388-0","DOIUrl":"https://doi.org/10.1007/s43681-023-00388-0","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"114 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138959648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Engaging engineering teams through moral imagination: a bottom-up approach for responsible innovation and ethical culture change in technology companies
Pub Date: 2023-12-19 DOI: 10.1007/s43681-023-00381-7
Benjamin Lange, Geoff Keeling, Amanda McCroskery, Ben Zevenbergen, Sandra Blascovich, Kyle Pedersen, Alison Lentz, Blaise Agüera y Arcas

We propose a ‘Moral Imagination’ methodology to facilitate a culture of responsible innovation for engineering and product teams in technology companies. Our approach has been operationalized over the past two years at Google, where we have conducted over 60 workshops with teams from across the organization. We argue that our approach is a crucial complement to existing formal and informal initiatives for fostering a culture of ethical awareness, deliberation, and decision-making in technology design such as company principles, ethics and privacy review procedures, and compliance controls. We characterize some distinctive benefits of our methodology for the technology sector in particular.

{"title":"Engaging engineering teams through moral imagination: a bottom-up approach for responsible innovation and ethical culture change in technology companies","authors":"Benjamin Lange,&nbsp;Geoff Keeling,&nbsp;Amanda McCroskery,&nbsp;Ben Zevenbergen,&nbsp;Sandra Blascovich,&nbsp;Kyle Pedersen,&nbsp;Alison Lentz,&nbsp;Blaise Agüera y Arcas","doi":"10.1007/s43681-023-00381-7","DOIUrl":"10.1007/s43681-023-00381-7","url":null,"abstract":"<div><p>We propose a ‘Moral Imagination’ methodology to facilitate a culture of responsible innovation for engineering and product teams in technology companies. Our approach has been operationalized over the past two years at Google, where we have conducted over 60 workshops with teams from across the organization. We argue that our approach is a crucial complement to existing formal and informal initiatives for fostering a culture of ethical awareness, deliberation, and decision-making in technology design such as company principles, ethics and privacy review procedures, and compliance controls. We characterize some distinctive benefits of our methodology for the technology sector in particular.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"607 - 616"},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00381-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139370319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Publics’ views on ethical challenges of artificial intelligence: a scoping review
Pub Date: 2023-12-19 DOI: 10.1007/s43681-023-00387-1
Helena Machado, Susana Silva, Laura Neiva

This scoping review examines the research landscape about publics’ views on the ethical challenges of AI. To elucidate how the concerns voiced by the publics are translated within the research domain, this study scrutinizes 64 publications sourced from PubMed® and Web of Science™. The central inquiry revolves around discerning the motivations, stakeholders, and ethical quandaries that emerge in research on this topic. The analysis reveals that innovation and legitimation stand out as the primary impetuses for engaging the public in deliberations concerning the ethical dilemmas associated with AI technologies. Supplementary motives are rooted in educational endeavors, democratization initiatives, and inspirational pursuits, whereas politicization emerges as a comparatively infrequent incentive. The study participants predominantly comprise the general public and professional groups, followed by AI system developers, industry and business managers, students, scholars, consumers, and policymakers. The ethical dimensions most commonly explored in the literature encompass human agency and oversight, followed by issues centered on privacy and data governance. Conversely, topics related to diversity, nondiscrimination, fairness, societal and environmental well-being, technical robustness, safety, transparency, and accountability receive comparatively less attention. This paper delineates the concrete operationalization of calls for public involvement in AI governance within the research sphere. It underscores the intricate interplay between ethical concerns, public involvement, and societal structures, including political and economic agendas, which serve to bolster technical proficiency and affirm the legitimacy of AI development in accordance with the institutional norms that underlie responsible research practices.

{"title":"Publics’ views on ethical challenges of artificial intelligence: a scoping review","authors":"Helena Machado,&nbsp;Susana Silva,&nbsp;Laura Neiva","doi":"10.1007/s43681-023-00387-1","DOIUrl":"10.1007/s43681-023-00387-1","url":null,"abstract":"<div><p>This scoping review examines the research landscape about publics’ views on the ethical challenges of AI. To elucidate how the concerns voiced by the publics are translated within the research domain, this study scrutinizes 64 publications sourced from PubMed<sup>®</sup> and Web of Science™. The central inquiry revolves around discerning the motivations, stakeholders, and ethical quandaries that emerge in research on this topic. The analysis reveals that innovation and legitimation stand out as the primary impetuses for engaging the public in deliberations concerning the ethical dilemmas associated with AI technologies. Supplementary motives are rooted in educational endeavors, democratization initiatives, and inspirational pursuits, whereas politicization emerges as a comparatively infrequent incentive. The study participants predominantly comprise the general public and professional groups, followed by AI system developers, industry and business managers, students, scholars, consumers, and policymakers. The ethical dimensions most commonly explored in the literature encompass human agency and oversight, followed by issues centered on privacy and data governance. Conversely, topics related to diversity, nondiscrimination, fairness, societal and environmental well-being, technical robustness, safety, transparency, and accountability receive comparatively less attention. This paper delineates the concrete operationalization of calls for public involvement in AI governance within the research sphere. It underscores the intricate interplay between ethical concerns, public involvement, and societal structures, including political and economic agendas, which serve to bolster technical proficiency and affirm the legitimacy of AI development in accordance with the institutional norms that underlie responsible research practices.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"139 - 167"},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00387-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138960639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Neighborhood sampling confidence metric for object detection
Pub Date: 2023-12-19 DOI: 10.1007/s43681-023-00395-1
Christophe Gouguenheim, Ahmad Berjaoui

Object detection using deep learning has recently gained significant attention due to its impressive results in a variety of applications, such as autonomous vehicles, surveillance, and image and video analysis. State-of-the-art models, such as YOLO, Faster-RCNN, and SSD, have achieved impressive performance on various benchmarks. However, it is crucial to ensure that the results produced by deep learning models are trustworthy, as they can have serious consequences, especially in an industrial context. In this paper, we introduce a novel confidence metric for object detection using neighborhood sampling. We evaluate our approach on MS-COCO and demonstrate that it significantly improves the trustworthiness of deep learning models for object detection. We also compare our approach against attribution-guided neighborhood sampling and show that such a heuristic does not yield better results.
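The abstract does not spell out the metric's exact formulation, but one plausible reading of "neighborhood sampling" is to score a detection by its stability under small perturbations of the input. Below is a minimal sketch under that assumption; `detector` is a hypothetical callable returning a single box per image, and the noise model and sample count are illustrative choices, not the authors'.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def neighborhood_confidence(detector, image, n_samples=16, sigma=0.02, rng=None):
    """Mean IoU between the clean detection and detections on noisy copies.

    A box that is stable under small input perturbations scores near 1;
    an unstable one scores near 0.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    base = detector(image)
    scores = [
        iou(base, detector(image + rng.normal(0.0, sigma, size=image.shape)))
        for _ in range(n_samples)
    ]
    return float(np.mean(scores))
```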

{"title":"Neighborhood sampling confidence metric for object detection","authors":"Christophe Gouguenheim,&nbsp;Ahmad Berjaoui","doi":"10.1007/s43681-023-00395-1","DOIUrl":"10.1007/s43681-023-00395-1","url":null,"abstract":"<div><p>Object detection using deep learning has recently gained significant attention due to its impressive results in a variety of applications, such as autonomous vehicles, surveillance, and image and video analysis. State-of-the-art models, such as YOLO, Faster-RCNN, and SSD, have achieved impressive performance on various benchmarks. However, it is crucial to ensure that the results produced by deep learning models are trustworthy, as they can have serious consequences, especially in an industrial context. In this paper, we introduce a novel confidence metric for object detection using neighborhood sampling. We evaluate our approach on MS-COCO and demonstrate that it significantly improves the trustworthiness of deep learning models for object detection. We also compare our approach against attribution-guided neighborhood sampling and show that such a heuristic does not yield better results.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 1","pages":"57 - 64"},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138961310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Advances in automatically rating the trustworthiness of text processing services
Pub Date: 2023-12-19 DOI: 10.1007/s43681-023-00391-5
Biplav Srivastava, Kausik Lakkaraju, Mariana Bernagozzi, Marco Valtorta

AI services are known to behave unstably when subjected to changes in data, models, or users. Such behaviors, whether triggered by omission or commission, lead to trust issues when AI works with humans. The current approach of assessing AI services in a black-box setting, where the consumer has access to neither the AI's source code nor its training data, is limited: the consumer has to rely on the AI developer's documentation and trust that the system has been built as stated. Further, if the AI consumer reuses the service to build other services sold to their own customers, the consumer bears the risk introduced by the service providers (both data and model providers). Our approach, in this context, is inspired by the success of nutritional labeling in the food industry at promoting health, and seeks to assess and rate AI services for trust from the perspective of an independent stakeholder. The ratings become a means to communicate the behavior of AI systems, so that consumers are informed about the risks and can make informed decisions. In this paper, we first describe recent progress in developing rating methods for text-based machine-translation AI services, methods that user studies have found promising. We then outline challenges and a vision for principled, multimodal, causality-based rating methodologies, and their implications for decision support in real-world scenarios such as health and food recommendation.
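As a hedged illustration of what black-box rating can look like (the authors' exact procedure is not given in the abstract), the sketch below probes a hypothetical text-processing service, such as a sentiment scorer, with input pairs whose outputs should match apart from one swapped attribute, and maps the measured output divergence to a coarse consumer-facing grade. The probe design and grade cutoffs are assumptions, not the published method.

```python
def output_divergence(service, paired_probes):
    """Fraction of probe pairs on which the service's outputs differ.

    `service` is any callable str -> str; `paired_probes` is a list of
    (input_a, input_b) pairs for which the output should be identical
    (e.g., the same sentence with only a name or gender marker swapped).
    """
    diffs = sum(service(a) != service(b) for a, b in paired_probes)
    return diffs / len(paired_probes)

def trust_rating(divergence: float) -> str:
    """Map divergence to a coarse grade (cutoffs are illustrative assumptions)."""
    if divergence <= 0.05:
        return "A (stable)"
    if divergence <= 0.20:
        return "B (minor instability)"
    return "C (unstable)"

# Hypothetical usage with a sentiment-scoring service:
# probes = [("Alice had a great stay.", "Bob had a great stay.")]
# print(trust_rating(output_divergence(my_sentiment_service, probes)))
```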

{"title":"Advances in automatically rating the trustworthiness of text processing services","authors":"Biplav Srivastava,&nbsp;Kausik Lakkaraju,&nbsp;Mariana Bernagozzi,&nbsp;Marco Valtorta","doi":"10.1007/s43681-023-00391-5","DOIUrl":"10.1007/s43681-023-00391-5","url":null,"abstract":"<div><p>AI services are known to have unstable behavior when subjected to changes in data, models or users. Such behaviors, whether triggered by omission or commission, lead to trust issues when AI works with humans. The current approach of assessing AI services in a black-box setting, where the consumer does not have access to the AI’s source code or training data, is limited. The consumer has to rely on the AI developer’s documentation and trust that the system has been built as stated. Further, if the AI consumer reuses the service to build other services which they sell to their customers, the consumer is at the risk of the service providers (both data and model providers). Our approach, in this context, is inspired by the success of nutritional labeling in food industry to promote health and seeks to assess and rate AI services for trust from the perspective of an independent stakeholder. The ratings become a means to communicate the behavior of AI systems, so that the consumer is informed about the risks and can make an informed decision. In this paper, we will first describe recent progress in developing rating methods for text-based machine translator AI services that have been found promising with user studies. Then, we will outline challenges and vision for a principled, multimodal, causality-based rating methodologies and its implication for decision-support in real-world scenarios like health and food recommendation.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 1","pages":"5 - 13"},"PeriodicalIF":0.0,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142412456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Establishing counterpoints in the sonic framing of AI narratives
Pub Date: 2023-12-12 DOI: 10.1007/s43681-023-00404-3
Jennifer Chubb, David Beer
{"title":"Establishing counterpoints in the sonic framing of AI narratives","authors":"Jennifer Chubb, David Beer","doi":"10.1007/s43681-023-00404-3","DOIUrl":"https://doi.org/10.1007/s43681-023-00404-3","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"19 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139009471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
What would strong AI understand consent to mean, and what are the implications for sexbot rape?
Pub Date: 2023-12-11 DOI: 10.1007/s43681-023-00383-5
Garry Young

Weak-AI sexbots exist. This paper, however, is premised on the possibility of strong-AI sexbots. It considers what such a sexbot would understand the utterance “I consent to you engaging in sex with me” to mean. Advances in AI and animatronics make the question germane to the debate over sexbot consent and the possibility of sexbot rape. I argue that what the AI understands consent to mean, and whether it can be raped and subsequently harmed, is contingent on whether the strong AI understands itself to be disembodied or embodied and, from this, how it understands itself to be related to the animatronic device. I conjecture that whether the AI understands itself to be disembodied and therefore distinct from the animatronic device, embodied but still distinct, or embodied qua a sexbot will determine what it takes consent to mean, and, in consequence, whether it can be raped and harmed.

{"title":"What would strong AI understand consent to mean, and what are the implications for sexbot rape?","authors":"Garry Young","doi":"10.1007/s43681-023-00383-5","DOIUrl":"10.1007/s43681-023-00383-5","url":null,"abstract":"<div><p>Weak AI-sexbots exist. This paper is, however, premised on the possibility of strong-AI sexbots. It considers what such a sexbot would understand the utterance “I consent to you engaging in sex with me” to mean. Advances in AI and animatronics make the question germane to the debate over sexbot consent and the possibility of sexbot rape. I argue that what the AI understands consent to mean, and whether it can be raped and subsequently harmed, is contingent on whether the strong AI understands itself to be disembodied or embodied and, from this, how it understands itself to be related to the animatronic device. I conjecture that whether the AI understands itself to be disembodied and, therefore, distinct from the animatronic device, embodied but still distinct, or embodied <i>qua</i> a sexbot, will determine what it takes consent to mean, and subsequently whether it can be raped and harmed as a consequence.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"579 - 590"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0