
AI and ethics: Latest Publications

Moral consideration for AI systems by 2030
Pub Date: 2023-12-11 DOI: 10.1007/s43681-023-00379-1
Jeff Sebo, Robert Long

This paper makes a simple case for extending moral consideration to some AI systems by 2030. It involves a normative premise and a descriptive premise. The normative premise is that humans have a duty to extend moral consideration to beings that have a non-negligible chance, given the evidence, of being conscious. The descriptive premise is that some AI systems do in fact have a non-negligible chance, given the evidence, of being conscious by 2030. The upshot is that humans have a duty to extend moral consideration to some AI systems by 2030. And if we have a duty to do that, then we plausibly also have a duty to start preparing now, so that we can be ready to treat AI systems with respect and compassion when the time comes.
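The argument's structure can be made explicit. As a schematic first-order rendering (the symbols below are our gloss, not the authors' own formalism), write A(x) for "x is an AI system existing by 2030", C(x) for "given the evidence, x has a non-negligible chance of being conscious", and D(x) for "humans have a duty to extend moral consideration to x":

```latex
% Normative premise:   \forall x\,(C(x) \to D(x))
% Descriptive premise: \exists x\,(A(x) \land C(x))
% Conclusion:          \exists x\,(A(x) \land D(x))
\frac{\forall x\,\bigl(C(x)\to D(x)\bigr)
      \qquad
      \exists x\,\bigl(A(x)\land C(x)\bigr)}
     {\exists x\,\bigl(A(x)\land D(x)\bigr)}
```

The inference is valid by existential instantiation and modus ponens: any witness of the descriptive premise satisfies C, and hence, by the normative premise, also D.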

{"title":"Moral consideration for AI systems by 2030","authors":"Jeff Sebo,&nbsp;Robert Long","doi":"10.1007/s43681-023-00379-1","DOIUrl":"10.1007/s43681-023-00379-1","url":null,"abstract":"<div><p>This paper makes a simple case for extending moral consideration to some AI systems by 2030. It involves a normative premise and a descriptive premise. The normative premise is that humans have a duty to extend moral consideration to beings that have a non-negligible chance, given the evidence, of being conscious. The descriptive premise is that some AI systems do in fact have a non-negligible chance, given the evidence, of being conscious by 2030. The upshot is that humans have a duty to extend moral consideration to some AI systems by 2030. And if we have a duty to do that, then we plausibly also have a duty to start preparing now, so that we can be ready to treat AI systems with respect and compassion when the time comes.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"591 - 606"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00379-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138979552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Equity, autonomy, and the ethical risks and opportunities of generalist medical AI
Pub Date: 2023-12-05 DOI: 10.1007/s43681-023-00380-8
Reuben Sass

This paper considers the ethical risks and opportunities presented by generalist medical artificial intelligence (GMAI), a kind of dynamic, multimodal AI proposed by Moor et al. (2023) for use in health care. The research objective is to apply widely accepted principles of biomedical ethics to analyze the possible consequences of GMAI, while emphasizing the distinctions between GMAI and current-generation, task-specific medical AI. The principles of autonomy and health equity in particular provide useful guidance for the ethical risks and opportunities of novel AI systems in health care. The ethics of two applications of GMAI are examined: enabling decision aids that inform and educate patients about certain treatments and conditions, and expanding AI-driven diagnosis and treatment recommendation. Emphasis is placed on the potential of GMAI to improve shared decision-making between patients and providers, which supports patient autonomy. Another focus is on health equity, or the reduction of health and access disparities facing underserved populations. Although GMAI presents opportunities to improve patient autonomy, health literacy, and health equity, premature or inadequately regulated adoption of GMAI has the potential to compromise both health equity and patient autonomy. On the other hand, there are significant risks to health equity and autonomy that may arise from not adopting GMAI that has been thoroughly validated and tested. A careful balancing of these risks and benefits will be required to secure the best ethical outcome, if GMAI is ever employed at scale.

{"title":"Equity, autonomy, and the ethical risks and opportunities of generalist medical AI","authors":"Reuben Sass","doi":"10.1007/s43681-023-00380-8","DOIUrl":"10.1007/s43681-023-00380-8","url":null,"abstract":"<div><p>This paper considers the ethical risks and opportunities presented by generalist medical artificial intelligence (GMAI), a kind of dynamic, multimodal AI proposed by Moor et al. (2023) for use in health care. The research objective is to apply widely accepted principles of biomedical ethics to analyze the possible consequences of GMAI, while emphasizing the distinctions between GMAI and current-generation, task-specific medical AI. The principles of autonomy and health equity in particular provide useful guidance for the ethical risks and opportunities of novel AI systems in health care. The ethics of two applications of GMAI are examined: enabling decision aids that inform and educate patients about certain treatments and conditions, and expanding AI-driven diagnosis and treatment recommendation. Emphasis is placed on the potential of GMAI to improve shared decision-making between patients and providers, which supports patient autonomy. Another focus is on health equity, or the reduction of health and access disparities facing underserved populations. Although GMAI presents opportunities to improve patient autonomy, health literacy, and health equity, premature or inadequately regulated adoption of GMAI has the potential to compromise both health equity and patient autonomy. On the other hand, there are significant risks to health equity and autonomy that may arise from not adopting GMAI that has been thoroughly validated and tested. A careful balancing of these risks and benefits will be required to secure the best ethical outcome, if GMAI is ever employed at scale.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"567 - 577"},"PeriodicalIF":0.0,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138599282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correction: Ought we align the values of artificial moral agents?
Pub Date: 2023-12-04 DOI: 10.1007/s43681-023-00403-4
Erez Firt
{"title":"Correction: Ought we align the values of artificial moral agents?","authors":"Erez Firt","doi":"10.1007/s43681-023-00403-4","DOIUrl":"10.1007/s43681-023-00403-4","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 2","pages":"283 - 283"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142409676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ensuring a ‘Responsible’ AI future in India: RRI as an approach for identifying the ethical challenges from an Indian perspective
Pub Date: 2023-12-04 DOI: 10.1007/s43681-023-00370-w
Nitika Bhalla, Laurence Brooks, Tonii Leach

Artificial intelligence (AI) can be seen to be at an inflexion point in India, a country which is keen to adopt and exploit new technologies but needs to consider carefully how it does so. AI is usually deployed with good intentions, to unlock value and create opportunities for people; however, it does not come without challenges. A set of ethical–social issues is associated with AI, including concerns around privacy, data protection, job displacement, historical bias and discrimination. Through a series of focus groups with knowledgeable people embedded in India and its culture, this research explores the ethical–societal changes and challenges that India now faces. Further, it investigates whether the principles and practices of responsible research and innovation (RRI) might provide a framework to help identify and deal with these issues. The results show that the areas in which RRI could offer scope to improve this outlook include education, policy and governance, legislation and regulation, and innovation and industry practices. Significant challenges described by participants included: the lack of awareness of AI among the public as well as policy makers; India’s access to and implementation of Western datasets, resulting in a lack of diversity, the exacerbation of existing power asymmetries, increased social inequality and the creation of bias; and the potential replacement of jobs by AI. One option was to look at a hybrid approach, a mix of AI and humans, with expansion and upskilling of the current workforce. In terms of strategy, there seems to be a gap between the rhetoric of the government and what is seen on the ground, and therefore going forward there needs to be much greater engagement with a wider audience of stakeholders.

{"title":"Ensuring a ‘Responsible’ AI future in India: RRI as an approach for identifying the ethical challenges from an Indian perspective","authors":"Nitika Bhalla,&nbsp;Laurence Brooks,&nbsp;Tonii Leach","doi":"10.1007/s43681-023-00370-w","DOIUrl":"10.1007/s43681-023-00370-w","url":null,"abstract":"<div><p>Artificial intelligence (AI) can be seen to be at an inflexion point in India, a country which is keen to adopt and exploit new technologies, but needs to carefully consider how they do this. AI is usually deployed with good intentions, to unlock value and create opportunities for the people; however it does not come without its challenges. There are a set of ethical–social issues associated with AI, which include concerns around privacy, data protection, job displacement, historical bias and discrimination. Through a series of focus groups with knowledgeable people embedded in India and its culture, this research explores the ethical–societal changes and challenges that India now faces. Further, it investigates whether the principles and practices of responsible research and innovation (RRI) might provide a framework to help identify and deal with these issues. The results show that the areas in which RRI could offer scope to improve this outlook include education, policy and governance, legislation and regulation, and innovation and industry practices. Some significant challenges described by participants included: the lack of awareness of AI by the public as well as policy makers; India’s access and implementation of Western datasets, resulting in a lack of diversity, exacerbation of existing power asymmetries, increase in social inequality and the creation of bias; the potential replacement of jobs by AI. One option was to look at a hybrid approach, a mix of AI and humans, with expansion and upskilling of the current workforce. In terms of strategy, there seems to be a gap between the rhetoric of the government and what is seen on the ground, and therefore going forward there needs to be a much greater engagement with a wider audience of stakeholders.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 4","pages":"1409 - 1422"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00370-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138604282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Navigating in the moral landscape: analysing bias and discrimination in AI through philosophical inquiry
Pub Date: 2023-11-22 DOI: 10.1007/s43681-023-00377-3
Serap Keles

This article embarks on a philosophical inquiry into the ethical virtues, particularly kindness, empathy and compassion, within the realm of artificial intelligence (AI), seeking to explicate their essence and explore their philosophical foundations. By delving into different philosophical theories of virtues, we can discover how these theories can be applied to the complex terrain of AI. Central challenges are addressed, including issues of bias, discrimination, fairness, transparency and accountability in the pursuit of promoting ethical principles in AI. Moreover, this exploration encompasses a critical examination of universal ethical principles such as beneficence, non-maleficence, and respect for human dignity, specifically in the context of AI. This scrutiny underscores the pressing need for interdisciplinary collaboration between ethicists, technologists, and policymakers to forge robust frameworks that effectively promote values in AI. In pursuit of a comprehensive understanding, it is essential to subject various arguments and perspectives to evaluation. This entails engaging with philosophical theories such as utilitarianism, deontology and virtue ethics. Throughout the article, an extensive array of supporting evidence is employed to bolster the arguments presented by virtue ethics, such as the integration of compelling case studies, empirical research findings, and lived experiences that serve to illustrate and illuminate the practical implications of the discourse. By thoroughly exploring these multifaceted dimensions, this article offers nuanced philosophical insights. Its interdisciplinary approach and rigorous analysis aim to engender a comprehensive understanding of this complex issue, illuminating potential avenues for ethical progress within the realm of AI.

{"title":"Navigating in the moral landscape: analysing bias and discrimination in AI through philosophical inquiry","authors":"Serap Keles","doi":"10.1007/s43681-023-00377-3","DOIUrl":"10.1007/s43681-023-00377-3","url":null,"abstract":"<div><p>This article embarks on a philosophical inquiry into the ethical virtues, particularly, kindness, empathy and compassion within the realm of artificial intelligence (AI), seeking to explicate its essence and explore its philosophical foundations. By delving into different philosophical theories of virtues, we can discover how these theories can be applied to the complex terrain of AI. Central challenges are addressed, including issues of bias, discrimination, fairness, transparency and accountability in the pursuit of promoting ethical principles in AI. Moreover, this exploration encompasses a critical examination of universal ethical principles such as beneficence, non-maleficence, and respect for human dignity, specifically in the context of AI. This scrutiny underscores the pressing need for interdisciplinary collaboration between ethicists, technologists, and policymakers to forge robust frameworks that effectively promote values in AI. In pursuit of a comprehensive understanding, it is essential to subject various arguments and perspectives to evaluation. This entails engaging with philosophical theories such as utilitarianism, deontology and virtue ethics. Throughout the article, an extensive array of supporting evidence is employed to bolster the arguments presented by virtue ethics, such as the integration of compelling case studies, empirical research findings, and lived experiences that serve to illustrate and illuminate the practical implications of the discourse. By thoroughly exploring these multifaceted dimensions, this article offers nuanced philosophical insights. Its interdisciplinary approach and rigorous analysis aim to engender a comprehensive understanding of this complex issue, illuminating potential avenues for ethical progress within the realm of AI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"555 - 565"},"PeriodicalIF":0.0,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139250646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AI ethics and ordoliberalism 2.0: towards a ‘Digital Bill of Rights’
Pub Date: 2023-11-21 DOI: 10.1007/s43681-023-00367-5
Manuel Wörsdörfer

This article analyzes AI ethics from a distinct business ethics perspective, i.e., ‘ordoliberalism 2.0.’ It argues that the ongoing discourse on (generative) AI relies too much on corporate self-regulation and voluntary codes of conduct and thus lacks adequate governance mechanisms. To address these issues, the paper suggests not only introducing hard-law legislation with a more effective oversight structure but also merging already existing AI guidelines with an ordoliberal-inspired regulatory and competition policy. However, this link between AI ethics, regulation, and antitrust is not yet adequately discussed in the academic literature and beyond. The paper thus closes a significant gap in the academic literature and adds to the predominantly legal-political and philosophical discourse on AI governance. The paper’s research questions and goals are twofold: first, it identifies ordoliberal-inspired AI ethics principles that could serve as the foundation for a ‘digital bill of rights.’ Second, it shows how those principles could be implemented at the macro level with the help of ordoliberal competition and regulatory policy.

{"title":"AI ethics and ordoliberalism 2.0: towards a ‘Digital Bill of Rights’","authors":"Manuel Wörsdörfer","doi":"10.1007/s43681-023-00367-5","DOIUrl":"10.1007/s43681-023-00367-5","url":null,"abstract":"<div><p>This article analyzes AI ethics from a distinct business ethics perspective, i.e., ‘ordoliberalism 2.0.’ It argues that the ongoing discourse on (generative) AI relies too much on corporate self-regulation and voluntary codes of conduct and thus lacks adequate governance mechanisms. To address these issues, the paper suggests not only introducing hard-law legislation with a more effective oversight structure but also merging already existing AI guidelines with an ordoliberal-inspired regulatory and competition policy. However, this link between AI ethics, regulation, and antitrust is not yet adequately discussed in the academic literature and beyond. The paper thus closes a significant gap in the academic literature and adds to the predominantly legal-political and philosophical discourse on AI governance. The paper’s research questions and goals are twofold: first, it identifies ordoliberal-inspired AI ethics principles that could serve as the foundation for a ‘digital bill of rights.’ Second, it shows how those principles could be implemented at the macro level with the help of ordoliberal competition and regulatory policy.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"507 - 525"},"PeriodicalIF":0.0,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Designing value-sensitive AI: a critical review and recommendations for socio-technical design processes
Pub Date: 2023-11-21 DOI: 10.1007/s43681-023-00373-7
Malak Sadek, Rafael A. Calvo, Céline Mougenot

This paper presents a critical review of how different socio-technical design processes for AI-based systems, from scholarly works and industry, support the creation of value-sensitive AI (VSAI). The review contributes to the emerging field of human-centred AI, and the even more embryonic space of VSAI in four ways: (i) it introduces three criteria for the review of VSAI based on their contribution to design processes’ overall value-sensitivity, and as a response to criticisms that current interventions are lacking in these aspects: comprehensiveness, level of guidance offered, and methodological value-sensitivity, (ii) it provides a novel review of socio-technical design processes for AI-based systems, (iii) it assesses each process based on the mentioned criteria and synthesises the results into broader trends, and (iv) it offers a resulting set of recommendations for the design of VSAI. The objective of the paper is to help creators and followers of design processes—whether scholarly or industry-based—to understand the level of value-sensitivity offered by different socio-technical design processes and act accordingly based on their needs: to adopt or adapt existing processes or to create new ones.

{"title":"Designing value-sensitive AI: a critical review and recommendations for socio-technical design processes","authors":"Malak Sadek,&nbsp;Rafael A. Calvo,&nbsp;Céline Mougenot","doi":"10.1007/s43681-023-00373-7","DOIUrl":"10.1007/s43681-023-00373-7","url":null,"abstract":"<div><p>This paper presents a critical review of how different socio-technical design processes for AI-based systems, from scholarly works and industry, support the creation of value-sensitive AI (VSAI). The review contributes to the emerging field of human-centred AI, and the even more embryonic space of VSAI in four ways: (i) it introduces three criteria for the review of VSAI based on their contribution to design processes’ overall value-sensitivity, and as a response to criticisms that current interventions are lacking in these aspects: comprehensiveness, level of guidance offered, and methodological value-sensitivity, (ii) it provides a novel review of socio-technical design processes for AI-based systems, (iii) it assesses each process based on the mentioned criteria and synthesises the results into broader trends, and (iv) it offers a resulting set of recommendations for the design of VSAI. The objective of the paper is to help creators and followers of design processes—whether scholarly or industry-based—to understand the level of value-sensitivity offered by different socio-technical design processes and act accordingly based on their needs: to adopt or adapt existing processes or to create new ones.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 4","pages":"949 - 967"},"PeriodicalIF":0.0,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00373-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139253928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AI-produced certainties in health care: current and future challenges
Pub Date: 2023-11-21 DOI: 10.1007/s43681-023-00374-6
Max Tretter, Tabea Ott, Peter Dabrock

Since uncertainty is a major challenge in medicine and bears the risk of causing incorrect diagnoses and harmful treatment, there are many efforts to tackle it. For some time, AI technologies have been increasingly implemented in medicine and used to reduce medical uncertainties. What initially seems desirable, however, poses challenges. We use a multimethod approach that combines philosophical inquiry, conceptual analysis, and ethical considerations to identify key challenges that arise when AI is used for medical certainty purposes. Where AI is used to reduce medical uncertainties, it is likely to result in (a) patients being stripped down to their measurable data points and disambiguated. Additionally, the widespread use of AI technologies in health care bears the risk of (b) human physicians being pushed out of the medical decision-making process, and patient participation being more and more limited. Further, the successful use of AI requires extensive and invasive monitoring of patients, which raises (c) questions about surveillance as well as privacy and security issues. We outline these challenges and show that they are immediate consequences of AI-driven certainty efforts. If not addressed, they could entail unfavorable consequences. We contend that diminishing medical uncertainties through AI involves a tradeoff. The advantages, including enhanced precision, personalization, and overall improvement in medicine, are accompanied by several novel challenges. This paper addresses them and gives suggestions about how to use AI for certainty purposes without causing harm to patients.

{"title":"AI-produced certainties in health care: current and future challenges","authors":"Max Tretter,&nbsp;Tabea Ott,&nbsp;Peter Dabrock","doi":"10.1007/s43681-023-00374-6","DOIUrl":"10.1007/s43681-023-00374-6","url":null,"abstract":"<div><p>Since uncertainty is a major challenge in medicine and bears the risk of causing incorrect diagnoses and harmful treatment, there are many efforts to tackle it. For some time, AI technologies have been increasingly implemented in medicine and used to reduce medical uncertainties. What initially seems desirable, however, poses challenges. We use a multimethod approach that combines philosophical inquiry, conceptual analysis, and ethical considerations to identify key challenges that arise when AI is used for medical certainty purposes. We identify several challenges. Where AI is used to reduce medical uncertainties, it is likely to result in (a) patients being stripped down to their measurable data points, and being made disambiguous. Additionally, the widespread use of AI technologies in health care bears the risk of (b) human physicians being pushed out of the medical decision-making process, and patient participation being more and more limited. Further, the successful use of AI requires extensive and invasive monitoring of patients, which raises (c) questions about surveillance as well as privacy and security issues. We outline these several challenges and show that they are immediate consequences of AI-driven security efforts. If not addressed, they could entail unfavorable consequences. We contend that diminishing medical uncertainties through AI involves a tradeoff. The advantages, including enhanced precision, personalization, and overall improvement in medicine, are accompanied by several novel challenges. This paper addresses them and gives suggestions about how to use AI for certainty purposes without causing harm to patients.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"497 - 506"},"PeriodicalIF":0.0,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00374-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139251981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Socialisation approach to AI value acquisition: enabling flexible ethical navigation with built-in receptiveness to social influence
Pub Date: 2023-11-21 DOI: 10.1007/s43681-023-00372-8
Joel Janhonen

This article describes an alternative starting point for embedding human values into artificial intelligence (AI) systems. As applications of AI become more versatile and entwined with society, an ever-wider spectrum of considerations must be incorporated into their decision-making. However, formulating less-tangible human values into mathematical algorithms appears incredibly challenging. This difficulty is understandable from a viewpoint that sees human moral decisions as stemming primarily from intuition and emotional dispositions, rather than logic or reason. Our innate normative judgements promote prosocial behaviours which enable collaboration within a shared environment. Individuals internalise the values and norms of their social context through socialisation. The complexity of the social environment makes it impractical to consistently apply logic to pick the best available action. This has compelled natural agents to develop mental shortcuts and rely on the collective moral wisdom of the social group. This work argues that the acquisition of human values cannot happen just through rational thinking, and hence, alternative approaches should be explored. Designing receptiveness to social signalling can provide context-flexible normative guidance in vastly different life tasks. This approach would approximate the human trajectory for value learning, which requires social ability. Artificial agents that imitate socialisation would prioritise conformity by minimising detected or expected disapproval while associating relative importance with acquired concepts. Sensitivity to direct social feedback would be especially useful for AI that possesses some embodied physical or virtual form. The work explores the necessary faculties for social norm enforcement and the ethical challenges of navigating based on the approval of others.
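To make the conformity mechanism concrete, here is a minimal, hypothetical sketch in Python. The SocialisedAgent class, the bandit-style update, and the toy "group norms" are our own illustrative assumptions, not the paper's model: the agent keeps a running estimate of expected disapproval for each action, usually picks the least-disapproved action, and nudges its estimates toward each observed social signal.

```python
import random
from collections import defaultdict

class SocialisedAgent:
    """Toy agent that acquires norms by minimising expected social disapproval."""

    def __init__(self, actions, learning_rate=0.1, explore=0.05):
        self.actions = actions
        self.lr = learning_rate               # how quickly feedback shifts estimates
        self.explore = explore                # small chance of trying any action
        self.disapproval = defaultdict(float) # expected disapproval per action

    def choose(self):
        # Mostly conform: pick the action with the lowest expected disapproval.
        if random.random() < self.explore:
            return random.choice(self.actions)
        return min(self.actions, key=lambda a: self.disapproval[a])

    def observe_feedback(self, action, signal):
        # Move the estimate toward the observed signal (0 = approval, 1 = sanction),
        # so repeatedly sanctioned actions become dispreferred over time.
        self.disapproval[action] += self.lr * (signal - self.disapproval[action])

# Usage: a social group that sanctions hoarding teaches the agent to share.
group_norms = {"share": 0.0, "hoard": 1.0}   # 1.0 = strong disapproval
agent = SocialisedAgent(list(group_norms))
for _ in range(200):
    action = agent.choose()
    agent.observe_feedback(action, group_norms[action])
print(dict(agent.disapproval))  # "hoard" estimate approaches 1.0; agent avoids it
```

On this sketch, "associating relative importance with acquired concepts" would correspond to the learned disapproval weights; real social signals would of course be noisy, contextual, and far richer than a single scalar.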

{"title":"Socialisation approach to AI value acquisition: enabling flexible ethical navigation with built-in receptiveness to social influence","authors":"Joel Janhonen","doi":"10.1007/s43681-023-00372-8","DOIUrl":"10.1007/s43681-023-00372-8","url":null,"abstract":"<div><p>This article describes an alternative starting point for embedding human values into artificial intelligence (AI) systems. As applications of AI become more versatile and entwined with society, an ever-wider spectrum of considerations must be incorporated into their decision-making. However, formulating less-tangible human values into mathematical algorithms appears incredibly challenging. This difficulty is understandable from a viewpoint that perceives human moral decisions to primarily stem from intuition and emotional dispositions, rather than logic or reason. Our innate normative judgements promote prosocial behaviours which enable collaboration within a shared environment. Individuals internalise the values and norms of their social context through socialisation. The complexity of the social environment makes it impractical to consistently apply logic to pick the best available action. This has compelled natural agents to develop mental shortcuts and rely on the collective moral wisdom of the social group. This work argues that the acquisition of human values cannot happen just through rational thinking, and hence, alternative approaches should be explored. Designing receptiveness to social signalling can provide context-flexible normative guidance in vastly different life tasks. This approach would approximate the human trajectory for value learning, which requires social ability. Artificial agents that imitate socialisation would prioritise conformity by minimising detected or expected disapproval while associating relative importance with acquired concepts. Sensitivity to direct social feedback would especially be useful for AI that possesses some embodied physical or virtual form. Work explores the necessary faculties for social norm enforcement and the ethical challenges of navigating based on the approval of others.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"527 - 553"},"PeriodicalIF":0.0,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00372-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139251958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Using structured ethical techniques to facilitate reasoning in technology ethics
Pub Date: 2023-11-20 DOI: 10.1007/s43681-023-00371-9
Matt A. Murphy

Despite many experts’ best intentions, technology ethics continues to embody a commonly used definition of insanity—by repeatedly trying to achieve ethical outcomes through the same methods that don’t work. One of the most intractable problems in technology ethics is how to translate ethical principles into actual practice. This challenge persists for many reasons including a gap between theoretical and technical language, a lack of enforceable mechanisms, misaligned incentives, and others that this paper will outline. With popular and often contentious fields like artificial intelligence (AI), a slew of technical and functional (used here to mean primarily “non-technical”) approaches are continually developed by diverse organizations to bridge the theoretical-practical divide. Technical approaches and coding interventions are useful for programmers and developers, but often lack contextually sensitive thinking that incorporates project teams or a wider group of stakeholders. Contrarily, functional approaches tend to be too conceptual and immaterial, lacking actionable steps for implementation into product development processes. Despite best efforts, many current approaches are therefore impractical or challenging to use in any meaningful way. After surveying a variety of different fields for current approaches to technology ethics, I propose a set of originally developed methods called Structured Ethical Techniques (SETs) that pull from best practices to build out a middle ground between functional and technical methods. SETs provide a way to add deliberative ethics to any technology’s development while acknowledging the business realities that often curb ethical deliberation, such as efficiency concerns, pressures to innovate, internal resource limitations, and more.

{"title":"Using structured ethical techniques to facilitate reasoning in technology ethics","authors":"Matt A. Murphy","doi":"10.1007/s43681-023-00371-9","DOIUrl":"10.1007/s43681-023-00371-9","url":null,"abstract":"<div><p>Despite many experts’ best intentions, technology ethics continues to embody a commonly used definition of insanity—by repeatedly trying to achieve ethical outcomes through the same methods that don’t work. One of the most intractable problems in technology ethics is how to translate ethical principles into actual practice. This challenge persists for many reasons including a gap between theoretical and technical language, a lack of enforceable mechanisms, misaligned incentives, and others that this paper will outline. With popular and often contentious fields like artificial intelligence (AI), a slew of technical and functional (used here to mean primarily “non-technical”) approaches are continually developed by diverse organizations to bridge the theoretical-practical divide. Technical approaches and coding interventions are useful for programmers and developers, but often lack contextually sensitive thinking that incorporates project teams or a wider group of stakeholders. Contrarily, functional approaches tend to be too conceptual and immaterial, lacking actionable steps for implementation into product development processes. Despite best efforts, many current approaches are therefore impractical or challenging to use in any meaningful way. After surveying a variety of different fields for current approaches to technology ethics, I propose a set of originally developed methods called Structured Ethical Techniques (SETs) that pull from best practices to build out a middle ground between functional and technical methods. SETs provide a way to add deliberative ethics to any technology’s development while acknowledging the business realities that often curb ethical deliberation, such as efficiency concerns, pressures to innovate, internal resource limitations, and more.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"479 - 488"},"PeriodicalIF":0.0,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0