Moral consideration for AI systems by 2030
Jeff Sebo, Robert Long
AI and Ethics 5(1): 591–606. Pub Date: 2023-12-11. DOI: 10.1007/s43681-023-00379-1
This paper makes a simple case for extending moral consideration to some AI systems by 2030. It involves a normative premise and a descriptive premise. The normative premise is that humans have a duty to extend moral consideration to beings that have a non-negligible chance, given the evidence, of being conscious. The descriptive premise is that some AI systems do in fact have a non-negligible chance, given the evidence, of being conscious by 2030. The upshot is that humans have a duty to extend moral consideration to some AI systems by 2030. And if we have a duty to do that, then we plausibly also have a duty to start preparing now, so that we can be ready to treat AI systems with respect and compassion when the time comes.
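Read as a two-premise argument, the structure can be rendered schematically as follows, where E stands for the available evidence and the threshold \theta marks a "non-negligible" chance; this notation is an illustrative reconstruction, not the authors' own formalism.

\[
\begin{aligned}
&\text{P1 (normative):} && \forall x\,\big[\Pr(\mathrm{conscious}(x)\mid E)\ge\theta \;\rightarrow\; \mathrm{OughtConsider}(x)\big]\\
&\text{P2 (descriptive):} && \exists x\in \mathrm{AI}_{2030}\,\big[\Pr(\mathrm{conscious}(x)\mid E)\ge\theta\big]\\
&\text{C:} && \therefore\; \exists x\in \mathrm{AI}_{2030}\,\big[\mathrm{OughtConsider}(x)\big]
\end{aligned}
\]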
{"title":"Moral consideration for AI systems by 2030","authors":"Jeff Sebo, Robert Long","doi":"10.1007/s43681-023-00379-1","DOIUrl":"10.1007/s43681-023-00379-1","url":null,"abstract":"<div><p>This paper makes a simple case for extending moral consideration to some AI systems by 2030. It involves a normative premise and a descriptive premise. The normative premise is that humans have a duty to extend moral consideration to beings that have a non-negligible chance, given the evidence, of being conscious. The descriptive premise is that some AI systems do in fact have a non-negligible chance, given the evidence, of being conscious by 2030. The upshot is that humans have a duty to extend moral consideration to some AI systems by 2030. And if we have a duty to do that, then we plausibly also have a duty to start preparing now, so that we can be ready to treat AI systems with respect and compassion when the time comes.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"591 - 606"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00379-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138979552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Equity, autonomy, and the ethical risks and opportunities of generalist medical AI
Reuben Sass
AI and Ethics 5(1): 567–577. Pub Date: 2023-12-05. DOI: 10.1007/s43681-023-00380-8
This paper considers the ethical risks and opportunities presented by generalist medical artificial intelligence (GMAI), a kind of dynamic, multimodal AI proposed by Moor et al. (2023) for use in health care. The research objective is to apply widely accepted principles of biomedical ethics to analyze the possible consequences of GMAI, while emphasizing the distinctions between GMAI and current-generation, task-specific medical AI. The principles of autonomy and health equity in particular provide useful guidance for the ethical risks and opportunities of novel AI systems in health care. The ethics of two applications of GMAI are examined: enabling decision aids that inform and educate patients about certain treatments and conditions, and expanding AI-driven diagnosis and treatment recommendation. Emphasis is placed on the potential of GMAI to improve shared decision-making between patients and providers, which supports patient autonomy. Another focus is on health equity, or the reduction of health and access disparities facing underserved populations. Although GMAI presents opportunities to improve patient autonomy, health literacy, and health equity, premature or inadequately regulated adoption of GMAI has the potential to compromise both health equity and patient autonomy. On the other hand, there are significant risks to health equity and autonomy that may arise from not adopting GMAI that has been thoroughly validated and tested. A careful balancing of these risks and benefits will be required to secure the best ethical outcome, if GMAI is ever employed at scale.
{"title":"Equity, autonomy, and the ethical risks and opportunities of generalist medical AI","authors":"Reuben Sass","doi":"10.1007/s43681-023-00380-8","DOIUrl":"10.1007/s43681-023-00380-8","url":null,"abstract":"<div><p>This paper considers the ethical risks and opportunities presented by generalist medical artificial intelligence (GMAI), a kind of dynamic, multimodal AI proposed by Moor et al. (2023) for use in health care. The research objective is to apply widely accepted principles of biomedical ethics to analyze the possible consequences of GMAI, while emphasizing the distinctions between GMAI and current-generation, task-specific medical AI. The principles of autonomy and health equity in particular provide useful guidance for the ethical risks and opportunities of novel AI systems in health care. The ethics of two applications of GMAI are examined: enabling decision aids that inform and educate patients about certain treatments and conditions, and expanding AI-driven diagnosis and treatment recommendation. Emphasis is placed on the potential of GMAI to improve shared decision-making between patients and providers, which supports patient autonomy. Another focus is on health equity, or the reduction of health and access disparities facing underserved populations. Although GMAI presents opportunities to improve patient autonomy, health literacy, and health equity, premature or inadequately regulated adoption of GMAI has the potential to compromise both health equity and patient autonomy. On the other hand, there are significant risks to health equity and autonomy that may arise from not adopting GMAI that has been thoroughly validated and tested. A careful balancing of these risks and benefits will be required to secure the best ethical outcome, if GMAI is ever employed at scale.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"567 - 577"},"PeriodicalIF":0.0,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138599282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction: Ought we align the values of artificial moral agents?
Erez Firt
AI and Ethics 4(2): 283. Pub Date: 2023-12-04. DOI: 10.1007/s43681-023-00403-4
{"title":"Correction: Ought we align the values of artificial moral agents?","authors":"Erez Firt","doi":"10.1007/s43681-023-00403-4","DOIUrl":"10.1007/s43681-023-00403-4","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 2","pages":"283 - 283"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142409676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ensuring a ‘Responsible’ AI future in India: RRI as an approach for identifying the ethical challenges from an Indian perspective
Nitika Bhalla, Laurence Brooks, Tonii Leach
AI and Ethics 4(4): 1409–1422. Pub Date: 2023-12-04. DOI: 10.1007/s43681-023-00370-w
Artificial intelligence (AI) can be seen to be at an inflexion point in India, a country that is keen to adopt and exploit new technologies but needs to consider carefully how it does so. AI is usually deployed with good intentions, to unlock value and create opportunities for people; however, it does not come without challenges. A set of ethical–social issues is associated with AI, including concerns around privacy, data protection, job displacement, historical bias and discrimination. Through a series of focus groups with knowledgeable people embedded in India and its culture, this research explores the ethical–societal changes and challenges that India now faces. Further, it investigates whether the principles and practices of responsible research and innovation (RRI) might provide a framework to help identify and deal with these issues. The results show that the areas in which RRI could offer scope to improve this outlook include education, policy and governance, legislation and regulation, and innovation and industry practices. Significant challenges described by participants included: the lack of awareness of AI among the public as well as policy makers; India’s access to and implementation of Western datasets, resulting in a lack of diversity, exacerbation of existing power asymmetries, increased social inequality and the creation of bias; and the potential replacement of jobs by AI. One option was a hybrid approach, a mix of AI and humans, with expansion and upskilling of the current workforce. In terms of strategy, there seems to be a gap between the rhetoric of the government and what is seen on the ground, and therefore going forward there needs to be much greater engagement with a wider audience of stakeholders.
{"title":"Ensuring a ‘Responsible’ AI future in India: RRI as an approach for identifying the ethical challenges from an Indian perspective","authors":"Nitika Bhalla, Laurence Brooks, Tonii Leach","doi":"10.1007/s43681-023-00370-w","DOIUrl":"10.1007/s43681-023-00370-w","url":null,"abstract":"<div><p>Artificial intelligence (AI) can be seen to be at an inflexion point in India, a country which is keen to adopt and exploit new technologies, but needs to carefully consider how they do this. AI is usually deployed with good intentions, to unlock value and create opportunities for the people; however it does not come without its challenges. There are a set of ethical–social issues associated with AI, which include concerns around privacy, data protection, job displacement, historical bias and discrimination. Through a series of focus groups with knowledgeable people embedded in India and its culture, this research explores the ethical–societal changes and challenges that India now faces. Further, it investigates whether the principles and practices of responsible research and innovation (RRI) might provide a framework to help identify and deal with these issues. The results show that the areas in which RRI could offer scope to improve this outlook include education, policy and governance, legislation and regulation, and innovation and industry practices. Some significant challenges described by participants included: the lack of awareness of AI by the public as well as policy makers; India’s access and implementation of Western datasets, resulting in a lack of diversity, exacerbation of existing power asymmetries, increase in social inequality and the creation of bias; the potential replacement of jobs by AI. One option was to look at a hybrid approach, a mix of AI and humans, with expansion and upskilling of the current workforce. In terms of strategy, there seems to be a gap between the rhetoric of the government and what is seen on the ground, and therefore going forward there needs to be a much greater engagement with a wider audience of stakeholders.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 4","pages":"1409 - 1422"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00370-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138604282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Navigating in the moral landscape: analysing bias and discrimination in AI through philosophical inquiry
Serap Keles
AI and Ethics 5(1): 555–565. Pub Date: 2023-11-22. DOI: 10.1007/s43681-023-00377-3
This article embarks on a philosophical inquiry into the ethical virtues, particularly kindness, empathy and compassion, within the realm of artificial intelligence (AI), seeking to explicate their essence and explore their philosophical foundations. By delving into different philosophical theories of virtues, we can discover how these theories can be applied to the complex terrain of AI. Central challenges are addressed, including issues of bias, discrimination, fairness, transparency and accountability in the pursuit of promoting ethical principles in AI. Moreover, this exploration encompasses a critical examination of universal ethical principles such as beneficence, non-maleficence, and respect for human dignity, specifically in the context of AI. This scrutiny underscores the pressing need for interdisciplinary collaboration between ethicists, technologists, and policymakers to forge robust frameworks that effectively promote values in AI. In pursuit of a comprehensive understanding, it is essential to subject various arguments and perspectives to evaluation. This entails engaging with philosophical theories such as utilitarianism, deontology and virtue ethics. Throughout the article, an extensive array of supporting evidence is employed to bolster the arguments presented by virtue ethics, such as compelling case studies, empirical research findings, and lived experiences that illustrate and illuminate the practical implications of the discourse. By thoroughly exploring these multifaceted dimensions, this article offers nuanced philosophical insights. Its interdisciplinary approach and rigorous analysis aim to engender a comprehensive understanding of this complex issue, illuminating potential avenues for ethical progress within the realm of AI.
{"title":"Navigating in the moral landscape: analysing bias and discrimination in AI through philosophical inquiry","authors":"Serap Keles","doi":"10.1007/s43681-023-00377-3","DOIUrl":"10.1007/s43681-023-00377-3","url":null,"abstract":"<div><p>This article embarks on a philosophical inquiry into the ethical virtues, particularly, kindness, empathy and compassion within the realm of artificial intelligence (AI), seeking to explicate its essence and explore its philosophical foundations. By delving into different philosophical theories of virtues, we can discover how these theories can be applied to the complex terrain of AI. Central challenges are addressed, including issues of bias, discrimination, fairness, transparency and accountability in the pursuit of promoting ethical principles in AI. Moreover, this exploration encompasses a critical examination of universal ethical principles such as beneficence, non-maleficence, and respect for human dignity, specifically in the context of AI. This scrutiny underscores the pressing need for interdisciplinary collaboration between ethicists, technologists, and policymakers to forge robust frameworks that effectively promote values in AI. In pursuit of a comprehensive understanding, it is essential to subject various arguments and perspectives to evaluation. This entails engaging with philosophical theories such as utilitarianism, deontology and virtue ethics. Throughout the article, an extensive array of supporting evidence is employed to bolster the arguments presented by virtue ethics, such as the integration of compelling case studies, empirical research findings, and lived experiences that serve to illustrate and illuminate the practical implications of the discourse. By thoroughly exploring these multifaceted dimensions, this article offers nuanced philosophical insights. Its interdisciplinary approach and rigorous analysis aim to engender a comprehensive understanding of this complex issue, illuminating potential avenues for ethical progress within the realm of AI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"555 - 565"},"PeriodicalIF":0.0,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139250646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI ethics and ordoliberalism 2.0: towards a ‘Digital Bill of Rights’
Manuel Wörsdörfer
AI and Ethics 5(1): 507–525. Pub Date: 2023-11-21. DOI: 10.1007/s43681-023-00367-5
This article analyzes AI ethics from a distinct business ethics perspective, i.e., ‘ordoliberalism 2.0.’ It argues that the ongoing discourse on (generative) AI relies too much on corporate self-regulation and voluntary codes of conduct and thus lacks adequate governance mechanisms. To address these issues, the paper suggests not only introducing hard-law legislation with a more effective oversight structure but also merging already existing AI guidelines with an ordoliberal-inspired regulatory and competition policy. However, this link between AI ethics, regulation, and antitrust is not yet adequately discussed in the academic literature and beyond. The paper thus closes a significant gap in the academic literature and adds to the predominantly legal-political and philosophical discourse on AI governance. The paper’s research questions and goals are twofold: first, it identifies ordoliberal-inspired AI ethics principles that could serve as the foundation for a ‘digital bill of rights.’ Second, it shows how those principles could be implemented at the macro level with the help of ordoliberal competition and regulatory policy.
{"title":"AI ethics and ordoliberalism 2.0: towards a ‘Digital Bill of Rights’","authors":"Manuel Wörsdörfer","doi":"10.1007/s43681-023-00367-5","DOIUrl":"10.1007/s43681-023-00367-5","url":null,"abstract":"<div><p>This article analyzes AI ethics from a distinct business ethics perspective, i.e., ‘ordoliberalism 2.0.’ It argues that the ongoing discourse on (generative) AI relies too much on corporate self-regulation and voluntary codes of conduct and thus lacks adequate governance mechanisms. To address these issues, the paper suggests not only introducing hard-law legislation with a more effective oversight structure but also merging already existing AI guidelines with an ordoliberal-inspired regulatory and competition policy. However, this link between AI ethics, regulation, and antitrust is not yet adequately discussed in the academic literature and beyond. The paper thus closes a significant gap in the academic literature and adds to the predominantly legal-political and philosophical discourse on AI governance. The paper’s research questions and goals are twofold: first, it identifies ordoliberal-inspired AI ethics principles that could serve as the foundation for a ‘digital bill of rights.’ Second, it shows how those principles could be implemented at the macro level with the help of ordoliberal competition and regulatory policy.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"507 - 525"},"PeriodicalIF":0.0,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing value-sensitive AI: a critical review and recommendations for socio-technical design processes
Malak Sadek, Rafael A. Calvo, Céline Mougenot
AI and Ethics 4(4): 949–967. Pub Date: 2023-11-21. DOI: 10.1007/s43681-023-00373-7
This paper presents a critical review of how different socio-technical design processes for AI-based systems, from scholarly works and industry, support the creation of value-sensitive AI (VSAI). The review contributes to the emerging field of human-centred AI, and the even more embryonic space of VSAI, in four ways: (i) it introduces three criteria for reviewing design processes, based on their contribution to a process's overall value-sensitivity and framed in response to criticisms that current interventions are lacking in these respects: comprehensiveness, level of guidance offered, and methodological value-sensitivity; (ii) it provides a novel review of socio-technical design processes for AI-based systems; (iii) it assesses each process against these criteria and synthesises the results into broader trends; and (iv) it offers a resulting set of recommendations for the design of VSAI. The objective of the paper is to help creators and followers of design processes, whether scholarly or industry-based, to understand the level of value-sensitivity offered by different socio-technical design processes and to act accordingly based on their needs: to adopt or adapt existing processes, or to create new ones.
{"title":"Designing value-sensitive AI: a critical review and recommendations for socio-technical design processes","authors":"Malak Sadek, Rafael A. Calvo, Céline Mougenot","doi":"10.1007/s43681-023-00373-7","DOIUrl":"10.1007/s43681-023-00373-7","url":null,"abstract":"<div><p>This paper presents a critical review of how different socio-technical design processes for AI-based systems, from scholarly works and industry, support the creation of value-sensitive AI (VSAI). The review contributes to the emerging field of human-centred AI, and the even more embryonic space of VSAI in four ways: (i) it introduces three criteria for the review of VSAI based on their contribution to design processes’ overall value-sensitivity, and as a response to criticisms that current interventions are lacking in these aspects: comprehensiveness, level of guidance offered, and methodological value-sensitivity, (ii) it provides a novel review of socio-technical design processes for AI-based systems, (iii) it assesses each process based on the mentioned criteria and synthesises the results into broader trends, and (iv) it offers a resulting set of recommendations for the design of VSAI. The objective of the paper is to help creators and followers of design processes—whether scholarly or industry-based—to understand the level of value-sensitivity offered by different socio-technical design processes and act accordingly based on their needs: to adopt or adapt existing processes or to create new ones.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 4","pages":"949 - 967"},"PeriodicalIF":0.0,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00373-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139253928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI-produced certainties in health care: current and future challenges
Max Tretter, Tabea Ott, Peter Dabrock
AI and Ethics 5(1): 497–506. Pub Date: 2023-11-21. DOI: 10.1007/s43681-023-00374-6
Since uncertainty is a major challenge in medicine and bears the risk of causing incorrect diagnoses and harmful treatment, there are many efforts to tackle it. For some time, AI technologies have been increasingly implemented in medicine and used to reduce medical uncertainties. What initially seems desirable, however, poses challenges. We use a multimethod approach that combines philosophical inquiry, conceptual analysis, and ethical considerations to identify key challenges that arise when AI is used for medical certainty purposes. We identify several challenges. Where AI is used to reduce medical uncertainties, it is likely to result in (a) patients being stripped down to their measurable data points and disambiguated. Additionally, the widespread use of AI technologies in health care bears the risk of (b) human physicians being pushed out of the medical decision-making process and patient participation becoming more and more limited. Further, the successful use of AI requires extensive and invasive monitoring of patients, which raises (c) questions about surveillance as well as privacy and security issues. We outline these challenges and show that they are immediate consequences of AI-driven efforts to produce certainty. If not addressed, they could entail unfavorable consequences. We contend that diminishing medical uncertainties through AI involves a tradeoff: the advantages, including enhanced precision, personalization, and overall improvement in medicine, are accompanied by several novel challenges. This paper addresses them and gives suggestions about how to use AI for certainty purposes without causing harm to patients.
{"title":"AI-produced certainties in health care: current and future challenges","authors":"Max Tretter, Tabea Ott, Peter Dabrock","doi":"10.1007/s43681-023-00374-6","DOIUrl":"10.1007/s43681-023-00374-6","url":null,"abstract":"<div><p>Since uncertainty is a major challenge in medicine and bears the risk of causing incorrect diagnoses and harmful treatment, there are many efforts to tackle it. For some time, AI technologies have been increasingly implemented in medicine and used to reduce medical uncertainties. What initially seems desirable, however, poses challenges. We use a multimethod approach that combines philosophical inquiry, conceptual analysis, and ethical considerations to identify key challenges that arise when AI is used for medical certainty purposes. We identify several challenges. Where AI is used to reduce medical uncertainties, it is likely to result in (a) patients being stripped down to their measurable data points, and being made disambiguous. Additionally, the widespread use of AI technologies in health care bears the risk of (b) human physicians being pushed out of the medical decision-making process, and patient participation being more and more limited. Further, the successful use of AI requires extensive and invasive monitoring of patients, which raises (c) questions about surveillance as well as privacy and security issues. We outline these several challenges and show that they are immediate consequences of AI-driven security efforts. If not addressed, they could entail unfavorable consequences. We contend that diminishing medical uncertainties through AI involves a tradeoff. The advantages, including enhanced precision, personalization, and overall improvement in medicine, are accompanied by several novel challenges. This paper addresses them and gives suggestions about how to use AI for certainty purposes without causing harm to patients.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"497 - 506"},"PeriodicalIF":0.0,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00374-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139251981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Socialisation approach to AI value acquisition: enabling flexible ethical navigation with built-in receptiveness to social influence
Joel Janhonen
AI and Ethics 5(1): 527–553. Pub Date: 2023-11-21. DOI: 10.1007/s43681-023-00372-8
This article describes an alternative starting point for embedding human values into artificial intelligence (AI) systems. As applications of AI become more versatile and entwined with society, an ever-wider spectrum of considerations must be incorporated into their decision-making. However, formulating less-tangible human values into mathematical algorithms appears incredibly challenging. This difficulty is understandable from a viewpoint that perceives human moral decisions to primarily stem from intuition and emotional dispositions, rather than logic or reason. Our innate normative judgements promote prosocial behaviours which enable collaboration within a shared environment. Individuals internalise the values and norms of their social context through socialisation. The complexity of the social environment makes it impractical to consistently apply logic to pick the best available action. This has compelled natural agents to develop mental shortcuts and rely on the collective moral wisdom of the social group. This work argues that the acquisition of human values cannot happen just through rational thinking, and hence, alternative approaches should be explored. Designing receptiveness to social signalling can provide context-flexible normative guidance in vastly different life tasks. This approach would approximate the human trajectory for value learning, which requires social ability. Artificial agents that imitate socialisation would prioritise conformity by minimising detected or expected disapproval while associating relative importance with acquired concepts. Sensitivity to direct social feedback would especially be useful for AI that possesses some embodied physical or virtual form. The work explores the necessary faculties for social norm enforcement and the ethical challenges of navigating based on the approval of others.
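As a rough illustration of the mechanism sketched in the abstract (a minimal sketch of my own, not code from the paper), an agent that imitates socialisation could rank candidate actions by their expected social disapproval and update those expectations from the feedback it receives; the class, action names, and feedback values below are hypothetical.

class SocialisedAgent:
    """Toy sketch: an agent that acquires behaviour by minimising expected social disapproval."""

    def __init__(self, actions, learning_rate=0.1):
        # No disapproval is expected for any action until feedback arrives.
        self.expected_disapproval = {a: 0.0 for a in actions}
        self.learning_rate = learning_rate

    def choose(self):
        # Prioritise conformity: pick the action with the lowest expected disapproval.
        return min(self.expected_disapproval, key=self.expected_disapproval.get)

    def receive_feedback(self, action, disapproval):
        # Move the estimate toward the observed signal (0 = approval, 1 = strong disapproval).
        current = self.expected_disapproval[action]
        self.expected_disapproval[action] = current + self.learning_rate * (disapproval - current)

# Hypothetical usage: the social environment sanctions one action, so the agent learns to avoid it.
agent = SocialisedAgent(actions=["interrupt_user", "wait_for_pause"])
for _ in range(20):
    action = agent.choose()
    feedback = 0.9 if action == "interrupt_user" else 0.1  # simulated social response
    agent.receive_feedback(action, feedback)
print(agent.choose())  # settles on "wait_for_pause"

The design choice mirrors the abstract's point: the agent never reasons about why an action is sanctioned, it simply steers toward whatever the social environment approves of, which is also where the ethical challenge of navigating by approval alone becomes visible.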
{"title":"Socialisation approach to AI value acquisition: enabling flexible ethical navigation with built-in receptiveness to social influence","authors":"Joel Janhonen","doi":"10.1007/s43681-023-00372-8","DOIUrl":"10.1007/s43681-023-00372-8","url":null,"abstract":"<div><p>This article describes an alternative starting point for embedding human values into artificial intelligence (AI) systems. As applications of AI become more versatile and entwined with society, an ever-wider spectrum of considerations must be incorporated into their decision-making. However, formulating less-tangible human values into mathematical algorithms appears incredibly challenging. This difficulty is understandable from a viewpoint that perceives human moral decisions to primarily stem from intuition and emotional dispositions, rather than logic or reason. Our innate normative judgements promote prosocial behaviours which enable collaboration within a shared environment. Individuals internalise the values and norms of their social context through socialisation. The complexity of the social environment makes it impractical to consistently apply logic to pick the best available action. This has compelled natural agents to develop mental shortcuts and rely on the collective moral wisdom of the social group. This work argues that the acquisition of human values cannot happen just through rational thinking, and hence, alternative approaches should be explored. Designing receptiveness to social signalling can provide context-flexible normative guidance in vastly different life tasks. This approach would approximate the human trajectory for value learning, which requires social ability. Artificial agents that imitate socialisation would prioritise conformity by minimising detected or expected disapproval while associating relative importance with acquired concepts. Sensitivity to direct social feedback would especially be useful for AI that possesses some embodied physical or virtual form. Work explores the necessary faculties for social norm enforcement and the ethical challenges of navigating based on the approval of others.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"527 - 553"},"PeriodicalIF":0.0,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00372-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139251958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using structured ethical techniques to facilitate reasoning in technology ethics
Matt A. Murphy
AI and Ethics 5(1): 479–488. Pub Date: 2023-11-20. DOI: 10.1007/s43681-023-00371-9
Despite many experts’ best intentions, technology ethics continues to embody a commonly used definition of insanity—by repeatedly trying to achieve ethical outcomes through the same methods that don’t work. One of the most intractable problems in technology ethics is how to translate ethical principles into actual practice. This challenge persists for many reasons including a gap between theoretical and technical language, a lack of enforceable mechanisms, misaligned incentives, and others that this paper will outline. With popular and often contentious fields like artificial intelligence (AI), a slew of technical and functional (used here to mean primarily “non-technical”) approaches are continually developed by diverse organizations to bridge the theoretical-practical divide. Technical approaches and coding interventions are useful for programmers and developers, but often lack contextually sensitive thinking that incorporates project teams or a wider group of stakeholders. Contrarily, functional approaches tend to be too conceptual and immaterial, lacking actionable steps for implementation into product development processes. Despite best efforts, many current approaches are therefore impractical or challenging to use in any meaningful way. After surveying a variety of different fields for current approaches to technology ethics, I propose a set of originally developed methods called Structured Ethical Techniques (SETs) that pull from best practices to build out a middle ground between functional and technical methods. SETs provide a way to add deliberative ethics to any technology’s development while acknowledging the business realities that often curb ethical deliberation, such as efficiency concerns, pressures to innovate, internal resource limitations, and more.
{"title":"Using structured ethical techniques to facilitate reasoning in technology ethics","authors":"Matt A. Murphy","doi":"10.1007/s43681-023-00371-9","DOIUrl":"10.1007/s43681-023-00371-9","url":null,"abstract":"<div><p>Despite many experts’ best intentions, technology ethics continues to embody a commonly used definition of insanity—by repeatedly trying to achieve ethical outcomes through the same methods that don’t work. One of the most intractable problems in technology ethics is how to translate ethical principles into actual practice. This challenge persists for many reasons including a gap between theoretical and technical language, a lack of enforceable mechanisms, misaligned incentives, and others that this paper will outline. With popular and often contentious fields like artificial intelligence (AI), a slew of technical and functional (used here to mean primarily “non-technical”) approaches are continually developed by diverse organizations to bridge the theoretical-practical divide. Technical approaches and coding interventions are useful for programmers and developers, but often lack contextually sensitive thinking that incorporates project teams or a wider group of stakeholders. Contrarily, functional approaches tend to be too conceptual and immaterial, lacking actionable steps for implementation into product development processes. Despite best efforts, many current approaches are therefore impractical or challenging to use in any meaningful way. After surveying a variety of different fields for current approaches to technology ethics, I propose a set of originally developed methods called Structured Ethical Techniques (SETs) that pull from best practices to build out a middle ground between functional and technical methods. SETs provide a way to add deliberative ethics to any technology’s development while acknowledging the business realities that often curb ethical deliberation, such as efficiency concerns, pressures to innovate, internal resource limitations, and more.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"479 - 488"},"PeriodicalIF":0.0,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}