The fundamental rights risks of countering cognitive warfare with artificial intelligence
Pub Date: 2025-01-01 | Epub Date: 2025-10-06 | DOI: 10.1007/s10676-025-09868-9
Henning Lahmann, Bart Custers, Benjamyn I Scott
This article analyses ideas to use AI-supported systems to counter 'cognitive warfare' and critically examines the implications of such systems for fundamental rights and values. After explicating the notion of 'cognitive warfare' as used in contemporary public security discourse, the article describes the emergence of generative AI tools that are expected to exacerbate the problem of adversarial activities against the online information ecosystems of democratic societies. In response, researchers and policymakers have proposed to utilize AI to devise countermeasures, ranging from AI-based early warning systems to state-run content moderation tools. These interventions, however, interfere, to different degrees, with fundamental rights and values such as privacy, communication rights, and self-determination. This article argues that such proposals insufficiently account for the complexity of contemporary online information ecosystems, particularly the inherent difficulty in establishing causality and attribution. Reliance on the precautionary principle might offer a justificatory frame for AI-enabled measures to counter 'cognitive warfare' in the absence of conclusive empirical evidence of harm. However, any such state intervention must be based in law and adhere to strict proportionality.
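The AI-based early warning systems mentioned in this abstract are discussed at the level of policy rather than implementation. Purely as a hypothetical illustration of what the detection layer of such a system might involve, the sketch below flags bursts of near-duplicate posts spread across many accounts; the data format, similarity check, and thresholds are assumptions made for this example and are not drawn from the article.

```python
from difflib import SequenceMatcher

def near_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Crude text-similarity check, standing in for an ML similarity model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def flag_coordinated_bursts(posts, window_seconds=600, min_accounts=5):
    """Flag clusters of near-duplicate posts published within a short window.

    `posts` is a list of dicts with 'account', 'text', and 'timestamp' (epoch seconds).
    Clusters spanning at least `min_accounts` distinct accounts are returned as a
    rough early-warning signal of coordinated activity.
    """
    posts = sorted(posts, key=lambda p: p["timestamp"])
    clusters = []
    for post in posts:
        placed = False
        for cluster in clusters:
            anchor = cluster[0]
            if (post["timestamp"] - anchor["timestamp"] <= window_seconds
                    and near_duplicate(post["text"], anchor["text"])):
                cluster.append(post)
                placed = True
                break
        if not placed:
            clusters.append([post])
    return [c for c in clusters if len({p["account"] for p in c}) >= min_accounts]
```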
{"title":"The fundamental rights risks of countering cognitive warfare with artificial intelligence.","authors":"Henning Lahmann, Bart Custers, Benjamyn I Scott","doi":"10.1007/s10676-025-09868-9","DOIUrl":"10.1007/s10676-025-09868-9","url":null,"abstract":"<p><p>This article analyses ideas to use AI-supported systems to counter 'cognitive warfare' and critically examines the implications of such systems for fundamental rights and values. After explicating the notion of 'cognitive warfare' as used in contemporary public security discourse, the article describes the emergence of generative AI tools that are expected to exacerbate the problem of adversarial activities against the online information ecosystems of democratic societies. In response, researchers and policymakers have proposed to utilize AI to devise countermeasures, ranging from AI-based early warning systems to state-run content moderation tools. These interventions, however, interfere, to different degrees, with fundamental rights and values such as privacy, communication rights, and self-determination. This article argues that such proposals insufficiently account for the complexity of contemporary online information ecosystems, particularly the inherent difficulty in establishing causality and attribution. Reliance on the precautionary principle might offer a justificatory frame for AI-enabled measures to counter 'cognitive warfare' in the absence of conclusive empirical evidence of harm. However, any such state intervention must be based in law and adhere to strict proportionality.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"27 4","pages":"49"},"PeriodicalIF":4.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12500826/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145253706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Establishing human responsibility and accountability at early stages of the lifecycle for AI-based defence systems
Pub Date: 2025-01-01 | Epub Date: 2025-10-06 | DOI: 10.1007/s10676-025-09862-1
Ariel Conn, Ingvild Bode
The use of AI technologies in weapons systems has triggered a decade-long international debate, especially with regard to human control, responsibility, and accountability around autonomous and intelligent systems (AIS) in defence. However, most of these ethical and legal discussions have revolved primarily around the point of use of a hypothetical AIS, and as a result one critical component remains under-appreciated: human decision-making across the full timeline of the AIS lifecycle. When discussions around human involvement start at the point at which a hypothetical AIS has taken some undesirable action, they typically prompt the question: "what happens next?" This approach primarily concerns the technology at the time of use and may be appropriate for conventional weapons systems, for which humans have clear lines of control and therefore accountability at the time of use. However, this is not precisely the case for AIS. Rather than focusing first on the system in its comparatively most autonomous state, it is more helpful to consider when, along the lifecycle, humans have clearer, more direct control over the system (e.g. through research, design, testing, or procurement) and how, at those earlier times, human decision-makers can take steps to decrease the likelihood that an AIS will perform 'inappropriately' or take incorrect actions. In this paper, we therefore argue that addressing many of the concerns that arise requires a shift in how and when participants in the international debate on AI in the military domain think about, talk about, and plan for human involvement across the full lifecycle of AIS in defence. This shift includes a willingness to hold human decision-makers accountable, even if their roles occurred at much earlier stages of the lifecycle. Of course, this raises another question: "How?" We close by formulating a number of recommendations, including the adoption of the IEEE-SA Lifecycle Framework, the consideration of policy knots, and the adoption of Human Readiness Levels.
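To make the lifecycle perspective concrete, the following is a minimal sketch of a record-keeping structure that ties named human decision-makers to decisions taken at earlier lifecycle stages. The stage names, the assumed 1-9 Human Readiness Level scale, and all field names are illustrative; this is not the IEEE-SA Lifecycle Framework or any scheme proposed in the paper.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

# Lifecycle stages named in the abstract; the enumeration is illustrative only.
STAGES = ("research", "design", "testing", "procurement", "deployment", "use")

@dataclass
class LifecycleDecision:
    stage: str                  # one of STAGES
    decision: str               # e.g. "approved sensor-fusion model v2 for field testing"
    responsible_person: str     # named human decision-maker
    human_readiness_level: int  # assumed 1-9 scale, analogous to Technology Readiness Levels
    recorded_on: date

@dataclass
class AISystemRecord:
    system_name: str
    decisions: list[LifecycleDecision] = field(default_factory=list)

    def log(self, d: LifecycleDecision) -> None:
        # Reject unknown stages so every decision maps onto the lifecycle model.
        if d.stage not in STAGES:
            raise ValueError(f"unknown lifecycle stage: {d.stage}")
        self.decisions.append(d)

    def accountable_for(self, stage: str) -> list[str]:
        """Return the named decision-makers recorded for a given lifecycle stage."""
        return [d.responsible_person for d in self.decisions if d.stage == stage]
```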
{"title":"Establishing human responsibility and accountability at early stages of the lifecycle for AI-based defence systems.","authors":"Ariel Conn, Ingvild Bode","doi":"10.1007/s10676-025-09862-1","DOIUrl":"10.1007/s10676-025-09862-1","url":null,"abstract":"<p><p>The use of AI technologies in weapons systems has triggered a decade-long international debate, especially with regard to human control, responsibility, and accountability around autonomous and intelligent systems (AIS) in defence. However, most of these ethical and legal discussions have revolved primarily around the point of use of a hypothetical AIS, and in doing so, one critical component still remains under-appreciated: human decision-making across the full timeline of the AIS lifecycle. When discussions around human involvement start at the point at which a hypothetical AIS has taken some undesirable action, they typically prompt the question: \"what happens next?\" This approach primarily concerns the technology at the time of use and may be appropriate for conventional weapons systems, for which humans have clear lines of control and therefore accountability at the time of use. However, this is not precisely the case for AIS. Rather than focusing first on the system in its comparatively most autonomous state, it is more helpful to consider when, along the lifecycle, humans have more clear, direct control over the system (e.g. through research, design, testing, or procurement) and how, at those earlier times, human decision-makers can take steps to decrease the likelihood that an AIS will perform 'inappropriately' or take incorrect actions. In this paper, we therefore argue that addressing many arising concerns requires a shift in how and when participants of the international debate on AI in the military domain think about, talk about, and plan for human involvement across the full lifecycle of AIS in defence. This shift includes a willingness to hold human decision-makers accountable, even if their roles occurred at much earlier stages of the lifecycle. Of course, this raises another question: \"How?\" We close by formulating a number of recommendations, including the adoption of the IEEE-SA Lifecycle Framework, the consideration of policy knots, and the adoption of Human Readiness Levels.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"27 4","pages":"51"},"PeriodicalIF":4.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12500784/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145253729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reasons underdetermination in meaningful human control
Pub Date: 2025-01-01 | Epub Date: 2025-10-15 | DOI: 10.1007/s10676-025-09858-x
Atay Kozlovski
The rapid proliferation of AI systems has raised many concerns about safety and responsibility in their design and use. The philosophical framework of Meaningful Human Control (MHC) was developed in response to these concerns and seeks to provide a standard for designing and evaluating such systems. While promising, the framework still requires further theoretical and practical refinement. This paper contributes to that effort by drawing on research in axiology and rational decision theory to identify a critical gap in the framework. Specifically, it argues that while 'reasons' play a central role in MHC, there has been little discussion of the possibility that, when weighed against each other, reasons may not always point to a single, rationally preferable course of action. I refer to these cases as instances of reasons underdetermination, and this paper discusses the need to address this issue within the MHC framework. The paper begins by providing an overview of the key concepts of the MHC framework and then examines the role of 'reasons' in the framework's two main conditions, Tracking and Tracing. It then discusses the phenomenon of reasons underdetermination and shows how it poses a challenge for the achievement of both Tracking and Tracing.
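A toy numerical example, invented for this listing rather than taken from the paper, can make the notion of reasons underdetermination concrete: when the weighed reasons for two options sum to the same value, aggregation alone does not identify a single rationally preferable action.

```python
# Each candidate action is supported by weighted reasons; the weights and the
# scenario are invented purely for illustration.
reasons_for = {
    "brake_hard": {"avoids collision with vehicle ahead": 0.6,
                   "risks rear-end collision": -0.2},
    "swerve":     {"avoids collision with vehicle ahead": 0.6,
                   "risks leaving the lane": -0.2},
}

def aggregate(reason_weights: dict[str, float]) -> float:
    return sum(reason_weights.values())

scores = {action: aggregate(rw) for action, rw in reasons_for.items()}
best = max(scores.values())
preferable = [a for a, s in scores.items() if s == best]

if len(preferable) > 1:
    # Reasons underdetermination: the weighed reasons do not single out one action,
    # which is the situation the paper argues the MHC framework must accommodate.
    print(f"No uniquely preferable action; tied options: {preferable}")
```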
{"title":"Reasons underdetermination in meaningful human control.","authors":"Atay Kozlovski","doi":"10.1007/s10676-025-09858-x","DOIUrl":"10.1007/s10676-025-09858-x","url":null,"abstract":"<p><p>The rapid proliferation of AI systems has raised many concerns about safety and responsibility in their design and use. The philosophical framework of Meaningful Human Control (MHC) was developed in response to these concerns, and tries to provide a standard for designing and evaluating such systems. While promising, the framework still requires further theoretical and practical refinement. This paper contributes to that effort by drawing on research in axiology and rational decision theory to identify a critical gap in the framework. Specifically, it argues that while 'reasons' play a central role in MHC, there has been little discussion of the possibility that, when weighed against each other, reasons may not always point to a single, rationally preferable course of action. I refer to these cases as instances of reasons underdetermination, and this paper discusses the need to address this issue within the MHC framework. The paper begins by providing an overview of the key concepts of the MHC framework and then examines the role of 'reasons' in the framework's two main conditions - Tracking and Tracing. It then discusses the phenomenon of reasons underdetermination and shows how it poses a challenge for the achievement of both Tracking and Tracing.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"27 4","pages":"59"},"PeriodicalIF":4.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12528354/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145330726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personalised care, youth mental health, and digital technology: A value sensitive design perspective and framework
Pub Date: 2025-01-01 | Epub Date: 2025-10-22 | DOI: 10.1007/s10676-025-09866-x
Adam Poulsen, Ian B Hickie, Min K Chong, Haley M LaMonica, Ashlee Turner, Frank Iorfino
Digital health is typically driven, in part, by the principle of personalised care. However, the underlying values and associated ethical design considerations at the intersection of personalised care, youth mental health, and digital technology are underexplored. Through a value sensitive design lens, this work aims to contribute a prototype conceptual framework for the ethical design and evaluation of personalised youth digital mental health technology, which comprises three values (personalisation, empowerment, and autonomy) and 15 design norms as fundamental yet non-exhaustive ethical criteria. Furthermore, it provides illustrative applications of the framework by applying it to (1) the proactive design of two exemplary digital mental health technologies to draw out emerging ethical considerations and (2) the retrospective evaluation of three existing technologies to assess whether they are designed to support personalisation, empowerment, and autonomy. This work develops an understanding of personalised care and related values in this socio-technical context and offers key design recommendations for youth digital mental health research, practice, and associated policy.
Supplementary information: The online version contains supplementary material available at 10.1007/s10676-025-09866-x.
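As a rough illustration of how a values-and-norms framework of this kind could be operationalised as an evaluation checklist, the sketch below scores a technology against the three values named in the abstract. The example norms, the scoring scheme, and the technology name are hypothetical placeholders; the paper's 15 design norms are not reproduced here.

```python
# The three values come from the abstract; the norms listed under each value are
# invented placeholders for illustration, not the paper's design norms.
framework = {
    "personalisation": ["content adapts to the young person's stated goals",
                        "care recommendations reflect individual history"],
    "empowerment":     ["the young person can act on their own data",
                        "progress is visible and understandable to the user"],
    "autonomy":        ["consent can be given and withdrawn at any time",
                        "the user controls who sees their information"],
}

def evaluate(technology_name: str, satisfied_norms: set[str]) -> dict[str, float]:
    """Return, per value, the fraction of example norms the technology satisfies."""
    report = {}
    for value, norms in framework.items():
        met = sum(1 for n in norms if n in satisfied_norms)
        report[value] = met / len(norms)
    return report

# Hypothetical usage: a retrospective check of an imagined app.
print(evaluate("ExampleMoodApp", {
    "content adapts to the young person's stated goals",
    "consent can be given and withdrawn at any time",
}))
```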
{"title":"Personalised care, youth mental health, and digital technology: A value sensitive design perspective and framework.","authors":"Adam Poulsen, Ian B Hickie, Min K Chong, Haley M LaMonica, Ashlee Turner, Frank Iorfino","doi":"10.1007/s10676-025-09866-x","DOIUrl":"10.1007/s10676-025-09866-x","url":null,"abstract":"<p><p>Digital health is typically driven, in part, by the principle of personalised care. However, the underlying values and associated ethical design considerations at the intersection of personalised care, youth mental health, and digital technology are underexplored. Through a value sensitive design lens, this work aims to contribute a prototype conceptual framework for the ethical design and evaluation of personalised youth digital mental health technology, which comprises three values-personalisation, empowerment, and autonomy-and 15 design norms as fundamental yet non-exhaustive ethical criteria. Furthermore, it provides illustrative applications of the framework by applying it to (1) the proactive design of two exemplary digital mental health technologies to draw out emerging ethical considerations and (2) the retrospective evaluation of three existing technologies to assess whether they are designed to support personalisation, empowerment, and autonomy. This work creates an understanding of personalised care and related values in this socio-technical context, with key design recommendations going forward for youth digital mental health research, practice, and associated policy.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s10676-025-09866-x.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"27 4","pages":"61"},"PeriodicalIF":4.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12546520/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145373334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Urban Digital Twins and metaverses towards city multiplicities: uniting or dividing urban experiences?
Pub Date: 2025-01-01 | Epub Date: 2024-11-23 | DOI: 10.1007/s10676-024-09812-3
Javier Argota Sánchez-Vaquerizo
Urban Digital Twins (UDTs) have become the new buzzword for researchers, planners, policymakers, and industry experts when it comes to designing, planning, and managing sustainable and efficient cities. The concept encapsulates the latest iteration of the technocratic, ultra-efficient, post-modernist vision of smart cities. However, while more applications branded as UDTs appear around the world, the conceptualization of UDTs remains ambiguous. Rather than being technically prescriptive about what UDTs are, this article focuses on how they are operationalized and how people in cities interact with them, and on how, enhanced by metaverse ideas, they can deepen societal divides by offering divergent urban experiences based on different stakeholder preferences. The article first repositions the term UDT by comparing existing, concretely situated applications that focus on interaction and participation, including some that may be closer to the concept of a UDT than is commonly assumed. Based on the components found separately across the studied cases, it then hypothesizes about possible future, more advanced realizations of UDTs, which makes it possible to contrast their positive and negative societal impacts. While new immersive, interactive digital worlds can improve planning by drawing on collective knowledge for more inclusive and diverse cities, they pose significant risks: not only the familiar ones regarding privacy, transparency, and fairness, but also social fragmentation based on urban digital multiplicities. The potential benefits and challenges of integrating this multiplicity of UDTs into participatory urban governance underscore the need for human-centric approaches and socio-technical frameworks able to mitigate risks such as social division.
{"title":"Urban Digital Twins and metaverses towards city multiplicities: uniting or dividing urban experiences?","authors":"Javier Argota Sánchez-Vaquerizo","doi":"10.1007/s10676-024-09812-3","DOIUrl":"10.1007/s10676-024-09812-3","url":null,"abstract":"<p><p>Urban Digital Twins (UDTs) have become the new buzzword for researchers, planners, policymakers, and industry experts when it comes to designing, planning, and managing sustainable and efficient cities. It encapsulates the last iteration of the technocratic and ultra-efficient, post-modernist vision of smart cities. However, while more applications branded as UDTs appear around the world, its conceptualization remains ambiguous. Beyond being technically prescriptive about what UDTs are, this article focuses on their aspects of interaction and operationalization in connection to people in cities, and how enhanced by metaverse ideas they can deepen societal divides by offering divergent urban experiences based on different stakeholder preferences. Therefore, firstly this article repositions the term UDTs by comparing existing concrete and located applications that have a focus on interaction and participation, including some that may be closer to the concept of UDT than is commonly assumed. Based on the components found separately in the different studied cases, it is possible to hypothesize about possible future, more advanced realizations of UDTs. This enables us to contrast their positive and negative societal impacts. While the development of new immersive interactive digital worlds can improve planning using collective knowledge for more inclusive and diverse cities, they pose significant risks not only the common ones regarding privacy, transparency, or fairness, but also social fragmentation based on urban digital multiplicities. The potential benefits and challenges of integrating this multiplicity of UDTs into participatory urban governance emphasize the need for human-centric approaches to promote socio-technical frameworks able to mitigate risks as social division.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"27 1","pages":"4"},"PeriodicalIF":3.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11584446/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142710416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Helpful, harmless, honest? Sociotechnical limits of AI alignment and safety through Reinforcement Learning from Human Feedback
Pub Date: 2025-01-01 | Epub Date: 2025-06-04 | DOI: 10.1007/s10676-025-09837-2
Adam Dahlgren Lindström, Leila Methnani, Lea Krause, Petter Ericson, Íñigo Martínez de Rituerto de Troya, Dimitri Coelho Mollo, Roel Dobbe
This paper critically evaluates attempts to align Artificial Intelligence (AI) systems, especially Large Language Models (LLMs), with human values and intentions through Reinforcement Learning from Feedback methods, involving either human feedback (RLHF) or AI feedback (RLAIF). Specifically, we show the shortcomings of the broadly pursued alignment goals of honesty, harmlessness, and helpfulness. Through a multidisciplinary sociotechnical critique, we examine both the theoretical underpinnings and practical implementations of RLHF techniques, revealing significant limitations in their approach to capturing the complexities of human ethics and contributing to AI safety. We highlight tensions inherent in the goals of RLHF, as captured in the HHH principle (helpful, harmless, and honest). In addition, we discuss ethically relevant issues that tend to be neglected in discussions about alignment and RLHF, among them the trade-offs between user-friendliness and deception and between flexibility and interpretability, as well as system safety. We offer an alternative vision for AI safety and ethics which positions RLHF approaches within a broader context of comprehensive design across institutions, processes and technological systems, and suggest the establishment of AI safety as a sociotechnical discipline that is open to the normative and political dimensions of artificial intelligence.
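For readers unfamiliar with the mechanics being critiqued, the snippet below sketches the pairwise (Bradley-Terry style) preference loss commonly used to train the reward model at the heart of RLHF: the learned reward of the human-preferred response is pushed above that of the rejected one. The linear reward model and random feature vectors are stand-ins for illustration only.

```python
import numpy as np

def reward(features: np.ndarray, w: np.ndarray) -> float:
    # Toy linear reward model standing in for a learned neural reward model.
    return float(features @ w)

def preference_loss(w, chosen_feats, rejected_feats):
    """-log sigmoid(r_chosen - r_rejected), averaged over preference pairs."""
    margins = np.array([reward(c, w) - reward(r, w)
                        for c, r in zip(chosen_feats, rejected_feats)])
    # log1p(exp(-m)) equals -log sigmoid(m)
    return float(np.mean(np.log1p(np.exp(-margins))))

rng = np.random.default_rng(0)
w = rng.normal(size=8)
chosen = [rng.normal(size=8) for _ in range(4)]    # features of preferred responses
rejected = [rng.normal(size=8) for _ in range(4)]  # features of rejected responses
print(preference_loss(w, chosen, rejected))
```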
{"title":"Helpful, harmless, honest? Sociotechnical limits of AI alignment and safety through Reinforcement Learning from Human Feedback.","authors":"Adam Dahlgren Lindström, Leila Methnani, Lea Krause, Petter Ericson, Íñigo Martínez de Rituerto de Troya, Dimitri Coelho Mollo, Roel Dobbe","doi":"10.1007/s10676-025-09837-2","DOIUrl":"10.1007/s10676-025-09837-2","url":null,"abstract":"<p><p>This paper critically evaluates the attempts to align Artificial Intelligence (AI) systems, especially Large Language Models (LLMs), with human values and intentions through Reinforcement Learning from Feedback methods, involving either human feedback (RLHF) or AI feedback (RLAIF). Specifically, we show the shortcomings of the broadly pursued alignment goals of honesty, harmlessness, and helpfulness. Through a multidisciplinary sociotechnical critique, we examine both the theoretical underpinnings and practical implementations of RLHF techniques, revealing significant limitations in their approach to capturing the complexities of human ethics, and contributing to AI safety. We highlight tensions inherent in the goals of RLHF, as captured in the HHH principle (helpful, harmless and honest). In addition, we discuss ethically-relevant issues that tend to be neglected in discussions about alignment and RLHF, among which the trade-offs between user-friendliness and deception, flexibility and interpretability, and system safety. We offer an alternative vision for AI safety and ethics which positions RLHF approaches within a broader context of comprehensive design across institutions, processes and technological systems, and suggest the establishment of AI safety as a sociotechnical discipline that is open to the normative and political dimensions of artificial intelligence.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"27 2","pages":"28"},"PeriodicalIF":3.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12137480/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A critique of current approaches to privacy in machine learning
Pub Date: 2025-01-01 | Epub Date: 2025-06-20 | DOI: 10.1007/s10676-025-09843-4
Florian van Daalen, Marine Jacquemin, Johan van Soest, Nina Stahl, David Townend, Andre Dekker, Inigo Bermejo
Access to large datasets, the rise of the Internet of Things (IoT) and the ease of collecting personal data have led to significant breakthroughs in machine learning. However, they have also raised new concerns about privacy and data protection. Controversies like the Facebook-Cambridge Analytica scandal highlight unethical practices in today's digital landscape. Historical privacy incidents have led to the development of technical and legal solutions to protect data subjects' right to privacy. However, within machine learning, these problems have largely been approached from a mathematical point of view, ignoring the larger context in which privacy is relevant. This technical approach has benefited data controllers and failed to protect individuals adequately. Moreover, it has aligned with the interests of Big Tech organizations and allowed them to push the discussion further in a direction favorable to those interests. This paper reflects on current privacy approaches in machine learning and explores how various big organizations guide the public discourse, and how this harms data subjects. It also critiques the current data protection regulations, as they allow superficial compliance without addressing deeper ethical issues. Finally, it argues that redefining privacy to focus on harm to data subjects rather than on data breaches would benefit data subjects as well as society at large.
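The 'mathematical point of view' criticised here is typified by formal privacy definitions such as differential privacy. The sketch below shows the standard Laplace mechanism applied to a counting query, as a minimal example of that framing; the data and parameters are invented, and the example is not taken from the paper.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Release true_value with Laplace noise calibrated to sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)
incomes = np.array([31_000, 54_000, 47_000, 120_000, 39_000])
true_count = float(np.sum(incomes > 100_000))  # counting query has sensitivity 1
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy_count)
# The formal guarantee bounds one record's influence on the output; it says nothing
# about whether the released statistic, or the system built on it, ultimately harms
# the data subjects, which is the gap the paper highlights.
```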
{"title":"A critique of current approaches to privacy in machine learning.","authors":"Florian van Daalen, Marine Jacquemin, Johan van Soest, Nina Stahl, David Townend, Andre Dekker, Inigo Bermejo","doi":"10.1007/s10676-025-09843-4","DOIUrl":"10.1007/s10676-025-09843-4","url":null,"abstract":"<p><p>Access to large datasets, the rise of the Internet of Things (IoT) and the ease of collecting personal data, have led to significant breakthroughs in machine learning. However, they have also raised new concerns about privacy data protection. Controversies like the Facebook-Cambridge Analytica scandal highlight unethical practices in today's digital landscape. Historical privacy incidents have led to the development of technical and legal solutions to protect data subjects' right to privacy. However, within machine learning, these problems have largely been approached from a mathematical point of view, ignoring the larger context in which privacy is relevant. This technical approach has benefited data-controllers and failed to protect individuals adequately. Moreover, it has aligned with Big Tech organizations' interests and allowed them to further push the discussion in a direction that is favorable to their interests. This paper reflects on current privacy approaches in machine learning and explores how various big organizations guide the public discourse, and how this harms data subjects. It also critiques the current data protection regulations, as they allow superficial compliance without addressing deeper ethical issues. Finally, it argues that redefining privacy to focus on harm to data subjects rather than on data breaches would benefit data subjects as well as society at large.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"27 3","pages":"32"},"PeriodicalIF":3.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12181200/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144477838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Engineers on responsibility: feminist approaches to who's responsible for ethical AI
Pub Date: 2024-01-02 | DOI: 10.1007/s10676-023-09739-1
Eleanor Drage, Kerry McInerney, Jude Browne
{"title":"Engineers on responsibility: feminist approaches to who’s responsible for ethical AI","authors":"Eleanor Drage, Kerry McInerney, Jude Browne","doi":"10.1007/s10676-023-09739-1","DOIUrl":"https://doi.org/10.1007/s10676-023-09739-1","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"109 21","pages":"1-13"},"PeriodicalIF":3.6,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139391278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and the need for justification (to the patient)
Pub Date: 2024-01-01 | Epub Date: 2024-03-04 | DOI: 10.1007/s10676-024-09754-w
Anantharaman Muralidharan, Julian Savulescu, G Owen Schaefer
This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient's values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient's values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.
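As a hypothetical illustration of the kind of explicit value modelling the Justifiable AI proposal points towards, the toy sketch below attaches the evaluative considerations each treatment option trades on and scores them against a patient's stated priorities, so the basis of a recommendation is visible rather than hidden in a black box. The options, attributes, and weights are invented and do not come from the paper.

```python
# Invented treatment options with the evaluative considerations they trade on.
options = {
    "surgery":      {"expected_benefit": 0.8, "recovery_burden": 0.7, "risk": 0.3},
    "chemotherapy": {"expected_benefit": 0.6, "recovery_burden": 0.5, "risk": 0.2},
    "monitoring":   {"expected_benefit": 0.3, "recovery_burden": 0.1, "risk": 0.1},
}

# Patient-stated priorities: positive weight = desirable, negative = to be avoided.
patient_values = {"expected_benefit": 1.0, "recovery_burden": -0.8, "risk": -0.6}

def value_fit(option_attrs: dict[str, float], values: dict[str, float]) -> float:
    return sum(values[k] * v for k, v in option_attrs.items())

for name, attrs in options.items():
    contributions = {k: round(patient_values[k] * v, 2) for k, v in attrs.items()}
    # The per-consideration breakdown is what lets the patient see why an option
    # scores as it does, rather than receiving an unexplained output.
    print(name, round(value_fit(attrs, patient_values), 2), contributions)
```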
{"title":"AI and the need for justification (to the patient).","authors":"Anantharaman Muralidharan, Julian Savulescu, G Owen Schaefer","doi":"10.1007/s10676-024-09754-w","DOIUrl":"10.1007/s10676-024-09754-w","url":null,"abstract":"<p><p>This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient's values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient's values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"26 1","pages":"16"},"PeriodicalIF":3.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10912120/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140051468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trustworthiness of voting advice applications in Europe
Pub Date: 2024-01-01 | Epub Date: 2024-08-12 | DOI: 10.1007/s10676-024-09790-6
Elisabeth Stockinger, Jonne Maas, Christofer Talvitie, Virginia Dignum
Voting Advice Applications (VAAs) are interactive tools used to assist in one's choice of a party or candidate to vote for in an upcoming election. They have the potential to increase citizens' trust and participation in democratic structures. However, there is no established ground truth for one's electoral choice, and VAA recommendations depend strongly on architectural and design choices. We assessed several representative European VAAs against the Ethics Guidelines for Trustworthy AI provided by the European Commission, using publicly available information. We found scores to be comparable across VAAs and low for most requirements, with differences reflecting the kind of developing institution. Across VAAs, we identify the need for improvement in (i) transparency regarding the subjectivity of recommendations, (ii) diversity of stakeholder participation, (iii) user-centric documentation of algorithms, and (iv) disclosure of the underlying values and assumptions.
Supplementary information: The online version contains supplementary material available at 10.1007/s10676-024-09790-6.
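To see why 'architectural and design choices' can drive VAA recommendations, the sketch below implements the generic matching step of a hypothetical VAA: with identical user answers and party positions, switching the distance metric from city-block to Euclidean changes which party comes out on top. The positions and answers are invented and do not correspond to any VAA assessed in the paper.

```python
import numpy as np

# Invented positions on four statements, answered on a 1-5 agree/disagree scale.
party_positions = {
    "Party A": np.array([4, 2, 1, 2]),
    "Party B": np.array([3, 1, 3, 5]),
    "Party C": np.array([1, 5, 4, 1]),
}
user_answers = np.array([4, 2, 1, 5])

def match(user, parties, metric="cityblock"):
    """Recommend the party with the smallest distance under the chosen metric."""
    scores = {}
    for name, pos in parties.items():
        if metric == "cityblock":
            d = float(np.abs(user - pos).sum())
        else:  # "euclidean"
            d = float(np.sqrt(((user - pos) ** 2).sum()))
        scores[name] = d
    return min(scores, key=scores.get), scores

# With these invented numbers, the city-block metric recommends Party A while the
# Euclidean metric recommends Party B: same answers, different recommendation.
print(match(user_answers, party_positions, metric="cityblock"))
print(match(user_answers, party_positions, metric="euclidean"))
```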
{"title":"Trustworthiness of voting advice applications in Europe.","authors":"Elisabeth Stockinger, Jonne Maas, Christofer Talvitie, Virginia Dignum","doi":"10.1007/s10676-024-09790-6","DOIUrl":"10.1007/s10676-024-09790-6","url":null,"abstract":"<p><p>Voting Advice Applications (VAAs) are interactive tools used to assist in one's choice of a party or candidate to vote for in an upcoming election. They have the potential to increase citizens' trust and participation in democratic structures. However, there is no established ground truth for one's electoral choice, and VAA recommendations depend strongly on architectural and design choices. We assessed several representative European VAAs according to the Ethics Guidelines for Trustworthy AI provided by the European Commission using publicly available information. We found scores to be comparable across VAAs and low in most requirements, with differences reflecting the kind of developing institution. Across VAAs, we identify the need for improvement in (i) transparency regarding the subjectivity of recommendations, (ii) diversity of stakeholder participation, (iii) user-centric documentation of algorithm, and (iv) disclosure of the underlying values and assumptions.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s10676-024-09790-6.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"26 3","pages":"55"},"PeriodicalIF":3.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11415416/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142300499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}