The use of social robotics in care for older persons is increasingly discussed as one way of meeting emerging care needs in the face of scarce resources. While many potential benefits are associated with robotic care technologies, they also raise a variety of ethical challenges. To support responsible implementation and use, this review develops an overview of ethical aspects of the use of social robots in care for older people from a decision-makers' perspective. Electronic databases were queried using a comprehensive search strategy based on the key concepts of "ethical aspects", "social robotics" and "elderly care". Title and abstract screening was conducted by two authors independently. Full-text screening was conducted by one author following a joint consolidation phase. Data were extracted by one author using MAXQDA 24, based on a consolidated coding framework. Analysis was performed through modified qualitative content analysis. A total of 1,518 publications were screened, and 248 were included. We organized our analysis into a scheme of ethical hazards, ethical opportunities and unsettled questions, identifying at least 60 broad ethical aspects affecting three different stakeholder groups. While some ethical issues are well known and broadly discussed, our analysis reveals a plethora of potentially relevant aspects, often only marginally recognized, that merit consideration from a practical perspective. The findings highlight the need for a contextual and detailed evaluation of implementation scenarios. To make use of the vast knowledge contained in the ethical discourse, we hypothesize that decision-makers need to understand the specific nature of this discourse in order to engage in careful ethical deliberation.
This letter responds to the article “Xenotransplantation: Injustice, Harm, and Alternatives for Addressing the Organ Crisis,” by Jasmine Gunkel and Franklin G. Miller in the September-October 2025 issue of the Hastings Center Report.
Since the early days of the explainable artificial intelligence movement, post hoc explanations have been praised for their potential to improve user understanding, promote trust, and reduce patient-safety risks in black box medical AI systems. Recently, however, critics have argued that the benefits of post hoc explanations are greatly exaggerated since they merely approximate, rather than replicate, the actual reasoning processes that black box systems take to arrive at their outputs. In this paper, we aim to defend the value of post hoc explanations against this recent critique. We argue that even if post hoc explanations do not replicate the exact reasoning processes of black box systems, they can still improve users’ functional understanding of black box systems, increase the accuracy of clinician-AI teams, and assist clinicians in justifying their AI-informed decisions. While post hoc explanations are not a silver-bullet solution to the black box problem in medical AI, they remain a useful strategy for addressing it.
Being a moral agent was once thought to be an irreplaceable, uniquely human role for nurses and other health care professionals who care for patients and their families during illness and hospitalization. Today, however, artificial intelligence systems are often referred to as “artificial moral agents,” “agentic,” and “autonomous agents.” As these systems begin to function in various capacities within health care organizations and to perform specialized duties, the question arises whether the next step will be to replace nurses and other health care professionals as moral agents. Focusing primarily on nurses, this essay explores the concept of moral agency, asking whether it remains exclusive to humans or can be conferred on AI systems. We argue that AI systems should not supplant nurses’ moral agency, as patients come to hospitals or any other health care setting to be heard, seen, and valued by skilled professionals, not to seek care from machines.
Artificial intelligence is reshaping clinical decision-making in ways that challenge assumptions about patient-centered care, moral responsibility, and professional judgment. Encoding Bioethics: AI in Clinical Decision-Making, by Charles Binkley and Tyler Loftus, begins where ethical reflection on this topic should begin—in the trenches of clinical care. Together with the National Academy of Medicine's publication An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action, which came out after the book, Encoding Bioethics goes a long way toward offering physicians, patients, developers, and health-system leaders actionable guidance. Through explanation, probing questions, and case studies, Binkley and Loftus illuminate the ethical difficulties posed by opacity, bias, and shifting clinical roles. Yet their analysis stops short of identifying the governance tools and operational structures that are essential for achieving patient-centered, morally responsible AI that strengthens clinical judgment. This review essay argues that bridging ethics and practice requires attention to psychological safety, organizational dynamics, and implementation science to ensure that AI supports—not supplants—ethical care.