This letter responds to the article “Xenotransplantation: Injustice, Harm, and Alternatives for Addressing the Organ Crisis,” by Jasmine Gunkel and Franklin G. Miller in the September-October 2025 issue of the Hastings Center Report.
Since the early days of the explainable artificial intelligence movement, post hoc explanations have been praised for their potential to improve user understanding, promote trust, and reduce patient-safety risks in black box medical AI systems. Recently, however, critics have argued that the benefits of post hoc explanations are greatly exaggerated because they merely approximate, rather than replicate, the reasoning processes by which black box systems arrive at their outputs. In this paper, we defend the value of post hoc explanations against this critique. We argue that even if post hoc explanations do not replicate the exact reasoning processes of black box systems, they can still improve users’ functional understanding of those systems, increase the accuracy of clinician-AI teams, and help clinicians justify their AI-informed decisions. While post hoc explanations are not a silver-bullet solution to the black box problem in medical AI, they remain a useful strategy for addressing it.
Being a moral agent was once thought to be an irreplaceable, uniquely human role for nurses and other health care professionals who care for patients and their families during illness and hospitalization. Today, however, artificial intelligence systems are often referred to as “artificial moral agents,” “agentic,” and “autonomous agents.” As these systems begin to function in various capacities within health care organizations and to perform specialized duties, the question arises whether the next step will be to replace nurses and other health care professionals as moral agents. Focusing primarily on nurses, this essay explores the concept of moral agency, asking whether it remains exclusive to humans or can be conferred on AI systems. We argue that AI systems should not supplant nurses’ moral agency, as patients come to hospitals and other health care settings to be heard, seen, and valued by skilled professionals, not to seek care from machines.
Artificial intelligence is reshaping clinical decision-making in ways that challenge assumptions about patient-centered care, moral responsibility, and professional judgment. Encoding Bioethics: AI in Clinical Decision-Making, by Charles Binkley and Tyler Loftus, begins where ethical reflection on this topic should begin—in the trenches of clinical care. Together with the National Academy of Medicine's publication An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action, which came out after the book, Encoding Bioethics goes a long way toward offering physicians, patients, developers, and health-system leaders actionable guidance. Through explanation, probing questions, and case studies, Binkley and Loftus illuminate the ethical difficulties posed by opacity, bias, and shifting clinical roles. Yet their analysis stops short of identifying the governance tools and operational structures that are essential for achieving patient-centered, morally responsible AI that strengthens clinical judgment. This review essay argues that bridging ethics and practice requires attention to psychological safety, organizational dynamics, and implementation science to ensure that AI supports—not supplants—ethical care.
Whether children have a right to know that they were created via “donated” gametes has generated debate for a quarter of a century. Pro-transparency theorists draw on policies and attitudes concerning adoption to argue for changes in regulations governing “donor” gametes. Anti-transparency theorists counter that discussions about whether children have a right to know their genetic origins must consider natural reproduction, not just adoption; if we reason by analogy to natural reproduction instead, they argue, the problems with requiring transparency become apparent. I argue that adoption is the more appropriate analogy for this debate and that it supports a strong right to know. I end with further reasons why we can apply stronger regulation to the use of “donor” gametes than to natural reproduction.
Advance directives have historically relied on human agents. But what happens when a patient appoints an artificial intelligence system as an agent? This essay introduces the idea of roboagents—chatbots authorized to make medical decisions when individuals lose capacity. After describing potential models, including a personal AI companion and a chatbot that has not been trained on a patient's values and preferences, the essay explores the ethical tensions roboagents generate regarding autonomy, bias, consent, family trust, and physician well-being. It concludes by calling for legal clarity and ethical guidance on the status of roboagents in light of their potential as alternative health care agents.
On the cover: Polish Chess Players, by Paul Powis, acrylic on card © Paul Powis. All rights reserved 2026 / Bridgeman Images
“Shield laws” declare that, for purposes of reproductive health care, the law of the jurisdiction in which the clinician practices governs when state laws conflict. In 2024, approximately 100,000 pregnant people living in states that criminalize abortion provision received pills for a medication abortion from a clinician practicing in one of the eight states with these laws. One of these clinicians is New York's Margaret Carpenter, who was criminally charged in Louisiana and fined and enjoined in Texas. Carpenter's case testing shield laws, which is likely to reach the U.S. Supreme Court, should be framed as a “right to travel” case because telemedicine should be understood as a modern form of travel. If the Supreme Court ultimately accepts Louisiana and Texas's likely argument that it is a narrow “state regulation of medicine” case, the Court will be limiting the constitutional right to travel to people who have the money and time to physically travel for medical care and withholding it from people who need the same care but can afford to access it only through virtual travel.

