What Does Moral Agency Mean for Nurses in the Era of Artificial Intelligence?
Connie M. Ulrich, Oonjee Oh, Sang Bin You, Maxim Topaz, Zahra Rahemi, Liz Stokes, Lisiane Pruinelli, George Demiris, Patricia Flatley Brennan
Hastings Center Report 56(1): 18-23. Pub Date: 2026-02-04. DOI: 10.1002/hast.70030. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12872599/pdf/
Being a moral agent was once thought to be an irreplaceable, uniquely human role for nurses and other health care professionals who care for patients and their families during illness and hospitalization. Today, however, artificial intelligence systems are often referred to as “artificial moral agents,” “agentic,” and “autonomous agents.” As these systems begin to function in various capacities within health care organizations and to perform specialized duties, the question arises as to whether the next step will be to replace nurses and other health care professionals as moral agents. Focusing primarily on nurses, this essay explores the concept of moral agency, asking whether it remains exclusive to humans or can be conferred on AI systems. We argue that AI systems should not supplant nurses’ moral agency, as patients come to hospitals or any other health care setting to be heard, seen, and valued by skilled professionals, not to seek care from machines.
{"title":"What Does Moral Agency Mean for Nurses in the Era of Artificial Intelligence?","authors":"Connie M. Ulrich, Oonjee Oh, Sang Bin You, Maxim Topaz, Zahra Rahemi, Liz Stokes, Lisiane Pruinelli, George Demiris, Patricia Flatley Brennan","doi":"10.1002/hast.70030","DOIUrl":"10.1002/hast.70030","url":null,"abstract":"<p>Being a moral agent was once thought to be an irreplaceable, uniquely human role for nurses and other health care professionals who care for patients and their families during illness and hospitalization. Today, however, artificial intelligence systems are often referred to as “artificial moral agents,” “agentic,” and “autonomous agents.” As these systems begin to function in various capacities within health care organizations and to perform specialized duties, the question arises as to whether the next step will be to replace nurses and other health care professionals as moral agents. Focusing primarily on nurses, this essay explores the concept of moral agency, asking whether it remains exclusive to humans or can be conferred on AI systems. We argue that AI systems should not supplant nurses’ moral agency, as patients come to hospitals or any other health care setting to be heard, seen, and valued by skilled professionals, not to seek care from machines.</p>","PeriodicalId":55073,"journal":{"name":"Hastings Center Report","volume":"56 1","pages":"18-23"},"PeriodicalIF":2.3,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12872599/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146121201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implications for All Animal Research
Christopher Bobier, Daniel J. Hurst
Hastings Center Report 56(1). Pub Date: 2026-02-04. DOI: 10.1002/hast.70039
This letter responds to the article “Xenotransplantation: Injustice, Harm, and Alternatives for Addressing the Organ Crisis,” by Jasmine Gunkel and Franklin G. Miller in the September-October 2025 issue of the Hastings Center Report.
Closing the paper mines
Adrian Barnett, Jennifer Byrne
Accountability in Research: Policies and Quality Assurance. Pub Date: 2026-02-04. DOI: 10.1080/08989621.2026.2626740
Scientific fakery is a centuries-old problem. Twinned with the long history of hard-working scientists earning fame for genuine discoveries runs a tawdry history of those who were willing to fabricate results to falsely gain prestige. Fraud in the past relied on bespoke fakery, but today's fraudsters can exploit the online scientific world to quickly create realistic-looking papers on an industrial scale. Fraudsters are using open data sets to create meaningless analyses and combining these results with text from large language models. There has been an explosion of these low-value papers using openly available and highly regarded data sets, such as the US National Health and Nutrition Examination Survey (NHANES). The paper miners will likely exploit whatever open data resources they can find until data custodians put more stringent controls in place or journals and publishers push back. Some scientific data may be too open, even though making research data openly available is a recommended policy for increasing research integrity. Journals and researchers need to be aware of this new threat to research integrity.
{"title":"Closing the paper mines.","authors":"Adrian Barnett, Jennifer Byrne","doi":"10.1080/08989621.2026.2626740","DOIUrl":"https://doi.org/10.1080/08989621.2026.2626740","url":null,"abstract":"<p><p>Scientific fakery is a centuries old problem. Twinned with the long history of hard-working scientists earning fame for genuine discoveries, runs a tawdry history of those who were willing fabricate results to falsely gain prestige. Fraud in the past relied on bespoke fakery, but today's fraudsters can exploit the online scientific world to quickly create realistic looking papers on an industrial scale. Fraudsters are using open data sets to create meaningless analyses and combining these results with text from large language models. There has been an explosion of these low value papers using openly available and highly regarded data sets, such as the US National Health and Nutrition Examination Survey (NHANES). The paper miners will likely exploit whatever open data resources they can find until data custodians put more stringent controls in place, or journals and publishers push back. Some scientific data may be too open, even though making research data openly available is a recommended policy for increasing research integrity. Journals and researchers need to be aware of this new threat to research integrity.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"2626740"},"PeriodicalIF":4.0,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146120645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Should Clinical Ethics Evolve to Ensure Moral Use of AI?
Colleen P. Lyons
Hastings Center Report 56(1): 47-49. Pub Date: 2026-02-04. DOI: 10.1002/hast.70016
Artificial intelligence is reshaping clinical decision-making in ways that challenge assumptions about patient-centered care, moral responsibility, and professional judgment. Encoding Bioethics: AI in Clinical Decision-Making, by Charles Binkley and Tyler Loftus, begins where ethical reflection on this topic should begin—in the trenches of clinical care. Together with the National Academy of Medicine's publication An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action, which came out after the book, Encoding Bioethics goes a long way toward offering physicians, patients, developers, and health-system leaders actionable guidance. Through explanation, probing questions, and case studies, Binkley and Loftus illuminate the ethical difficulties posed by opacity, bias, and shifting clinical roles. Yet their analysis stops short of identifying the governance tools and operational structures that are essential for achieving patient-centered, morally responsible AI that strengthens clinical judgment. This review essay argues that bridging ethics and practice requires attention to psychological safety, organizational dynamics, and implementation science to ensure that AI supports—not supplants—ethical care.
{"title":"How Should Clinical Ethics Evolve to Ensure Moral Use of AI?","authors":"Colleen P. Lyons","doi":"10.1002/hast.70016","DOIUrl":"https://doi.org/10.1002/hast.70016","url":null,"abstract":"<p>Artificial intelligence is reshaping clinical decision-making in ways that challenge assumptions about patient-centered care, moral responsibility, and professional judgment. <i>Encoding Bioethics: AI in Clinical Decision-Making</i>, by Charles Binkley and Tyler Loftus, begins where ethical reflection on this topic should begin—in the trenches of clinical care. Together with the National Academy of Medicine's publication <i>An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action</i>, which came out after the book, <i>Encoding Bioethics</i> goes a long way toward offering physicians, patients, developers, and health-system leaders actionable guidance. Through explanation, probing questions, and case studies, Binkley and Loftus illuminate the ethical difficulties posed by opacity, bias, and shifting clinical roles. Yet their analysis stops short of identifying the governance tools and operational structures that are essential for achieving patient-centered, morally responsible AI that strengthens clinical judgment. This review essay argues that bridging ethics and practice requires attention to psychological safety, organizational dynamics, and implementation science to ensure that AI supports—not supplants—ethical care.</p>","PeriodicalId":55073,"journal":{"name":"Hastings Center Report","volume":"56 1","pages":"47-49"},"PeriodicalIF":2.3,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146139277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arguments and Analogies: Do Children Have a Right to Know Their Genetic Origins?
Sonya Charles
Hastings Center Report 56(1): 32-39. Pub Date: 2026-02-04. DOI: 10.1002/hast.4970. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12872598/pdf/
Whether children have a right to know that they were created via “donated” gametes has generated debate for a quarter of a century. Pro-transparency theorists use policies and attitudes concerning adoption to argue for changes in regulations related to “donor” gametes. Anti-transparency theorists claim that discussions about whether children have a right to know their genetic origins must consider natural reproduction (and not just adoption). They argue that if we use an analogy to natural reproduction instead, we begin to see the problems with requiring transparency. I will argue that adoption is the more appropriate analogy for this debate and that we can make an argument for a strong right to know. I end with some further reasons that we can apply stronger regulation to the use of “donor” gametes than we would to natural reproduction.
{"title":"Arguments and Analogies: Do Children Have a Right to Know Their Genetic Origins?","authors":"Sonya Charles","doi":"10.1002/hast.4970","DOIUrl":"10.1002/hast.4970","url":null,"abstract":"<p>Whether children have a right to know that they were created via “donated” gametes has generated debate for a quarter of a century. Pro-transparency theorists use policies and attitudes concerning adoption to argue for changes in regulations related to “donor” gametes. Anti-transparency theorists claim that discussions about whether children have a right to know their genetic origins must consider natural reproduction (and not just adoption). They argue that if we use an analogy to natural reproduction instead, we begin to see the problems with requiring transparency. I will argue that adoption is the more appropriate analogy for this debate and that we can make an argument for a strong right to know. I end with some further reasons that we can apply stronger regulation to the use of “donor” gametes than we would to natural reproduction.</p>","PeriodicalId":55073,"journal":{"name":"Hastings Center Report","volume":"56 1","pages":"32-39"},"PeriodicalIF":2.3,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12872598/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146121187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Roboagents Are Coming!: The Promise and Challenge of Artificial Intelligence Advance Directives
Jacob M. Appel
Hastings Center Report 56(1): 6-12. Pub Date: 2026-02-04. DOI: 10.1002/hast.70042
Advance directives have historically relied upon human agents. But what happens when a patient appoints an artificial intelligence system as an agent? This essay introduces the idea of roboagents—chatbots authorized to make medical decisions when individuals lose capacity. After describing potential models, including a personal AI companion and a chatbot that has not been trained on a patient's values and preferences, the essay explores the ethical tensions these roboagents generate regarding autonomy, bias, consent, family trust, and physician well-being. This essay then calls for legal clarity and ethical guidance regarding the status of roboagents in light of their potential as alternative health care agents.
{"title":"The Roboagents Are Coming!: The Promise and Challenge of Artificial Intelligence Advance Directives","authors":"Jacob M. Appel","doi":"10.1002/hast.70042","DOIUrl":"10.1002/hast.70042","url":null,"abstract":"<p>Advance directives have historically relied upon human agents. But what happens when a patient appoints an artificial intelligence system as an agent? This essay introduces the idea of <i>roboagents</i>—chatbots authorized to make medical decisions when individuals lose capacity. After describing potential models, including a personal AI companion and a chatbot that has not been trained on a patient's values and preferences, the essay explores the ethical tensions these roboagents generate regarding autonomy, bias, consent, family trust, and physician well-being. This essay then calls for legal clarity and ethical guidance regarding the status of roboagents in light of their potential as alternative health care agents.</p>","PeriodicalId":55073,"journal":{"name":"Hastings Center Report","volume":"56 1","pages":"6-12"},"PeriodicalIF":2.3,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146121228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The role of malaria, urbanism, and soil in the European Marriage Pattern of the eighteenth-century Dutch Republic
E.W. Marthe Deij, Kyle Harper, Ron J.A. van Lammeren, Kirsten M. de Beurs
Journal of Historical Geography. Pub Date: 2026-02-04. DOI: 10.1016/j.jhg.2026.01.007
{"title":"The role of malaria, urbanism, and soil in the European Marriage Pattern of the eighteenth-century Dutch Republic","authors":"E.W. Marthe Deij, Kyle Harper, Ron J.A. van Lammeren, Kirsten M. de Beurs","doi":"10.1016/j.jhg.2026.01.007","DOIUrl":"https://doi.org/10.1016/j.jhg.2026.01.007","url":null,"abstract":"","PeriodicalId":47094,"journal":{"name":"Journal of Historical Geography","volume":"91 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146134342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"历史学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial intelligence is reshaping clinical decision-making in ways that challenge assumptions about patient-centered care, moral responsibility, and professional judgment. Encoding Bioethics: AI in Clinical Decision-Making, by Charles Binkley and Tyler Loftus, begins where ethical reflection on this topic should begin—in the trenches of clinical care. Together with the National Academy of Medicine's publication An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action, which came out after the book, Encoding Bioethics goes a long way toward offering physicians, patients, developers, and health-system leaders actionable guidance. Through explanation, probing questions, and case studies, Binkley and Loftus illuminate the ethical difficulties posed by opacity, bias, and shifting clinical roles. Yet their analysis stops short of identifying the governance tools and operational structures that are essential for achieving patient-centered, morally responsible AI that strengthens clinical judgment. This review essay argues that bridging ethics and practice requires attention to psychological safety, organizational dynamics, and implementation science to ensure that AI supports—not supplants—ethical care.
{"title":"How Should Clinical Ethics Evolve to Ensure Moral Use of AI?","authors":"Colleen P. Lyons","doi":"10.1002/hast.70016","DOIUrl":"https://doi.org/10.1002/hast.70016","url":null,"abstract":"<p>Artificial intelligence is reshaping clinical decision-making in ways that challenge assumptions about patient-centered care, moral responsibility, and professional judgment. <i>Encoding Bioethics: AI in Clinical Decision-Making</i>, by Charles Binkley and Tyler Loftus, begins where ethical reflection on this topic should begin—in the trenches of clinical care. Together with the National Academy of Medicine's publication <i>An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action</i>, which came out after the book, <i>Encoding Bioethics</i> goes a long way toward offering physicians, patients, developers, and health-system leaders actionable guidance. Through explanation, probing questions, and case studies, Binkley and Loftus illuminate the ethical difficulties posed by opacity, bias, and shifting clinical roles. Yet their analysis stops short of identifying the governance tools and operational structures that are essential for achieving patient-centered, morally responsible AI that strengthens clinical judgment. This review essay argues that bridging ethics and practice requires attention to psychological safety, organizational dynamics, and implementation science to ensure that AI supports—not supplants—ethical care.</p>","PeriodicalId":55073,"journal":{"name":"Hastings Center Report","volume":"56 1","pages":"47-49"},"PeriodicalIF":2.3,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146139278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}