Hastings Center Report: Latest Publications

Implications for All Animal Research
IF 2.3 | CAS Tier 3 (Philosophy) | Q1 ETHICS | Pub Date: 2026-02-04 | DOI: 10.1002/hast.70039
Christopher Bobier, Daniel J. Hurst

This letter responds to the article “Xenotransplantation: Injustice, Harm, and Alternatives for Addressing the Organ Crisis,” by Jasmine Gunkel and Franklin G. Miller in the September-October 2025 issue of the Hastings Center Report.

Citations: 0
In Defense of Post Hoc Explanations in Medical AI
IF 2.3 | CAS Tier 3 (Philosophy) | Q1 ETHICS | Pub Date: 2026-02-04 | DOI: 10.1002/hast.4971
Joshua Hatherley, Lauritz Aastrup Munch, Jens Christian Bjerring

Since the early days of the explainable artificial intelligence movement, post hoc explanations have been praised for their potential to improve user understanding, promote trust, and reduce patient-safety risks in black box medical AI systems. Recently, however, critics have argued that the benefits of post hoc explanations are greatly exaggerated since they merely approximate, rather than replicate, the actual reasoning processes that black box systems take to arrive at their outputs. In this paper, we aim to defend the value of post hoc explanations against this recent critique. We argue that even if post hoc explanations do not replicate the exact reasoning processes of black box systems, they can still improve users’ functional understanding of black box systems, increase the accuracy of clinician-AI teams, and assist clinicians in justifying their AI-informed decisions. While post hoc explanations are not a silver-bullet solution to the black box problem in medical AI, they remain a useful strategy for addressing it.

Citations: 0
What Does Moral Agency Mean for Nurses in the Era of Artificial Intelligence?
IF 2.3 | CAS Tier 3 (Philosophy) | Q1 ETHICS | Pub Date: 2026-02-04 | DOI: 10.1002/hast.70030
Connie M. Ulrich, Oonjee Oh, Sang Bin You, Maxim Topaz, Zahra Rahemi, Liz Stokes, Lisiane Pruinelli, George Demiris, Patricia Flatley Brennan

Being a moral agent was once thought to be an irreplaceable, uniquely human role for nurses and other health care professionals who care for patients and their families during illness and hospitalization. Today, however, artificial intelligence systems are often referred to as “artificial moral agents,” “agentic,” and “autonomous agents.” As these systems begin to function in various capacities within health care organizations and to perform specialized duties, the question arises as to whether the next step will be to replace nurses and other health care professionals as moral agents. Focusing primarily on nurses, this essay explores the concept of moral agency, asking whether it remains exclusive to humans or can be conferred on AI systems. We argue that AI systems should not supplant nurses’ moral agency, as patients come to hospitals or any other health care setting to be heard, seen, and valued by skilled professionals, not to seek care from machines.

Citations: 0
How Should Clinical Ethics Evolve to Ensure Moral Use of AI?
IF 2.3 | CAS Tier 3 (Philosophy) | Q1 ETHICS | Pub Date: 2026-02-04 | DOI: 10.1002/hast.70016
Colleen P. Lyons

Artificial intelligence is reshaping clinical decision-making in ways that challenge assumptions about patient-centered care, moral responsibility, and professional judgment. Encoding Bioethics: AI in Clinical Decision-Making, by Charles Binkley and Tyler Loftus, begins where ethical reflection on this topic should begin—in the trenches of clinical care. Together with the National Academy of Medicine's publication An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action, which came out after the book, Encoding Bioethics goes a long way toward offering physicians, patients, developers, and health-system leaders actionable guidance. Through explanation, probing questions, and case studies, Binkley and Loftus illuminate the ethical difficulties posed by opacity, bias, and shifting clinical roles. Yet their analysis stops short of identifying the governance tools and operational structures that are essential for achieving patient-centered, morally responsible AI that strengthens clinical judgment. This review essay argues that bridging ethics and practice requires attention to psychological safety, organizational dynamics, and implementation science to ensure that AI supports—not supplants—ethical care.

Citations: 0
Arguments and Analogies: Do Children Have a Right to Know Their Genetic Origins?
IF 2.3 | CAS Tier 3 (Philosophy) | Q1 ETHICS | Pub Date: 2026-02-04 | DOI: 10.1002/hast.4970
Sonya Charles

Whether children have a right to know that they were created via “donated” gametes has generated debate for a quarter of a century. Pro-transparency theorists use policies and attitudes concerning adoption to argue for changes in regulations related to “donor” gametes. Anti-transparency theorists claim that discussions about whether children have a right to know their genetic origins must consider natural reproduction (and not just adoption). They argue that if we use an analogy to natural reproduction instead, we begin to see the problems with requiring transparency. I will argue that adoption is the more appropriate analogy for this debate and that we can make an argument for a strong right to know. I end with some further reasons that we can apply stronger regulation to the use of “donor” gametes than we would to natural reproduction.

Citations: 0
The Roboagents Are Coming!: The Promise and Challenge of Artificial Intelligence Advance Directives
IF 2.3 | CAS Tier 3 (Philosophy) | Q1 ETHICS | Pub Date: 2026-02-04 | DOI: 10.1002/hast.70042
Jacob M. Appel

Advance directives have historically relied upon human agents. But what happens when a patient appoints an artificial intelligence system as an agent? This essay introduces the idea of roboagents—chatbots authorized to make medical decisions when individuals lose capacity. After describing potential models, including a personal AI companion and a chatbot that has not been trained on a patient's values and preferences, the essay explores the ethical tensions these roboagents generate regarding autonomy, bias, consent, family trust, and physician well-being. This essay then calls for legal clarity and ethical guidance regarding the status of roboagents in light of their potential as alternative health care agents.

Citations: 0
Issue Information and About the Cover Art
IF 2.3 | CAS Tier 3 (Philosophy) | Q1 ETHICS | Pub Date: 2026-02-04 | DOI: 10.1002/hast.70049

On the cover: Polish Chess Players, by Paul Powis, acrylic on card © Paul Powis. All rights reserved 2026 / Bridgeman Images

Citations: 0
Abortion Access Persists, but So Do the Threats
IF 2.3 | CAS Tier 3 (Philosophy) | Q1 ETHICS | Pub Date: 2026-02-04 | DOI: 10.1002/hast.70043
Katie Watson

“Shield laws” declare that, for purposes of reproductive health care, the law of the jurisdiction in which the clinician practices governs when state laws conflict. In 2024, approximately 100,000 pregnant people living in states that criminalize abortion provision received pills for a medication abortion from a clinician living in one of the eight states with these laws. One of these clinicians is New York's Margaret Carpenter, who was criminally charged in Louisiana and fined and enjoined in Texas. Carpenter's case testing shield laws, which is likely to go to the U.S. Supreme Court, should be framed as a “right to travel case” because telemedicine should be understood as a modern version of travel. If the Supreme Court ultimately accepts Louisiana and Texas's likely argument that it's a narrow “state regulation of medicine” case, the Court will be limiting the constitutional right to travel to people who have the money and time to physically travel for medical care, and withholding it from people who need the same care but who can afford to access it only through virtual travel.

Citations: 0