Pub Date: 2025-09-01 | Epub Date: 2025-01-31 | DOI: 10.1007/s11673-024-10401-8
J P Winters, E Hutchinson
We argue that Aged Residential Care (ARC) facilities should be allowed to create and adopt an informed "No Chest Compression" (NCC) policy. Potential residents are informed before admission that staff will not provide chest compressions to a pulseless resident. All residents would receive standard choking care, and a fully discussed advance directive would be utilized to determine whether the resident wanted a one-minute trial of rescue breaths (to clear their airway) or utilization of the automatic defibrillator in case of arrest. The benefits of chest compressions for residents in ARC are dubious, and the burdens are high. For frail elderly people without a pulse, chest compressions are arguably unethical because the chance of benefit is minuscule; the procedure is violent, painful, and challenging to perform correctly; and it detracts from a peaceful end of life. These burdens fall on residents, their families, ARC facility providers, and society. We further argue that limitations on universal invasive resuscitation, such as advance directives, need to be more consistently sought and applied. The goals of an informed NCC policy are twofold: removing added suffering from a person's end-of-life experience and increasing ARC residents' understanding of the burdens of ineffective treatments for pulselessness.
"By Their Side, Not on Their Chest: Ethical Arguments to Allow Residential Aged Care Admission Policies to Forego Full Cardiac Resuscitation." Journal of Bioethical Inquiry, pp. 679-688.
Pub Date: 2025-09-01 | Epub Date: 2025-02-06 | DOI: 10.1007/s11673-024-10405-4
N D Brantly
Non-communicable (chronic) and communicable (infectious) diseases constitute the leading causes of death worldwide. They appear to impact populations in developed and developing nations differently, with changing trends in the landscape of human conditions. Greater understanding of changing disease burdens should influence the planning of health programmes, the implementation of related interventions, and policymaking efforts on a national and global scale. However, the knowledge of disease burdens does not reflect how states and global health organizations prioritize their efforts in addressing them. This work aims to address the discrepancy in public health priority setting by improving our understanding of how the two disease categories impact the human condition. It reviews two case studies, COVID-19 and type 2 diabetes, as representative cases of an infectious and a chronic disease, respectively, to answer the following question: How does biopolitics, as the governance of human bodies, at the nexus of infectious and chronic disease, impact national and global public health priorities? This work contextualizes and reframes the relationship towards disease categories by focusing on three primary themes for each case study analysed: risk, current public health interventions, and funding priorities. It argues that the politics over life at the nexus of chronic and infectious diseases, best conceived as future-oriented economic optimization, directs prioritization efforts in healthcare based on a risk- and responsibility-based relationship between multiple stakeholders.
"Biopolitics at the Nexus of Chronic and Infectious Diseases." Journal of Bioethical Inquiry, pp. 689-705. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575517/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-10-27 | DOI: 10.1007/s11673-025-10515-7
Michaela Estelle Okninski
"A Hospital and Health Service v C [2025] QSC 178-Termination of Pregnancy for a Minor: Consideration of "Best Interests" Post Enactment of the Human Rights Act 2019 (Qld)." Journal of Bioethical Inquiry, pp. 477-481. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575487/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-06-12 | DOI: 10.1007/s11673-025-10424-9
J M Paterson
The increasing prevalence of AI in all facets of human lives raises profound questions of ethics, policy, and law. Interactions with AI in situations that traditionally involve humans demonstrate the growing sophistication and adaptivity of the technology. For this very reason, we may demand some basic rules of engagement from these interactions: AI should not deceive humans into believing it is human or that it has human-like capacities, and it should be transparent about its artificial status. Law increasingly makes these demands. We may further question, as a matter of practical ethics if not law, whether even "well-trained" AI should be used at all in intimate or personal interactions with humans. This essay seeks to explore these issues by reference to a series of examples in which AI seeks to mimic or interpret humans: AI influencers on social media, AI companions, AI mental health therapy chatbots, and AI emotion detection tools.
"AI Mimicking and Interpreting Humans: Legal and Ethical Reflections." Journal of Bioethical Inquiry, pp. 539-550. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575499/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-01-06 | DOI: 10.1007/s11673-024-10404-5
Sergei Shevchenko, Alexey Zhavoronkov
Scholars usually distinguish between testimonial and hermeneutical epistemic injustice in healthcare. The former arises from negative stereotyping and stigmatization, while the latter occurs when the hermeneutical resources of the dominant community are inadequate for articulating the experience of one's illness. However, the heuristics provided by these two types of epistemic predicaments tend to overlook salient forms of epistemic injustice. In this paper, we demonstrate this argument through the example of the temporality of patients with drug dependence. We identify three temporal dimensions of epistemic injustice affecting drug-dependent patients: the temporal features of their cognitive processes, their individual temporal experience, and the mismatch of social temporality. Notably, the last aspect, which highlights the disparity between the availability of care and its accessibility, does not fit neatly into the categories of testimonial or hermeneutical injustice. (We should note that the International Network of People Who Use Drugs (INPUD) and the Asian Network of People who use Drugs (ANPUD) consider the term "drug addiction" to be associated with disempowerment and negative stereotyping; instead, they suggest the expression "drug dependence" (INPUD 2020). However, the concept of "drug addiction" is still used in current public health, philosophy, and sociology debates concerning the specific field of addiction studies. Replacing the notion of "drug addiction" with "drug dependence" would not eliminate existing epistemic injustices or allow us to avoid creating new ones, such as those related to ignoring pain claims (O'Brien 2011). Still, for the sake of clarity, we will use the notion "drug dependence" when speaking of people, while retaining the term "drug addiction" for labelling healthcare practices and the topic in the philosophy of healthcare.)
"Temporal Aspects of Epistemic Injustice: The Case of Patients with Drug Dependence." Journal of Bioethical Inquiry, pp. 667-677.
Pub Date: 2025-09-01 | Epub Date: 2025-07-21 | DOI: 10.1007/s11673-025-10429-4
A S Bayındır, J Danaher
It is now possible for AI systems to generate novel inventions without meaningful human direction and control. Should such inventions be patented? The prevailing consensus, confirmed in recent test cases and official guidance, is that patent law only covers inventions by natural persons (i.e., humans). This, however, sometimes creates an odd situation in which AI-generated inventions cannot be patented, nor can the humans responsible for those systems gain patent rights indirectly through the operation of the law. In this article, we argue against this prevailing consensus. We present five reasons for thinking that AI-generated inventions should be patentable and that AI systems should be legally recognized as inventors. In making this argument, we do not claim that modern AI systems have acquired some significant legal or moral status that is equivalent to humans. Our argument is more practical in nature. We argue that failing to recognize AI inventorship will have negative repercussions for economic development and innovation, at a time when AI assistance is needed.
"Why We Should Recognize AI as an Inventor." Journal of Bioethical Inquiry, pp. 515-525. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575523/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-05-20 | DOI: 10.1007/s11673-025-10422-x
James Tibballs, Neera Bhatia
The triad of clinical signs (extensive bilateral retinal haemorrhages, subdural haematoma, and encephalopathy) is regarded by some expert witnesses as pathognomonic proof that an infant was deliberately shaken and head-injured (shaken baby syndrome / abusive head injury). However, that view is controversial, since scientific evidence does not support the diagnostic accuracy of the triad. In contrast to previous cases, a Victorian Supreme Court jury found an accused not guilty of homicide of a one-month-old infant afflicted with the triad. Prosecution witnesses were heavily criticized for failing to provide impartial testimony and to abide by Supreme Court expert evidence rules. We argue that there is a need to reassess the manner in which expert witness testimony is considered by the courts in shaken baby cases where injury has caused the death of the infant.
"Shaken Baby Syndrome/Abusive Head Injury: The Role of Expert Witness Testimony and a Recent Case Development." Journal of Bioethical Inquiry, pp. 483-491. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575584/pdf/
As AI systems increasingly operate with autonomy and adaptability, the traditional boundaries of moral responsibility in techno-social systems are being challenged. This paper explores the evolving discourse on the delegation of responsibilities to intelligent autonomous agents and the ethical implications of such practices. Synthesizing recent developments in AI ethics, including concepts of distributed responsibility and ethical AI by design, the paper proposes a functionalist perspective as a framework. This perspective views moral responsibility not as an individual trait but as a role within a socio-technical system, distributed among human and artificial agents. As an example of "AI ethical by design," we present Basti and Vitiello's implementation. They suggest that AI can act as artificial moral agents by learning ethical guidelines and using Deontic Higher-Order Logic to assess decisions ethically. Motivated by the prospect of AI acting at speeds and scales beyond human supervision, and by the attendant ethical implications, the paper argues for "AI ethical by design" while acknowledging the distributed, shared, and dynamic nature of responsibility. This functionalist approach offers a practical framework for navigating the complexities of AI ethics in a rapidly evolving technological landscape.
"Delegating Responsibilities to Intelligent Autonomous Systems: Challenges and Benefits." Gordana Dodig-Crnkovic, Gianfranco Basti, Tobias Holstein. Pub Date: 2025-09-01 | DOI: 10.1007/s11673-025-10428-5. Journal of Bioethical Inquiry, pp. 507-514. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575491/pdf/
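The abstract above describes machines assessing decisions against learned ethical guidelines via deontic logic. As a rough, purely illustrative sketch (not Basti and Vitiello's implementation, which uses Deontic Higher-Order Logic; the `Rule` and `assess` names here are invented, and only propositional obligation/prohibition/permission is modelled):

```python
from dataclasses import dataclass
from enum import Enum


class Modality(Enum):
    OBLIGATORY = "O"  # the action must exhibit this property
    FORBIDDEN = "F"   # the action must not exhibit this property
    PERMITTED = "P"   # explicitly allowed; never generates a violation


@dataclass(frozen=True)
class Rule:
    modality: Modality
    predicate: str  # name of an action property, e.g. "discloses_risk"


def assess(action_properties: set, rules: list):
    """Check a proposed action (a set of true predicates) against deontic rules.

    Returns (compliant, violations). This is a toy propositional checker:
    no quantifiers, agents, or higher-order features of the full logic.
    """
    violations = []
    for rule in rules:
        if rule.modality is Modality.OBLIGATORY and rule.predicate not in action_properties:
            violations.append(f"missing obligatory: {rule.predicate}")
        elif rule.modality is Modality.FORBIDDEN and rule.predicate in action_properties:
            violations.append(f"performs forbidden: {rule.predicate}")
    return (not violations, violations)


# Hypothetical guidelines for an autonomous agent.
rules = [
    Rule(Modality.OBLIGATORY, "discloses_risk"),
    Rule(Modality.FORBIDDEN, "shares_private_data"),
]

ok, why = assess({"discloses_risk", "shares_private_data"}, rules)
# ok is False; why == ["performs forbidden: shares_private_data"]
```

Even this toy shows the design point the paper presses: once guidelines are explicit, compliance checks can run at machine speed and scale, but choosing and updating the rules remains a distributed human responsibility.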
Pub Date: 2025-09-01 | Epub Date: 2025-08-12 | DOI: 10.1007/s11673-024-10413-4
Katrien Devolder, Joshua Rottman, Qinyu Xiao, Guy Kahane, Lucius Caviola, Lauren Yip, Nadira S Faber
Recent medical research involving human-monkey chimeras, human brain organoids in rats, and the transplantation of a gene-edited pig heart and gene-edited pig kidneys in living human beings have intensified the debate about whether we should create human-animal chimeras for biomedical purposes and, if so, how we should treat them. Influential views in the debate frequently appeal to assumptions regarding how people will react to such chimeras. It has, for example, been argued that the most important objection against creating such chimeras is that this will result in inexorable moral confusion about species boundaries and will, as a result, threaten the social order. But is this indeed the case? We conducted three empirical studies to examine laypeople's views on the creation and treatment of various types of human-animal chimeras. Our studies indicate that laypeople find typical cases of xenotransplantation (i.e., the transplantation of an animal organ into a human patient) morally unproblematic. They assign the same moral status to humans with animal organs as to non-chimeric humans. By contrast, they sometimes (but not always) assign slightly higher moral status to animals with human organs than to non-chimeric animals. Overall, however, there is little indication of chimera technology blurring the line between humans and animals, and thus of the technology causing moral confusion.
"Will Human-Animal Chimeras Cause Moral Confusion? Exploring Public Attitudes." Journal of Bioethical Inquiry, pp. 733-744. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575588/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-06-03 | DOI: 10.1007/s11673-025-10462-3
Jackie Leach Scully, Georgia van Toorn, Sandra Gendera
Over the last decade, bioethics has begun to address the ethical issues emerging as artificial intelligence (AI) and associated technological processes such as automated decision-making (ADM) become part of healthcare and research. Recent work on justice in AI demonstrates that supposedly neutral AI systems can perpetuate the marginalization of various communities. But so far, there has been little exploration of the interaction of AI and disability. In this empirically based project, we have explored the implications of ADM in the lives of people with disability in Australia. This paper focuses on a point that was consistently raised in discussion by disabled participants but is rarely encountered in the AI ethics literature, especially in relation to disability: the problem of automated systems' failures of recognition.
"Automating Misrecognition: The Case of Disability." Journal of Bioethical Inquiry, pp. 593-600. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575558/pdf/