Consent-GPT: is it ethical to delegate procedural consent to conversational AI?

Journal of Medical Ethics · IF 3.3 · JCR Q1 (Ethics) · CAS Region 2 (Philosophy) · Pub Date: 2024-01-23 · DOI: 10.1136/jme-2023-109347
Jemima Winifred Allen, Brian D Earp, Julian Koplin, Dominic Wilkinson
Pages: 77-83 · Publication type: Journal Article · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10850653/pdf/ · Citations: 0

Abstract

Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (eg, junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of ways. One possible solution to this clinical dilemma is the use of conversational artificial intelligence based on large language models (LLMs). There is considerable interest in the potential benefits of such models in medicine. For delegated procedural consent, LLMs could improve patients' access to the relevant procedural information and therefore enhance informed decision-making. In this paper, we first outline a hypothetical example of delegation of consent to LLMs prior to surgery. We then discuss existing clinical guidelines for consent delegation and some of the ways in which current practice may fail to meet the ethical purposes of informed consent. We outline and discuss the ethical implications of delegating consent to LLMs in medicine, concluding that, at least in certain clinical situations, the benefits of LLMs potentially far outweigh those of current practices.

Source journal: Journal of Medical Ethics (Medicine: Ethics)
CiteScore: 7.80 · Self-citation rate: 9.80% · Articles per year: 164 · Review time: 4-8 weeks
About the journal: Journal of Medical Ethics is a leading international journal that reflects the whole field of medical ethics. The journal seeks to promote ethical reflection and conduct in scientific research and medical practice. It features articles on various ethical aspects of health care relevant to health care professionals, members of clinical ethics committees, medical ethics professionals, researchers and bioscientists, policy makers and patients. Subscribers to the Journal of Medical Ethics also receive Medical Humanities journal at no extra cost. JME is the official journal of the Institute of Medical Ethics.
Latest articles in this journal:
- Strengthening harm-theoretic pro-life views.
- Wish to die trying to live: unwise or incapacitous? The case of University Hospitals Birmingham NHS Foundation Trust versus 'ST'.
- Pregnant women are often not listened to, but pathologising pregnancy isn't the solution.
- How ectogestation can impact the gestational versus moral parenthood debate.
- If not a right to children because of gestation, then not a duty towards them either.