Accuracy is inaccurate: Why a focus on diagnostic accuracy for medical chatbot AIs will not lead to improved health outcomes.

Bioethics · Impact Factor 1.7 · CAS Region 2 (Philosophy) · JCR Q2 (Ethics)
Pub Date: 2025-02-01 · Epub Date: 2024-10-30 · DOI: 10.1111/bioe.13365
Stephen R Milford
{"title":"Accuracy is inaccurate: Why a focus on diagnostic accuracy for medical chatbot AIs will not lead to improved health outcomes.","authors":"Stephen R Milford","doi":"10.1111/bioe.13365","DOIUrl":null,"url":null,"abstract":"<p><p>Since its launch in November 2022, ChatGPT has become a global phenomenon, sparking widespread public interest in chatbot artificial intelligences (AIs) generally. While not approved for medical use, it is capable of passing all three United States medical licensing exams and offers diagnostic accuracy comparable to a human doctor. It seems inevitable that it, and tools like it, are and will be used by the general public to provide medical diagnostic information or treatment plans. Before we are taken in by the promise of a golden age for chatbot medical AIs, it would be wise to consider the implications of using these tools as either supplements to, or substitutes for, human doctors. With the rise of publicly available chatbot AIs, there has been a keen focus on research into the diagnostic accuracy of these tools. This, however, has left a notable gap in our understanding of the implications for health outcomes of these tools. Diagnosis accuracy is only part of good health care. For example, crucial to positive health outcomes is the doctor-patient relationship. This paper challenges the recent focus on diagnostic accuracy by drawing attention to the causal relationship between doctor-patient relationships and health outcomes arguing that chatbot AIs may even hinder outcomes in numerous ways including subtracting the elements of perception and observation that are crucial to clinical consultations. 
The paper offers brief suggestions to improve chatbot medical AIs so as to positively impact health outcomes.</p>","PeriodicalId":55379,"journal":{"name":"Bioethics","volume":" ","pages":"163-169"},"PeriodicalIF":1.7000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754992/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Bioethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1111/bioe.13365","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/10/30 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"ETHICS","Score":null,"Total":0}
引用次数: 0

Abstract

Since its launch in November 2022, ChatGPT has become a global phenomenon, sparking widespread public interest in chatbot artificial intelligences (AIs) generally. While not approved for medical use, it is capable of passing all three United States medical licensing exams and offers diagnostic accuracy comparable to that of a human doctor. It seems inevitable that it, and tools like it, are and will be used by the general public to obtain medical diagnostic information or treatment plans. Before we are taken in by the promise of a golden age for chatbot medical AIs, it would be wise to consider the implications of using these tools as either supplements to, or substitutes for, human doctors. With the rise of publicly available chatbot AIs, there has been a keen research focus on the diagnostic accuracy of these tools. This, however, has left a notable gap in our understanding of the implications of these tools for health outcomes. Diagnostic accuracy is only part of good health care: crucial to positive health outcomes, for example, is the doctor-patient relationship. This paper challenges the recent focus on diagnostic accuracy by drawing attention to the causal relationship between doctor-patient relationships and health outcomes, arguing that chatbot AIs may even hinder outcomes in numerous ways, including by subtracting the elements of perception and observation that are crucial to clinical consultations. The paper offers brief suggestions for improving chatbot medical AIs so as to positively impact health outcomes.

Source Journal

Bioethics (Medicine: Medical Ethics)
CiteScore: 4.20 · Self-citation rate: 9.10% · Articles per year: 127 · Review time: 6-12 weeks
Journal Introduction

As medical technology continues to develop, the subject of bioethics has an ever-increasing practical relevance for all those working in philosophy, medicine, law, sociology, public policy, education and related fields. Bioethics provides a forum for well-argued articles on the ethical questions raised by current issues such as: international collaborative clinical research in developing countries; public health; infectious disease; AIDS; managed care; genomics and stem cell research. These questions are considered in relation to concrete ethical, legal and policy problems, or in terms of the fundamental concepts, principles and theories used in discussions of such problems. Bioethics also features regular Background Briefings on important current debates in the field. These feature articles provide excellent material for bioethics scholars, teachers and students alike.
Latest Articles in This Journal

'Bioethics: What? and why?': Revisited.
Trading one problem for two: The case against tobacco bans.
Addressing the COVID-induced healthcare backlog: How can we balance the interests of people and nature?
Clinical research vehicles as a modality for medical research education and conduct of decentralized trials, supporting justice, equity, and diversity in research.
Palliative care-based arguments against assisted dying.