Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care.

BMC Medical Ethics · IF 3.0 · CAS Tier 1 (Philosophy) · JCR Q1 (ETHICS) · Pub Date: 2024-06-22 · DOI: 10.1186/s12910-024-01066-4
Kaila Witkowski, Ratna Okhai, Stephen R Neely
{"title":"公众对医疗保健领域人工智能的看法:伦理问题与以患者为中心的护理机会。","authors":"Kaila Witkowski, Ratna Okhai, Stephen R Neely","doi":"10.1186/s12910-024-01066-4","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>In an effort to improve the quality of medical care, the philosophy of patient-centered care has become integrated into almost every aspect of the medical community. Despite its widespread acceptance, among patients and practitioners, there are concerns that rapid advancements in artificial intelligence may threaten elements of patient-centered care, such as personal relationships with care providers and patient-driven choices. This study explores the extent to which patients are confident in and comfortable with the use of these technologies when it comes to their own individual care and identifies areas that may align with or threaten elements of patient-centered care.</p><p><strong>Methods: </strong>An exploratory, mixed-method approach was used to analyze survey data from 600 US-based adults in the State of Florida. The survey was administered through a leading market research provider (August 10-21, 2023), and responses were collected to be representative of the state's population based on age, gender, race/ethnicity, and political affiliation.</p><p><strong>Results: </strong>Respondents were more comfortable with the use of AI in health-related tasks that were not associated with doctor-patient relationships, such as scheduling patient appointments or follow-ups (84.2%). Fear of losing the 'human touch' associated with doctors was a common theme within qualitative coding, suggesting a potential conflict between the implementation of AI and patient-centered care. In addition, decision self-efficacy was associated with higher levels of comfort with AI, but there were also concerns about losing decision-making control, workforce changes, and cost concerns. A small majority of participants mentioned that AI could be useful for doctors and lead to more equitable care but only when used within limits.</p><p><strong>Conclusion: </strong>The application of AI in medical care is rapidly advancing, but oversight, regulation, and guidance addressing critical aspects of patient-centered care are lacking. While there is no evidence that AI will undermine patient-physician relationships at this time, there is concern on the part of patients regarding the application of AI within medical care and specifically as it relates to their interaction with physicians. Medical guidance on incorporating AI while adhering to the principles of patient-centered care is needed to clarify how AI will augment medical care.</p>","PeriodicalId":55348,"journal":{"name":"BMC Medical Ethics","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11193174/pdf/","citationCount":"0","resultStr":"{\"title\":\"Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care.\",\"authors\":\"Kaila Witkowski, Ratna Okhai, Stephen R Neely\",\"doi\":\"10.1186/s12910-024-01066-4\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>In an effort to improve the quality of medical care, the philosophy of patient-centered care has become integrated into almost every aspect of the medical community. 
Despite its widespread acceptance, among patients and practitioners, there are concerns that rapid advancements in artificial intelligence may threaten elements of patient-centered care, such as personal relationships with care providers and patient-driven choices. This study explores the extent to which patients are confident in and comfortable with the use of these technologies when it comes to their own individual care and identifies areas that may align with or threaten elements of patient-centered care.</p><p><strong>Methods: </strong>An exploratory, mixed-method approach was used to analyze survey data from 600 US-based adults in the State of Florida. The survey was administered through a leading market research provider (August 10-21, 2023), and responses were collected to be representative of the state's population based on age, gender, race/ethnicity, and political affiliation.</p><p><strong>Results: </strong>Respondents were more comfortable with the use of AI in health-related tasks that were not associated with doctor-patient relationships, such as scheduling patient appointments or follow-ups (84.2%). Fear of losing the 'human touch' associated with doctors was a common theme within qualitative coding, suggesting a potential conflict between the implementation of AI and patient-centered care. In addition, decision self-efficacy was associated with higher levels of comfort with AI, but there were also concerns about losing decision-making control, workforce changes, and cost concerns. A small majority of participants mentioned that AI could be useful for doctors and lead to more equitable care but only when used within limits.</p><p><strong>Conclusion: </strong>The application of AI in medical care is rapidly advancing, but oversight, regulation, and guidance addressing critical aspects of patient-centered care are lacking. While there is no evidence that AI will undermine patient-physician relationships at this time, there is concern on the part of patients regarding the application of AI within medical care and specifically as it relates to their interaction with physicians. Medical guidance on incorporating AI while adhering to the principles of patient-centered care is needed to clarify how AI will augment medical care.</p>\",\"PeriodicalId\":55348,\"journal\":{\"name\":\"BMC Medical Ethics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-06-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11193174/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BMC Medical Ethics\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1186/s12910-024-01066-4\",\"RegionNum\":1,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC Medical Ethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1186/s12910-024-01066-4","RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0

Abstract

Background: In an effort to improve the quality of medical care, the philosophy of patient-centered care has become integrated into almost every aspect of the medical community. Despite its widespread acceptance among patients and practitioners, there are concerns that rapid advancements in artificial intelligence may threaten elements of patient-centered care, such as personal relationships with care providers and patient-driven choices. This study explores the extent to which patients are confident in and comfortable with the use of these technologies when it comes to their own individual care and identifies areas that may align with or threaten elements of patient-centered care.

Methods: An exploratory, mixed-method approach was used to analyze survey data from 600 US-based adults in the State of Florida. The survey was administered through a leading market research provider (August 10-21, 2023), and responses were collected to be representative of the state's population based on age, gender, race/ethnicity, and political affiliation.
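The abstract does not specify how representativeness was achieved beyond quota matching on age, gender, race/ethnicity, and political affiliation. As a purely illustrative sketch (not the authors' procedure), one common way to monitor such quotas during collection is to compare sample proportions against population targets; the Python example below uses hypothetical targets and a toy sample of 600 responses.

```python
# Minimal quota-monitoring sketch for a demographically representative survey.
# All targets and the toy sample are hypothetical, for illustration only;
# they are not drawn from the study described above.

from collections import Counter

# Hypothetical population targets for one quota variable (age group).
targets = {"18-34": 0.28, "35-54": 0.33, "55+": 0.39}

# Toy sample of 600 collected responses (age group per respondent).
sample = ["18-34"] * 150 + ["35-54"] * 210 + ["55+"] * 240

counts = Counter(sample)
n = len(sample)

print(f"{'group':<8}{'sample %':>10}{'target %':>10}{'gap':>8}")
for group, target in targets.items():
    observed = counts[group] / n
    print(f"{group:<8}{observed:>10.1%}{target:>10.1%}{observed - target:>+8.1%}")
```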

Results: Respondents were more comfortable with the use of AI in health-related tasks that were not associated with doctor-patient relationships, such as scheduling patient appointments or follow-ups (84.2%). Fear of losing the 'human touch' associated with doctors was a common theme within qualitative coding, suggesting a potential conflict between the implementation of AI and patient-centered care. In addition, decision self-efficacy was associated with higher levels of comfort with AI, but there were also concerns about losing decision-making control, workforce changes, and costs. A small majority of participants mentioned that AI could be useful for doctors and lead to more equitable care, but only when used within limits.

Conclusion: The application of AI in medical care is rapidly advancing, but oversight, regulation, and guidance addressing critical aspects of patient-centered care are lacking. While there is no evidence that AI will undermine patient-physician relationships at this time, there is concern on the part of patients regarding the application of AI within medical care and specifically as it relates to their interaction with physicians. Medical guidance on incorporating AI while adhering to the principles of patient-centered care is needed to clarify how AI will augment medical care.

Source journal: BMC Medical Ethics
CiteScore: 5.20
Self-citation rate: 7.40%
Annual articles: 108
Review time: >12 weeks

About the journal: BMC Medical Ethics is an open access journal publishing original peer-reviewed research articles in relation to the ethical aspects of biomedical research and clinical practice, including professional choices and conduct, medical technologies, healthcare systems and health policies.
Latest articles in this journal:
Public perceptions of the Hippocratic Oath in the U.K. 2023.
Ethical challenges in organ transplantation for Syrian refugees in Türkiye.
What ethical conflicts do internists in Spain, México and Argentina encounter? An international cross-sectional observational study based on a self-administrated survey.
Medical futility at the end of life: the first qualitative study of ethical decision-making methods among Turkish doctors.
Financial conflicts of interest among authors of clinical practice guideline for headache disorders in Japan.