AI-Assisted Decision-Making in Long-Term Care: Qualitative Study on Prerequisites for Responsible Innovation.

JMIR Nursing 2024;7:e55962 | Pub Date: 2024-07-25 | DOI: 10.2196/55962
Dirk R M Lukkien, Nathalie E Stolwijk, Sima Ipakchian Askari, Bob M Hofstede, Henk Herman Nap, Wouter P C Boon, Alexander Peine, Ellen H M Moors, Mirella M N Minkman
{"title":"AI-Assisted Decision-Making in Long-Term Care: Qualitative Study on Prerequisites for Responsible Innovation.","authors":"Dirk R M Lukkien, Nathalie E Stolwijk, Sima Ipakchian Askari, Bob M Hofstede, Henk Herman Nap, Wouter P C Boon, Alexander Peine, Ellen H M Moors, Mirella M N Minkman","doi":"10.2196/55962","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Although the use of artificial intelligence (AI)-based technologies, such as AI-based decision support systems (AI-DSSs), can help sustain and improve the quality and efficiency of care, their deployment creates ethical and social challenges. In recent years, a growing prevalence of high-level guidelines and frameworks for responsible AI innovation has been observed. However, few studies have specified the responsible embedding of AI-based technologies, such as AI-DSSs, in specific contexts, such as the nursing process in long-term care (LTC) for older adults.</p><p><strong>Objective: </strong>Prerequisites for responsible AI-assisted decision-making in nursing practice were explored from the perspectives of nurses and other professional stakeholders in LTC.</p><p><strong>Methods: </strong>Semistructured interviews were conducted with 24 care professionals in Dutch LTC, including nurses, care coordinators, data specialists, and care centralists. A total of 2 imaginary scenarios about AI-DSSs were developed beforehand and used to enable participants articulate their expectations regarding the opportunities and risks of AI-assisted decision-making. In addition, 6 high-level principles for responsible AI were used as probing themes to evoke further consideration of the risks associated with using AI-DSSs in LTC. Furthermore, the participants were asked to brainstorm possible strategies and actions in the design, implementation, and use of AI-DSSs to address or mitigate these risks. A thematic analysis was performed to identify the opportunities and risks of AI-assisted decision-making in nursing practice and the associated prerequisites for responsible innovation in this area.</p><p><strong>Results: </strong>The stance of care professionals on the use of AI-DSSs is not a matter of purely positive or negative expectations but rather a nuanced interplay of positive and negative elements that lead to a weighed perception of the prerequisites for responsible AI-assisted decision-making. Both opportunities and risks were identified in relation to the early identification of care needs, guidance in devising care strategies, shared decision-making, and the workload of and work experience of caregivers. To optimally balance the opportunities and risks of AI-assisted decision-making, seven categories of prerequisites for responsible AI-assisted decision-making in nursing practice were identified: (1) regular deliberation on data collection; (2) a balanced proactive nature of AI-DSSs; (3) incremental advancements aligned with trust and experience; (4) customization for all user groups, including clients and caregivers; (5) measures to counteract bias and narrow perspectives; (6) human-centric learning loops; and (7) the routinization of using AI-DSSs.</p><p><strong>Conclusions: </strong>The opportunities of AI-assisted decision-making in nursing practice could turn into drawbacks depending on the specific shaping of the design and deployment of AI-DSSs. Therefore, we recommend considering the responsible use of AI-DSSs as a balancing act. 
Moreover, considering the interrelatedness of the identified prerequisites, we call for various actors, including developers and users of AI-DSSs, to cohesively address the different factors important to the responsible embedding of AI-DSSs in practice.</p>","PeriodicalId":73556,"journal":{"name":"JMIR nursing","volume":"7 ","pages":"e55962"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11310645/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR nursing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/55962","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Although the use of artificial intelligence (AI)-based technologies, such as AI-based decision support systems (AI-DSSs), can help sustain and improve the quality and efficiency of care, their deployment creates ethical and social challenges. In recent years, a growing prevalence of high-level guidelines and frameworks for responsible AI innovation has been observed. However, few studies have specified the responsible embedding of AI-based technologies, such as AI-DSSs, in specific contexts, such as the nursing process in long-term care (LTC) for older adults.

Objective: Prerequisites for responsible AI-assisted decision-making in nursing practice were explored from the perspectives of nurses and other professional stakeholders in LTC.

Methods: Semistructured interviews were conducted with 24 care professionals in Dutch LTC, including nurses, care coordinators, data specialists, and care centralists. A total of 2 imaginary scenarios about AI-DSSs were developed beforehand and used to enable participants to articulate their expectations regarding the opportunities and risks of AI-assisted decision-making. In addition, 6 high-level principles for responsible AI were used as probing themes to evoke further consideration of the risks associated with using AI-DSSs in LTC. Furthermore, the participants were asked to brainstorm possible strategies and actions in the design, implementation, and use of AI-DSSs to address or mitigate these risks. A thematic analysis was performed to identify the opportunities and risks of AI-assisted decision-making in nursing practice and the associated prerequisites for responsible innovation in this area.

Results: The stance of care professionals on the use of AI-DSSs is not a matter of purely positive or negative expectations but rather a nuanced interplay of positive and negative elements that leads to a weighed perception of the prerequisites for responsible AI-assisted decision-making. Both opportunities and risks were identified in relation to the early identification of care needs, guidance in devising care strategies, shared decision-making, and the workload and work experience of caregivers. To optimally balance the opportunities and risks of AI-assisted decision-making, seven categories of prerequisites for responsible AI-assisted decision-making in nursing practice were identified: (1) regular deliberation on data collection; (2) a balanced proactive nature of AI-DSSs; (3) incremental advancements aligned with trust and experience; (4) customization for all user groups, including clients and caregivers; (5) measures to counteract bias and narrow perspectives; (6) human-centric learning loops; and (7) the routinization of using AI-DSSs.

Conclusions: The opportunities of AI-assisted decision-making in nursing practice could turn into drawbacks depending on the specific shaping of the design and deployment of AI-DSSs. Therefore, we recommend considering the responsible use of AI-DSSs as a balancing act. Moreover, considering the interrelatedness of the identified prerequisites, we call for various actors, including developers and users of AI-DSSs, to cohesively address the different factors important to the responsible embedding of AI-DSSs in practice.
