eXplainable AI for routine outcome monitoring and clinical feedback

IF 1.2 | Q3 (Psychology, Clinical) | Counselling & Psychotherapy Research | Pub Date: 2024-05-02 | DOI: 10.1002/capr.12764
Hans Jacob Westbye, Christian Moltu, Andrew A. McAleavey
Citations: 0

Abstract


eXplainable AI for routine outcome monitoring and clinical feedback

Artificial intelligence (AI), specifically machine learning (ML), is adept at identifying patterns and insights from vast amounts of data from routine outcome monitoring (ROM) and clinical feedback during treatment. When applied to patient feedback data, AI/ML models can assist clinicians in predicting treatment outcomes. Common reasons for clinician resistance to integrating data-driven decision-support tools in clinical practice include concerns about the reliability, relevance and usefulness of the technology coupled with perceived conflicts between data-driven recommendations and clinical judgement. While AI/ML-based tools might be precise in guiding treatment decisions, it might not be possible to realise their potential at present, due to implementation, acceptability and ethical concerns. In this article, we will outline the concept of eXplainable AI (XAI), a potential solution to these concerns. XAI refers to a form of AI designed to articulate its purpose, rationale and decision-making process in a manner that is comprehensible to humans. The key to this approach is that end-users see a clear and understandable pathway from input data to recommendations. We use real Norse Feedback data to present an AI/ML example demonstrating one use case for XAI. Furthermore, we discuss key learning points that we will employ in future XAI implementations.
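The core XAI claim above — that end-users should see a clear pathway from input data to recommendation — can be illustrated with a minimal sketch. The feature names, weights and the linear risk model below are hypothetical illustrations, not the authors' Norse Feedback model; the point is only that a linear model's prediction decomposes exactly into per-feature contributions a clinician can inspect.

```python
import math

# Hypothetical ROM subscale scores (standardised). Names and weights are
# illustrative only; a real system would learn them from patient data.
WEIGHTS = {
    "symptom_distress": 0.9,      # higher distress -> higher risk
    "therapeutic_alliance": -0.7, # stronger alliance -> lower risk
    "social_support": -0.5,       # more support -> lower risk
}
BIAS = -0.2

def predict_with_explanation(scores):
    """Return the predicted probability of non-improvement together with
    each feature's additive contribution to the logit, so the pathway
    from input data to recommendation is visible, not a black box."""
    contributions = {name: WEIGHTS[name] * scores[name] for name in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    return probability, contributions

prob, contrib = predict_with_explanation(
    {"symptom_distress": 1.5, "therapeutic_alliance": -1.0, "social_support": 0.2}
)
print(f"Risk of non-improvement: {prob:.2f}")
for name, c in sorted(contrib.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:22s} {c:+.2f}")
```

For a linear model this decomposition is exact; for the non-linear models often used in practice, post hoc attribution methods (e.g. Shapley-value approximations) play the analogous role of exposing each input's contribution to the recommendation.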

Source journal: Counselling & Psychotherapy Research (Psychology, Clinical)
CiteScore: 4.40
Self-citation rate: 12.50%
Annual article count: 80
Journal description: Counselling and Psychotherapy Research is an innovative international peer-reviewed journal dedicated to linking research with practice. Pluralist in orientation, the journal recognises the value of qualitative, quantitative and mixed methods strategies of inquiry and aims to promote high-quality, ethical research that informs and develops counselling and psychotherapy practice. CPR is a journal of the British Association of Counselling and Psychotherapy, promoting reflexive research strongly linked to practice. The journal has its own website: www.cprjournal.com. The aim of this site is to further develop links between counselling and psychotherapy research and practice by offering accessible information about both the specific contents of each issue of CPR, as well as wider developments in counselling and psychotherapy research. The aims are to ensure that research remains relevant to practice, and for practice to continue to inform research development.
Latest articles from this journal:
Clients' Reasons for Dropping Out of Therapy: A Qualitative Study
The Effects of Dialectical Behavioural Therapy (DBT) on Cognitive and Emotional Symptoms of Adult ADHD: A Randomised Pilot Study
Prevalence of Secondary Trauma, Compassion Fatigue and Burnout Among Trauma Therapists in Spain
Instatherapy: A Content Analysis of Psychotherapists' Instagram Posts and User Engagement
Enhancing Self-Esteem: Evaluating the Effects of a Self-Affirmation Intervention Among Indian Adults With Subclinical Depression