eXplainable AI for routine outcome monitoring and clinical feedback

Hans Jacob Westbye, Christian Moltu, Andrew A. McAleavey

Counselling & Psychotherapy Research, 25(1). Published 2024-05-02. DOI: 10.1002/capr.12764. PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/capr.12764
Abstract

Artificial intelligence (AI), specifically machine learning (ML), is adept at identifying patterns and insights in the large volumes of data generated by routine outcome monitoring (ROM) and clinical feedback during treatment. When applied to patient feedback data, AI/ML models can assist clinicians in predicting treatment outcomes. Common reasons for clinician resistance to integrating data-driven decision-support tools into clinical practice include concerns about the reliability, relevance and usefulness of the technology, coupled with perceived conflicts between data-driven recommendations and clinical judgement. While AI/ML-based tools might be precise in guiding treatment decisions, their potential may not be realisable at present due to implementation, acceptability and ethical concerns. In this article, we outline the concept of eXplainable AI (XAI), a potential solution to these concerns. XAI refers to a form of AI designed to articulate its purpose, rationale and decision-making process in a manner that is comprehensible to humans. The key to this approach is that end-users see a clear and understandable pathway from input data to recommendations. We use real Norse Feedback data to present an AI/ML example demonstrating one use case for XAI. Furthermore, we discuss key learning points that we will employ in future XAI implementations.
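The abstract's central idea — that end-users should see a clear pathway from input data to a recommendation — can be illustrated with a toy model. The sketch below does not reproduce the article's Norse Feedback model; all item names, weights and the threshold are hypothetical. It uses a simple linear (logistic) scoring rule precisely because each ROM item's contribution to the final recommendation can be listed for the clinician, which is one common route to explainability.

```python
import math

# Hypothetical per-item weights for an explainable "risk of non-response"
# indicator. These are illustrative values, not the article's model.
WEIGHTS = {
    "sad_affect": 0.9,
    "social_withdrawal": 0.6,
    "alliance_concerns": 1.2,  # e.g. doubts about the therapy relationship
}
BIAS = -1.5       # baseline log-odds (hypothetical)
THRESHOLD = 0.5   # flag cases above this predicted probability

def predict_with_explanation(scores):
    """Return (risk_probability, per-item contributions).

    scores: dict mapping ROM item name -> standardised item score.
    Because the model is linear in its inputs, each item's weighted
    contribution to the log-odds can be shown directly to the end-user.
    """
    contributions = {item: WEIGHTS[item] * scores[item] for item in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return probability, contributions

# Example: one (synthetic) patient's standardised feedback scores.
patient = {"sad_affect": 1.0, "social_withdrawal": 0.5, "alliance_concerns": 1.5}
risk, parts = predict_with_explanation(patient)

print(f"Predicted risk of non-response: {risk:.2f}")
for item, value in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {item}: {value:+.2f}")
if risk > THRESHOLD:
    print("Flag for clinician review (largest contributor listed first).")
```

Running this shows the recommendation alongside a ranked list of which feedback items drove it, so a clinician can weigh the model's reasoning against their own judgement rather than receiving an opaque score.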
Journal introduction:
Counselling and Psychotherapy Research is an innovative international peer-reviewed journal dedicated to linking research with practice. Pluralist in orientation, the journal recognises the value of qualitative, quantitative and mixed-methods strategies of inquiry and aims to promote high-quality, ethical research that informs and develops counselling and psychotherapy practice. CPR is a journal of the British Association of Counselling and Psychotherapy, promoting reflexive research strongly linked to practice. The journal has its own website, www.cprjournal.com, which aims to further develop links between counselling and psychotherapy research and practice by offering accessible information about the contents of each issue of CPR as well as wider developments in counselling and psychotherapy research. The aims are to ensure that research remains relevant to practice and that practice continues to inform research development.