A Reliable Clinical Decision Support System for Posttraumatic Stress Disorder Using Functional Magnetic Resonance Imaging Data

J. Bhattacharya;A. Gupta;M. N. Dretsch;T. S. Denney;G. Deshpande
IEEE Transactions on Artificial Intelligence, vol. 5, no. 11, pp. 5605-5615. DOI: 10.1109/TAI.2024.3411596. Published 2024-06-10.
Citations: 0

Abstract

In recent years, there has been an upsurge in artificial intelligence (AI) systems. Along with efficient performance and predictability, these systems also need to incorporate explainability and interpretability. This can significantly aid clinical decision support by providing explainable predictions to assist clinicians. Explainability generally involves uncovering the key input features important for classification. However, characterizing the uncertainty underlying an AI system's decisions is also an important aspect of interpreting those decisions. This is especially important in clinical decision support systems, given medical-ethics considerations such as nonmaleficence and beneficence. In this study, we develop methods for characterizing the decision certainty of machine learning (ML)-based clinical decision support systems. As an illustrative example, we introduce a framework for ML-based posttraumatic stress disorder (PTSD) diagnostic classification that divides subjects into pure and mixed classes. Accordingly, a clinician can have very high confidence (≥95% probability) in the diagnosis of a subject assigned to the pure PTSD or pure combat-control class. The remaining samples, for which the AI classification tool does not have very high confidence (<95% probability), are grouped into a mixed class. Such a scheme addresses the ethical considerations of nonmaleficence and beneficence, since clinicians can use the AI system to identify subjects whose diagnosis carries a very high degree of confidence (and proceed with treatment accordingly), while referring those in the uncertain/mixed group for further tests. This is a novel approach, in contrast to existing frameworks that aim only to maximize classification accuracy.
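The pure/mixed grouping described in the abstract can be viewed as a post-hoc triage rule over a binary classifier's posterior probabilities. The sketch below illustrates that idea only; the function name, the array layout, and the placement of the 0.95 threshold are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def triage(probs, threshold=0.95):
    """Group subjects by classifier confidence.

    probs: (n_samples, 2) array of posterior probabilities for
           [combat_control, PTSD] from any binary classifier.
    Returns one label per subject: 'pure_ptsd' or 'pure_control'
    when the winning class reaches the threshold, else 'mixed'.
    """
    labels = []
    for p_control, p_ptsd in probs:
        if p_ptsd >= threshold:
            labels.append("pure_ptsd")     # high-confidence PTSD diagnosis
        elif p_control >= threshold:
            labels.append("pure_control")  # high-confidence control
        else:
            labels.append("mixed")         # uncertain: refer for further tests
    return labels

# Example: three subjects with different degrees of posterior certainty
probs = np.array([[0.02, 0.98],   # confident PTSD
                  [0.97, 0.03],   # confident control
                  [0.60, 0.40]])  # neither class reaches 0.95 -> mixed
print(triage(probs))  # ['pure_ptsd', 'pure_control', 'mixed']
```

Because the two posteriors sum to one and the threshold exceeds 0.5, at most one class can reach it, so the branch order does not affect the result; only the mixed group absorbs all genuinely uncertain cases.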