A Reliable Clinical Decision Support System for Posttraumatic Stress Disorder Using Functional Magnetic Resonance Imaging Data
J. Bhattacharya; A. Gupta; M. N. Dretsch; T. S. Denney; G. Deshpande
{"title":"利用功能磁共振成像数据治疗创伤后应激障碍的可靠临床决策支持系统","authors":"J. Bhattacharya;A. Gupta;M. N. Dretsch;T. S. Denney;G. Deshpande","doi":"10.1109/TAI.2024.3411596","DOIUrl":null,"url":null,"abstract":"In recent years, there has been an upsurge in artificial intelligence (AI) systems. These systems, along with efficient performance and predictability, also need to incorporate the power of explainability and interpretability. This can significantly aid clinical decision support by providing explainable predictions to assist clinicians. Explainability generally involves uncovering key input features important for classification. However, characterizing the uncertainty underlying the decisions of the AI system is an important aspect needed for interpreting the decisions. This is especially important in clinical decision support systems, given considerations of medical ethics such as nonmaleficence and beneficence. In this study, we develop methods for characterizing the decision certainty of machine learning (ML)-based clinical decision support systems. As an illustrative example, we introduce a framework for ML-based posttraumatic stress disorder (PTSD) diagnostic classification that classifies the subjects into pure and mixed classes. Accordingly, a clinician can have very high confidence (\n<inline-formula><tex-math>$\\geq$</tex-math></inline-formula>\n95% probability) about the diagnosis of a subject in a pure PTSD or combat control class. Remaining sample points for which the AI classification tool does not have very high confidence (\n<inline-formula><tex-math>$<$</tex-math></inline-formula>\n95% probability) are grouped into a mixed class. Such a scheme will address ethical considerations of nonmaleficence and beneficence since the clinicians can use the AI system to identify those subjects whose diagnosis has a very high degree of confidence (and proceed with treatment accordingly), and refer those in the uncertain/mixed group to further tests. This is a novel approach, in contrast to the existing framework which aims to maximize classification.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 11","pages":"5605-5615"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Reliable Clinical Decision Support System for Posttraumatic Stress Disorder Using Functional Magnetic Resonance Imaging Data\",\"authors\":\"J. Bhattacharya;A. Gupta;M. N. Dretsch;T. S. Denney;G. Deshpande\",\"doi\":\"10.1109/TAI.2024.3411596\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, there has been an upsurge in artificial intelligence (AI) systems. These systems, along with efficient performance and predictability, also need to incorporate the power of explainability and interpretability. This can significantly aid clinical decision support by providing explainable predictions to assist clinicians. Explainability generally involves uncovering key input features important for classification. However, characterizing the uncertainty underlying the decisions of the AI system is an important aspect needed for interpreting the decisions. This is especially important in clinical decision support systems, given considerations of medical ethics such as nonmaleficence and beneficence. In this study, we develop methods for characterizing the decision certainty of machine learning (ML)-based clinical decision support systems. 
As an illustrative example, we introduce a framework for ML-based posttraumatic stress disorder (PTSD) diagnostic classification that classifies the subjects into pure and mixed classes. Accordingly, a clinician can have very high confidence (\\n<inline-formula><tex-math>$\\\\geq$</tex-math></inline-formula>\\n95% probability) about the diagnosis of a subject in a pure PTSD or combat control class. Remaining sample points for which the AI classification tool does not have very high confidence (\\n<inline-formula><tex-math>$<$</tex-math></inline-formula>\\n95% probability) are grouped into a mixed class. Such a scheme will address ethical considerations of nonmaleficence and beneficence since the clinicians can use the AI system to identify those subjects whose diagnosis has a very high degree of confidence (and proceed with treatment accordingly), and refer those in the uncertain/mixed group to further tests. This is a novel approach, in contrast to the existing framework which aims to maximize classification.\",\"PeriodicalId\":73305,\"journal\":{\"name\":\"IEEE transactions on artificial intelligence\",\"volume\":\"5 11\",\"pages\":\"5605-5615\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on artificial intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10552624/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10552624/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In recent years, there has been an upsurge in artificial intelligence (AI) systems. Along with efficient performance and predictability, these systems also need to incorporate explainability and interpretability. This can significantly aid clinical decision support by providing explainable predictions to assist clinicians. Explainability generally involves uncovering the key input features that drive a classification. However, characterizing the uncertainty underlying the decisions of an AI system is an equally important aspect of interpreting those decisions. This is especially important in clinical decision support systems, given medical-ethics considerations such as nonmaleficence and beneficence. In this study, we develop methods for characterizing the decision certainty of machine learning (ML)-based clinical decision support systems. As an illustrative example, we introduce a framework for ML-based posttraumatic stress disorder (PTSD) diagnostic classification that assigns subjects to pure and mixed classes. Accordingly, a clinician can have very high confidence ($\geq$ 95% probability) in the diagnosis of a subject assigned to the pure PTSD or pure combat-control class. The remaining sample points, for which the AI classification tool does not have very high confidence ($<$ 95% probability), are grouped into a mixed class. Such a scheme addresses the ethical considerations of nonmaleficence and beneficence: clinicians can use the AI system to identify subjects whose diagnosis carries a very high degree of confidence (and proceed with treatment accordingly), and refer those in the uncertain/mixed group for further tests. This is a novel approach, in contrast to existing frameworks that aim only to maximize classification accuracy.
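As a rough illustration of the confidence-gating scheme the abstract describes, the sketch below thresholds a classifier's posterior probabilities at 95% and splits predictions into pure and mixed classes. It is a minimal, assumption-laden example: the synthetic features, the logistic-regression model, and all variable names are placeholders, not the paper's actual fMRI pipeline or classifier.

```python
# Minimal sketch of confidence-gated classification, assuming a generic
# probabilistic classifier. Synthetic data stands in for fMRI-derived
# features; none of this reproduces the paper's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for fMRI-derived feature vectors.
n_subjects, n_features = 200, 50
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, size=n_subjects)  # 0 = combat control, 1 = PTSD

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Posterior probability of each class for held-out subjects.
proba = clf.predict_proba(X_test)   # shape (n_test, 2)
top_prob = proba.max(axis=1)        # confidence in the predicted class
pred = proba.argmax(axis=1)

THRESHOLD = 0.95  # the >= 95% certainty level quoted in the abstract

# High-confidence predictions go to a pure class; everything else is "mixed".
labels = np.where(
    top_prob >= THRESHOLD,
    np.where(pred == 1, "pure PTSD", "pure control"),
    "mixed",
)

for name in ("pure PTSD", "pure control", "mixed"):
    print(name, int((labels == name).sum()))
```

The design point is simply that the mixed class collects every subject the model cannot call with at least 95% certainty, so those cases can be routed to further testing rather than treated on the basis of an uncertain label.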