Megan Su, Stephanie Hu, Hong Xiong, Elias Baedorf Kassis, Li-Wei H Lehman
Sepsis is a life-threatening condition that occurs when the body's normal response to an infection becomes dysregulated. A key part of managing sepsis involves the administration of intravenous fluids and vasopressors. In this work, we explore the application of G-Net, a deep sequential modeling framework for g-computation, to predict outcomes under counterfactual fluid treatment strategies in a real-world cohort of sepsis patients. Using observational data collected from the intensive care unit (ICU), we evaluate multiple deep learning implementations of G-Net and compare their predictive performance with that of linear models in forecasting patient outcomes and trajectories over time under the observational treatment regime. We then demonstrate that G-Net can generate counterfactual predictions of covariate trajectories that align with clinical expectations across various fluid-limiting regimes. These results illustrate the potential clinical utility of G-Net in predicting counterfactual treatment outcomes, aiding clinicians in informed decision-making for sepsis patients in the ICU.
Counterfactual Sepsis Outcome Prediction Under Dynamic and Time-Varying Treatment Regimes. AMIA Joint Summits on Translational Science Proceedings, 2024, pp. 285-294. Published 2024-05-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141800/pdf/
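To make the g-computation idea above concrete, here is a minimal Python sketch of the simulation loop a G-Net-style model performs: covariates are rolled forward in time while treatment is forced to follow a fluid-limiting rule instead of the observed policy, and trajectories are averaged over Monte Carlo draws. The toy transition model and the `fluid_limiting_regime` rule are illustrative stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of the g-computation simulation loop behind a G-Net-style
# counterfactual prediction: roll covariates forward in time while forcing
# treatment to follow a fluid-limiting regime instead of the observed policy.
import numpy as np

rng = np.random.default_rng(0)

def simulate_next_covariates(history, treatment):
    """Placeholder for a fitted sequential model (e.g., an RNN) that returns a
    draw from p(L_t | history, A_t). Here: a toy linear-Gaussian transition."""
    last = history[-1]
    mean = 0.9 * last + 0.5 * treatment
    return mean + rng.normal(scale=0.1, size=last.shape)

def fluid_limiting_regime(history, cap=1.0):
    """Hypothetical dynamic regime: give fluids proportional to the first
    covariate (e.g., a hypotension marker), but never exceed `cap`."""
    return float(np.clip(history[-1][0], 0.0, cap))

def g_computation(initial_covariates, horizon=24, n_mc=100):
    """Monte Carlo estimate of the counterfactual covariate trajectory under
    the specified regime, averaged over simulated draws."""
    trajectories = []
    for _ in range(n_mc):
        history = [np.asarray(initial_covariates, dtype=float)]
        for _t in range(horizon):
            a_t = fluid_limiting_regime(history)
            history.append(simulate_next_covariates(history, a_t))
        trajectories.append(np.stack(history))
    return np.mean(trajectories, axis=0)  # expected trajectory under the regime

print(g_computation(initial_covariates=[1.2, 0.3], horizon=6, n_mc=50))
```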
Amir I Mina, Jessi U Espino, Allison M Bradley, Parthasarathy D Thirumala, Kayhan Batmanghelich, Shyam Visweswaran
Monitoring cerebral neuronal activity via electroencephalography (EEG) during surgery can detect ischemia, a precursor to stroke. However, current neurophysiologist-based monitoring is prone to error. In this study, we evaluated machine learning (ML) for efficient and accurate ischemia detection. We trained supervised ML models on a dataset of 802 patients with intraoperative ischemia labels and evaluated them on an independent validation dataset of 30 patients with refined labels from five neurophysiologists. Our results show moderate-to-substantial agreement between neurophysiologists, with Cohen's kappa values between 0.59 and 0.74. Neurophysiologist performance ranged from 58-93% for sensitivity and 83-96% for specificity, while ML models demonstrated comparable ranges of 63-89% and 85-96%. Random Forest (RF), LightGBM (LGBM), and XGBoost RF achieved area under the receiver operating characteristic curve (AUROC) values of 0.92-0.93 and area under the precision-recall curve (AUPRC) values of 0.79-0.83. ML has the potential to improve intraoperative monitoring, enhancing patient safety and reducing costs.
Detecting Cerebral Ischemia From Electroencephalography During Carotid Endarterectomy Using Machine Learning. AMIA Joint Summits on Translational Science Proceedings, 2024, pp. 613-622. Published 2024-05-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141851/pdf/
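As an illustration of the evaluation protocol described above (not the authors' pipeline), the following sketch trains a random forest on synthetic stand-ins for EEG-derived features and reports AUROC and AUPRC, the two summary metrics used in the abstract.

```python
# Illustrative sketch: train a random forest on synthetic stand-ins for
# EEG-derived features and report AUROC / AUPRC on a held-out split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20))          # placeholder EEG feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=1000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]   # predicted probability of ischemia

print(f"AUROC: {roc_auc_score(y_te, scores):.3f}")
print(f"AUPRC: {average_precision_score(y_te, scores):.3f}")
```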
Ying Liu, Genevieve B Melton, Rui Zhang
Acronyms, abbreviations, and symbols play a significant role in clinical notes. Acronym and symbol sense disambiguation are crucial natural language processing (NLP) tasks that ensure the clarity and consistency of clinical notes and downstream NLP processing. Previous studies using traditional machine learning methods have been relatively successful in tackling this issue. In our research, we evaluated large language models (LLMs), including ChatGPT 3.5 and 4, other open LLMs, and BERT-based models, across three NLP tasks: acronym and symbol sense disambiguation, semantic similarity, and relatedness. Our findings emphasize ChatGPT's remarkable ability to distinguish between senses with minimal or zero-shot training. Additionally, the open-source LLM Mixtral-8x7B exhibited high accuracy for acronyms with fewer senses and moderate accuracy for symbol senses. BERT-based models outperformed previous machine learning approaches, achieving an accuracy of over 95% and showcasing their effectiveness in addressing the challenge of acronym and symbol sense disambiguation. Furthermore, ChatGPT exhibited a strong correlation, surpassing 70%, with human gold standards when evaluating similarity and relatedness.
Exploring Large Language Models for Acronym, Symbol Sense Disambiguation, and Semantic Similarity and Relatedness Assessment. AMIA Joint Summits on Translational Science Proceedings, 2024, pp. 324-333. Published 2024-05-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141821/pdf/
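A minimal sketch of zero-shot acronym sense disambiguation with a chat LLM, in the spirit of the study above: the model is shown the sentence and a fixed sense inventory and asked to return exactly one sense. The `SENSE_INVENTORY`, the prompt wording, and the `call_llm` placeholder are assumptions for illustration, not the authors' prompts or API.

```python
# Hypothetical sketch of zero-shot acronym sense disambiguation with a chat
# LLM: the model is asked to pick one sense from a fixed inventory. The
# `call_llm` function is a placeholder for the actual API client.
SENSE_INVENTORY = {
    "RA": ["rheumatoid arthritis", "right atrium", "room air"],  # example senses
}

def build_prompt(acronym: str, sentence: str) -> str:
    senses = "; ".join(SENSE_INVENTORY[acronym])
    return (
        f"In the clinical note sentence below, the acronym '{acronym}' appears.\n"
        f"Sentence: {sentence}\n"
        f"Candidate senses: {senses}\n"
        "Answer with exactly one candidate sense and nothing else."
    )

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to ChatGPT or an open LLM and return its reply."""
    raise NotImplementedError

def disambiguate(acronym: str, sentence: str) -> str:
    reply = call_llm(build_prompt(acronym, sentence)).strip().lower()
    # Fall back to the first sense if the reply does not match the inventory.
    for sense in SENSE_INVENTORY[acronym]:
        if sense in reply:
            return sense
    return SENSE_INVENTORY[acronym][0]

print(build_prompt("RA", "Patient is on 2L O2, previously on RA."))
```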
Shizhuo Mu, Jingxuan Bao, Hanxiang Xu, Manu Shivakumar, Shu Yang, Xia Ning, Dokyoon Kim, Christos Davatzikos, Haochang Shou, Li Shen
Neurodegenerative processes are increasingly recognized as potential causative factors in Alzheimer's disease (AD) pathogenesis. While many studies have leveraged mediation analysis models to elucidate the underlying mechanisms linking genetic variants to AD diagnostic outcomes, the majority have focused on regional brain measures as mediators, thereby compromising the granularity of the imaging data. In our investigation, using imaging genetics data from a landmark AD cohort, we contrasted region-based and voxel-based brain measurements as imaging endophenotypes and examined their roles in mediating genetic effects on AD outcomes. Our findings underscore that using voxel-based morphometry offers enhanced statistical power. Moreover, we delineate specific mediation pathways between SNPs, brain volume, and AD outcomes, shedding light on the intricate relationships among these variables.
Multivariate mediation analysis with voxel-based morphometry revealed the neurodegeneration pathways from genetic variants to Alzheimer's Disease. AMIA Joint Summits on Translational Science Proceedings, 2024, pp. 344-353. Published 2024-05-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141831/pdf/
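For readers unfamiliar with mediation analysis, a single-mediator sketch (SNP to brain volume to AD outcome) using the product-of-coefficients estimator is shown below; the paper's multivariate, voxel-level model is considerably richer, so treat this only as the basic building block it generalizes.

```python
# Minimal single-mediator sketch (SNP -> brain volume -> AD outcome) using the
# product-of-coefficients estimator on simulated data; the paper's multivariate
# voxel-level mediation model is considerably richer than this illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
snp = rng.integers(0, 3, size=n).astype(float)             # additive genotype coding 0/1/2
volume = 1.0 - 0.3 * snp + rng.normal(scale=0.5, size=n)   # mediator: brain volume
outcome = 2.0 - 0.8 * volume + 0.1 * snp + rng.normal(scale=0.5, size=n)

# Path a: genotype -> mediator
a_fit = sm.OLS(volume, sm.add_constant(snp)).fit()
a = a_fit.params[1]

# Path b: mediator -> outcome, adjusting for genotype (which gives the direct effect c')
b_exog = sm.add_constant(np.column_stack([volume, snp]))
b_fit = sm.OLS(outcome, b_exog).fit()
b = b_fit.params[1]

print(f"indirect (mediated) effect a*b = {a * b:.3f}")
print(f"direct effect c' = {b_fit.params[2]:.3f}")
```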
Gondy Leroy, David Kauchak, Philip Harber, Ankit Pal, Akash Shukla
Text and audio simplification to increase information comprehension is important in healthcare. With the introduction of ChatGPT, evaluation of its simplification performance is needed. We provide a systematic comparison of human- and ChatGPT-simplified texts using fourteen metrics indicative of text difficulty. We briefly introduce our online editor, where these simplification tools, including ChatGPT, are available. We scored twelve corpora using our metrics: six text corpora, one audio corpus, and five ChatGPT-simplified corpora (using five different prompts). We then compare these corpora with texts simplified and verified in a prior user study. Finally, a medical domain expert evaluated the user-study texts and five new ChatGPT-simplified versions. We found that the simple corpora show higher similarity with the human-simplified texts. ChatGPT simplification moves the metrics in the right direction. The medical domain expert's evaluation showed a preference for the ChatGPT style, but the text itself was rated lower for content retention.
Text and Audio Simplification: Human vs. ChatGPT. AMIA Joint Summits on Translational Science Proceedings, 2024, pp. 295-304. Published 2024-05-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141852/pdf/
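As a concrete example of the kind of surface difficulty metrics compared above, the sketch below computes average sentence length and average word length for an original and a simplified sentence; the paper's fourteen metrics are not reproduced here, and the two example sentences are invented for illustration.

```python
# Illustrative sketch of two simple surface difficulty metrics (average
# sentence length and average word length) of the kind compared in the study.
import re

def surface_metrics(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
    }

original = "Hypertension is a chronic elevation of arterial blood pressure requiring management."
simplified = "High blood pressure lasts a long time. It needs care."

print("original:  ", surface_metrics(original))
print("simplified:", surface_metrics(simplified))
```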
Rishivardhan Krishnamoorthy, Vishal Nagarajan, Hayden Pour, Supreeth P Shashikumar, Aaron Boussina, Emilia Farcas, Shamim Nemati, Christopher S Josef
Social Determinants of Health (SDoH) have been shown to have profound impacts on health-related outcomes, yet these data suffer from high rates of missingness in electronic health records (EHR). Moreover, limited English proficiency in the United States can be a barrier to communication with health care providers. In this study, we designed a multilingual conversational agent capable of conducting SDoH surveys for use in healthcare environments. The agent asks questions in the patient's native language, translates responses into English, and subsequently maps these responses via a large language model (LLM) to structured options in an SDoH survey. This tool can be extended to a variety of survey instruments in either hospital or home settings, enabling the extraction of structured insights from free-text answers. The proposed approach heralds a shift towards more inclusive and insightful data collection, marking a significant stride in SDoH data enrichment for optimizing health outcome predictions and interventions.
Voice-Enabled Response Analysis Agent (VERAA): Leveraging Large Language Models to Map Voice Responses in SDoH Survey. AMIA Joint Summits on Translational Science Proceedings, 2024, pp. 258-265. Published 2024-05-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141834/pdf/
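A minimal sketch of the final mapping step described above: a translated free-text answer is matched to one of the structured options of an SDoH survey item. Speech capture and translation are assumed to happen upstream and are omitted here; the survey item, option list, and `call_llm` placeholder are illustrative assumptions, not VERAA's actual prompts or services.

```python
# Hypothetical sketch of mapping a translated free-text answer onto the
# structured options of an SDoH survey item via an LLM prompt.
FOOD_INSECURITY_OPTIONS = ["Never true", "Sometimes true", "Often true"]

def build_mapping_prompt(english_answer: str) -> str:
    return (
        "Survey item: 'Within the past 12 months, the food we bought just didn't "
        "last and we didn't have money to get more.'\n"
        f"Patient answer: {english_answer}\n"
        f"Options: {FOOD_INSECURITY_OPTIONS}\n"
        "Return the single option that best matches the answer."
    )

def call_llm(prompt: str) -> str:
    """Placeholder for the LLM used to map free text onto survey options."""
    raise NotImplementedError

def map_response(english_answer: str) -> str:
    reply = call_llm(build_mapping_prompt(english_answer)).strip()
    # Accept only replies that are valid options; otherwise flag for human review.
    return reply if reply in FOOD_INSECURITY_OPTIONS else "NEEDS_REVIEW"

print(build_mapping_prompt("We ran out of food a few times this year."))
```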
Prosanta Barai, Gondy Leroy, Prakash Bisht, Joshua M Rothman, Sumi Lee, Jennifer Andrews, Sydney A Rice, Arif Ahmed
Large Language Models (LLMs) have demonstrated immense potential in artificial intelligence across various domains, including healthcare. However, their efficacy is hindered by the need for high-quality labeled data, which is often expensive and time-consuming to create, particularly in low-resource domains like healthcare. To address these challenges, we propose a crowdsourcing (CS) framework enriched with quality control measures at the pre-, real-time-, and post-data-gathering stages. Our study evaluated the effectiveness of enhancing data quality through its impact on an LLM (Bio-BERT) for predicting autism-related symptoms. The results show that real-time quality control improves data quality by 19% compared to pre-data-gathering quality control. Fine-tuning Bio-BERT on the crowdsourced data generally increased recall compared to the Bio-BERT baseline but lowered precision. Our findings highlight the potential of crowdsourcing and quality control in resource-constrained environments and offer insights into optimizing healthcare LLMs for informed decision-making and improved patient care.
Crowdsourcing with Enhanced Data Quality Assurance: An Efficient Approach to Mitigate Resource Scarcity Challenges in Training Large Language Models for Healthcare. AMIA Joint Summits on Translational Science Proceedings, 2024, pp. 75-84. Published 2024-05-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141838/pdf/
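One way to picture a real-time quality gate of the kind the framework applies is sketched below: a crowd submission is accepted only if it passes embedded attention checks and agrees sufficiently with a small set of gold-labeled items. The thresholds, field names, and data layout are arbitrary examples, not the paper's actual rules.

```python
# Illustrative sketch of a real-time crowdsourcing quality gate: reject
# submissions that fail attention-check items or disagree too often with a
# small set of gold-labeled examples.
def passes_quality_gate(submission: dict,
                        gold_labels: dict,
                        min_gold_agreement: float = 0.8) -> bool:
    # 1) Attention checks: every embedded check item must be answered correctly.
    if not all(submission["attention_checks"].values()):
        return False
    # 2) Gold agreement: compare the worker's labels against known gold labels.
    shared = [k for k in submission["labels"] if k in gold_labels]
    if not shared:
        return True  # nothing to compare against yet
    agreement = sum(submission["labels"][k] == gold_labels[k] for k in shared) / len(shared)
    return agreement >= min_gold_agreement

example = {
    "attention_checks": {"check_1": True, "check_2": True},
    "labels": {"item_1": "symptom", "item_2": "not_symptom", "item_3": "symptom"},
}
gold = {"item_1": "symptom", "item_3": "symptom"}
print(passes_quality_gate(example, gold))  # True
```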
Liubov Nedoshivina, Anisa Halimi, Joao Bettencourt-Silva, Stefano Braghin
The volume of information, and in particular personal information, generated each day is increasing at a staggering rate. The ability to leverage such information depends greatly on being able to satisfy the many compliance and privacy regulations appearing all over the world. We present READI, a utility-preserving framework for unstructured document de-identification. READI leverages Named Entity Recognition and Relation Extraction technology to improve the quality of entity detection, thus improving the overall quality of the data de-identification process. In this proof-of-concept study, we evaluate the proposed approach on two different datasets and compare it with existing state-of-the-art approaches. We show that the Relation Extraction-based Approach for De-Identification (READI) notably reduces the number of false positives and improves the utility of the de-identified text.
Pragmatic De-Identification of Cross-Domain Unstructured Documents: A Utility-Preserving Approach with Relation Extraction Filtering. AMIA Joint Summits on Translational Science Proceedings, 2024, pp. 85-94. Published 2024-05-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141830/pdf/
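The sketch below shows one possible way to realize the filtering idea in the abstract: candidate entities from NER are redacted only when relation extraction links them to a person, which suppresses false positives. Both `run_ner` and `run_relation_extraction` are placeholders for generic components, not READI's actual modules or decision rules.

```python
# Sketch of relation-extraction filtering for de-identification: redact an
# NER-detected entity only when a relation links it to a person, reducing
# false-positive redactions of non-identifying terms.
from typing import List, Tuple

def run_ner(text: str) -> List[Tuple[str, str]]:
    """Placeholder NER component: return (span, entity_type) pairs."""
    raise NotImplementedError

def run_relation_extraction(text: str) -> List[Tuple[str, str, str]]:
    """Placeholder RE component: return (head_span, relation, tail_span) triples."""
    raise NotImplementedError

def deidentify(text: str, mask: str = "[REDACTED]") -> str:
    entities = run_ner(text)
    relations = run_relation_extraction(text)
    related_spans = {h for h, _, _ in relations} | {t for _, _, t in relations}
    for span, ent_type in entities:
        # Redact names/locations only when RE confirms they participate in a relation.
        if ent_type in {"PERSON", "LOCATION"} and span in related_spans:
            text = text.replace(span, mask)
    return text
```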
Changho Han, Dukyong Yoon
Coronary artery calcium (CAC) as assessed by computed tomography (CT) is a marker of subclinical coronary atherosclerosis. However, routine application of CAC scoring via CT is limited by high cost and limited accessibility. An electrocardiogram (ECG) is a widely used, sensitive, cost-effective, non-invasive, and radiation-free diagnostic tool. If artificial intelligence (AI)-enabled ECG analysis could opportunistically detect CAC, it would be particularly beneficial for asymptomatic or subclinical populations, serving as an initial screening measure that paves the way for confirmatory tests and preventive strategies earlier than conventional practice allows. With this aim, we developed an AI-enabled ECG framework that not only predicts a CAC score ≥400 but also offers a visual explanation of the associated potential morphological ECG changes, and we tested its efficacy on individuals undergoing health checkups, a group primarily comprising healthy or subclinical individuals. To ensure broader applicability, we performed external validation at a separate institution.
An Explainable Artificial Intelligence-enabled ECG Framework for the Prediction of Subclinical Coronary Atherosclerosis. AMIA Joint Summits on Translational Science Proceedings, 2024, pp. 535-544. Published 2024-05-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141849/pdf/
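As one plausible backbone for such a model, the sketch below defines a small 1D convolutional network that maps a 12-lead ECG to a logit for CAC ≥400. The layer sizes and the assumed 10-second, 500 Hz input are illustrative choices, not the authors' architecture, and the explainability component is not shown.

```python
# A small 1D-CNN classifier as a hypothetical backbone for predicting CAC >= 400
# from a 12-lead ECG; layer sizes and input shape are assumptions for illustration.
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    def __init__(self, n_leads: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)  # logit for P(CAC >= 400)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).squeeze(-1))

model = ECGClassifier()
ecg = torch.randn(4, 12, 5000)      # batch of 4 ECGs: 12 leads, 10 s at 500 Hz
prob = torch.sigmoid(model(ecg))
print(prob.shape)                   # torch.Size([4, 1])
```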
Abheet Singh Sachdeva, Avery Bell, Jacob Furst, Dorothy A Kozlowski, Sonya Crabtree-Nelson, Daniela Raicu
Research studies have identified an underappreciated relationship between intimate partner violence (IPV) and symptoms of traumatic brain injury (TBI) among survivors. In these IPV survivors, resulting TBIs are not always identified during emergency room visits, which demonstrates a need for a prescreening tool that identifies IPV survivors who should receive TBI screening. We present a model that measures similarity to clinical reports from confirmed TBI cases to identify whether a patient should be screened for TBI. This is done through an ensemble of three supervised learning classifiers that operate in two distinct feature spaces. Individual classifiers are trained on clinical reports and then combined into an ensemble that requires only one positive label to indicate that a patient should be screened for TBI.
A Traumatic Brain Injury Prescreening Tool for Intimate Partner Violence Patients Using Initial Clinical Reports and Machine Learning. AMIA Joint Summits on Translational Science Proceedings, 2024, pp. 401-408. Published 2024-05-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141795/pdf/
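A minimal sketch of the OR-style ensemble described above, assuming TF-IDF and bag-of-words as the two feature spaces and generic classifiers as stand-ins for the paper's models: a patient is flagged for TBI screening if any one of the three classifiers predicts positive.

```python
# Illustrative sketch of an OR-voting ensemble: three classifiers trained over
# two text feature spaces, with a single positive vote sufficing to flag a
# patient for TBI screening. Feature spaces, models, and data are stand-ins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

reports = [
    "loss of consciousness after assault, headache and dizziness",
    "laceration to forearm, no head injury reported",
    "blurred vision and memory gaps following blow to the head",
    "bruising on arms, patient alert and oriented",
]
labels = np.array([1, 0, 1, 0])  # 1 = report from a confirmed TBI case

tfidf = TfidfVectorizer().fit(reports)
bow = CountVectorizer().fit(reports)

clf_a = LogisticRegression().fit(tfidf.transform(reports), labels)
clf_b = LogisticRegression().fit(bow.transform(reports), labels)
clf_c = MultinomialNB().fit(bow.transform(reports), labels)

def should_screen(report: str) -> bool:
    votes = [
        clf_a.predict(tfidf.transform([report]))[0],
        clf_b.predict(bow.transform([report]))[0],
        clf_c.predict(bow.transform([report]))[0],
    ]
    return any(v == 1 for v in votes)  # one positive vote is enough to screen

print(should_screen("patient reports headaches and confusion after head trauma"))
```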