Analyzing Dementia Caregivers' Experiences on Twitter: A Term-Weighted Topic Modeling Approach
Yanbo Feng, Bojian Hou, Ari Klein, Karen O'Connor, Jiong Chen, Andrés Mondragón, Shu Yang, Graciela Gonzalez-Hernandez, Li Shen
Dementia profoundly impacts patients and their families, making it essential to understand the experiences and concerns of family caregivers for enhanced support and care. This study introduces a novel approach to analyzing tweets from individuals whose family members suffer from dementia. We preprocessed our collected Twitter (now X) data using advanced natural language processing techniques and enhanced a conventional topic model, the Gibbs Sampling Dirichlet Multinomial Mixture Model (GSDMM), with term-weighting strategies to improve topic clarity. This enhanced approach enabled the identification of key topics among dementia-affected families, yielding semantically rich and contextually coherent topics and demonstrating that our method outperforms the state-of-the-art BERTopic model in clarity and consistency. Leveraging ChatGPT-4 alongside two human experts, we uncovered the multifaceted challenges faced by family caregivers. This work aims to provide healthcare professionals, researchers, and support organizations with a valuable tool to better understand and address the needs of family caregivers.
{"title":"Analyzing Dementia Caregivers' Experiences on Twitter: A Term-Weighted Topic Modeling Approach.","authors":"Yanbo Feng, Bojian Hou, Ari Klein, Karen O'Connor, Jiong Chen, Andrées Mondragóon, Shu Yang, Graciela Gonzalez-Hernandez, Li Shen","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Dementia profoundly impacts patients and their families, making it essential to understand the experiences and concerns offamily caregivers for enhanced support and care. This study introduces a novel approach to analyzing tweets from individuals whose family members suffer from dementia. We preprocessed our collected Twitter (now X) data using advanced natural language processing techniques and enhanced conventional topic model-Gibbs Sampling Dirichlet Multinomial Mixture Model (GSDMM)-with term-weighting strategies to improve topic clarity. This enhanced approach enabled the identification of key topics among dementia-affected families, offering semantically rich and contextually coherent topics, demonstrating that our method outperforms the state-of-the-art BERTopic model in clarity and consistency. Leveraging ChatGPT 4 alongside two human experts, we uncovered the multifaceted challenges faced by family caregivers. This work aims to provide healthcare professionals, researchers, and support organizations with a valuable tool to better understand and address the needs offamily caregivers.</p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2024 ","pages":"407-416"},"PeriodicalIF":0.0,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12099380/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Optimizing LLM Use in Healthcare: Identifying Patient Questions in MyChart Messages
Akhila Chekuri, Armaan S Johal, Matthew R Allen, John W Ayers, Michael Hogarth, Emilia Farcas
The volume of patient-provider messages is on the rise, and Large Language Models (LLMs) can potentially streamline the clinical messaging process, but their success hinges on triaging messages they can optimally address. In this study, we analyzed Electronic Health Records with over 4 million messages exchanged between patients and providers to characterize the utility of using LLMs for messages containing knowledge questions. We implemented a rule-based Syntactic Question Detector as a triage tool, and we evaluated it on 500 messages. The interrater reliability metrics and comparison with LLMs show the difficulty of detecting questions due to the informal text and implicit requests. Our results show that 25% of MyChart messages with questions do not have a response from the clinical team. This paper provides insights into the challenges of real-world data, highlights the importance and non-triviality of detecting questions, and suggests a pipeline for LLM use in healthcare.
{"title":"Towards Optimizing LLM Use in Healthcare: Identifying Patient Questions in MyChart Messages.","authors":"Akhila Chekuri, Armaan S Johal, Matthew R Allen, John W Ayers, Michael Hogarth, Emilia Farcas","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The volume of patient-provider messages is on the rise, and Large Language Models (LLMs) can potentially streamline the clinical messaging process, but their success hinges on triaging messages they can optimally address. In this study, we analyzed Electronic Health Records with over 4 million messages exchanged between patients and providers to characterize the utility of using LLMs for messages containing knowledge questions. We implemented a rule-based Syntactic Question Detector as a triage tool, and we evaluated it on 500 messages. The interrater reliability metrics and comparison with LLMs show the difficulty of detecting questions due to the informal text and implicit requests. Our results show that 25% of MyChart messages with questions do not have a response from the clinical team. This paper provides insights into the challenges of real-world data, highlights the importance and non-triviality of detecting questions, and suggests a pipeline for LLM use in healthcare.</p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2024 ","pages":"232-241"},"PeriodicalIF":0.0,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12099336/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Structured Knowledge Base Enhances Effective Use of Large Language Models for Metadata Curation
Sowmya S Sundaram, Benjamin Solomon, Avani Khatri, Anisha Laumas, Purvesh Khatri, Mark A Musen
Metadata play a crucial role in ensuring the findability, accessibility, interoperability, and reusability of datasets. This paper investigates the potential of large language models (LLMs), specifically GPT-4, to improve adherence to metadata standards in existing datasets. We conducted experiments on 200 random data records describing human samples relating to lung cancer from the NCBI BioSample repository, evaluating GPT-4's ability to suggest edits for adherence to metadata standards. We computed the adherence accuracy of field name-field value pairs through a peer review process, and we observed a marginal average improvement in adherence to the standard data dictionary from 79% to 80% when using GPT-4. We then prompted GPT-4 with domain information in the form of the textual descriptions of CEDAR metadata templates and recorded a statistically significant improvement to 97% from 79% (p<0.01). These results indicate that LLMs show promise for use in automated metadata curation when integrated with a structured knowledge base, though they may struggle when unaided.
{"title":"Structured Knowledge Base Enhances Effective Use of Large Language Models for Metadata Curation.","authors":"Sowmya S Sundaram, Benjamin Solomon, Avani Khatri, Anisha Laumas, Purvesh Khatri, Mark A Musen","doi":"","DOIUrl":"","url":null,"abstract":"<p><p><i>Metadata play a crucial role in ensuring the findability, accessibility, interoperability, and reusability of datasets. This paper investigates the potential of large language models (LLMs), specifically GPT-4, to improve adherence to metadata standards in existing datasets. We conducted experiments on 200 random data records describing human samples relating to lung cancer from the NCBI BioSample repository, evaluating GPT-4's ability to suggest edits for adherence to metadata standards. We computed the adherence accuracy of</i> field name-field value <i>pairs through a peer review process, and we observed a marginal average improvement in adherence to the standard data dictionary from 79% to 80% when using GPT-4. We then prompted GPT-4 with domain information in the form of the textual descriptions of CEDAR metadata templates and recorded a statistically significant improvement to 97% from 79% (p<0.01). These results indicate that LLMs show promise for use in automated metadata curation when integrated with a structured knowledge base, though they may struggle when unaided.</i></p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2024 ","pages":"1050-1058"},"PeriodicalIF":0.0,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12099408/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the Performance of Large Language Models for Named Entity Recognition in Ophthalmology Clinical Free-Text Notes
Iyad Majid, Vaibhav Mishra, Rohith Ravindranath, Sophia Y Wang
This study compared large language models (LLMs) and Bidirectional Encoder Representations from Transformers (BERT) models in identifying medication names, routes, and frequencies from publicly available free-text ophthalmology progress notes of 480 patients. A total of 5,520 lines of annotated text were divided into training (N=3,864), validation (N=1,104), and test (N=552) sets. We evaluated ChatGPT-3.5, ChatGPT-4, PaLM 2, and Gemini on their ability to identify these medication entities, and we fine-tuned BERT, BioBERT, ClinicalBERT, DistilBERT, and RoBERTa for the same task using the training set. On the test set, GPT-4 achieved the best performance (macro-averaged F1 0.962); among the BERT models, BioBERT performed best (macro-averaged F1 0.875). Modern LLMs outperformed BERT models even in the highly domain-specific task of identifying ophthalmic medication information from progress notes, showcasing the potential of LLMs for medical named entity recognition to enhance patient care.
{"title":"Evaluating the Performance of Large Language Models for Named Entity Recognition in Ophthalmology Clinical Free-Text Notes.","authors":"Iyad Majid, Vaibhav Mishra, Rohith Ravindranath, Sophia Y Wang","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>This study compared large language models (LLMs) and Bidirectional Encoder Representations from Transformers (BERT) models in identifying medication names, routes, and frequencies from publicly available free-text ophthalmology progress notes of 480 patients. 5,520 lines of annotated text were divided into train (N=3,864), validation (N=1,104), and test sets (N=552). We evaluated ChatGPT-3.5, ChatGPT-4, PaLM 2, and Gemini to identify these medication entities. We fine-tuned BERT, BioBERT, ClinicalBERT, DistilBERT, and RoBERTa for the same task using the training set. On the test set, GPT-4 achieved the best performance (macro-averaged F1 0.962). Among the BERT models, BioBERT achieved the best performance (macro-averaged F1 0.875). Modern LLMs outperformed BERT models even in the highly domain-specific task of identifying ophthalmic medication information from progress notes, showcasing the potential of LLMs for medical named entity recognition to enhance patient care.</p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2024 ","pages":"778-787"},"PeriodicalIF":0.0,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12099357/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging Cluster Causal Diagrams for Determining Causal Effects in Medicine
Tara V Anand, George Hripcsak
Causal inference, the task of estimating the causal effect of an exposure or interventional variable on an outcome from an observational dataset, requires precise and rigorous methods based on assumptions about the system under study. Such assumptions can be articulated as a causal diagram; however, the use of this technique in medicine is uncommon due to challenges in constructing causal diagrams in high-dimensional settings. The recent introduction of cluster causal diagrams (C-DAGs) promises to ease diagram construction by allowing the representation of some unknown or partially defined relationships. We evaluate the practical application of C-DAGs in simulated medical contexts, estimating causal effects under varying sets of assumptions determined by both causal diagrams and C-DAGs and comparing the results. Our findings show empirically similar results, with little discrepancy between causal effect sizes or variance across experimental runs, although estimation and efficiency challenges remain to be explored.
{"title":"Leveraging Cluster Causal Diagrams for Determining Causal Effects in Medicine.","authors":"Tara V Anand, George Hripcsak","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Causal inference, or the task of estimating the causal effect of an exposure or interventional variable on an outcome from an observational dataset, requires precise and rigorous methods, based on assumptions about the system under study. Such assumptions can be articulated as a causal diagram, however use of this technique in medicine is uncommon due to challenges in causal diagram construction in high-dimensional settings. Recent introduction of cluster causal diagrams or C-DAGs promise to ease the task of diagram construction by allowing for the representation of some unknown or partially defined relationships. We evaluate the practical application of C-DAGs in simulated medical contexts. We estimate causal effects under varying sets of assumptions, determined by both causal diagrams and C-DAGs and compare our results. Our findings show empirically similar results, with little discrepancy between causal effect sizes or variance across experimental runs, although estimation and efficiency challenges remain to be explored.</p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2024 ","pages":"134-141"},"PeriodicalIF":0.0,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12099438/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling Precision Feedback Knowledge for Healthcare Professional Learning and Quality Improvement
Zach Landis-Lewis, Yidan Cao, Hana Chung, Peter Boisvert, Anjana Deep Renji, Patrick Galante, Ayshwarya Jagadeesan, Farid Seifi, Allison Janda, Nirav Shah, Andrew Krumm, Allen Flynn
Healthcare providers learn continuously, but better support for provider learning is needed as new biomedical knowledge is produced at an increasing rate alongside widespread use of EHR data for clinical performance measurement. Precision feedback is an approach to improve support for provider learning by prioritizing coaching and appreciation messages based on each message's motivational potential for a specific recipient. We developed a Precision Feedback Knowledge Base as an open resource to support precision feedback systems, containing knowledge models that hold potential as key infrastructure for learning health systems. We describe the design and development of the Precision Feedback Knowledge Base, as well as its key components, including quality measures, feedback message templates, causal pathway models, signal detectors, and prioritization algorithms. Presently, the knowledge base is implemented in a national-scale quality improvement consortium for anesthesia care, to enhance provider feedback email messages.
{"title":"Modeling Precision Feedback Knowledge for Healthcare Professional Learning and Quality Improvement.","authors":"Zach Landis-Lewis, Yidan Cao, Hana Chung, Peter Boisvert, Anjana Deep Renji, Patrick Galante, Ayshwarya Jagadeesan, Farid Seifi, Allison Janda, Nirav Shah, Andrew Krumm, Allen Flynn","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Healthcare providers learn continuously, but better support for provider learning is needed as new biomedical knowledge is produced at an increasing rate alongside widespread use of EHR data for clinical performance measurement. Precision feedback is an approach to improve support for provider learning by prioritizing coaching and appreciation messages based on each message's motivational potential for a specific recipient. We developed a Precision Feedback Knowledge Base as an open resource to support precision feedback systems, containing knowledge models that hold potential as key infrastructure for learning health systems. We describe the design and development of the Precision Feedback Knowledge Base, as well as its key components, including quality measures, feedback message templates, causal pathway models, signal detectors, and prioritization algorithms. Presently, the knowledge base is implemented in a national-scale quality improvement consortium for anesthesia care, to enhance provider feedback email messages.</p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2024 ","pages":"628-637"},"PeriodicalIF":0.0,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12099443/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Federated Diabetes Prediction in Canadian Adults Using Real-world Cross-Province Primary Care Data
Guojun Tang, Jason E Black, Tyler S Williamson, Steve H Drew
Integrating electronic health records (EHRs) with machine learning presents opportunities to enhance the accuracy and accessibility of data-driven diabetes prediction. In particular, data-driven machine learning models can provide early identification of patients at high risk for diabetes, potentially leading to more effective therapeutic strategies and reduced healthcare costs. However, regulatory restrictions create barriers to developing centralized predictive models. This paper addresses these challenges by introducing a federated learning approach, which combines predictive models without centralized data storage and processing, thereby avoiding privacy issues. This marks the first application of federated learning to predict diabetes using real clinical datasets in Canada, extracted from the Canadian Primary Care Sentinel Surveillance Network (CPCSSN) without cross-province patient data sharing. We address class-imbalance issues through downsampling and compare federated learning performance against province-based and centralized models. Experimental results show that the federated MLP model achieves similar or better performance than the centrally trained model, whereas the federated logistic regression model underperforms its centralized counterpart.
{"title":"Federated Diabetes Prediction in Canadian Adults Using Real-world Cross-Province Primary Care Data.","authors":"Guojun Tang, Jason E Black, Tyler S Williamson, Steve H Drew","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Integrating Electronic Health Records (EHR) and the application of machine learning present opportunities for enhancing the accuracy and accessibility of data-driven diabetes prediction. In particular, developing data-driven machine learning models can provide early identification of patients with high risk for diabetes, potentially leading to more effective therapeutic strategies and reduced healthcare costs. However, regulation restrictions create barriers to developing centralized predictive models. This paper addresses the challenges by introducing a federated learning approach, which amalgamates predictive models without centralized data storage and processing, thus avoiding privacy issues. This marks the first application of federated learning to predict diabetes using real clinical datasets in Canada extracted from the Canadian Primary Care Sentinel Surveillance Network (CPCSSN) without cross-province patient data sharing. We address class-imbalance issues through downsampling techniques and compare federated learning performance against province-based and centralized models. Experimental results show that the federated MLP model presents a similar or higher performance compared to the model trained with the centralized approach. However, the federated logistic regression model showed inferior performance compared to its centralized peer.</p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2024 ","pages":"1099-1108"},"PeriodicalIF":0.0,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12099350/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifying acute kidney injury subtypes based on serum electrolyte data in ICU via K-medoids clustering
Wentie Liu, Tongyue Shi, Haowei Xu, Huiying Zhao, Jianguo Hao, Guilan Kong
This study uses K-medoids clustering to identify subtypes of intensive care unit (ICU)-acquired acute kidney injury (AKI) patients based on serum electrolyte data. Clustering analysis identified three distinct AKI subtypes with different serum electrolyte characteristics. Descriptive analysis was then employed to characterize in-hospital mortality and the use of renal replacement therapy, diuretics, and vasopressors in the three subtypes, and chi-square tests were conducted to assess differences in prognosis and treatment among the identified subtypes. This study enables the subclassification of AKI patients in the ICU, helping ICU physicians make timely clinical decisions about AKI, and may ultimately contribute to improved patient outcomes.
{"title":"Identifying acute kidney injury subtypes based on serum electrolyte data in ICU via <i>K</i>-medoids clustering.","authors":"Wentie Liu, Tongyue Shi, Haowei Xu, Huiying Zhao, Jianguo Hao, Guilan Kong","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>This study proposes to use the K-medoids clustering method to identify subtypes of Intensive Care Unit (ICU)-acquired acute kidney injury (AKI) patients based on serum electrolyte data. Three distinct AKI subtypes with different serum electrolyte characteristics were identified by clustering analysis. Further, descriptive analysis was employed to characterize in-hospital mortality and renal replacement therapy, diuretic and vasopressor usage in the three subtypes, and Chi-square tests were conducted to check the differences of prognosis and treatments among the identified subtypes. This study enables the subclassification of AKI patients in the ICU, facilitating ICU physicians to make timely clinical decisions about AKI, and ultimately may contribute to patient outcome improvement.</p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2024 ","pages":"733-737"},"PeriodicalIF":0.0,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12099402/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impact of Automated Transfer of Semi-Automated Segmentation and Structured Report Rule Requirements on Cardiac MRI Report Quality, Standardization, and Efficiency
Diane Rizkallah, Neil L Greenberg, Rishabh Khurana, Vadivelan Palanisamy, Ben Alencherry, Carl Ammoury, Yezan Salam, Lisa Lamovsky, Haitham Fares, Robert Geschke, Richard Grimm, Christopher Nguyen, David Chen, Deborah H Kwon
Clinical reporting of cardiac magnetic resonance (CMR) imaging exams is commonly performed with a dictation approach, which requires great care to capture both consistent and comprehensive data. We sought to transform the reporting process by using a structured report framework for standardization, incorporating automated transfer of data from semi-automated segmentation tools for efficiency, and adding rule-based reporting requirements to improve quality and standardization. Interfaces between the applications used to schedule and protocol exams and to analyze the acquired images were created to bring the source information directly into the structured reporting environment. Physicians reporting CMR were surveyed to assess satisfaction and efficiency gains with the new process, measured through self-reported reporting time. Quality improvement was assessed by examining the consistency of reported parameters after the inclusion of rule-based requirements. The designed structured reporting process with automated measurements and rule-based requirements resulted in significant improvement in report efficiency and quality.
{"title":"Impact of Automated Transfer of Semi-Automated Segmentation and Structured Report Rule Requirements on Cardiac MRI Report Quality, Standardization, and Efficiency.","authors":"Diane Rizkallah, Neil L Greenberg, Rishabh Khurana, Vadivelan Palanisamy, Ben Alencherry, Carl Ammoury, Yezan Salam, Lisa Lamovsky, Haitham Fares, Robert Geschke, Richard Grimm, Christopher Nguyen, David Chen, Deborah H Kwon","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Clinical reporting of cardiac magnetic resonance (CMR) imaging exams is commonly performed with a dictation approach which requires great care to capture both consistent and comprehensive data. We sought to transform the reporting process by utilizing a structured report framework for reporting standardization, by incorporating automated transfer of data semi-automated segmentation tools for efficiency, and rule-based reporting requirements to improve quality and standardization. Interfaces between the applications used to schedule and protocol exams and to analyze the acquired images were created to bring the source information directly into the structured reporting environment. The physicians reporting CMR were surveyed to determine satisfaction and improved efficiency with the new process through self-reported reporting time. Quality improvement was assessed by examining the consistency of reported parameters with the inclusion of rule-based requirements. The designed structured reporting process with automated measurements and rule-based requirements resulted in significant improvement in report efficiency and quality.</p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2024 ","pages":"950-959"},"PeriodicalIF":0.0,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12099410/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PathSAM: Enhancing Oral Cancer Detection with Advanced Segmentation and Explainability
Suraj Sood, Jawad S Shah, Saeed Alqarn, Yugyung Lee
Building on the success of the Segment Anything Model (SAM) in image segmentation, PathSAM ("SAM for Pathological Images") addresses the unique challenges associated with diagnosing oral cancer. Although SAM is versatile, its application to pathological images is hindered by their inherent complexity and variability. PathSAM advances beyond traditional deep-learning methods by delivering superior accuracy and detail in segmenting critical datasets such as ORCA and OCDC, as demonstrated through both quantitative and qualitative evaluations. The integration of large language models (LLMs) further enhances PathSAM by providing clear, interpretable segmentation results, facilitating accurate tumor identification, and improving communication between patients and healthcare providers. This innovation positions PathSAM as a valuable tool in medical diagnostics.
{"title":"PathSAM: Enhancing Oral Cancer Detection with Advanced Segmentation and Explainability.","authors":"Suraj Sood, Jawad S Shah, Saeed Alqarn, Yugyung Lee","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Building on the success of the Segment Anything Model (SAM) in image segmentation, \"PathSAM: SAM for Pathological Images in Oral Cancer Detection\" addresses the unique challenges associated with diagnosing oral cancer. Although SAM is versatile, its application to pathological images is hindered by its inherent complexity and variability. PathSAM advances beyond traditional deep-learning methods by delivering superior accuracy and detail in segmenting critical datasets like ORCA and OCDC, as demonstrated through both quantitative and qualitative evaluations. The integration of Large Language Models (LLMs) further enhances PathSAM by providing clear, interpretable segmentation results, facilitating accurate tumor identification, and improving communication between patients and healthcare providers. This innovation positions PathSAM as a valuable tool in medical diagnostics.</p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2024 ","pages":"1069-1078"},"PeriodicalIF":0.0,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12099372/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}