Thomas Savage, John Wang, Robert Gallo, Abdessalem Boukil, Vishwesh Patel, Seyed Amir Ahmad Safavi-Naini, Ali Soroush, Jonathan H Chen
Introduction: The inability of large language models (LLMs) to communicate uncertainty is a significant barrier to their use in medicine. Before LLMs can be integrated into patient care, the field must assess methods to estimate uncertainty in ways that are useful to physician-users.
Objective: Evaluate the ability of uncertainty proxies to quantify LLM confidence when performing diagnosis and treatment selection tasks by assessing the properties of discrimination and calibration.
Methods: We examined confidence elicitation (CE), token-level probability (TLP), and sample consistency (SC) proxies across GPT3.5, GPT4, Llama2, and Llama3. Uncertainty proxies were evaluated against 3 datasets of open-ended patient scenarios.
Results: SC discrimination outperformed TLP and CE methods. SC by sentence embedding achieved the highest discriminative performance (ROC AUC 0.68-0.79), yet with poor calibration. SC by GPT annotation achieved the second-best discrimination (ROC AUC 0.66-0.74) with accurate calibration. Verbalized confidence (CE) was found to consistently overestimate model confidence.
Discussion and conclusions: Of the proxies evaluated, SC is the most effective method for estimating LLM uncertainty. SC by sentence embedding can effectively estimate uncertainty if the user has a set of reference cases with which to re-calibrate their results, while SC by GPT annotation is the more effective method if the user lacks reference cases and requires accurate raw calibration. Our results confirm that LLMs are consistently over-confident when verbalizing their confidence (CE).
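The sample-consistency idea can be sketched in a few lines: draw several answers from the model, embed each one, and take the mean pairwise cosine similarity as the confidence proxy. This is an illustrative sketch, not the paper's exact pipeline; the choice of embedding model and the mapping from similarity to a calibrated probability are left out.

```python
import numpy as np

def sample_consistency(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine similarity among embeddings of N sampled answers.

    Higher values mean the model gives similar answers across samples,
    which serves as a proxy for confidence.
    """
    # Normalize each embedding to unit length.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = unit @ unit.T  # cosine similarity matrix
    n = len(embeddings)
    # Average over the off-diagonal (i != j) pairs.
    return float((sim.sum() - n) / (n * (n - 1)))

# Identical answers -> consistency 1.0; mutually orthogonal answers -> 0.0.
identical = np.tile(np.array([1.0, 0.0, 0.0]), (3, 1))
orthogonal = np.eye(3)
print(sample_consistency(identical))   # 1.0
print(sample_consistency(orthogonal))  # 0.0
```

A real pipeline would sample the LLM several times at nonzero temperature and embed each free-text answer before applying this score.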
Large language model uncertainty proxies: discrimination and calibration for medical diagnosis and treatment. J Am Med Inform Assoc. 2025:139-149. doi:10.1093/jamia/ocae254. PMCID: PMC11648734.
Andrew J Zimolzak, Sundas P Khan, Hardeep Singh, Jessica A Davila
Objectives: Missed and delayed cancer diagnoses are common, harmful, and often preventable. We previously validated a digital quality measure (dQM) of emergency presentation (EP) of lung cancer in 2 US health systems. This study aimed to apply the dQM to a new national electronic health record (EHR) database and examine demographic associations.
Materials and methods: We applied the dQM (emergency encounter followed by new lung cancer diagnosis within 30 days) to Epic Cosmos, a deidentified database covering 184 million US patients. We examined dQM associations with sociodemographic factors.
Results: The overall EP rate was 19.6%. EP rate was higher in Black vs White patients (24% vs 19%, P < .001) and patients with younger age, higher social vulnerability, lower-income ZIP code, and self-reported transport difficulties.
Discussion: We successfully applied a dQM based on cancer EP to the largest US EHR database.
Conclusion: This dQM could be a marker for sociodemographic vulnerabilities in cancer diagnosis.
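The dQM described above (an emergency encounter followed by a new lung cancer diagnosis within 30 days) can be operationalized roughly as follows. The function and data layout are hypothetical simplifications of the measure's definition, not the authors' implementation.

```python
from datetime import date

def emergency_presentation(er_visits: list[date], dx_date: date,
                           window_days: int = 30) -> bool:
    """Flag a new cancer diagnosis as an emergency presentation (EP) when
    any emergency encounter occurred within `window_days` before it."""
    return any(0 <= (dx_date - v).days <= window_days for v in er_visits)

# Hypothetical patients: (ER visit dates, lung cancer diagnosis date).
patients = [
    ([date(2024, 3, 1)], date(2024, 3, 20)),  # ER 19 days before dx -> EP
    ([date(2024, 1, 5)], date(2024, 6, 1)),   # ER months earlier -> not EP
    ([], date(2024, 5, 10)),                  # no ER visits -> not EP
]
flags = [emergency_presentation(er, dx) for er, dx in patients]
ep_rate = sum(flags) / len(flags)
print(f"EP rate: {ep_rate:.1%}")  # EP rate: 33.3%
```

The EP rate is then the flagged fraction of all newly diagnosed patients, which the study reports as 19.6% overall.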
Application of a digital quality measure for cancer diagnosis in Epic Cosmos. J Am Med Inform Assoc. 2025:227-229. doi:10.1093/jamia/ocae253. PMCID: PMC11648705.
Aaron S Eisman, Elizabeth S Chen, Wen-Chih Wu, Karen M Crowley, Dilum P Aluthge, Katherine Brown, Indra Neil Sarkar
Objective: To demonstrate the potential for a centrally managed health information exchange standardized to a common data model (HIE-CDM) to facilitate semantic data flow needed to support a learning health system (LHS).
Materials and methods: The Rhode Island Quality Institute operates the Rhode Island (RI) statewide HIE, which aggregates RI health data for more than half of the state's population from 47 data partners. We standardized HIE data to the Observational Medical Outcomes Partnership (OMOP) CDM. Atherosclerotic cardiovascular disease (ASCVD) risk and primary prevention practices were selected to demonstrate LHS semantic data flow from 2013 to 2023.
Results: We calculated longitudinal 10-year ASCVD risk on 62,999 individuals. Nearly two-thirds had ASCVD risk factors from more than one data partner. This enabled granular tracking of individual ASCVD risk, primary prevention (ie, statin therapy), and incident disease. The population was on statins for fewer than half of the guideline-recommended days. We also found that individuals receiving care at Federally Qualified Health Centers were more likely to have unfavorable ASCVD risk profiles and more likely to be on statins. CDM transformation reduced data heterogeneity through a unified health record that adheres to defined terminologies per OMOP domain.
Discussion: We demonstrated the potential for an HIE-CDM to enable observational population health research. We also showed how to leverage existing health information technology infrastructure and health data best practices to break down LHS barriers.
Conclusion: HIE-CDM facilitates knowledge curation and health system intervention development at the individual, health system, and population levels.
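The statin-adherence finding above ("on statins for fewer than half of the guideline-recommended days") suggests a proportion-of-days-covered (PDC) style calculation. A minimal sketch, assuming dispensing records arrive as (fill date, days' supply) pairs; the study's actual logic is not specified in the abstract.

```python
from datetime import date, timedelta

def proportion_of_days_covered(fills: list[tuple[date, int]],
                               start: date, end: date) -> float:
    """PDC: fraction of days in [start, end] covered by at least one
    dispensed supply. `fills` holds (fill_date, days_supply) pairs."""
    covered = set()
    for fill_date, days_supply in fills:
        for d in range(days_supply):
            day = fill_date + timedelta(days=d)
            if start <= day <= end:
                covered.add(day)  # a set de-duplicates overlapping fills
    total_days = (end - start).days + 1
    return len(covered) / total_days

# Hypothetical: two 30-day statin fills over a 120-day observation window.
fills = [(date(2023, 1, 1), 30), (date(2023, 2, 15), 30)]
pdc = proportion_of_days_covered(fills, date(2023, 1, 1), date(2023, 4, 30))
print(f"PDC: {pdc:.2f}")  # PDC: 0.50
```

A PDC below 0.5, as in this toy example, corresponds to the paper's "fewer than half of the guideline-recommended days."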
Learning health system linchpins: information exchange and a common data model. J Am Med Inform Assoc. 2025:9-19. doi:10.1093/jamia/ocae277. PMCID: PMC11648737.
Xubing Hao, Xiaojin Li, Yan Huang, Jay Shi, Rashmie Abeysinghe, Cui Tao, Kirk Roberts, Guo-Qiang Zhang, Licong Cui
Objective: SNOMED CT provides a standardized terminology for clinical concepts, allowing cohort queries over heterogeneous clinical data including Electronic Health Records (EHRs). While it is intuitive that missing and inaccurate subtype (or is-a) relations in SNOMED CT reduce the recall and precision of cohort queries, the extent of these impacts has not been formally assessed. This study fills this gap by developing quantitative metrics to measure these impacts and performing statistical analysis on their significance.
Material and methods: We used the Optum de-identified COVID-19 Electronic Health Record dataset. We defined micro-averaged and macro-averaged recall and precision metrics to assess the impact of missing and inaccurate is-a relations on cohort queries. Both practical and simulated analyses were performed. Practical analyses involved 407 missing and 48 inaccurate is-a relations confirmed by domain experts, with statistical testing using Wilcoxon signed-rank tests. Simulated analyses used two random sets of 400 is-a relations to simulate missing and inaccurate is-a relations.
Results: Wilcoxon signed-rank tests from both practical and simulated analyses (P-values < .001) showed that missing is-a relations significantly reduced the micro- and macro-averaged recall, and inaccurate is-a relations significantly reduced the micro- and macro-averaged precision.
Discussion: The introduced impact metrics can assist SNOMED CT maintainers in prioritizing critical hierarchical defects for quality enhancement. These metrics are generally applicable for assessing the quality impact of a terminology's subtype hierarchy on its cohort query applications.
Conclusion: Our results indicate a significant impact of missing and inaccurate is-a relations in SNOMED CT on the recall and precision of cohort queries. Our work highlights the importance of high-quality terminology hierarchy for cohort queries over EHR data and provides valuable insights for prioritizing quality improvements of SNOMED CT's hierarchy.
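The micro- and macro-averaged recall metrics named above are not defined in the abstract; under their standard definitions, micro-averaging pools counts across queries while macro-averaging weights each query equally, which a short sketch makes concrete.

```python
def micro_macro_recall(results: list[tuple[int, int]]) -> tuple[float, float]:
    """Each entry is (retrieved_relevant, total_relevant) for one cohort query.

    Micro-recall pools counts across queries; macro-recall averages each
    query's recall, so small cohorts count as much as large ones.
    """
    micro = sum(r for r, _ in results) / sum(t for _, t in results)
    macro = sum(r / t for r, t in results) / len(results)
    return micro, macro

# Three hypothetical cohort queries; a missing is-a relation makes
# query 3 retrieve only 1 of its 10 relevant patients.
results = [(90, 100), (45, 50), (1, 10)]
micro, macro = micro_macro_recall(results)
print(micro)  # 0.85   -> (90 + 45 + 1) / (100 + 50 + 10)
print(macro)  # ~0.633 -> (0.9 + 0.9 + 0.1) / 3
```

The gap between the two averages shows why the paper reports both: a defect that cripples one small cohort barely moves micro-recall but drags macro-recall down sharply.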
Quantitatively assessing the impact of the quality of SNOMED CT subtype hierarchy on cohort queries. J Am Med Inform Assoc. 2025:89-96. doi:10.1093/jamia/ocae272. PMCID: PMC11648736.
Suzanne Bakken. Advancing a learning health system through biomedical and health informatics. J Am Med Inform Assoc. 2025;32(1):1-2. doi:10.1093/jamia/ocae307. PMCID: PMC11648707.
Anna Northrop, Anika Christofferson, Saumya Umashankar, Michelle Melisko, Paolo Castillo, Thelma Brown, Diane Heditsian, Susie Brain, Carol Simmons, Tina Hieken, Kathryn J Ruddy, Candace Mainor, Anosheh Afghahi, Sarah Tevis, Anne Blaes, Irene Kang, Adam Asare, Laura Esserman, Dawn L Hershman, Amrita Basu
Objectives: We describe the development and implementation of a system for monitoring patient-reported adverse events and quality of life using electronic Patient Reported Outcome (ePRO) instruments in the I-SPY2 Trial, a phase II clinical trial for locally advanced breast cancer. We describe the administration of technological, workflow, and behavior change interventions and their associated impact on questionnaire completion.
Materials and methods: Using the OpenClinica electronic data capture system, we developed rules-based logic to build automated ePRO surveys, customized to the I-SPY2 treatment schedule. We piloted ePROs at the University of California, San Francisco (UCSF) to optimize workflow in the context of trial treatment scenarios and staggered rollout of the ePRO system to 26 sites to ensure effective implementation of the technology.
Results: Increasing ePRO completion requires workflow solutions and research staff engagement. Over two years, we increased baseline survey completion from 25% to 80%. The majority of patients completed between 30% and 75% of the questionnaires they received, with no statistically significant variation in survey completion by age, race, or ethnicity. Patients who completed the screening timepoint questionnaire were significantly more likely to complete more of the surveys they received at later timepoints (mean completion of 74.1% vs 35.5%, P < .0001). Baseline PROMIS social functioning and grade 2 or greater PRO-CTCAE interference of Abdominal Pain, Decreased Appetite, Dizziness, and Shortness of Breath were associated with lower survey completion rates.
Discussion and conclusion: By implementing ePROs, we have the potential to increase efficiency and accuracy of patient-reported clinical trial data collection, while improving quality of care, patient safety, and health outcomes. Our method is accessible across demographics and facilitates an ease of data collection and sharing across nationwide sites. We identify predictors of decreased completion that can optimize resource allocation by better targeting efforts such as in-person outreach, staff engagement, a robust technical workflow, and increased monitoring to improve overall completion rates.
Trial registration: https://clinicaltrials.gov/study/NCT01042379.
Implementation and impact of an electronic patient reported outcomes system in a phase II multi-site adaptive platform clinical trial for early-stage breast cancer. J Am Med Inform Assoc. 2025:172-180. doi:10.1093/jamia/ocae190. PMCID: PMC11648710.
Sukanya Mohapatra, Mirna Issa, Vedrana Ivezic, Rose Doherty, Stephanie Marks, Esther Lan, Shawn Chen, Keith Rozett, Lauren Cullen, Wren Reynolds, Rose Rocchio, Gregg C Fonarow, Michael K Ong, William F Speier, Corey W Arnold
Objectives: Mobile health (mHealth) regimens can improve health through the continuous monitoring of biometric parameters paired with appropriate interventions. However, adherence to monitoring tends to decay over time. Our randomized controlled trial sought to determine: (1) if a mobile app with gamification and financial incentives significantly increases adherence to mHealth monitoring in a population of heart failure patients; and (2) if activity data correlate with disease-specific symptoms.
Materials and methods: We recruited individuals with heart failure into a prospective 180-day monitoring study with 3 arms. All 3 arms included monitoring with a connected weight scale and an activity tracker. The second arm included an additional mobile app with gamification, and the third arm included the mobile app and a financial incentive awarded based on adherence to mobile monitoring.
Results: We recruited 111 heart failure patients into the study. We found that the arm including the financial incentive led to significantly higher adherence to activity tracker (95% vs 72.2%, P = .01) and weight (87.5% vs 69.4%, P = .002) monitoring compared to the arm that included the monitoring devices alone. Furthermore, we found a significant correlation between daily steps and daily symptom severity.
Discussion and conclusion: Our findings indicate that mobile apps with added engagement features can be useful tools for improving adherence over time and may thus increase the impact of mHealth-driven interventions. Additionally, activity tracker data can provide passive monitoring of disease burden that may be used to predict future events.
Increasing adherence and collecting symptom-specific biometric signals in remote monitoring of heart failure patients: a randomized controlled trial. J Am Med Inform Assoc. 2025:181-192. doi:10.1093/jamia/ocae221. PMCID: PMC11648719.
Markus Ralf Bujotzek, Ünal Akünal, Stefan Denner, Peter Neher, Maximilian Zenk, Eric Frodl, Astha Jaiswal, Moon Kim, Nicolai R Krekiehn, Manuel Nickel, Richard Ruppel, Marcus Both, Felix Döllinger, Marcel Opitz, Thorsten Persigehl, Jens Kleesiek, Tobias Penzkofer, Klaus Maier-Hein, Andreas Bucher, Rickmer Braren
Objective: Federated Learning (FL) enables collaborative model training while keeping data local. Currently, most FL studies in radiology are conducted in simulated environments due to numerous hurdles impeding its translation into practice. The few existing real-world FL initiatives rarely communicate the specific measures taken to overcome these hurdles. To bridge this significant knowledge gap, we propose a comprehensive guide for real-world FL in radiology. Despite these implementation efforts, comprehensive assessments comparing FL to less complex alternatives in challenging real-world settings are lacking, a gap we address through extensive benchmarking.
Materials and methods: We developed our own FL infrastructure within the German Radiological Cooperative Network (RACOON) and demonstrated its functionality by training FL models on lung pathology segmentation tasks across six university hospitals. Insights gained while establishing our FL initiative and running the extensive benchmark experiments were compiled and categorized into the guide.
Results: The proposed guide outlines essential steps, identified hurdles, and implemented solutions for establishing successful FL initiatives and conducting real-world experiments. Our experimental results demonstrate the practical relevance of the guide and show that FL outperforms less complex alternatives in all evaluation scenarios.
Discussion and conclusion: Our findings justify the effort required to translate FL into real-world applications by demonstrating advantageous performance over alternative approaches. They also emphasize the importance of strategic organization and of robust management of distributed data and infrastructure in real-world settings. With the proposed guide, we aim to help future FL researchers circumvent pitfalls and accelerate the translation of FL into radiological applications.
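The abstract does not name the aggregation algorithm used; federated averaging (FedAvg), which combines site models weighted by local sample counts, is the standard baseline and illustrates the core idea of training a shared model without moving data between hospitals.

```python
import numpy as np

def fed_avg(site_weights: list[np.ndarray], n_samples: list[int]) -> np.ndarray:
    """One FedAvg aggregation round: average site model parameters,
    weighted by each site's local sample count."""
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(site_weights, n_samples))

# Three hypothetical hospital sites with differing data volumes.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
counts = [100, 300, 600]
global_weights = fed_avg(weights, counts)
print(global_weights)  # [4. 5.]
```

In a real deployment each site trains locally for some epochs, sends only these parameters to the aggregator, and receives the averaged model back for the next round; the raw images never leave the site.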
{"title":"Real-world federated learning in radiology: hurdles to overcome and benefits to gain.","authors":"Markus Ralf Bujotzek, Ünal Akünal, Stefan Denner, Peter Neher, Maximilian Zenk, Eric Frodl, Astha Jaiswal, Moon Kim, Nicolai R Krekiehn, Manuel Nickel, Richard Ruppel, Marcus Both, Felix Döllinger, Marcel Opitz, Thorsten Persigehl, Jens Kleesiek, Tobias Penzkofer, Klaus Maier-Hein, Andreas Bucher, Rickmer Braren","doi":"10.1093/jamia/ocae259","DOIUrl":"10.1093/jamia/ocae259","url":null,"abstract":"<p><strong>Objective: </strong>Federated Learning (FL) enables collaborative model training while keeping data locally. Currently, most FL studies in radiology are conducted in simulated environments due to numerous hurdles impeding its translation into practice. The few existing real-world FL initiatives rarely communicate specific measures taken to overcome these hurdles. To bridge this significant knowledge gap, we propose a comprehensive guide for real-world FL in radiology. Minding efforts to implement real-world FL, there is a lack of comprehensive assessments comparing FL to less complex alternatives in challenging real-world settings, which we address through extensive benchmarking.</p><p><strong>Materials and methods: </strong>We developed our own FL infrastructure within the German Radiological Cooperative Network (RACOON) and demonstrated its functionality by training FL models on lung pathology segmentation tasks across six university hospitals. Insights gained while establishing our FL initiative and running the extensive benchmark experiments were compiled and categorized into the guide.</p><p><strong>Results: </strong>The proposed guide outlines essential steps, identified hurdles, and implemented solutions for establishing successful FL initiatives conducting real-world experiments. 
Our experimental results prove the practical relevance of our guide and show that FL outperforms less complex alternatives in all evaluation scenarios.</p><p><strong>Discussion and conclusion: </strong>Our findings justify the efforts required to translate FL into real-world applications by demonstrating advantageous performance over alternative approaches. Additionally, they emphasize the importance of strategic organization, robust management of distributed data and infrastructure in real-world settings. With the proposed guide, we are aiming to aid future FL researchers in circumventing pitfalls and accelerating translation of FL into radiological applications.</p>","PeriodicalId":50016,"journal":{"name":"Journal of the American Medical Informatics Association","volume":" ","pages":"193-205"},"PeriodicalIF":4.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11648732/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142512054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
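The abstract above does not disclose which aggregation algorithm the RACOON infrastructure uses, but federated training of segmentation models typically relies on federated averaging (FedAvg). As a purely illustrative, minimal sketch of that idea — each hospital trains locally and only model weights (weighted by local sample counts) are merged centrally:

```python
# Minimal FedAvg sketch (illustrative only; not the RACOON implementation).
# Each site contributes its locally trained weights plus its dataset size;
# patient data itself never leaves the site.

def fedavg(site_weights, site_sizes):
    """Average parameter vectors, weighted by each site's sample count."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * n / total for w, n in zip(site_weights, site_sizes))
        for i in range(n_params)
    ]

# Example: three hospitals with 2-parameter models and unequal data volumes.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 200, 100]
global_weights = fedavg(weights, sizes)  # weighted toward the larger site
```

In a real round, the aggregated weights would be broadcast back to all sites and the local-train/aggregate cycle repeated until convergence.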
Jessica Sperling, Whitney Welsh, Erin Haseley, Stella Quenstedt, Perusi B Muhigaba, Adrian Brown, Patti Ephraim, Tariq Shafi, Michael Waitzkin, David Casarett, Benjamin A Goldstein
Objectives: This study aims to improve the ethical use of machine learning (ML)-based clinical prediction models (CPMs) in shared decision-making for patients with kidney failure on dialysis. We explore factors that inform acceptability, interpretability, and implementation of ML-based CPMs among multiple constituent groups.
Materials and methods: We collected and analyzed qualitative data from focus groups with varied end users, including dialysis support providers (clinical providers as well as dialysis clinic staff and social workers), patients, and patients' caregivers (n = 52).
Results: Participants were broadly accepting of ML-based CPMs, but with concerns about data sources, the factors included in the model, and accuracy. Use was desired in conjunction with providers' views and explanations. Differences among respondent types were minimal overall but most prevalent in discussions of CPM presentation and model use.
Discussion and conclusion: Evidence of the acceptability of ML-based CPM usage provides support for ethical use, but numerous specific factors in acceptability, model construction, and model use for shared clinical decision-making must be considered. There are specific steps that data scientists and health systems could take to engender use that is accepted by end users and facilitates trust, but there are also ongoing barriers and challenges in addressing desires for use. This study contributes to emerging literature on interpretability, mechanisms for communicating complexities (including uncertainty in model results), and implications for decision-making. It examines numerous stakeholder groups, including providers, patients, and caregivers, to provide specific considerations that can influence health system use and provide a basis for future research.
{"title":"Machine learning-based prediction models in medical decision-making in kidney disease: patient, caregiver, and clinician perspectives on trust and appropriate use.","authors":"Jessica Sperling, Whitney Welsh, Erin Haseley, Stella Quenstedt, Perusi B Muhigaba, Adrian Brown, Patti Ephraim, Tariq Shafi, Michael Waitzkin, David Casarett, Benjamin A Goldstein","doi":"10.1093/jamia/ocae255","DOIUrl":"10.1093/jamia/ocae255","url":null,"abstract":"<p><strong>Objectives: </strong>This study aims to improve the ethical use of machine learning (ML)-based clinical prediction models (CPMs) in shared decision-making for patients with kidney failure on dialysis. We explore factors that inform acceptability, interpretability, and implementation of ML-based CPMs among multiple constituent groups.</p><p><strong>Materials and methods: </strong>We collected and analyzed qualitative data from focus groups with varied end users, including: dialysis support providers (clinical providers and additional dialysis support providers such as dialysis clinic staff and social workers); patients; patients' caregivers (n = 52).</p><p><strong>Results: </strong>Participants were broadly accepting of ML-based CPMs, but with concerns on data sources, factors included in the model, and accuracy. Use was desired in conjunction with providers' views and explanations. Differences among respondent types were minimal overall but most prevalent in discussions of CPM presentation and model use.</p><p><strong>Discussion and conclusion: </strong>Evidence of acceptability of ML-based CPM usage provides support for ethical use, but numerous specific considerations in acceptability, model construction, and model use for shared clinical decision-making must be considered. There are specific steps that could be taken by data scientists and health systems to engender use that is accepted by end users and facilitates trust, but there are also ongoing barriers or challenges in addressing desires for use. 
This study contributes to emerging literature on interpretability, mechanisms for sharing complexities, including uncertainty regarding the model results, and implications for decision-making. It examines numerous stakeholder groups including providers, patients, and caregivers to provide specific considerations that can influence health system use and provide a basis for future research.</p>","PeriodicalId":50016,"journal":{"name":"Journal of the American Medical Informatics Association","volume":" ","pages":"51-62"},"PeriodicalIF":4.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11648714/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142631467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arihant Tripathi, Brett Ecker, Patrick Boland, Saum Ghodoussipour, Gregory R Riedlinger, Subhajyoti De
Objectives: A cancer diagnosis comes as a shock to many patients, who often feel unprepared to handle the complexity of this life-changing event, understand the technicalities of their diagnostic reports, and fully engage with the clinical team in personalized clinical decision-making.
Materials and methods: We developed Oncointerpreter.ai, an interactive resource that offers personalized summarization of clinical cancer genomic and pathological data and frames questions or addresses queries about therapeutic opportunities in near-real time via a graphical interface. It is built on the Mistral-7B and Llama-2 7B large language models trained on a local database built from a large, curated corpus.
Results: We showcase its utility with case studies in which Oncointerpreter.ai extracted key clinical and molecular attributes from deidentified pathology and clinical genomics reports, summarized their contextual significance, and answered queries on pertinent treatment options. Oncointerpreter.ai also provided a personalized summary of currently active clinical trials matching the patients' disease status, their selection criteria, and geographic locations. Benchmarking and comparative assessment indicated that model responses were generally consistent and that hallucination (ie, factually incorrect or nonsensical responses) was rare; treatment- and outcome-related queries led to context-aware responses, and response time correlated with verbosity.
Discussion: The choice of model and domain-specific training also affected the response quality.
Conclusion: Oncointerpreter.ai can aid existing clinical care with interactive, individualized summarization of diagnostic data to promote informed dialogs with patients with new cancer diagnoses.
{"title":"Oncointerpreter.ai enables interactive, personalized summarization of cancer diagnostics data.","authors":"Arihant Tripathi, Brett Ecker, Patrick Boland, Saum Ghodoussipour, Gregory R Riedlinger, Subhajyoti De","doi":"10.1093/jamia/ocae284","DOIUrl":"10.1093/jamia/ocae284","url":null,"abstract":"<p><strong>Objectives: </strong>Cancer diagnosis comes as a shock to many patients, and many of them feel unprepared to handle the complexity of the life-changing event, understand technicalities of the diagnostic reports, and fully engage with the clinical team regarding the personalized clinical decision-making.</p><p><strong>Materials and methods: </strong>We develop Oncointerpreter.ai an interactive resource to offer personalized summarization of clinical cancer genomic and pathological data, and frame questions or address queries about therapeutic opportunities in near-real time via a graphical interface. It is built on the Mistral-7B and Llama-2 7B large language models trained on a local database trained using a large, curated corpus.</p><p><strong>Results: </strong>We showcase its utility with case studies, where Oncointerpreter.ai extracted key clinical and molecular attributes from deidentified pathology and clinical genomics reports, summarized their contextual significance and answered queries on pertinent treatment options. Oncointerpreter also provided personalized summary of currently active clinical trials that match the patients' disease status, their selection criteria, and geographic locations. 
Benchmarking and comparative assessment indicated that the model responses were generally consistent, and hallucination, ie, factually incorrect or nonsensical response was rare; treatment- and outcome related queries led to context-aware responses, and response time correlated with verbosity.</p><p><strong>Discussion: </strong>The choice of model and domain-specific training also affected the response quality.</p><p><strong>Conclusion: </strong>Oncointerpreter.ai can aid the existing clinical care with interactive, individualized summarization of diagnostics data to promote informed dialogs with the patients with new cancer diagnoses.</p><p><strong>Availability: </strong>https://github.com/Siris2314/Oncointerpreter.</p>","PeriodicalId":50016,"journal":{"name":"Journal of the American Medical Informatics Association","volume":" ","pages":"129-138"},"PeriodicalIF":4.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11648722/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142631472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
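The abstract describes matching active clinical trials to a patient's disease status, selection criteria, and location, but does not disclose Oncointerpreter.ai's matching logic. A minimal rule-based sketch over hypothetical trial records (field names here are assumptions, not the tool's actual schema) might look like:

```python
# Illustrative trial-matching filter; not Oncointerpreter.ai's implementation.
# Each trial record carries a recruitment status, a condition string, and a
# list of recruiting locations (hypothetical fields for this sketch).

def match_trials(trials, diagnosis, state):
    """Return actively recruiting trials whose condition mentions the
    diagnosis and that recruit in the patient's state."""
    return [
        t for t in trials
        if t["status"] == "recruiting"
        and diagnosis.lower() in t["condition"].lower()
        and state in t["locations"]
    ]

trials = [
    {"status": "recruiting", "condition": "Pancreatic adenocarcinoma",
     "locations": ["NJ", "NY"]},
    {"status": "completed", "condition": "Pancreatic adenocarcinoma",
     "locations": ["NJ"]},
]
matches = match_trials(trials, "pancreatic adenocarcinoma", "NJ")
```

In practice an LLM-backed system would layer such structured filtering under free-text extraction of the diagnosis from the pathology report, with the model summarizing the matched trials for the patient.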