Background: Acute care use (ACU) represents a major and potentially preventable economic burden in oncology. Existing models can effectively predict such events.
Objective: We aimed to quantify the cost savings achieved by implementing a model to predict ACU in oncology patients undergoing systemic therapy.
Methods: This retrospective cohort study analyzed patients with cancer at an academic medical center from 2010 to 2022. We included patients who received systemic therapy and identified ACU events occurring after treatment initiation, excluding those with known death dates within the study period. Data on ACU-related expenses were gathered from Medicare claims and mapped to service codes in electronic health records, yielding average daily costs for each patient over 180 days following the start of therapy. The exposure was an ACU event.
Results: This study included 20,556 patients, of whom 3820 (18.58%) experienced at least 1 ACU. The main outcome was the average daily cost per patient at the end of the first 180 days of systemic therapy; expense accumulation flattened earlier and more rapidly among non-ACU patients. The average daily cost per patient for those with and without ACU was US $94.62 (SD US $72.54; 95% CI US $92.32-$96.92) and US $53.28 (SD US $59.92; 95% CI US $52.37-$54.19), respectively. The average total cost per ACU and non-ACU patient was US $17,031.92 (SD US $13,056.63; 95% CI US $16,616.74-$17,445.09) and US $9591.06 (SD US $10,785.83; 95% CI US $9427.64-$9754.48), respectively. To estimate the long-term financial impact of deploying the predictive model, we conducted a cost-benefit analysis based on an annual cohort size of 2177 patients, comparing a baseline cost model with an intervention model that assumed a 35% prevention rate for preventable ACU events and an average ACU cost of US $17,031.92 (SD US $13,056.63). In the first year alone, the model yielded projected savings of US $910,000; by year 6, projected savings grew to US $9.46 million annually, and cumulative avoided costs over the 6-year deployment period totaled approximately US $31.11 million.
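To make the projection arithmetic concrete, the following is a minimal sketch (in Python) of a multiyear cost-benefit calculation of this kind. The cohort size, ACU rate, prevention rate, and average ACU cost are taken from the figures above; the adoption ramp-up schedule is a hypothetical assumption added for illustration, since the deployment assumptions behind the reported year-by-year savings are not detailed here, so the output will not reproduce the exact published figures.

# Illustrative multiyear cost-benefit projection for a deployed ACU
# prediction model. Cohort size, ACU rate, prevention rate, and average
# ACU cost come from the abstract; the adoption ramp-up schedule is
# hypothetical and will not reproduce the paper's exact figures.
ANNUAL_COHORT = 2177        # patients starting systemic therapy per year
ACU_RATE = 0.1858           # share of patients with at least 1 ACU event
PREVENTION_RATE = 0.35      # assumed share of preventable ACU events avoided
AVG_ACU_COST = 17_031.92    # average total cost per ACU patient (US $)

# Hypothetical adoption ramp-up: fraction of model-flagged patients whose
# care is actually modified in each deployment year.
ADOPTION_BY_YEAR = [0.2, 0.4, 0.6, 0.8, 1.0, 1.0]

cumulative = 0.0
for year, adoption in enumerate(ADOPTION_BY_YEAR, start=1):
    avoided = ANNUAL_COHORT * ACU_RATE * PREVENTION_RATE * adoption
    savings = avoided * AVG_ACU_COST
    cumulative += savings
    print(f"Year {year}: ~{avoided:.0f} ACU events avoided, "
          f"savings ~${savings:,.0f} (cumulative ~${cumulative:,.0f})")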
Conclusions: Predictive analytics can significantly reduce costs associated with ACU events, enhancing economic efficiency in cancer care. Further research is needed to explore potential health benefits.
Background: The use of large language models (LLMs) in radiology is expanding rapidly, offering new possibilities in report generation, decision support, and workflow optimization. However, a comprehensive evaluation of their applications, performance, and limitations across the radiology domain remains limited.
Objective: This review aimed to map current applications of LLMs in radiology, evaluate their performance across key tasks, and identify prevailing limitations and directions for future research.
Methods: A scoping review was conducted in accordance with the Arksey and O'Malley framework and the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. Three databases (PubMed, Scopus, and IEEE Xplore) were searched for peer-reviewed studies published between January 2022 and December 2024. Eligible studies included empirical evaluations of LLMs applied to radiological data or workflows. Commentaries, reviews, and technical model proposals without evaluation were excluded. Two reviewers independently screened studies and extracted data on study characteristics, LLM type, radiological use case, data modality, and evaluation metrics. A thematic synthesis was used to identify key domains of application. No formal risk-of-bias assessment was performed, but a narrative appraisal of dataset representativeness and study quality was included.
Results: A total of 67 studies were included. GPT-4 was the most frequently used model (n=28, 42%), with text-based corpora as the primary type of data used (n=43, 64%). Identified use cases fell into 3 thematic domains: (1) decision support (n=39, 58%), (2) report generation and summarization (n=16, 24%), and (3) workflow optimization (n=12, 18%). While LLMs demonstrated strong performance in structured-text tasks (eg, report simplification with >94% accuracy), diagnostic performance varied widely (16%-86%) and was limited by dataset bias, lack of fine-tuning, and minimal clinical validation. Most studies (n=53, 79.1%) had single-center, proof-of-concept designs with limited generalizability.
Conclusions: LLMs show strong potential for augmenting radiological workflows, particularly for structured reporting, summarization, and educational tasks. However, their diagnostic performance remains inconsistent, and current implementations lack robust external validation. Future work should prioritize prospective, multicenter validation of domain-adapted and multimodal models to support safe clinical integration.
Background: Coagulation dysfunction is a major complication of liver failure. Artificial liver support systems (ALSS) have been used to improve coagulation parameters, but the dynamic nature of these improvements and the development of predictive models remain insufficiently explored.
Objective: This study aimed to evaluate the effects of ALSS on coagulation function and to develop a dynamic prediction model using machine learning techniques to predict the improvement trends of coagulation parameters.
Methods: A systematic search was conducted in PubMed, Embase, and other databases to identify relevant studies, resulting in 18 studies comprising 1771 patients. A meta-analysis was performed to assess the impact of ALSS on coagulation parameters, including international normalized ratio (INR), prothrombin time (PT), activated partial thromboplastin time (APTT), and fibrinogen levels. In addition, clinical data from the Medical Information Mart for Intensive Care database were used to construct prediction models using logistic regression, extreme gradient boosting, random forest, and long short-term memory networks.
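As a rough illustration of the pooling step, the sketch below implements a standard DerSimonian-Laird random-effects meta-analysis for a single continuous outcome (eg, pre-post change in INR). The study-level estimates and variances are synthetic placeholders, not the data extracted from the 18 included studies.

import numpy as np

# DerSimonian-Laird random-effects pooling of study-level mean differences.
# The effect estimates and variances below are synthetic placeholders.
y = np.array([-0.45, -0.30, -0.52, -0.38, -0.25])  # per-study mean change in INR
v = np.array([0.010, 0.015, 0.008, 0.020, 0.012])  # per-study variances

w = 1.0 / v                              # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)         # fixed-effect pooled estimate
Q = np.sum(w * (y - y_fe) ** 2)          # Cochran heterogeneity statistic
df = len(y) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)            # between-study variance

w_re = 1.0 / (v + tau2)                  # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"Pooled effect: {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f}), tau^2={tau2:.4f}")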
Results: Meta-analysis results showed that ALSS significantly improved INR, PT, APTT, and fibrinogen levels (all P<.05), with the treatment efficacy varying by modality. Among the machine learning models, the random forest model demonstrated the best performance, achieving an area under the curve of 92.12%. Dynamic INR was identified as the key predictor for coagulation abnormalities.
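For orientation, the snippet below sketches the random forest modeling step on synthetic tabular data and reports a held-out area under the curve. The features, sample sizes, and hyperparameters are hypothetical stand-ins, not the Medical Information Mart for Intensive Care variables or tuning used in the study.

# Minimal sketch of the random forest step on synthetic data standing in
# for ICU coagulation features (eg, dynamic INR, PT, APTT).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

# Report discrimination on the held-out split, analogous to the area
# under the curve reported above.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.4f}")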
Conclusions: This study systematically evaluated the effects of ALSS on coagulation function in patients with liver failure, demonstrating significant improvements in key parameters such as INR, PT, and APTT, with efficacy varying across treatment modalities. In addition, a machine learning model built on intensive care unit clinical data showed strong predictive capability for identifying the risk of coagulation dysfunction, which is particularly useful for early clinical recognition of high-risk patients and for guiding personalized coagulation management strategies. Importantly, this model is positioned as a dynamic risk alert and assessment tool intended to assist baseline clinical evaluation and nursing interventions, rather than as direct validation of ALSS therapeutic efficacy.
Background: Falls among hospitalized patients are a critical issue that often leads to prolonged hospital stays and increased health care costs. Traditional fall risk assessments typically rely on standardized scoring systems; however, these may fail to capture the complex and multifactorial nature of fall risk factors.
Objective: This retrospective observational multicenter study aimed to develop and validate a machine learning-based model to predict in-hospital falls and to evaluate its performance in terms of discrimination and calibration.
Methods: We analyzed the data of 83,917 inpatients aged 65 years and older with a hospital stay of at least 3 days. Using Diagnosis Procedure Combination data and laboratory results, we extracted demographic, clinical, functional, and pharmacological variables. Following the selection of 30 key features, 4 predictive models were constructed: logistic regression, extreme gradient boosting, light gradient boosting machine (LGBM), and categorical boosting (CatBoost). The synthetic minority oversampling technique (SMOTE) and isotonic regression calibration were applied to address class imbalance and improve prediction quality.
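As a rough sketch of how these two steps fit together, the snippet below applies SMOTE to the training split only and then wraps a gradient boosting model in isotonic calibration. HistGradientBoostingClassifier stands in for CatBoost/LGBM, the data are synthetic with a positive rate mirroring the fall rate reported below, and the study's exact pipeline and ordering of steps are not specified here.

# Sketch of the class-imbalance and calibration steps on synthetic data;
# HistGradientBoostingClassifier stands in for CatBoost/LGBM, and all
# hyperparameters are illustrative, not the study's.
from imblearn.over_sampling import SMOTE
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import average_precision_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic cohort with roughly 2.6% positives, mirroring the fall rate.
X, y = make_classification(n_samples=20_000, n_features=30,
                           weights=[0.974], random_state=7)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=7)

# Oversample the minority class on the training split only, so the test
# split keeps the real class distribution.
X_res, y_res = SMOTE(random_state=7).fit_resample(X_train, y_train)

# Fit the boosted model, then learn an isotonic mapping from raw scores
# to calibrated probabilities via internal cross-validation.
base = HistGradientBoostingClassifier(random_state=7)
clf = CalibratedClassifierCV(base, method="isotonic", cv=3)
clf.fit(X_res, y_res)

proba = clf.predict_proba(X_test)[:, 1]
print(f"F1: {f1_score(y_test, proba >= 0.5):.3f}, "
      f"AUPRC: {average_precision_score(y_test, proba):.3f}")

Note that calibrating on SMOTE-resampled data shifts the apparent base rate; in practice the isotonic mapping is often fitted on unresampled held-out data instead.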
Results: Falls occurred in 2173 (2.6%) patients. CatBoost achieved the highest F1-score (0.189, 95% CI 0.162-0.215) and area under the precision-recall curve (0.112, 95% CI 0.091-0.136), whereas LGBM had the best calibration slope (0.964, 95% CI 0.858-1.070) with good discrimination (F1-score 0.182, 95% CI 0.156-0.209; area under the precision-recall curve 0.094, 95% CI 0.078-0.113). Logistic regression had the lowest discrimination (F1-score 0.120, 95% CI 0.100-0.143). Shapley Additive Explanations analysis consistently identified low albumin, impaired transfer ability, and the use of sedative-hypnotics or diabetes medications as major contributors to fall risk. In incident report analysis (n=435), 49.2% of falls were toileting-related, peaking between 4 and 6 AM, with bedside falls predominating in the high- or very-high-risk groups.
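For reference, the calibration slope quoted above is conventionally estimated by regressing observed outcomes on the log-odds of the predicted probabilities (a Cox-style calibration fit): a slope near 1 indicates well-calibrated risk estimates, and a slope below 1 indicates overconfident predictions. A minimal sketch on synthetic data:

import numpy as np
from sklearn.linear_model import LogisticRegression

def calibration_slope(y_true, proba, eps=1e-6):
    # Cox-style calibration slope: unpenalized logistic fit of outcomes
    # on the log-odds of predicted probabilities (slope near 1 is ideal).
    p = np.clip(np.asarray(proba, dtype=float), eps, 1 - eps)
    logit = np.log(p / (1 - p)).reshape(-1, 1)
    lr = LogisticRegression(penalty=None).fit(logit, y_true)
    return lr.coef_[0, 0]

# Synthetic example: predictions whose log-odds are 1.5x too extreme
# should yield a slope near 1/1.5, about 0.67.
rng = np.random.default_rng(0)
p_true = rng.uniform(0.01, 0.30, size=20_000)
y = rng.binomial(1, p_true)
logit_true = np.log(p_true / (1 - p_true))
too_extreme = 1.0 / (1.0 + np.exp(-1.5 * logit_true))
print(f"Calibration slope: {calibration_slope(y, too_extreme):.3f}")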
Conclusions: CatBoost and LGBM offer clinically valuable prediction performance, with CatBoost favored for high-risk patient identification and LGBM for probability-based intervention thresholds. Integrating such models into electronic health records could enable real-time risk scoring and trigger targeted interventions (eg, toileting assistance and mobility support). Future work should incorporate dynamic, time-varying patient data to improve real-time risk prediction.

