Objectives: Vasoplegia is a common complication of cardiac surgery that uses cardiopulmonary bypass and contributes to morbidity and mortality, yet a consensus definition does not exist. The objective of this study was to evaluate the diagnostic criteria and definitions used to characterize vasoplegia and how different criteria influence incidence estimates.
Data sources: Ovid Embase, Ovid MEDLINE, Scopus, Web of Science Core Collection, ClinicalTrials.gov, Ovid Cochrane Central Register of Controlled Trials, and the World Health Organization's International Clinical Trials Registry Platform clinical trials registry.
Study selection: Randomized clinical trials and observational studies reporting on vasoplegia in adults undergoing any type of cardiac surgery that used cardiopulmonary bypass.
Data extraction: Proportional meta-analysis using a random-effects model and the inverse variance method was used to calculate the pooled incidence of vasoplegia and its clinical outcomes.
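To make the pooling step concrete, the sketch below shows how an inverse-variance, random-effects (DerSimonian-Laird) meta-analysis of proportions can be computed on the logit scale. The per-study event counts and sample sizes are placeholders, not the review's data.

```python
import numpy as np

# Illustrative per-study data (events, sample size); not the review's actual studies.
events = np.array([30, 45, 12, 60])
n = np.array([150, 200, 80, 250])

# Logit-transform proportions and their within-study variances.
p = events / n
yi = np.log(p / (1 - p))
vi = 1.0 / events + 1.0 / (n - events)

# DerSimonian-Laird estimate of between-study variance (tau^2).
wi = 1.0 / vi
y_fixed = np.sum(wi * yi) / np.sum(wi)
Q = np.sum(wi * (yi - y_fixed) ** 2)
df = len(yi) - 1
C = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
tau2 = max(0.0, (Q - df) / C)

# Random-effects (inverse-variance) pooled logit proportion and 95% CI.
wi_re = 1.0 / (vi + tau2)
y_re = np.sum(wi_re * yi) / np.sum(wi_re)
se_re = np.sqrt(1.0 / np.sum(wi_re))
lo, hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re

# Back-transform to the proportion scale.
expit = lambda x: 1.0 / (1.0 + np.exp(-x))
print(f"Pooled incidence: {expit(y_re):.3f} (95% CI {expit(lo):.3f}-{expit(hi):.3f})")
```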
Data synthesis: A total of 68 studies encompassing 56,580 patients were identified, which applied 63 unique definitions of vasoplegia. Blood pressure (n = 57 studies, 84%) and cardiac output (n = 50 studies, 74%) were among the most common criteria used in vasoplegia definitions; however, a wide variety of threshold values was applied within these and all other criteria composing the definitions. The pooled incidence of vasoplegia was 21% (95% CI, 17-25%), of acute kidney injury 32% (95% CI, 21-45%), and of mortality 12% (95% CI, 9-16%). Subgroup analysis revealed that patients undergoing transplantation or left ventricular assist device implantation, and those with a baseline left ventricular ejection fraction of less than 40%, had a significantly greater incidence of vasoplegia.
Conclusions: The published literature varies greatly in the criteria used to define vasoplegia associated with on-pump cardiac surgery. Generation and adoption of a unified definition for vasoplegia must be an international priority.
Objective: Acute respiratory distress syndrome (ARDS) is estimated to affect 10% of ICU patients and carries mortality rates of up to 45%. Recognition of ARDS can be complex and is often delayed or missed entirely. Recognition of increased ARDS risk among critically ill patients may prompt judicious care management strategies and initiation of preventative therapies known to improve survival.
Design: Retrospective observational cohort study.
Setting: In-patient tertiary hospital.
Patients: Among 1160 patients (2017-2018), 761 had adequate duration and quality of monitoring waveform data for analysis.
Interventions: None.
Measurements and main results: This is an observational, retrospective, institutional review board-approved study of patients admitted to ICUs at a tertiary hospital system. Physiologic data were captured from critically ill patients who developed ARDS (n = 62) and matched controls (n = 699) during their hospitalization. Machine learning algorithms were evaluated on statistical features derived from continuous electrocardiogram (ECG) waveforms and sparse clinical data. Waveform-derived cardiorespiratory features, namely measures related to heart rate variability, were found to be robust and reliable features that predicted ARDS up to 2 days before onset. The combined model of waveform features and clinical data with a 12-hour prediction horizon achieved an area under the receiver operating characteristic curve and positive predictive value of 0.92 (95% CI, 0.91-0.93) and 0.58 (95% CI, 0.55-0.62), respectively, surpassing a model with the clinical data removed (0.86 [95% CI, 0.85-0.88] and 0.49 [95% CI, 0.46-0.52]) and the Lung Injury Prediction Score's maximum of 0.88 and 0.18.
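As an illustration of the general approach only (the study's exact feature set, model class, and tuning are not specified here), the sketch below derives simple heart-rate-variability summaries from RR intervals, concatenates them with sparse clinical variables, and trains an off-the-shelf classifier evaluated by AUROC. All data, feature names, and model choices are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def hrv_features(rr_ms):
    """Simple heart-rate-variability summaries from RR intervals (ms)."""
    diffs = np.diff(rr_ms)
    return np.array([
        rr_ms.mean(),                 # mean RR interval
        rr_ms.std(ddof=1),            # SDNN
        np.sqrt(np.mean(diffs**2)),   # RMSSD
        np.mean(np.abs(diffs) > 50),  # pNN50
    ])

# Synthetic cohort: one RR series plus two sparse clinical variables per patient.
n_patients = 400
X = np.vstack([
    np.concatenate([hrv_features(rng.normal(800, 50, size=300)),
                    rng.normal(size=2)])   # e.g., standardized age and SpO2/FiO2
    for _ in range(n_patients)
])
y = rng.integers(0, 2, size=n_patients)    # illustrative ARDS labels (no real signal)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```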
Conclusions: Waveform markers can be combined with Electronic Medical Record (EMR) data to improve prediction of ARDS before onset. The waveform markers appear to complement the sparser EMR data, and on their own they provide sufficient dynamical information to yield results comparable to models that include EMR data. Further prospective validation is needed to evaluate the robustness of the model and its potential clinical utility.
Background: Neurofibromatosis type 1 (NF1) is an autosomal dominant genetic disorder, characterized by neurocutaneous lesions. NF1 has a high degree of clinical variability, which can include multiple neoplasia as well as cutaneous, vascular, osseous, and cognitive features. When vascular involvement occurs, NF1 can lead to aneurysms or arteriovenous malformations, which may rupture and cause life-threatening complications.
Case summary: We present a case of primary subarachnoid hemorrhage, complicated by spontaneous and rapidly progressing hemorrhage from the left subclavian artery resulting in upper airway obstruction and hypoxia in a patient with NF1. Treatment of this patient included surgical airway management, emergency hematoma evacuation, and vascular reconstructive surgery. Close collaboration between radiology, vascular surgery, and anesthesiology was essential to prevent the patient's death.
Conclusions: Awareness of rare diseases such as NF1 is essential in critical care settings. Patients presenting with café-au-lait spots or cutaneous neurofibromas are at risk of vascular complications due to vascular fragility. This case of dual bleeding sources and airway obstruction from a neck hematoma underscores the need for interdisciplinary management. The role of proactive vascular screening in critically ill NF1 patients remains uncertain. Future approaches may incorporate advanced imaging and biomarker development to better stratify vascular risk and guide individualized care.
Objectives: Above cuff vocalization (ACV) is used in patients with a tracheostomy in the ICU despite limited evidence. This early-stage decision-analytic model (DAM) for ACV evaluates expected cost-effectiveness, explores the impact of uncertainty to identify key drivers of cost and effect, and highlights critical priorities for further research.
Perspective: U.K. National Health Service.
Setting: Hypothetical cohort of general ICU patients with a tracheostomy, 63 years old, 64% male.
Methods: A de novo decision-analytic health economic model comparing ACV with usual care (UC). Model parameters were derived from a literature review and expert opinion. One-way sensitivity analyses were conducted to identify key drivers of cost-effectiveness.
Results: The daily cost of ACV in the ICU ranged from £75 to £89 (USD 101-120), with most of this cost attributable to staff resources for delivery. The base-case scenario revealed that ACV is potentially cost-effective, dominating UC with cost savings of £9,488 (USD 12,808) and 0.395 quality-adjusted life years (QALYs) gained. Most sensitivity analyses revealed that ACV dominated UC, costing less and being more effective. When ACV had a negative impact on ICU and ward length of stay (LoS), or had no effect on the speed of weaning, it was not cost-effective. The primary driver of cost was whether ACV affected the speed of weaning and ICU LoS. The two primary drivers of effect were: i) whether ACV impacted which end state a patient transitioned to and ii) whether ACV had a sustained positive impact on quality of life.
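For illustration only, the sketch below shows how a two-arm comparison of this kind (incremental cost, incremental QALYs, dominance vs. an incremental cost-effectiveness ratio) and a one-way sensitivity analysis over a single parameter can be structured. All costs, effects, and parameter values are placeholders, not the model's inputs.

```python
# Sketch of a two-arm cost-effectiveness comparison (ACV vs. usual care) with a
# simple one-way sensitivity analysis. All values are placeholders.

def evaluate(daily_acv_cost, icu_los_effect_days, qaly_gain):
    """Return (incremental cost, incremental QALYs) of ACV vs. usual care."""
    icu_day_cost = 1500.0                    # placeholder cost of one ICU day (GBP)
    acv_days = 10                            # placeholder duration of ACV use
    delta_cost = daily_acv_cost * acv_days + icu_day_cost * icu_los_effect_days
    return delta_cost, qaly_gain

def interpret(delta_cost, delta_qaly, wtp=20000.0):
    """Classify the result as dominant or report an ICER against a willingness-to-pay threshold."""
    if delta_cost < 0 and delta_qaly > 0:
        return "ACV dominates usual care"
    icer = delta_cost / delta_qaly if delta_qaly else float("inf")
    return f"ICER = {icer:,.0f} per QALY ({'cost-effective' if icer < wtp else 'not cost-effective'})"

# Base case: ACV reduces ICU length of stay and improves quality of life.
print("Base case:", interpret(*evaluate(82.0, -6.0, 0.4)))

# One-way sensitivity analysis: vary the ICU length-of-stay effect alone.
for los_effect in (-6.0, 0.0, 6.0):
    dc, dq = evaluate(82.0, los_effect, 0.4)
    print(f"LoS effect {los_effect:+.0f} d:", interpret(dc, dq))
```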
Conclusions: Despite the substantial input required from speech-language pathologists, a typically scarce resource in ICU settings, ACV demonstrates strong potential for cost-effectiveness. There is no reason for decision-makers to de-adopt ACV, and delaying adoption may incur opportunity costs. Improved reporting of mortality and utility data in critical care research would increase the reliability of early-stage DAMs.
Objectives: This systematic review evaluates artificial intelligence (AI)-based predictive models developed for early sepsis detection in adult hospitalized patients. It explores model types, input features, validation strategies, performance metrics, clinical integration, and implementation challenges.
Data sources: A systematic search was conducted across PubMed, Scopus, Web of Science, Google Scholar, and CENTRAL for studies published between January 2015 and March 2025.
Study selection: Eligible studies included those developing or validating AI models for adult inpatient sepsis prediction using electronic health record data and reporting at least one performance metric (area under the curve [AUC], sensitivity, specificity, or F1 score). Studies focusing on pediatric populations, lacking quantitative evaluation, or not published in peer-reviewed journals were excluded.
Data extraction: Data extraction followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Extracted variables included study design, patient population, model type, input features, validation approach, and performance outcomes.
Data synthesis: A total of 52 studies met the inclusion criteria. Most used retrospective designs, with limited prospective or real-time clinical validation. Commonly used algorithms included random forests, neural networks, support vector machines, and deep learning architectures (long short-term memory, convolutional neural network). Input data varied from structured sources (vital signs, laboratory values, demographics) to unstructured clinical notes processed via natural language processing. Reported AUC values ranged from 0.79 to 0.96, indicating strong predictive performance across models.
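As one illustration of the kind of pipeline these studies describe (here, the unstructured-notes route: clinical text vectorized with TF-IDF and fed to a simple classifier, evaluated by AUC), the sketch below uses toy notes and labels; it does not reproduce any specific reviewed model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

# Toy clinical notes and illustrative labels (1 = later sepsis onset).
notes = [
    "febrile, tachycardic, rising lactate, suspected pneumonia",
    "afebrile, comfortable, tolerating diet, plan discharge",
    "hypotensive despite fluids, started broad-spectrum antibiotics",
    "stable overnight, no new complaints, vitals within normal limits",
] * 50
labels = [1, 0, 1, 0] * 50

# Vectorize the text and fit a logistic regression; evaluate on the held-out half.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(notes[: len(notes) // 2], labels[: len(labels) // 2])
probs = model.predict_proba(notes[len(notes) // 2 :])[:, 1]
print("AUC:", roc_auc_score(labels[len(labels) // 2 :], probs))
```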
Conclusions: AI models demonstrate significant promise for early sepsis detection, outperforming conventional scoring systems in many cases. However, generalizability, interpretability, and clinical implementation remain major challenges. Future research should emphasize externally validated, explainable, and scalable AI solutions integrated into real-time clinical workflows.
Objectives: To test whether urine olfactomedin 4 (uOLFM4) can predict furosemide responsiveness in patients at high risk for acute kidney injury (AKI) early in the PICU course. A secondary outcome was prediction of kidney replacement therapy (KRT) initiation in this cohort.
Design: Prospective observational cohort study.
Setting: Two quaternary care PICUs.
Patients: Two hundred forty PICU patients with a renal angina index greater than or equal to 8 and a urine sample collected on PICU days 0-1. Fifty-six patients received a furosemide dose on PICU days 1-4 and 44 received KRT.
Interventions: None.
Measurements and main results: uOLFM4 was measured via enzyme-linked immunosorbent assay. Urine neutrophil gelatinase-associated lipocalin (uNGAL) was measured via particle-enhanced turbidimetric immunoassay by the clinical laboratory. We compared groups using Mann-Whitney U or Kruskal-Wallis tests and calculated the area under the receiver operating characteristic curve for the performance of uOLFM4 and uNGAL in predicting furosemide responsiveness on PICU days 1-4 and KRT receipt. Median (interquartile range) uOLFM4 and uNGAL concentrations were higher in patients who were furosemide nonresponsive (uOLFM4 694 ng/mL [214-1478 ng/mL] vs. 139 ng/mL [46-529 ng/mL]; p = 0.0004 and uNGAL 1149 ng/mL [204-2284 ng/mL] vs. 53 ng/mL [50-1533 ng/mL]; p = 0.0076) and higher in patients who received KRT. uOLFM4 and uNGAL had similar, moderate discriminatory ability to predict furosemide responsiveness (area under the curve, 0.77 [95% CI, 0.65-0.90]; p = 0.0005 and 0.71 [95% CI, 0.57-0.85]; p = 0.0088, respectively). A uOLFM4 cutoff of 156 ng/mL had 59% sensitivity, 96% specificity, a positive predictive value of 64%, and a negative predictive value (NPV) of 95% for predicting furosemide responsiveness.
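The sketch below illustrates how this type of analysis (Mann-Whitney group comparison, AUC for discrimination, and sensitivity/specificity/PPV/NPV at a single cutoff) is typically computed; the simulated concentrations are illustrative and do not reproduce the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Illustrative biomarker concentrations (ng/mL); the marker is higher in nonresponders,
# so nonresponsiveness is treated as the positive class below.
responders = rng.lognormal(mean=5.0, sigma=1.0, size=60)
nonresponders = rng.lognormal(mean=6.5, sigma=1.0, size=40)

# Group comparison (Mann-Whitney U) and discrimination (AUC).
u, p = mannwhitneyu(nonresponders, responders, alternative="two-sided")
y = np.r_[np.zeros(len(responders)), np.ones(len(nonresponders))]
x = np.r_[responders, nonresponders]
print(f"Mann-Whitney p = {p:.4f}, AUC = {roc_auc_score(y, x):.2f}")

# Performance at a single cutoff (156 ng/mL, following the abstract).
cutoff = 156.0
pred = (x >= cutoff).astype(int)
tp = np.sum((pred == 1) & (y == 1)); fp = np.sum((pred == 1) & (y == 0))
fn = np.sum((pred == 0) & (y == 1)); tn = np.sum((pred == 0) & (y == 0))
print(f"Sens {tp/(tp+fn):.2f}  Spec {tn/(tn+fp):.2f}  PPV {tp/(tp+fp):.2f}  NPV {tn/(tn+fn):.2f}")
```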
Conclusions: In critically ill children at high risk for AKI, both uOLFM4 and uNGAL have moderate discriminatory ability to predict furosemide responsiveness and KRT receipt on the first day of PICU stay. The NPV of greater than or equal to 95% for uOLFM4 for both outcomes makes it a promising candidate for implementation into clinical decision support to facilitate early KRT initiation decision-making.
Importance: IV fluids are the cornerstone for management of acute kidney injury (AKI) after sepsis but can cause fluid overload. A restrictive fluid strategy may benefit some patients; however, identifying them is challenging. Novel causal machine learning (ML) techniques can estimate heterogeneous treatment effects (HTEs) of IV fluids among these patients.
Objectives: To develop and validate a causal-ML framework to identify patients who benefit from restrictive fluids (< 500 mL fluids within 24 hr after AKI).
Design setting and participants: We conducted a retrospective study among patients with sepsis who developed AKI within 48 hours of ICU admission. We developed a causal-ML approach to estimate individualized treatment effects and guide fluid therapy. The model was developed in the Medical Information Mart for Intensive Care IV database and externally validated in the Salzburg Intensive Care database.
Main outcomes and measures: Our primary outcome was early AKI reversal at 24 hours. Secondary outcomes included sustained AKI reversal and major adverse kidney events by 30 days (MAKE30). Model performance to identify HTE of restrictive IV fluids was assessed using the area under the targeting operator characteristic curve (AUTOC), which quantifies how well a model captures HTE, and compared with a random forest model.
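The abstract does not detail the AUTOC computation; the sketch below shows the underlying idea on synthetic data: estimate individual treatment effects (here with a simple T-learner standing in for the causal forest), rank held-out patients by predicted benefit, and average the difference between the treatment effect in the top-q fraction and the overall effect across values of q. Everything below, including the data-generating rule, is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Synthetic data: features X, treatment T (1 = restrictive fluids), binary
# outcome Y (1 = early AKI reversal). The true benefit depends on X[:, 0].
n = 4000
X = rng.normal(size=(n, 5))
T = rng.integers(0, 2, size=n)
Y = (rng.random(n) < 0.3 + 0.2 * T * (X[:, 0] > 0)).astype(float)

# Fit a simple T-learner CATE estimate on a training split (a stand-in for the
# causal forest); evaluate the targeting operator characteristic on a held-out split.
train = np.arange(n) < n // 2
test = ~train
m1 = RandomForestRegressor(random_state=0).fit(X[train & (T == 1)], Y[train & (T == 1)])
m0 = RandomForestRegressor(random_state=0).fit(X[train & (T == 0)], Y[train & (T == 0)])
cate_hat = m1.predict(X[test]) - m0.predict(X[test])

Yt, Tt = Y[test], T[test]

def subgroup_effect(mask):
    """Difference-in-means treatment effect within a subgroup of the test split."""
    return Yt[mask & (Tt == 1)].mean() - Yt[mask & (Tt == 0)].mean()

# TOC(q): effect among the top-q fraction ranked by predicted benefit, minus the
# overall effect; averaging over a grid of q approximates the AUTOC.
order = np.argsort(-cate_hat)
m = test.sum()
overall = subgroup_effect(np.ones(m, dtype=bool))
toc = []
for q in np.linspace(0.1, 1.0, 10):
    top = np.zeros(m, dtype=bool)
    top[order[: int(q * m)]] = True
    toc.append(subgroup_effect(top) - overall)
print("Approximate AUTOC:", round(float(np.mean(toc)), 3))
```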
Results: The causal forest model outperformed the random forest in identifying HTE of restrictive IV fluids, with an AUTOC of 0.15 vs. -0.02 in the external validation cohort. Among 1931 patients in the external validation cohort, the model recommended restrictive fluids for 68.9%. Among these, patients who received restrictive fluids demonstrated significantly higher rates of early AKI reversal (53.9% vs. 33.2%, p < 0.001) and sustained AKI reversal (34.2% vs. 18.0%, p < 0.001) and lower rates of MAKE30 (17.1% vs. 34.6%, p = 0.003). Results were consistent in the adjusted analysis.
Conclusions and relevance: The causal-ML framework outperformed the random forest model in identifying patients with AKI and sepsis who benefit from restrictive fluid therapy. This provides a data-driven approach to personalized fluid management and merits prospective evaluation in clinical trials.
Objectives: To identify the prevalence of over-assistance from mechanical ventilation (MV) and to assess whether MV support can be safely reduced in neurosurgical ICU patients with respect to the risk of under-assistance and brain oxygenation.
Design: Prospective observational study.
Setting: Neurosurgical trauma ICU, Toronto, ON, Canada.
Patients: Twenty-seven brain-injured patients on MV with an indication for a spontaneous breathing trial (SBT).
Interventions: Level of pressure support ventilation (PSV).
Measurements and main results: In neurosurgical patients, regional ventilation distribution (electrical impedance tomography), respiratory drive (airway occlusion pressure at 100 ms [P0.1]), respiratory muscle pressure (Pmus), diaphragm and parasternal intercostal thickening fraction, brain oximetry, and electroencephalogram were assessed at clinical PSV (ClinPS), at low PSV (LowPS; pressure support [PS] 5 cm H2O, positive end-expiratory pressure [PEEP] 5 cm H2O), and during the SBT (PS 0 cm H2O, PEEP 0 cm H2O). Over-assistance was defined as a pressure muscle index less than 0 cm H2O; under-assistance was defined as Pmus greater than or equal to 15 cm H2O. Mixed-effects models were used for analysis. The imbalanced dorsal/ventral distribution of ventilation improved as assistance was reduced, while respiratory effort increased. Over-assistance was present in ten cases (37%) during ClinPS and in none at LowPS or SBT; under-assistance was present in two, four, and seven cases at ClinPS, LowPS, and SBT, respectively. During the SBT, compliance and end-expiratory lung volume decreased (p < 0.0001). Brain activity did not vary. A P0.1 greater than or equal to 4 cm H2O was associated with Pmus greater than or equal to 15 cm H2O with 80% sensitivity and 91% specificity during the SBT.
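As a sketch of the kind of repeated-measures analysis a mixed-effects model handles here (fixed effect of ventilation condition, random intercept per patient), the example below uses statsmodels on synthetic Pmus measurements; the specification and data are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Synthetic repeated measurements: Pmus (cm H2O) measured in each of 27 patients
# under three ventilation conditions.
patients = np.repeat(np.arange(27), 3)
condition = np.tile(["ClinPS", "LowPS", "SBT"], 27)
condition_mean = {"ClinPS": 4.0, "LowPS": 8.0, "SBT": 12.0}
patient_intercept = rng.normal(0, 2, size=27)[patients]   # per-patient random intercept
pmus = (np.array([condition_mean[c] for c in condition])
        + patient_intercept
        + rng.normal(0, 3, size=len(condition)))

df = pd.DataFrame({"patient": patients, "condition": condition, "pmus": pmus})

# Mixed-effects model: fixed effect of condition, random intercept per patient.
fit = smf.mixedlm("pmus ~ C(condition)", data=df, groups=df["patient"]).fit()
print(fit.summary())
```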
Conclusions: Neurosurgical patients frequently appear to be over-assisted during PSV. Reducing ventilatory support is often feasible, and Pmus and P0.1 can help detect under-assistance.
Importance: Long-term functional outcomes and health-related quality of life (HRQoL) in survivors of cardiogenic shock treated with venoarterial extracorporeal membrane oxygenation (ECMO) remain poorly understood.
Objectives: This study aimed to evaluate these outcomes in a cohort of venoarterial ECMO survivors.
Design, setting, and participants: This single-center observational study was conducted in the ICU of a French academic hospital and included consecutive adult patients treated with venoarterial ECMO who were discharged alive between February 2016 and December 2021.
Main outcomes and measures: The primary endpoint was a favorable functional outcome at least one year after ICU discharge, defined as a score on the modified Rankin Scale of 0 or 1, indicating no functional limitations affecting usual activities. Secondary endpoints included HRQoL, assessed using the EuroQol 5D five levels (EQ-5D-5L) and 36-item Short-Form Health Survey (SF-36) questionnaires. Of 79 hospital survivors, 65 patients were evaluated after a median follow-up of 2.8 years (1.2-4.2 yr). A favorable functional outcome was observed in 35 of 65 patients (54%). No association was found between functional outcome and ICU admission characteristics, serum neurobiomarkers (neuron-specific enolase, S100B), or electroencephalogram findings during venoarterial ECMO. Male sex was the only parameter associated with higher odds of a favorable functional outcome (adjusted odds ratio, 4.19; 95% CI, 1.35-14.5). HRQoL assessments showed moderate-to-severe issues in 15% of patients, mainly affecting mobility, pain/discomfort, and mental health. Patients with favorable outcomes reported better scores across all domains of the EQ-5D-5L and higher scores on both the physical and mental components of the SF-36.
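For illustration of how an adjusted odds ratio with a 95% CI is typically derived (a multivariable logistic regression with the coefficient and its confidence bounds exponentiated), the sketch below uses synthetic data and a hypothetical age adjustment; it does not reproduce the study's model or covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Synthetic cohort: favorable outcome (mRS 0-1) modeled on sex and age (toy rule).
n = 65
df = pd.DataFrame({
    "male": rng.integers(0, 2, size=n),
    "age": rng.normal(55, 12, size=n),
})
logit_p = -0.5 + 1.0 * df["male"] - 0.02 * (df["age"] - 55)
df["favorable"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression adjusted for age; exponentiate coefficients for odds ratios.
fit = smf.logit("favorable ~ male + age", data=df).fit(disp=False)
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
})
print(or_table.loc[["male"]])
```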
Conclusions and relevance: Approximately half of venoarterial ECMO survivors achieved excellent long-term functional outcomes. Nonetheless, a subset experienced ongoing limitations, particularly related to physical function and mental health, underscoring the need for targeted long-term follow-up and support.

