Aims: Risk stratification and individual risk prediction play a key role in making treatment decisions in patients with complex coronary artery disease (CAD). The aim of this study was to assess whether machine learning (ML) algorithms can improve discriminative ability and identify unsuspected, but potentially important, factors in the prediction of long-term mortality following percutaneous coronary intervention or coronary artery bypass grafting in patients with complex CAD.
Methods and results: To predict long-term mortality, the ML algorithms were applied to the SYNTAXES database with 75 pre-procedural variables including demographic and clinical factors, blood sampling, imaging, and patient-reported outcomes. The discriminative ability and feature importance of the ML model were assessed in the derivation cohort of the SYNTAXES trial using a 10-fold cross-validation approach. The ML model showed acceptable discrimination (area under the curve = 0.76) in cross-validation. C-reactive protein, patient-reported pre-procedural mental status, gamma-glutamyl transferase, and HbA1c were identified as important variables predicting 10-year mortality.
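For readers unfamiliar with this evaluation scheme, a minimal Python sketch of 10-fold cross-validated discrimination plus permutation feature importance is shown below. The synthetic data, the random-forest stand-in model, and all parameters are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative sketch only: 10-fold cross-validated discrimination (AUC)
# and permutation feature importance, loosely mirroring the approach
# described above. Model choice and synthetic data are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2000, n_features=75, random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0)

# Discriminative ability: mean AUC across the 10 folds
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold CV AUC: {aucs.mean():.2f}")

# Feature importance via permutation on a fitted model
model.fit(X, y)
imp = permutation_importance(model, X, y, scoring="roc_auc",
                             n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("Top features by permutation importance:", top)
```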
Conclusion: The ML algorithms disclosed unsuspected, but potentially important, prognostic factors of very long-term mortality among patients with CAD. A 'mega-analysis' based on large randomized or non-randomized data, the so-called 'big data', may be warranted to confirm these findings.
Clinical trial registration: SYNTAXES ClinicalTrials.gov reference: NCT03417050, SYNTAX ClinicalTrials.gov reference: NCT00114972.
Aims: Over the last ten years, virtual Fractional Flow Reserve (vFFR) has improved the utility of Fractional Flow Reserve (FFR), a globally recommended assessment to guide coronary interventions. Although the speed of vFFR computation has accelerated, techniques utilising full 3D computational fluid dynamics (CFD) solutions rather than simplified analytical solutions still require significant time to compute.
Methods and results: This study investigated the speed, accuracy, and cost of a novel 3D-CFD software method based upon graphics processing unit (GPU) computation, compared with the fastest existing central processing unit (CPU)-based 3D-CFD technique, on 40 angiographic cases. The novel GPU simulation was significantly faster than the CPU method (median 31.7 s (interquartile range (IQR) 24.0-44.4 s) vs. 607.5 s (IQR 490-964 s), P < 0.0001). The novel GPU technique was 99.6% (IQR 99.3-99.9%) accurate relative to the CPU method. The initial cost of the GPU hardware was greater than that of the CPU (£4080 vs. £2876), but the median energy consumption per case was significantly lower with the GPU method (2.60 (IQR 2.16-3.12) Wh vs. 8.44 (IQR 6.80-13.39) Wh, P < 0.0001).
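For context, the reported medians imply roughly a 19-fold median per-case speedup and a roughly 3-fold energy saving; a trivial back-of-envelope check (values taken from the text above; per-case distributions are not available, so this is illustrative only):

```python
# Back-of-envelope check of the reported medians (values from the text;
# the "up to 28-fold" figure reflects the upper end of the per-case range).
cpu_s, gpu_s = 607.5, 31.7     # median runtime per case (s)
cpu_wh, gpu_wh = 8.44, 2.60    # median energy per case (Wh)

print(f"Median speedup: {cpu_s / gpu_s:.1f}x")          # ~19.2x
print(f"Median energy saving: {cpu_wh / gpu_wh:.1f}x")  # ~3.2x
```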
Conclusion: This study demonstrates that vFFR can be computed using 3D-CFD with up to 28-fold acceleration compared with previous techniques, with no clinically significant sacrifice in accuracy.
Aims: Life-threatening ventricular arrhythmias (LTVAs) are common manifestations of sepsis. The majority of sepsis patients with LTVA are unresponsive to initial standard treatment and consequently have a poor prognosis. Few studies have focused on the early identification of patients at high risk of LTVA in sepsis so that optimal preventive interventions can be performed. We aimed to develop a model to predict LTVA in sepsis using machine learning (ML) approaches.
Methods and results: Six ML algorithms, including CatBoost, LightGBM, and XGBoost, were employed for model fitting. Least absolute shrinkage and selection operator (LASSO) regression was used to identify key features. Model evaluation comprised the area under the receiver operating characteristic curve (AUROC) for discrimination and the calibration curve and Brier score for calibration. Finally, we validated the prediction model both internally and externally. A total of 27 139 patients with sepsis were identified in this study, of whom 1136 (4.2%) suffered from LTVA during hospitalization. We screened out 10 key features from the initial 54 variables via LASSO regression to improve the practicability of the model. CatBoost showed the best prediction performance among the six ML algorithms, with excellent discrimination (AUROC = 0.874) and calibration (Brier score = 0.157). Performance remained good in the external validation cohort (n = 9492), with an AUROC of 0.836, suggesting reasonable generalizability of the model. Finally, a nomogram with risk classification of LTVA was presented.
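As a hedged illustration of this pipeline (not the study's code), the sketch below pairs L1-penalized logistic regression as a stand-in for the LASSO screening step with a CatBoost classifier, evaluated by AUROC and Brier score. The synthetic data, outcome prevalence, and hyperparameters are assumptions.

```python
# Illustrative sketch only: L1-penalized feature screening followed by a
# CatBoost classifier, evaluated for discrimination (AUROC) and
# calibration (Brier score). Data and hyperparameters are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss
from catboost import CatBoostClassifier

# ~4% event rate, mimicking the LTVA prevalence reported above
X, y = make_classification(n_samples=5000, n_features=54,
                           weights=[0.96], random_state=0)

# Step 1: L1-penalized screening -- keep features with non-zero coefficients
lasso = LogisticRegressionCV(penalty="l1", solver="liblinear",
                             Cs=10, cv=5).fit(X, y)
keep = np.flatnonzero(lasso.coef_[0])
print(f"Retained {keep.size} of {X.shape[1]} features")

# Step 2: fit CatBoost on the screened features
X_tr, X_te, y_tr, y_te = train_test_split(X[:, keep], y,
                                          stratify=y, random_state=0)
clf = CatBoostClassifier(iterations=500, verbose=False, random_seed=0)
clf.fit(X_tr, y_tr)

# Step 3: discrimination and calibration on held-out data
p = clf.predict_proba(X_te)[:, 1]
print(f"AUROC: {roc_auc_score(y_te, p):.3f}  "
      f"Brier: {brier_score_loss(y_te, p):.3f}")
```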
Conclusion: We established and validated a machine learning-based prediction model enabling the early identification of patients with sepsis at high risk of LTVA, so that appropriate interventions can be undertaken to improve outcomes.
Aims: This study aims to evaluate the ability of a deep-learning-based cardiovascular disease (CVD) retinal biomarker, Reti-CVD, to identify individuals at intermediate and high risk for CVD.
Methods and results: We defined the intermediate- and high-risk groups according to the Pooled Cohort Equation (PCE), QRISK3, and the modified Framingham Risk Score (FRS). Reti-CVD's predictions were compared with the individuals identified as intermediate- and high-risk by these standard CVD risk assessment tools, and sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated to assess the results. In the UK Biobank, among 48 260 participants, 20 643 (42.8%) and 7192 (14.9%) were classified into the intermediate- and high-risk groups according to PCE and QRISK3, respectively. In the Singapore Epidemiology of Eye Diseases study, among 6810 participants, 3799 (55.8%) were classified into the intermediate- and high-risk groups according to the modified FRS. Reti-CVD identified the PCE-based intermediate- and high-risk groups with a sensitivity, specificity, PPV, and NPV of 82.7%, 87.6%, 86.5%, and 84.0%, respectively; the QRISK3-based groups with 82.6%, 85.5%, 49.9%, and 96.6%, respectively; and the modified FRS-based groups with 82.1%, 80.6%, 76.4%, and 85.5%, respectively.
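The four agreement metrics reported above follow directly from a 2x2 table of the biomarker flag against the reference classification; a minimal sketch with hypothetical counts (not the study's data):

```python
# Minimal sketch of the reported agreement metrics: a binary biomarker
# flag compared against a reference risk classification. The 2x2 counts
# below are hypothetical, not taken from the study.
def agreement_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)   # sensitivity: flagged among reference-positive
    spec = tn / (tn + fp)   # specificity: unflagged among reference-negative
    ppv = tp / (tp + fp)    # positive predictive value
    npv = tn / (tn + fn)    # negative predictive value
    return sens, spec, ppv, npv

# Hypothetical counts: biomarker flag vs. PCE intermediate/high-risk status
print(agreement_metrics(tp=170, fp=27, fn=36, tn=190))
```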
Conclusion: The retinal photograph biomarker (Reti-CVD) was able to identify individuals at intermediate and high risk for CVD, in accordance with existing risk assessment tools.
Aims: We aimed to investigate the concordance between heart rate variability (HRV) derived from the photoplethysmographic (PPG) signal of a commercially available smartwatch compared with the gold-standard high-resolution electrocardiogram (ECG)-derived HRV in patients with cardiovascular disease.
Methods and results: We prospectively enrolled 104 survivors of acute ST-elevation myocardial infarction, 129 patients after an ischaemic stroke, and 30 controls. All subjects underwent simultaneous recording of a smartwatch (Garmin vivoactive 4; Garmin Ltd, Olathe, KS, USA)-derived PPG signal and a high-resolution (1000 Hz) ECG for 30 min under standardized conditions. HRV measures in the time and frequency domains, non-linear measures, and deceleration capacity (DC) were calculated from both signals according to previously published methods. Lin's concordance correlation coefficient (ρc) between smartwatch-derived and ECG-based HRV markers was used as a measure of diagnostic accuracy. Very high concordance within the whole study cohort was observed for the mean heart rate (ρc = 0.9998), the standard deviation of the averages of normal-to-normal (NN) intervals in all 5-min segments (SDANN; ρc = 0.9617), and very-low-frequency power (VLF power; ρc = 0.9613). In contrast, detrended fluctuation analysis (DFA-α1; ρc = 0.5919) and the root mean square of successive differences between adjacent NN intervals (rMSSD; ρc = 0.6617) showed only moderate concordance.
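Lin's concordance correlation coefficient penalizes both poor correlation and systematic bias between two measurement methods, which is why it suits device-agreement studies like this one. A minimal sketch with synthetic paired data (not the study's recordings):

```python
# Sketch of Lin's concordance correlation coefficient (rho_c), the
# agreement measure used above. Inputs would be paired HRV values from
# the smartwatch and the ECG; the data here are synthetic.
import numpy as np

def lins_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()           # population variances (ddof=0)
    cov = ((x - mx) * (y - my)).mean()  # population covariance
    # rho_c = 2*cov / (var_x + var_y + (mean_x - mean_y)^2)
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(0)
ecg = rng.normal(60, 10, 200)        # e.g. an ECG-derived HRV marker
watch = ecg + rng.normal(0, 2, 200)  # smartwatch estimate with noise
print(f"rho_c = {lins_ccc(ecg, watch):.3f}")
```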
Conclusion: Smartwatch-derived HRV provides a practical alternative with excellent accuracy compared with ECG-based HRV for global markers and those characterizing lower frequency components. However, caution is warranted with HRV markers that predominantly assess short-term variability.
Aims: One of the most important complications of heart transplantation is organ rejection, which is diagnosed on endomyocardial biopsies by pathologists. Computer-based systems could assist in the diagnostic process and potentially improve reproducibility. Here, we evaluated the feasibility of using deep learning in predicting the degree of cellular rejection from pathology slides as defined by the International Society for Heart and Lung Transplantation (ISHLT) grading system.
Methods and results: We collected 1079 histopathology slides from 325 patients from three transplant centres in Germany. We trained an attention-based deep neural network to predict rejection in the primary cohort and evaluated its performance using cross-validation and by deploying it to three cohorts. For binary prediction (rejection yes/no), the mean area under the receiver operating characteristic curve (AUROC) was 0.849 in the cross-validated experiment and 0.734, 0.729, and 0.716 in the external validation cohorts. For prediction of the ISHLT grade (0R, 1R, 2/3R), AUROCs were 0.835, 0.633, and 0.905 in the cross-validated experiment and 0.764, 0.597, and 0.913; 0.631, 0.633, and 0.682; and 0.722, 0.601, and 0.805 in the validation cohorts, respectively. The predictions of the artificial intelligence model were interpretable by human experts and highlighted plausible morphological patterns.
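An attention-based deep neural network for slide-level prediction is commonly an attention-weighted pooling over tile embeddings (multiple-instance learning); the PyTorch sketch below shows this general pattern under assumed dimensions, not the authors' exact architecture. The returned attention weights are what make such predictions inspectable by human experts.

```python
# Minimal sketch of attention-based multiple-instance learning pooling,
# the general model family described above (not the authors' exact
# architecture). Each slide is a bag of tile embeddings; learned
# attention weights aggregate them into one slide-level prediction.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, tiles):                       # tiles: (n_tiles, feat_dim)
        a = torch.softmax(self.attn(tiles), dim=0)  # (n_tiles, 1) weights
        slide = (a * tiles).sum(dim=0)              # attention-weighted pooling
        return self.head(slide), a                  # logits + inspectable weights

model = AttentionMIL()
tiles = torch.randn(1000, 512)       # embeddings for one slide's tiles
logits, weights = model(tiles)
print(logits.shape, weights.shape)   # torch.Size([2]) torch.Size([1000, 1])
```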
Conclusion: We conclude that artificial intelligence can detect patterns of cellular transplant rejection in routine pathology, even when trained on small cohorts.