[This retracts the article DOI: 10.1155/2022/8005848.].
[This retracts the article DOI: 10.1155/2022/2482728.].
Introduction: Acute aortic syndrome (AAS) is a rare clinical syndrome with a high mortality rate. The Canadian clinical practice guideline for the diagnosis of AAS was developed to reduce the frequency of misdiagnosis. As part of the guideline, a clinical decision aid (the RIPP score) was developed to facilitate clinician decision-making. The aim of this study was to validate the diagnostic accuracy of this tool and assess its performance in comparison to other risk prediction tools that have been developed.
Methods: This was a historical case-control study. Consecutive cases and controls were recruited from three academic emergency departments from 2002-2020. Cases were identified through an admission, discharge, or death certificate diagnosis of acute aortic syndrome. Controls were identified through a presenting complaint of chest, abdominal, flank, or back pain, and/or a perfusion deficit. We compared the clinical decision tools' C statistics, used the DeLong method to test the significance of these differences, and report sensitivity and specificity with 95% confidence intervals.
Results: We collected data on 379 cases of acute aortic syndrome and 1340 potentially eligible controls; 379 controls were randomly selected from the final population. The RIPP score had a sensitivity of 99.7% (95% CI 98.54-99.99). This higher sensitivity resulted in a lower specificity (53%) compared to the other clinical decision aids (63-86%). The DeLong comparison of the C statistics found that the RIPP score had a higher C statistic than the ADDRS (-0.0423; 95% CI -0.07 to -0.02; P < 0.0009) and the AORTAs score (-0.05; 95% CI -0.07 to -0.02; P = 0.0002), no difference compared to the Lovy decision tool (0.02; 95% CI -0.01 to 0.05; P < 0.25), and a lower C statistic than the Von Kodolitsch decision tool (0.04; 95% CI 0.01 to 0.07; P < 0.008).
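For readers who want to reproduce interval estimates like the sensitivity above, a minimal sketch of an exact (Clopper-Pearson) binomial confidence interval follows. The counts used in the usage example (378 of 379 cases identified) are our assumption, inferred from the reported 99.7% sensitivity; they are not stated in the abstract.

```python
import math

def binom_tail_ge(x, n, p):
    """P(X >= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact two-sided (1 - alpha) confidence interval for a binomial proportion."""
    def solve(f, target):
        # Bisection on p in [0, 1]; f is increasing in p.
        lo, hi = 0.0, 1.0
        for _ in range(200):
            mid = (lo + hi) / 2
            if f(mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Lower bound: p such that P(X >= x | p) = alpha/2.
    lower = 0.0 if x == 0 else solve(lambda p: binom_tail_ge(x, n, p), alpha / 2)
    # Upper bound: p such that P(X <= x | p) = alpha/2, i.e. P(X >= x+1 | p) = 1 - alpha/2.
    upper = 1.0 if x == n else solve(lambda p: binom_tail_ge(x + 1, n, p), 1 - alpha / 2)
    return lower, upper

# Assumed counts: 378/379 cases flagged by the RIPP score.
lo, hi = clopper_pearson(378, 379)
```

With these assumed counts the interval comes out close to the reported 98.54-99.99, which is consistent with Clopper-Pearson intervals having been used.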
Conclusion: The Canadian clinical practice guideline's AAS clinical decision aid is a highly sensitive tool that uses readily available clinical information. It has the potential to improve diagnosis of AAS in the emergency department.
Purpose: To assess whether the COVID-19 pandemic influenced the presentation of testicular torsion and/or increased the frequency of orchiectomy.
Patients and Methods: This retrospective study included boys under 18 years of age with testicular torsion, divided into two groups: a pre-COVID group operated on in 2019 and a COVID-19 group from 2020. We compared demographic data as well as local and general symptoms. We analyzed additional tests, intraoperative findings, length of operation and hospitalization, and follow-up.
Results: We analyzed data from 44 patients (24 boys in the 2019 group vs. 20 boys in the 2020 group). The median age was 13.4 years in the former vs. 14.5 years in the latter. The median duration of symptoms was 6.5 hours and 8.5 hours, respectively. The main manifestation was testicular pain without additional signs. The results of the laboratory tests did not reflect local advancement. In the 2019 group, Doppler ultrasound showed absent blood flow in the affected testicle in 62% of patients vs. 80% in 2020. The mean time from admission to surgery was virtually identical: 75 minutes in 2019 vs. 76 minutes in 2020. The mean duration of scrotal revision was similar in both groups. There was only one significant difference: the degree of twisting, with a mean of 360° in 2019 vs. 540° in 2020. The incidence of orchiectomy also did not vary significantly between the analyzed periods: 21% during the pandemic vs. 35% during the pre-COVID-19 period.
Conclusion: We did not observe an increase in the number of testicular torsion cases during the COVID-19 pandemic. Most importantly, the rates of orchiectomy did not differ significantly between patients with testicular torsion presenting during the COVID-19 outbreak and those presenting before it.
Objective: The present study was designed to establish and evaluate an early prediction model of epilepsy after encephalitis in childhood based on electroencephalogram (EEG) and clinical features.
Methods: A total of 255 patients with encephalitis were randomly divided into training and validation sets and were classified into postencephalitic epilepsy (PE) and no postencephalitic epilepsy (no-PE) groups according to whether epilepsy occurred within one year after discharge. Univariate and multivariate logistic regression analyses were used to screen the risk factors for PE. The identified risk factors were used to establish and validate a model.
Results: This study included 255 patients with encephalitis: 209 in the no-PE group and 46 in the PE group. Univariate and multivariate logistic regression analyses showed that hemoglobin (OR = 0.968, 95% CI = 0.951-0.958), epilepsy frequency (OR = 0.968, 95% CI = 0.951-0.958), and the EEG slow-wave/fast-wave frequency ratio (S/F) in the occipital region were independent influencing factors for PE (P < 0.05). The prediction model was based on these factors: -0.031 × hemoglobin - 2.113 × epilepsy frequency + 7.836 × occipital S/F + 1.595. In the training and validation sets, the area under the ROC curve (AUC) of the model for the diagnosis of PE was 0.835 and 0.712, respectively.
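The abstract reports only the linear score, so applying the standard logistic link to obtain a probability is our assumption, as is the choice of argument names; the units of hemoglobin and the coding of epilepsy frequency are not specified in the abstract. A minimal sketch under those assumptions:

```python
import math

def pe_score(hemoglobin, epilepsy_frequency, occipital_sf_ratio):
    """Linear predictor with the coefficients exactly as reported in the abstract."""
    return (-0.031 * hemoglobin
            - 2.113 * epilepsy_frequency
            + 7.836 * occipital_sf_ratio
            + 1.595)

def pe_probability(hemoglobin, epilepsy_frequency, occipital_sf_ratio):
    """Assumed logistic transform of the reported linear score."""
    z = pe_score(hemoglobin, epilepsy_frequency, occipital_sf_ratio)
    return 1.0 / (1.0 + math.exp(-z))
```

The signs behave as the abstract describes: higher hemoglobin lowers the predicted risk, while a higher occipital S/F ratio raises it.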
Conclusion: Peripheral blood hemoglobin, the number of epileptic seizures in the acute stage of encephalitis, and the occipital EEG slow-wave/fast-wave frequency ratio can be used as predictors of epilepsy after encephalitis.
Background: Malnutrition is prevalent among critically ill patients and has been associated with a poor prognosis. This study sought to determine whether the addition of a nutritional indicator to the various variables of prognostic scoring models can improve the prediction of mortality among trauma patients in the intensive care unit (ICU).
Methods: This study's cohort included 1,126 trauma patients hospitalized in the ICU between January 1, 2018, and December 31, 2021. Two nutritional indicators were examined for their association with the mortality outcome: the prognostic nutrition index (PNI), a calculation based on the serum albumin concentration and peripheral blood lymphocyte count, and the geriatric nutritional risk index (GNRI), a calculation based on the serum albumin concentration and the ratio of current body weight to ideal body weight. The significant nutritional indicator served as an additional variable in prognostic scoring models of the Trauma and Injury Severity Score (TRISS), the Acute Physiology and Chronic Health Evaluation (APACHE II), and the mortality prediction models (MPM II) at admission and at 24, 48, and 72 h for mortality outcome prediction. The predictive performance was determined by the area under the receiver operating characteristic curve.
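The abstract describes both indices only in words. The constants below come from the commonly cited published formulas (Onodera's PNI and Bouillanne's GNRI), not from this abstract, so they should be treated as our assumption; a minimal sketch:

```python
def pni(albumin_g_dl, lymphocytes_per_mm3):
    """Prognostic nutritional index (Onodera):
    10 x albumin (g/dL) + 0.005 x total lymphocyte count (/mm^3)."""
    return 10.0 * albumin_g_dl + 0.005 * lymphocytes_per_mm3

def gnri(albumin_g_dl, weight_kg, ideal_weight_kg):
    """Geriatric nutritional risk index:
    14.89 x albumin (g/dL) + 41.7 x (weight / ideal weight),
    with the weight ratio conventionally capped at 1."""
    ratio = min(weight_kg / ideal_weight_kg, 1.0)
    return 14.89 * albumin_g_dl + 41.7 * ratio
```

For example, a patient with albumin 4.0 g/dL, 1,800 lymphocytes/mm³, and body weight at or above ideal weight would score PNI = 49 and GNRI ≈ 101.3; lower values of either index indicate worse nutritional status.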
Results: Multivariate logistic regression revealed that GNRI (OR, 0.97; 95% CI, 0.96-0.99; p=0.007), but not PNI (OR, 0.99; 95% CI, 0.97-1.02; p=0.518), was an independent risk factor for mortality. However, none of these predictive scoring models showed a significant improvement in prediction when the GNRI variable was incorporated.
Conclusions: The addition of GNRI as a variable to the prognostic scoring models did not significantly enhance the performance of the predictors.
[This retracts the article DOI: 10.1155/2022/2711489.].
[This retracts the article DOI: 10.1155/2022/4774195.].
[This retracts the article DOI: 10.1155/2022/5314105.].
[This retracts the article DOI: 10.1155/2022/4797281.].