Pub Date: 2026-02-28. eCollection Date: 2026-02-01. DOI: 10.14740/jocmr6448
Rahul Menon, Krystal Hunter, Satyajeet Roy
Background: Diet is increasingly recognized as a modifiable determinant of mental health. High intake of ultra-processed foods (UPFs) has been associated with poor psychological outcomes; however, the protective role of hydration, particularly plain water intake, remains underexplored. We aimed to evaluate the independent and combined associations of UPF and water intake with moderate-to-severe depression among the adult population of the United States (US).
Methods: We analyzed cross-sectional data from the National Health and Nutrition Examination Survey (NHANES) 2021-2023. UPF intake was proxied using the percentage of daily calories from added sugars and categorized into quartiles. Water intake (g/day) was similarly categorized into quartiles. Moderate-to-severe depression was defined as a Patient Health Questionnaire-9 (PHQ-9) score ≥ 10. Survey-weighted logistic regression models assessed associations between diet exposures and depression risk, adjusting for age, sex, and body mass index (BMI). Subgroup, sensitivity, and interaction analyses were conducted.
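The exposure and outcome coding described above (quartiles of intake; PHQ-9 ≥ 10 as the moderate-to-severe cutoff) can be sketched in a few lines of Python. The intake values below are hypothetical, not NHANES data:

```python
from statistics import quantiles

def quartile_of(value, cuts):
    """Return quartile 1-4 for a value given three quartile cut points."""
    q = 1
    for cut in cuts:
        if value > cut:
            q += 1
    return q

def phq9_moderate_to_severe(total_score):
    """PHQ-9 total >= 10 is the conventional moderate-to-severe cutoff."""
    return total_score >= 10

# Hypothetical daily plain-water intake (g/day) for eight participants.
water = [250, 800, 1500, 2200, 3100, 4000, 500, 1800]
cuts = quantiles(water, n=4)        # three cut points -> four quartiles
labels = [quartile_of(w, cuts) for w in water]
```

The same pattern applies to the added-sugar calorie percentages used as the UPF proxy; the survey-weighted logistic regression itself requires the NHANES design variables (weights, strata, PSUs) and is not reproduced here.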
Results: Prevalence of PHQ-9-based depression in our sample was 10.9%. Participants in the highest UPF quartile had higher odds of PHQ-9-based depression compared with those in the lowest (odds ratio (OR) = 1.547, 95% confidence interval (CI): 1.545-1.550, P < 0.001). Conversely, those in the highest water intake quartile had lower odds of PHQ-9-based depression (OR = 0.486, 95% CI: 0.486-0.487, P < 0.001). The UPF-water interaction was statistically significant but of minimal clinical relevance. Subgroup analyses showed greater vulnerability to depression from UPF consumption among males, Black and Hispanic individuals, and those with lower educational attainment. A small but statistically significant interaction (β = -0.07, P = 0.017) indicated that water intake modestly attenuated the UPF-depression relationship. Associations persisted after exclusion of extreme BMI values.
Conclusions: Increased UPF intake is associated with higher risk of depression among US adults, while increased water intake is associated with lower risk. These findings underscore the need for dietary strategies that simultaneously reduce UPF intake and promote hydration, with tailored interventions for high-risk groups.
Title: Dietary Behavior and Risk of Depression: Effects of Ultra-Processed Food and Water Intake in a National Sample of the United States. Journal of clinical medicine research, 18(2): 107-119. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12978389/pdf/
Background: With no approved pharmacological treatments for non-alcoholic fatty liver disease (NAFLD) in Taiwan, identifying protective and risk factors is crucial for preventing disease progression. Given the clinical heterogeneity of NAFLD, this study aimed to identify clinically meaningful NAFLD phenotypes using electronic medical records (EMRs) and unsupervised clustering, stratify risk across different clusters, identify factors associated with disease progression, and derive a parsimonious set of predictors for high-risk phenotypes.
Methods: This study was a retrospective cohort study conducted in three steps with iterative model training. In step 1, patients diagnosed with NAFLD were identified, and all relevant patient data were extracted, followed by clustering analysis using the k-prototype algorithm. In step 2, survival analysis and Cox regression were applied to perform risk stratification across clusters. In step 3, Lasso regression, logistic regression, and receiver operating characteristic (ROC) curve analysis were used to identify potential protective and risk factors associated with NAFLD and to derive a parsimonious set of predictors for high-risk phenotypes across different risk strata.
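The risk stratification in step 2 rests on survival estimates; as a rough illustration of the statistic reported per cluster (median survival), here is a minimal pure-Python Kaplan-Meier sketch run on synthetic follow-up data, not the study cohort:

```python
def km_median_survival(times, events):
    """Kaplan-Meier estimate of median survival.

    times: follow-up durations (e.g., years); events: 1 = event, 0 = censored.
    Returns the first time the survival curve drops to <= 0.5, or None
    if the median is never reached.
    """
    data = sorted(zip(times, events))
    surv = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        at_risk = sum(1 for tt, _ in data if tt >= t)
        surv *= 1 - deaths / at_risk
        if surv <= 0.5:
            return t
        i += sum(1 for tt, _ in data if tt == t)  # skip ties at time t
    return None
```

Cox regression then compares hazards between clusters after adjustment; that step needs a dedicated survival library and is omitted here.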
Results: Step 1: The analysis of 6,023 patients identified four distinct phenotypic clusters; the first cluster had the most severe disease and the second the least. Step 2: Among 4,998 patients, the first cluster faced the highest risk for all outcomes, with a median survival of 3.06 years, significantly different from the other clusters. There was no significant risk difference between the second and third clusters. Step 3: A comparison of the highest-risk and lowest-risk clusters ultimately identified 17 candidate variables.
Conclusions: Using multiple analytical models, this study identified 17 potential risk factors associated with NAFLD progression. Their combined assessment may inform future risk stratification and hypothesis generation. Further validation is required before clinical application.
Title: Investigating Factors Influencing Disease Progression in Patients With Non-Alcoholic Fatty Liver Disease.
Authors: Yi-Chieh Tseng, Rewadee Jenraumjit, Ming-Jong Bair, Chung-Yu Chen, Fu-Shih Chen
Pub Date: 2026-02-28. DOI: 10.14740/jocmr6424. Journal of clinical medicine research, 18(2): 83-98. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12978391/pdf/
Pub Date: 2026-02-28. eCollection Date: 2026-02-01. DOI: 10.14740/jocmr6465
Aida I Tarzimanova, Anna E Bragina, Liubov A Ponomareva, Liubov V Vasileva, Daria D Vanina, Ilya I Shvedov, Anna E Pokrovskaya, Tatiana A Safronova, Tatiana S Vargina, Irakli Zh Loriya, Elena N Popova, Paria Shooriberis, Yaroslav M Malinin, Valery I Podzolkov
Background: Atrial fibrillation (AF) is the most frequent arrhythmia worldwide, significantly elevating the risks of stroke and heart failure. Recent developments in imaging research have highlighted the need to explore epicardial adipose tissue (EAT) as a contributor to atrial pathology.
Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and registered in PROSPERO (CRD42022360443), a systematic search was conducted across PubMed, Scopus and Google Scholar using terms related to AF and EAT quantified using computed tomography. Inclusion criteria encompassed in vivo studies assessing EAT's effect on AF, with reported outcomes including AF development. Publication bias was assessed through two complementary approaches: visual inspection of funnel plot symmetry and formal statistical testing using Egger's or Begg's tests. A two-tailed P value threshold of 0.05 was established for determining statistical significance throughout all analyses.
Results: Ten studies (851 patients) examined the relationship between total EAT and AF. Meta-analysis of aggregate data revealed a statistically significant standardized mean difference (SMD) of 0.70 (95% confidence interval (CI), 0.24-1.15; I² = 91%; P < 0.01). Seven studies (579 patients) examined the relationship between periatrial EAT and AF. Meta-analysis of aggregate data revealed a statistically significant SMD of 1.13 (95% CI, 0.49-1.78; I² = 91%; P < 0.01).
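Given the high heterogeneity (I² = 91%), study-level SMDs of this kind are typically pooled under a random-effects model. A minimal DerSimonian-Laird sketch in Python follows; the effect sizes and variances are hypothetical, not the studies in this review:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect (DerSimonian-Laird tau^2) with 95% CI."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance estimate
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical study-level SMDs and their variances.
pooled, lo, hi = dersimonian_laird([0.4, 0.9, 1.2], [0.04, 0.06, 0.05])
```

The same Q statistic drives I² = max(0, (Q - df)/Q) × 100%, which is how the heterogeneity figures above are derived.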
Conclusions: This meta-analysis demonstrates that both total and periatrial EAT are associated with AF; however, the association is stronger for periatrial EAT than for total EAT.
Title: The Role of Epicardial Adipose Tissue in the Development of Atrial Fibrillation: A Systematic Review and Meta-Analysis. Journal of clinical medicine research, 18(2): 75-82. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12978394/pdf/
Pub Date: 2026-02-28. eCollection Date: 2026-02-01. DOI: 10.14740/jocmr6444
Jakob Mokros, Igor Matyukhin, Oliver Ritter, Daniel Patschan
Background: Chronic kidney disease (CKD) is a significant global health issue, primarily due to the rise in diabetes mellitus. This study aims to analyze the medications used by patients with CKD of varying severity, focusing on dose adaptation.
Methods: This was a retrospective observational analysis of patients with CKD from various causes. CKD staging followed the 2024 Kidney Disease: Improving Global Outcomes (KDIGO) guidelines, and all medications given during the study were recorded, including documentation of dose adjustments due to reduced kidney function.
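The KDIGO guidelines stage kidney function along a GFR axis (categories G1-G5, with G3 split into G3a/G3b); a minimal mapping from eGFR to GFR category might look like the sketch below. Note that full KDIGO CKD staging also incorporates the albuminuria category and evidence of chronicity, which this sketch omits:

```python
def kdigo_gfr_category(egfr):
    """Map eGFR (mL/min/1.73 m^2) to the KDIGO GFR category (G1-G5).

    Covers only the GFR axis; full staging also needs the albuminuria
    category (A1-A3) and >3 months of chronicity.
    """
    if egfr >= 90:
        return "G1"
    if egfr >= 60:
        return "G2"
    if egfr >= 45:
        return "G3a"
    if egfr >= 30:
        return "G3b"
    if egfr >= 15:
        return "G4"
    return "G5"
```

A renal dose-adjustment check of the kind documented in this study would then compare each prescribed drug's labeled eGFR threshold against the patient's category.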
Results: The study included 106 CKD patients. A total of 209 active medications were examined, with an average of 11.2 ± 4.8 substances used per patient. The average number of medications did not differ significantly across CKD stages. Dose adjustments for reduced kidney function were required in 40.19% of patients, who received an average of 5.4 ± 4.6 medications requiring dose reduction, with appropriate adjustments made for 4.6 ± 2.2 substances on average.
Conclusions: The study found that polypharmacy is present in all stages of CKD, and the significant rate of dose adjustments suggests that physicians are aware of the need to manage medications for CKD patients.
Title: Polypharmacy and Dose Adjustment in Chronic Kidney Disease: A Cross-Sectional Study. Journal of clinical medicine research, 18(2): 99-106. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12978386/pdf/
Pub Date: 2026-02-28. eCollection Date: 2026-02-01. DOI: 10.14740/jocmr6455
Christopher Pope, Priscilla Ahwin, Nikhil Kota, Ann Palathingal, Jason Peng, Harshini Suresh, Subhadra Thampi, Krystal Hunter, Satyajeet Roy
Background: Cancer and its various treatment modalities increase susceptibility to the development of sepsis. Because of the complex relationship between sepsis and cancer, we aimed to study the differences in risk factors and outcomes of sepsis in patients with cancer (SCa) compared to patients without cancer (SnoCa).
Methods: A retrospective cohort analysis of all adult patients who received care for sepsis in an urban tertiary healthcare center was conducted. Risk factors and outcomes were compared between the SCa and SnoCa groups.
Results: The SCa group (n = 310) was older than the SnoCa group (n = 628) (66.8 vs. 61.5 years; P < 0.01). Certain variables were more strongly associated with the SCa group than the SnoCa group: male sex (55.8% vs. 48.2%; P = 0.03), White race (60.6% vs. 51.7%; P = 0.01), lower body mass index (BMI) (28.10 ± 9.3 vs. 30.02 ± 10.4 kg/m²; P = 0.01), and history of transient ischemic attack (TIA) (6.1% vs. 2.7%; P = 0.01). Conversely, recreational drug use (10.0% vs. 17.0%; P = 0.01) and diabetes mellitus (DM) (35.9% vs. 45.9%; P = 0.01) were less common in the SCa group. Simple linear regression found that the SCa group had a shorter length of stay (LOS) (β = -0.08; P = 0.03). A logistic regression model showed that having cancer increased the odds of all-cause mortality (odds ratio (OR) 1.82, 95% confidence interval (CI) 1.35-2.46; P < 0.01); however, the SCa group had comparable readmission rates, bloodstream infections, and in-hospital mortality.
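Group differences in proportions like the male-sex comparison above are typically tested with a two-proportion z-test. The sketch below reconstructs approximate counts from the reported rounded percentages (55.8% of 310 ≈ 173 vs. 48.2% of 628 ≈ 303), so the resulting P value only approximates the reported P = 0.03:

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided P value for a two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # Two-sided tail area under the standard normal via math.erf.
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Counts reconstructed from rounded percentages (approximation).
p = two_proportion_p(173, 310, 303, 628)
```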
Conclusion: Compared with patients with sepsis without cancer, patients with sepsis and cancer are more likely to be older, male, and White, to have a lower BMI, and to have a history of TIA, and are less likely to use recreational drugs or to have DM. Patients with sepsis and cancer have a shorter LOS and higher all-cause mortality, with no difference in readmissions, bloodstream infections, or in-hospital mortality.
Title: Risk Factors for Adverse Outcomes in Cancer Patients With Sepsis. Journal of clinical medicine research, 18(2): 63-74. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12978403/pdf/
Pub Date: 2026-01-16. eCollection Date: 2026-01-01. DOI: 10.14740/jocmr6415
Do Van Loi, Luu Thuy Hien, Tran Thi Tuoi, Nguyen Phuc Thanh, Tran Vuong The Vinh, Luu Quang Thuy, Le Thi Nguyet
Background: Ventilator-associated pneumonia (VAP) caused by multidrug-resistant (MDR) Gram-negative bacteria has presented significant treatment challenges in critical care. While intravenous colistin is commonly used, its nephrotoxicity and limited lung penetration raise concerns. This study aimed to compare the clinical efficacy and safety of aerosolized versus intravenous colistin in patients with VAP.
Methods: The study included 60 adult patients diagnosed with VAP caused by colistin-sensitive MDR Gram-negative bacteria. Treatment decisions (aerosolized or intravenous colistin) were made by attending physicians based on clinical judgment (n = 30 per group). The primary outcome was clinical success; secondary outcomes included time to defervescence, Clinical Pulmonary Infection Score changes, and adverse events.
Results: Clinical success was achieved in 80.0% of patients in the aerosolized group compared with 70.0% in the intravenous group (P = 0.38). The time to defervescence was significantly shorter in the aerosolized group (3.0 ± 1.2 days) than in the intravenous group (5.0 ± 1.7 days; P = 0.002). Nephrotoxicity occurred in 13.3% of patients receiving aerosolized colistin and in 23.3% of those receiving intravenous colistin (odds ratio (OR) 0.51; 95% confidence interval (95% CI) 0.13-2.03; P = 0.19). Microbiological clearance was observed in 66.7% of the aerosolized group and 56.7% of the intravenous group (P = 0.44). Intensive care unit mortality was 16.7% in the aerosolized group and 23.3% in the intravenous group (P = 0.52).
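The nephrotoxicity odds ratio above can be reproduced from the implied 2 × 2 counts (4 of 30 aerosolized vs. 7 of 30 intravenous patients with events). The sketch below uses a Wald confidence interval, which may differ slightly from the interval-estimation method used in the study:

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a/b: events/non-events in the exposed group;
    c/d: events/non-events in the comparison group.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Nephrotoxicity: 4/30 aerosolized (13.3%) vs. 7/30 intravenous (23.3%).
or_, lo, hi = odds_ratio_wald_ci(4, 26, 7, 23)
```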
Conclusion: Aerosolized colistin was feasible and generally well tolerated; however, these findings should be interpreted as descriptive and hypothesis-generating, and further studies are needed to confirm their clinical relevance.
Title: Clinical Outcomes of Aerosolized Versus Intravenous Colistin in Ventilator-Associated Pneumonia Caused by Multidrug-Resistant Gram-Negative Bacteria. Journal of clinical medicine research, 18(1): 42-49. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12861513/pdf/
Pub Date: 2026-01-16. eCollection Date: 2026-01-01. DOI: 10.14740/jocmr6390
Juan Carlos Rivas Nieto, Brayan Daniel Cordoba-Melo, Juan Pablo Arango-Ibanez, Sebastian Seni-Molina, Mario Miguel Barbosa Rengifo, Carlos Alberto Miranda-Bastidas, Andres Felipe Casanova Rojas, Andres Fernando Mina Sanchez, Cesar J Herrera, Miguel Angel Quintana Da Silva, Andres Felipe Buitrago, Maria Lorena Coronel Gilio, Freddy Pow-Chon-Long, Juan Esteban Gomez-Mesa
Background: Psychopathological manifestations are key features of long COVID, contributing to a considerable global mental health burden. Neuropsychiatric sequelae such as anxiety, depression, cognitive dysfunction, and perceived stress may persist for months or years after infection. Latin American populations remain underrepresented, despite a high prevalence of long COVID and unique socio-demographic characteristics. Understanding these impacts is essential for targeted screening and interventions.
Methods: We conducted a prospective study of patients hospitalized for severe COVID-19. Psychiatric evaluation used the General Anxiety Disorder-7, Patient Health Questionnaire-9, Perceived Stress Scale-14, and Addenbrooke's Cognitive Examination-III (ACE-III), at an average of 24.5 months post-illness. Bivariate analyses evaluated differences by sex and intensive care unit (ICU) admission. Multivariable linear regression was used to examine associations between cognitive scores and age, sex, education, socioeconomic status, ICU admission, body mass index, smoking exposure, hypertension, and diabetes.
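A multivariable linear model of the kind described (a cognitive score regressed on age, sex, ICU admission, and other covariates) can be sketched with ordinary least squares on synthetic data. The generative coefficients below are invented for illustration and are not estimates from this registry:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic predictors: age, male sex, ICU admission (all invented).
age = rng.uniform(30, 80, n)
male = rng.integers(0, 2, n).astype(float)
icu = rng.integers(0, 2, n).astype(float)

# Invented generative model: ICU admission lowers the cognitive score.
ace = 90 - 0.1 * age + 2.0 * male - 4.0 * icu + rng.normal(0, 1.5, n)

# Ordinary least squares via the design matrix [1, age, male, icu].
X = np.column_stack([np.ones(n), age, male, icu])
beta, *_ = np.linalg.lstsq(X, ace, rcond=None)
```

With enough observations the fitted coefficients recover the generative ones, which is the logic behind reading the sign of the ICU term as a direction of association.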
Results: We included 152 patients; the mean age was 56 years, and 58.5% were male. Anxiety symptoms were present in 33%, depression in 49%, and both perceived stress and cognitive dysfunction were each observed in 11% of patients. Women exhibited significantly higher levels of depression (P = 0.02) and stress (P = 0.011), whereas patients admitted to the ICU demonstrated greater cognitive impairment (P < 0.001). In multivariable regression, male sex (P = 0.002), higher education (P < 0.001), and hypertension (P = 0.037) were significantly associated with higher ACE-III scores, while ICU admission was associated with lower scores (P = 0.017).
Conclusion: Our study reveals a high prevalence of mental health symptoms and cognitive dysfunction among patients 2 years after severe COVID-19. Anxiety showed no differences by sex or ICU requirement. Women exhibited higher rates of depression and perceived stress, while ICU admission was associated with poorer cognitive performance. Our findings should encourage systematic screening, diagnosis, and management of long-term neuropsychiatric sequelae in COVID-19 survivors. However, due to the limitations of the single-center design, further longitudinal and multicenter studies are warranted to better elucidate the long-term psychiatric impact of COVID-19.
Title: Long-Term Mental Health Evaluation After COVID-19: Insights From the CARDIO COVID 20-21 Registry. Journal of Clinical Medicine Research, 18(1): 18-30. DOI: 10.14740/jocmr6390.
Pub Date: 2026-01-16. eCollection Date: 2026-01-01. DOI: 10.14740/jocmr6321
Gui Zhen Zhu, Xiao Min Gong, Yang Lu, Xu Ma, Guo Sheng Yan, Jing Yi Wan, Hong Tao Zhang
Background: Hypernatremia is a common complication among neurocritical care patients. This study aimed to investigate the effectiveness and safety of regional citrate anticoagulation (RCA) vs. no anticoagulation (NA) in neurocritical patients receiving continuous renal replacement therapy (CRRT) who also had chronic severe hypernatremia and an elevated risk of bleeding.
Methods: From March 2020 to August 2024, electronic medical records of neurocritically ill patients who underwent CRRT for chronic severe hypernatremia with an elevated risk of bleeding at the neurocritical intensive care unit (ICU) of Henan Provincial People's Hospital were retrospectively analyzed. Patients were divided into RCA (n = 70) and NA (n = 28) groups. The key effectiveness outcome was the mean serum sodium correction rate, while the primary safety outcome was the occurrence of common anticoagulant adverse events. The original cohorts were matched between the two groups using propensity score matching (PSM) (n = 21). Risk factors affecting the lifespan of the first filter were analyzed using a Cox proportional hazards regression model.
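The propensity score matching step can be illustrated with a greedy 1:1 nearest-neighbor match within a caliper, one common PSM variant. This is a sketch under assumptions: the propensity scores below are synthetic, whereas in practice they would come from a logistic model of group membership on baseline covariates, and the study does not specify its matching algorithm.

```python
import numpy as np

# Synthetic propensity scores for the two groups (sizes from the abstract).
rng = np.random.default_rng(1)
ps_rca = rng.uniform(0.2, 0.8, 70)  # RCA group
ps_na = rng.uniform(0.2, 0.8, 28)   # NA group

def match_pairs(ps_treated, ps_control, caliper=0.05):
    """Greedy 1:1 matching without replacement: each control is paired
    with the nearest still-unmatched treated score within the caliper."""
    available = list(range(len(ps_treated)))
    pairs = []
    for j, p in enumerate(ps_control):
        if not available:
            break
        i = min(available, key=lambda k: abs(ps_treated[k] - p))
        if abs(ps_treated[i] - p) <= caliper:
            pairs.append((i, j))
            available.remove(i)
    return pairs

pairs = match_pairs(ps_rca, ps_na)
print(f"matched pairs: {len(pairs)}")
```

Each matched pair contributes one patient per group to the PSM cohort; unmatched patients are dropped, which is why the matched sample is smaller than either original group.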
Results: Both groups achieved similar sodium correction rates (0.5 ± 0.1 mmol/L/h). The RCA group had a lower incidence of both hemorrhagic (6/70 (8.6%) vs. 8/28 (28.6%), P = 0.021) and filter coagulation (0/70 (0%) vs. 17/28 (60.7%), P < 0.001) adverse events. In Kaplan-Meier and multivariable Cox regression analyses, RCA was identified as an independent protective factor for first filter lifespan (hazard ratio (HR) = 0.09, 95% confidence interval (CI): 0.05-0.18).
Conclusion: RCA is safer than, and as effective as, NA for CRRT in neurocritical patients with chronic severe hypernatremia, reducing bleeding and filter clotting risks. While our retrospective study suggests that RCA is a safe and effective strategy in this population, the findings require validation in a large-scale, randomized controlled trial to establish conclusive evidence.
Title: Efficacy and Safety of Regional Citrate Anticoagulation in Neurocritical Care Patients With Chronic Severe Hypernatremia Undergoing Continuous Renal Replacement Therapy: A Single-Center Retrospective Study. Journal of Clinical Medicine Research, 18(1): 31-41. DOI: 10.14740/jocmr6321.
Background: Breast cancer is the leading cause of cancer death in women worldwide. Breast imaging, usually mammography and/or ultrasound, is classified using the Breast Imaging-Reporting and Data System (BI-RADS). At Lampang Hospital, mammography delays of up to 5 months postpone diagnosis in 40% of breast cancer cases. An urgent queue for palpable breast masses was introduced, but nearly half were benign, leading to inefficient prioritization. This study aimed to develop a two-step model based on high-risk ultrasound features and compare it with reference BI-RADS classifications.
Methods: This diagnostic prediction study collected retrospective data from Lampang Hospital between January 2021 and December 2023. Ultrasound images of 390 patients were independently reviewed by radiologists blinded to the reference BI-RADS classification. Stepwise multivariable risk difference regression analysis was applied to identify predictive characteristics from seven predefined ultrasound findings.
Results: Three predictive characteristics were identified: shape, margin, and echo pattern. The two-step model showed excellent discrimination, with an area under the receiver operating characteristic curve (AuROC) of 0.9801 (95% CI, 0.9696-0.9907) in step 1 and 0.9623 (95% CI, 0.9411-0.9835) in step 2. Internal validation with 200 bootstrap cycles confirmed minimal optimism. Using prevalence-based cut points, the model achieved 88.5% accuracy, with 6.7% underestimation in BI-RADS 4-5 (predicted as 3) and overestimation not exceeding 3% in any category.
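The AuROC and bootstrap internal validation reported above can be sketched with the rank-based (Mann-Whitney) formulation of the AUC. Everything here is an illustrative assumption: the risk scores and outcomes are synthetic, not the study's model or patients; only the sample size and the 200-cycle bootstrap count follow the abstract.

```python
import numpy as np

# Synthetic cohort: a continuous risk score and a correlated binary outcome.
rng = np.random.default_rng(2)
n = 390
risk = rng.normal(0, 1, n)
label = (risk + rng.normal(0, 0.5, n) > 0) * 1

def auroc(scores, labels):
    """AUC as the probability a random positive outranks a random negative
    (Mann-Whitney U / rank-sum identity)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

apparent = auroc(risk, label)
# 200 bootstrap resamples of the same cohort, as in the abstract's
# internal validation; a bootstrap mean close to the apparent AUC
# indicates minimal optimism.
boot = [auroc(risk[idx], label[idx])
        for idx in (rng.integers(0, n, n) for _ in range(200))]
print(f"apparent AuROC {apparent:.3f}, bootstrap mean {np.mean(boot):.3f}")
```

A full optimism-corrected validation would refit the model inside each resample and score it on the original data; the sketch above only shows the resampling and AUC mechanics.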
Conclusions: A two-step ultrasound-based model using shape, margin, and echo pattern demonstrated excellent discrimination as well as high accuracy, with slightly increased underestimation and minimal overestimation. This re-scheduling strategy optimizes mammography queue prioritization, but external validation is required before clinical implementation.
Title: Predicting Breast Imaging-Reporting and Data System Classification of Palpable Breast Masses Using Ultrasound to Prioritize Mammography Queues. Sarisa Thinyu, Thanin Lokeskrawee, Takumi Sakata, Natthaphon Pruksathorn, Suppachai Lawanaskol, Jayanton Patumanond, Suwapim Chanlaor, Wanwisa Bumrungpagdee, Chawalit Lakdee. Journal of Clinical Medicine Research, 18(1): 50-61. Pub Date: 2026-01-04. DOI: 10.14740/jocmr6409.
Pub Date: 2026-01-04. eCollection Date: 2026-01-01. DOI: 10.14740/jocmr6360
Lali Barbakadze, Giorgi Gogitidze, Nikoloz Kintraia, Shota Kepuladze, George Burkadze
Background: Endometrial stromal tumors (ESTs) represent a heterogeneous group of uterine mesenchymal neoplasms with variable clinical outcomes. Although histological grading is a cornerstone for prognosis, the contribution of proliferative and immune microenvironment markers remains incompletely defined.
Methods: We retrospectively analyzed 90 patients diagnosed with endometrial stromal nodule (ESN) (n = 30), low-grade endometrial stromal sarcoma (LG-ESS, n = 30), and high-grade endometrial stromal sarcoma (HG-ESS, n = 30) between 2017 and 2025 across 35 public and private clinics in four Georgian cities. All specimens underwent standardized immunohistochemistry for estrogen receptor (ER), progesterone receptor (PR), Ki67, cyclinD1, cyclin-dependent kinase 4 (CDK4), CD117, forkhead box P3 (FOXP3), CD163, and CD34. Disease-free survival (DFS) was calculated from date of surgery to recurrence/metastasis. Kaplan-Meier curves and log-rank tests were used to assess survival differences, and data-driven cutoffs (Youden index) were employed to stratify biomarker expression. Multivariable Cox proportional hazards regression was applied to identify independent predictors of recurrence.
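The Kaplan-Meier estimate underlying the DFS analysis can be sketched in a few lines: at each event time, survival is multiplied by one minus the fraction of at-risk patients who recur. The toy cohort below is invented for illustration; it is not study data.

```python
import numpy as np

def kaplan_meier(times, events):
    """Return (event times, survival probabilities) via the product-limit
    estimator: S(t) *= 1 - d_t / n_t at each distinct event time t."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    out_t, out_s = [], []
    s = 1.0
    for t in np.unique(times[events]):
        at_risk = np.sum(times >= t)          # n_t: still under follow-up
        d = np.sum((times == t) & events)     # d_t: recurrences at t
        s *= 1 - d / at_risk
        out_t.append(float(t))
        out_s.append(s)
    return out_t, out_s

# Toy cohort: recurrence times in months; event = 1, censored = 0.
t = [5, 8, 12, 20, 20, 30, 41, 55]
e = [1, 1, 0, 1, 1, 0, 1, 0]
times, surv = kaplan_meier(t, e)
print(list(zip(times, [round(s, 3) for s in surv])))
```

Median DFS is then read off as the first time the curve drops to 0.5 or below, and the log-rank test compares such curves across the three histology groups.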
Results: Median follow-up was 55 months. DFS significantly differed by histology: not reached for ESN, 20.0 months for LG-ESS, and 5.0 months for HG-ESS (log-rank P < 0.0001). High Ki67, cyclinD1, CDK4, CD117, FOXP3, and CD163 predicted shortened DFS, while ER/PR expression correlated with prolonged DFS (all P < 0.0001). In adjusted models, lymphovascular space invasion (LVSI) (odds ratio (OR): 3.59, 95% confidence interval (CI): 3.21 - 3.87), Ki67 (OR: 4.65, 4.08 - 5.10), tumor necrosis (OR: 2.39, 2.06 - 2.79), cyclinD1 (OR: 2.20, 1.99 - 2.43), and CD163 (OR: 2.06, 1.72 - 2.51) remained independently associated with recurrence.
Conclusions: Beyond histological grade, proliferative signaling and M2 macrophage polarization strongly influence recurrence risk in ESS. These findings highlight potential diagnostic and therapeutic targets, suggesting integration of immune and cell-cycle biomarkers into future risk stratification models.
Title: Clinicopathologic and Immunohistochemical Correlates of Disease-Free Survival in Endometrial Stromal Sarcomas: A Multicenter Retrospective Study From 2017 to 2025. Journal of Clinical Medicine Research, 18(1): 9-17. DOI: 10.14740/jocmr6360.