
Latest publications in Healthcare analytics (New York, N.Y.)

An in-depth review and analysis of deep learning methods and applications in spinal cord imaging
Pub Date : 2025-10-28 DOI: 10.1016/j.health.2025.100429
Md Sabbir Hossain , Mostafijur Rahman , Mumtahina Ahmed , Ashifur Rahman , Md Mohsin Kabir , M.F. Mridha , Jungpil Shin
This systematic review explores the advances, technologies, and applications of deep learning in spinal cord magnetic resonance imaging (MRI). The current state of deep-learning techniques used for injury detection, disease diagnosis, and treatment planning in spinal cord imaging is thoroughly examined. This review includes a systematic analysis of over 100 studies from 2018 to 2025, selected based on clinical relevance, model performance, and innovation. Through a comprehensive analysis of recent literature, this review highlights the evolution and effectiveness of various deep-learning models in enhancing the accuracy and reliability of spinal cord MRI interpretations. Significant contributions of this review include identifying the most effective and innovative deep-learning approaches, such as Convolutional Neural Networks (CNNs) for precise lesion segmentation and Generative Adversarial Networks (GANs) for data augmentation. Additionally, it synthesizes current applications, such as improved injury detection and multiple sclerosis diagnosis, and explores deep-learning’s role in treatment planning. The review also addresses the challenges and limitations faced in this domain, including data scarcity, model interpretability, and computational demands, and proposes potential solutions and directions for future research. By offering these insights, this review provides a unique perspective on integrating deep-learning models into clinical workflows and their impact on clinical outcomes and patient care.
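For readers who want a concrete picture of the CNN-based lesion segmentation surveyed here, below is a minimal PyTorch sketch of an encoder-decoder segmentation network. It is not a model from any reviewed study; the single-channel 256x256 input, channel counts, and depth are illustrative assumptions.

```python
# Toy encoder-decoder CNN that maps one MRI slice to a per-pixel lesion logit.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 256 -> 128 resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                  # 1 output channel: lesion logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(1, 1, 256, 256)          # one simulated single-channel MRI slice
print(TinySegNet()(x).shape)             # -> torch.Size([1, 1, 256, 256])
```

A real pipeline would train such a network against expert masks with a Dice or cross-entropy loss; GAN-based augmentation, as the review notes, is typically used to enlarge the training set.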
Citations: 0
An analytical study of external factors influencing emergency occurrences in healthcare
Pub Date : 2025-10-19 DOI: 10.1016/j.health.2025.100426
Félicien Hêche , Philipp Schiller , Oussama Barakat , Thibaut Desmettre , Stephan Robert-Nicoud
This study investigates the impact of 19 external factors, related to weather, road traffic conditions, air quality, and time, on the hourly occurrence of emergencies. The analysis relies on six years of dispatch records (2015–2021) from the Centre Hospitalier Universitaire Vaudois (CHUV), which oversees 18 ambulance stations across the French-speaking region of Switzerland. First, classical statistical methods, including Chi-squared test, Student’s t-test, and information value, are employed to identify dependencies between the occurrence of emergencies and the considered parameters. Additionally, SHapley Additive exPlanations (SHAP) values and permutation importance are computed using eXtreme Gradient Boosting (XGBoost) and Multilayer Perceptron (MLP) models. Training and hyperparameter optimization were performed on data from 2015–2020, while the 2021 data were held out for evaluation and for computing model interpretation metrics. Results indicate that temporal features – particularly the hour of the day – are the dominant drivers of emergency occurrences, whereas other external factors contribute minimally once temporal effects are accounted for. Subsequently, performance comparisons with a simplified model that considers only the hour of the day suggest that more complex machine learning approaches offer limited added value in this context. Operationally, this result supports the use of simple time-dependent demand curves for EMS planning. Such models can effectively guide staffing schedules and relocations without the overhead of integrating external data or maintaining complex pipelines. By highlighting the limited utility of external predictors, this study provides practical guidance for EMS organizations seeking efficient, data-driven resource allocation methods.
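As a concrete illustration of the simple time-dependent demand curve the authors find competitive, the sketch below builds an hour-of-day baseline from synthetic dispatch counts. The column names and the Poisson-generated data are assumptions and do not reflect the CHUV records.

```python
# Hour-of-day demand baseline: average historical call counts per hour, then
# forecast any future hour by looking up its hour of day.
import numpy as np
import pandas as pd

hours = pd.date_range("2015-01-01", "2020-12-31 23:00", freq="h")
calls = pd.DataFrame({
    "timestamp": hours,
    # synthetic hourly emergency counts with a daily cycle
    "n_calls": np.random.poisson(3 + 2 * np.sin(2 * np.pi * hours.hour / 24)),
})

demand_curve = calls.groupby(calls["timestamp"].dt.hour)["n_calls"].mean()

def forecast(ts: pd.Timestamp) -> float:
    """Expected number of emergencies for the hour containing ts."""
    return float(demand_curve.loc[ts.hour])

print(forecast(pd.Timestamp("2021-06-01 17:00")))
```

An EMS planner could compare this curve with staffed capacity per hour; the study's point is that such a baseline already captures most of the predictable signal.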
Citations: 0
A machine learning framework for identifying phenotypes in chronic kidney disease
Pub Date : 2025-10-17 DOI: 10.1016/j.health.2025.100425
Marzieh Amiri Shahbazi , Mohammad Abdullah Al-Mamun , Todd Brothers , Imtiaz Ahmed
Identifying meaningful patient phenotypes is a cornerstone of data-driven healthcare, enabling risk stratification, resource allocation, and the design of personalized care strategies. Achieving this requires robust analytical methods that can uncover hidden structure in high-dimensional clinical data while ensuring stability and interpretability of results. In this study, we present a machine learning framework for phenotypic clustering that combines partition-based (k-means) and probabilistic (latent class analysis, LCA) approaches. By comparing subgroup assignments across these complementary methods, the framework provides an internal validation of clustering assignments. Rather than relying on a single method, the framework validates subgroup assignments through cross-method agreement, strengthening confidence in the robustness of the identified phenotypes and their utility for decision support. We apply the proposed framework to patients with chronic kidney disease (CKD) stratified by prior history of acute kidney injury (AKI), illustrating its value in uncovering population-level heterogeneity. While the mechanisms linking AKI to CKD phenotypic patterns remain poorly understood historically, this study investigates CKD trajectories in patients with and without prior AKI and identifies key phenotypic patterns. The analysis revealed consistent phenotypic structures, with over 80% agreement between the two clustering approaches. Distinct phenotypic patterns emerged between the AKI and non-AKI cohorts, with cardiovascular conditions consistently dominating in both groups. These findings demonstrate how stratified clustering can uncover risk signatures that traditional CKD staging systems may overlook. By combining complementary clustering algorithms, the framework strengthens the analytic foundation of phenotyping studies. Moreover, it enables the design of phenotype specific care pathways such as cluster aware monitoring panels and tailored coordination strategies, thus underscoring the broader potential of data-driven analytics to advance personalized medicine and healthcare decision support.
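A minimal sketch of the cross-method agreement check is shown below with scikit-learn. Because scikit-learn has no latent class analysis, a Gaussian mixture model stands in for the probabilistic arm, the data are synthetic, and the adjusted Rand index is used instead of raw percent agreement because it is invariant to cluster relabeling.

```python
# Compare two clusterings of the same patients and quantify their agreement.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# 400 synthetic "patients" with 5 features drawn from two latent groups
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (200, 5))])

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
gm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

# 1.0 means the two methods recover exactly the same partition
print("adjusted Rand index:", adjusted_rand_score(km_labels, gm_labels))
```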
Citations: 0
An explainable analytics framework for predicting diabetes in women using Convolutional Neural Networks
Pub Date : 2025-10-10 DOI: 10.1016/j.health.2025.100422
Gazi Mohammad Imdadul Alam , Tapu Biswas , Sharia Arfin Tanim , M.F. Mridha
Diabetes is a chronic metabolic disorder that heightens the risk of complications for women and presents diagnostic challenges owing to imbalanced datasets and the need for interpretable predictive models. In this study, we propose a 1D Convolutional Neural Network (1D CNN) model that achieves an accuracy of 98.61% on the German Patient Dataset, comprising 2,000 samples, and 99.35% on the Bangladeshi Patient Dataset, which includes 465 samples. Our model effectively addresses class imbalance by integrating the Synthetic Minority Over-sampling Technique and Edited Nearest Neighbor (SMOTE-ENN), which significantly enhances performance. Additionally, we conducted a statistical comparison with Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM) models, demonstrating our CNN’s superior accuracy while maintaining reduced complexity and enhanced transparency through the integration of SHapley Additive exPlanations (SHAP). Our SHAP analysis revealed significant variations in feature importance between the two populations, offering culturally relevant insights into the risk factors for diabetes. The SHAP analysis not only facilitates interpretability by allowing healthcare professionals to understand the influence of individual features but also emphasizes the cultural context of diabetes risk. Overall, our findings surpass existing methodologies in terms of accuracy and complexity while underscoring the critical need for demographic diversity in predictive healthcare models, paving the way for more effective diabetes prediction strategies.
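A minimal sketch of the SMOTE-ENN resampling step named above, using the standard imbalanced-learn implementation on synthetic tabular data; the 1D CNN itself and the real datasets are omitted, and the roughly 15% positive rate is an assumption.

```python
# Rebalance a skewed binary dataset with combined over-sampling (SMOTE) and
# cleaning (Edited Nearest Neighbours) before model training.
import numpy as np
from imblearn.combine import SMOTEENN   # pip install imbalanced-learn

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))                    # 8 synthetic clinical features
y = (rng.random(1000) < 0.15).astype(int)         # ~15% positive (diabetic) class

X_res, y_res = SMOTEENN(random_state=1).fit_resample(X, y)
print("class counts before:", np.bincount(y), "after:", np.bincount(y_res))
```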
Citations: 0
A focal loss and sequential analytics approach for liver disease classification and detection
Pub Date : 2025-10-04 DOI: 10.1016/j.health.2025.100424
Musa Mustapha , Oluwadamilare Harazeem Abdulganiyu , Isah Ndakara Abubakar , Kaloma Usman Majikumna , Garba Suleiman , Mehdi Ech-chariy , Mekila Mbayam Olivier
Liver disease poses a significant global health challenge requiring accurate and timely diagnosis. This research develops a novel deep learning model, named AFLID-Liver, to improve the classification of liver diseases from medical data. The AFLID-Liver model integrates three key techniques: an Attention Mechanism to focus on the most relevant data features, Long Short-Term Memory (LSTM) networks to process potential sequential information, and Focal Loss to effectively handle imbalances between different disease classes in the dataset. This combination enhances the model's ability to learn complex patterns and make robust predictions. We evaluated AFLID-Liver using a dataset of various patient records, including biomarkers and demographics. Our proposed model achieved superior performance, with 99.9 % accuracy, 99.9 % precision, and a 99.9 % F-score, significantly outperforming a baseline Gated Recurrent Unit (GRU) model (99.7 % accuracy, 97.9 % F-score) and existing state-of-the-art approaches. These results demonstrate AFLID-Liver's potential for highly accurate liver disease detection. To validate the generalizability of the proposed model, we performed cross-validation on an external dataset, which also yielded good performance and further demonstrated the model's potential. The novelty lies in the synergistic integration of these techniques, offering a robust approach for clinical decision support and improved patient outcomes. Future research will aim to enhance computational efficiency, paving the way for its adoption in real-time clinical applications.
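The abstract does not state the exact focal loss formulation or its parameters, so the PyTorch sketch below uses the common binary form, with alpha and gamma set to widely used default-style values as assumptions.

```python
# Binary focal loss: cross-entropy re-weighted to down-weight easy examples.
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()

logits = torch.tensor([2.0, -1.0, 0.5])    # raw model outputs
targets = torch.tensor([1.0, 0.0, 1.0])    # ground-truth labels as floats
print(binary_focal_loss(logits, targets))
```

The (1 - p_t)^gamma factor is what lets the loss focus training on the minority, hard-to-classify cases.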
Citations: 0
A constrained optimization approach for ultrasound shear wave speed estimation with time-lateral plane cleaning in medical imaging
Pub Date : 2025-09-27 DOI: 10.1016/j.health.2025.100423
MD Jahin Alam, Md. Kamrul Hasan
Ultrasound shear wave elastography (SWE) is a noninvasive tissue stiffness measurement technique for medical diagnosis. In SWE, an acoustic radiation force creates shear waves (SW) throughout a medium where the shear wave speed (SWS) is related to the medium stiffness. Traditional SWS estimation techniques are not noise-resilient in handling jitter and reflection artifacts. This paper proposes new techniques to estimate SWS in both time and frequency domains. These new methods utilize loss functions which are: (1) optimized by lateral signal shift between known locations, and (2) constrained by neighborhood displacement group shift determined from the time-lateral plane-denoised SW propagation. The proposed constrained optimization is formed by coupling neighboring particles’ losses with a Gaussian kernel, giving an optimum arrival time for the center particle to enforce local stiffness homogeneity and enable noise resilience. The explicit denoising scheme involves isolating SW profiles from time-lateral planes, creating parameterized masks. Additionally, lateral interpolation is performed to enhance reconstruction resolution and thereby improve the reliability of optimization. The proposed scheme is evaluated on a simulation (US-SWS-Digital-Phantoms) and three experimental phantom datasets: (i) Mayo Clinic CIRS049 model, (ii) RSNA-QIBA-US-SWS, (iii) Private data. The constrained optimization performance is compared with three time-of-flight (ToF) and two frequency-domain methods. The evaluations produced visually and quantitatively superior and noise-robust reconstructions compared to classical methods. Due to the quality and minimal error of SWS map formation, the proposed technique can find its application in tissue health inspection and cancer diagnosis.
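For context, the sketch below implements a toy version of the classical time-of-flight baseline the proposed method is compared against, not the constrained optimization itself: the delay between displacement traces at two lateral positions is estimated by cross-correlation, and speed is distance over delay. The frame rate, lateral spacing, and Gaussian pulse are synthetic assumptions.

```python
# Time-of-flight shear wave speed estimate from two simulated displacement traces.
import numpy as np

fs = 10_000.0                # displacement-tracking frame rate, Hz (assumed)
dx = 2.0e-3                  # lateral spacing between tracked points, m (assumed)
true_speed = 2.5             # m/s, used only to synthesize the traces

t = np.arange(0, 0.02, 1 / fs)

def gaussian_pulse(t0, width=0.0005):
    """Toy shear-wave displacement trace: a Gaussian bump arriving at t0."""
    return np.exp(-((t - t0) ** 2) / (2 * width ** 2))

sig_a = gaussian_pulse(0.005)                      # trace at lateral position A
sig_b = gaussian_pulse(0.005 + dx / true_speed)    # same wave, later at position B

# Cross-correlation peak gives the arrival-time delay in samples
lag = np.argmax(np.correlate(sig_b, sig_a, mode="full")) - (len(t) - 1)
print("estimated SWS:", dx / (lag / fs), "m/s")    # ~2.5
```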
Citations: 0
An integrated deep learning approach for enhancing brain tumor diagnosis
Pub Date : 2025-09-25 DOI: 10.1016/j.health.2025.100421
Rabeya Bashri Sumona , John Pritom Biswas , Ahmed Shafkat , Md Mahbubur Rahman , Md Omor Faruk , Yaqoob Majeed
The diagnosis of a brain tumor poses a significant challenge due to the varied manifestations of tumors and their impact on patient health. Traditional Magnetic Resonance Imaging (MRI) based methods are time-consuming, expensive, and highly reliant on radiologists’ expertise. Automated and reliable classification techniques are crucial to enhancing diagnostic accuracy, improving patient outcomes, and ensuring timely detection. This study introduces RDXNet, a hybrid deep learning model that integrates ResNet50, DenseNet121, and Xception to improve the classification of multiclass brain tumors. We utilized three publicly available datasets which are Br35H :: Brain Tumor Detection 2020, Figshare Brain Tumor Dataset, and Radiopaedia MRI Scans, combining 7,023 MRI images in four categories: glioma, meningioma, no tumor, and pituitary tumor. After evaluating individual models, we integrated them into RDXNet using feature fusion and transfer learning. Our model achieves an accuracy of 94%, exceeding the performance of individual models and mitigating overfitting. To validate robustness, K-Fold Cross-Validation was conducted across multiple data splits. Additionally, Grad-CAM-based visualizations were employed to enhance interpretability, enabling clinicians to understand the model’s decision-making. Using hybrid deep learning techniques, RDXNet significantly improves classification performance and reliability. This study demonstrates the potential of Artificial Intelligence (AI)-driven computer-aided diagnosis (CAD) systems to support radiologists, enabling faster and more accurate brain tumor identification, ultimately improving patient outcomes. Our proposed hybrid model, RDXNet outperforms individual architectures in multiclass brain tumor classification, achieving state-of-the-art performance and contributing towards faster, more reliable automated diagnosis.
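As a rough illustration of feature-level fusion of the three named backbones with transfer learning, here is a Keras sketch (Keras ships ResNet50, DenseNet121, and Xception directly). It is not the published RDXNet; the 224x224 input, frozen backbones, head layers, and the omission of per-backbone preprocess_input are simplifying assumptions.

```python
# Concatenate pooled features from three frozen ImageNet backbones, then classify
# into the four MRI categories (glioma, meningioma, no tumor, pituitary).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50, DenseNet121, Xception

inp = layers.Input(shape=(224, 224, 3))
features = []
for backbone in (ResNet50, DenseNet121, Xception):
    base = backbone(include_top=False, weights="imagenet", pooling="avg")
    base.trainable = False                 # transfer learning: freeze the backbone
    features.append(base(inp))

x = layers.Concatenate()(features)         # fused feature vector
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(4, activation="softmax")(x)

model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Note that weights="imagenet" downloads the pretrained weights on first use.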
Citations: 0
An analytics-driven review of U-Net for medical image segmentation
Pub Date : 2025-09-20 DOI: 10.1016/j.health.2025.100416
Fnu Neha , Deepshikha Bhati , Deepak Kumar Shukla , Sonavi Makarand Dalvi , Nikolaos Mantzou , Safa Shubbar
Medical imaging (MI) plays a vital role in healthcare by providing detailed insights into anatomical structures and pathological conditions, supporting accurate diagnosis and treatment planning. Noninvasive modalities, such as X-ray, magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), produce high-resolution images of internal organs and tissues. The effective interpretation of these images relies on the precise segmentation of the regions of interest (ROI), including organs and lesions. Traditional methods based on manual feature extraction are time-consuming, inconsistent, and not scalable. This review explores recent advances in artificial intelligence (AI)-driven segmentation, focusing on Convolutional Neural Network (CNN) architectures, particularly the U-Net family and its variants—U-Net++, and U-Net 3+. These models enable automated, pixel-wise classification across modalities and have improved segmentation accuracy and efficiency. The review outlines the evolution of U-Net architectures, their clinical integration, and offers a modality-wise comparison. It also addresses challenges such as data heterogeneity, limited generalizability, and model interpretability, proposing solutions including attention mechanisms and Transformer-based designs. Emphasizing clinical applicability, this work bridges the gap between algorithmic development and real-world implementation.
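The defining U-Net ingredient discussed in the review is the skip connection that concatenates encoder features into the decoder. Below is a toy single-level PyTorch version; the channel counts and 128x128 single-channel input are assumptions for illustration only.

```python
# One-level U-Net: encode, downsample, upsample, and fuse the skip connection.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # decoder sees upsampled features concatenated with the skip features
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))

    def forward(self, x):
        skip = self.enc(x)                   # full-resolution encoder features
        mid = self.mid(self.down(skip))      # bottleneck at half resolution
        up = self.up(mid)                    # back to full resolution
        return self.dec(torch.cat([up, skip], dim=1))

print(TinyUNet()(torch.randn(1, 1, 128, 128)).shape)   # -> (1, 1, 128, 128)
```

U-Net++ and U-Net 3+ extend this idea with denser and full-scale skip pathways, respectively.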
Citations: 0
EAGLE-Net: A hierarchical neural network for detecting anatomical landmarks in upper gastrointestinal endoscopy for clinical diagnosis
Pub Date : 2025-09-20 DOI: 10.1016/j.health.2025.100420
Thi Thu Huong Nguyen , Van Duy Truong , Xuan Huy Manh , Thanh Tung Nguyen , Hang Viet Dao , Hai Vu
This study proposes a hierarchical network architecture, named EAGLE-Net, for identifying anatomical landmarks in upper gastrointestinal (GI) tract endoscopic videos. Unlike conventional techniques, which label anatomical landmarks on static endoscopic images, the proposed method aims to classify landmarks from videos of the upper GI tract. Video streams often suffer from noise and contaminating objects, which requires a new approach. The proposed technique utilizes a hierarchical network architecture consisting of two stages: endoscopic image quality assessment and anatomical landmark classification. In the first stage, high-quality frames are preserved from GI tract videos. These frames are then used to identify a specific location among ten anatomical landmarks. The proposed method increases the coherence between the hierarchical data levels. It integrates an attention module to strengthen feature connections and utilizes a new hierarchical cross-entropy loss function to optimize model performance. The experimental results demonstrate that the proposed system achieves a high average accuracy of 93% in both classification stages. In clinical experiments, anatomical landmarks are automatically annotated to help physicians monitor the endoscopy process. In addition, the proposed method offers a potential solution for deploying a computer-aided diagnostic application for the detection and treatment of upper GI tract lesions.
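A minimal sketch of the two-stage idea described above: a frame-quality model gates which frames reach the ten-class landmark classifier. Both networks below are untrained placeholders, and the 64x64 input and 0.5 threshold are assumptions rather than EAGLE-Net's actual settings.

```python
# Stage 1 filters low-quality frames; stage 2 assigns one of 10 landmarks.
import torch
import torch.nn as nn

quality_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))    # bad / good frame
landmark_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))  # 10 landmarks

def classify_frame(frame: torch.Tensor):
    """Return a landmark index, or None if the frame fails the quality gate."""
    quality = quality_net(frame).softmax(dim=1)
    if quality[0, 1] < 0.5:                 # class 1 = "high-quality frame"
        return None
    return int(landmark_net(frame).argmax(dim=1))

print(classify_frame(torch.randn(1, 3, 64, 64)))
```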
Citations: 0
A deep learning framework for 3D brain tumor segmentation and survival prediction
Pub Date : 2025-09-17 DOI: 10.1016/j.health.2025.100418
Ashfak Yeafi, Monira Islam, Md. Salah Uddin Yusuf
Accurate and efficient segmentation of brain tumors is crucial for early diagnosis, personalized treatment planning, and improved survival rates. Brain tumors exhibit complex spatial and morphological variations, making automated segmentation a challenging task. This study introduces a dynamic segmentation network (DSNet), a novel 3D brain tumor segmentation framework that integrates adversarial learning, dynamic convolutional neural network (DCNN), and attention mechanisms to enhance precision and robustness. DSNet processes 3D magnetic resonance imaging (MRI) volumes, including T1-weighted (T1), T1-weighted with contrast enhancement (T1ce), T2-weighted (T2), and fluid-attenuated inversion recovery (FLAIR) modalities, capturing rich spatial and contextual features. Leveraging adversarial training, DSNet refines boundary definitions, while dynamic filters adapt to tumor-specific heterogeneities, ensuring accurate segmentation across diverse cases. The attention mechanism further emphasizes tumor-relevant regions, enhancing feature extraction and boundary delineation. The model was trained and validated on the BraTS 2020 dataset, achieving dice similarity coefficients of 0.959, 0.975, and 0.947 for whole tumors (WT), tumor cores (TC), and enhancing tumor (ET) regions, respectively. Its generalizability was further confirmed through evaluations on the BraTS 2019 and BraTS 2018 datasets. Additionally, volumetric features derived from segmented images were used to predict patients’ overall survival rates via a Random Forest (RF) classifier. To enhance accessibility, we integrated the segmentation and prediction processes into a user-friendly web application. DSNet outperforms state-of-the-art methods, providing a robust and accurate solution for 3D brain tumor segmentation with strong clinical potential.
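The Dice similarity coefficient reported above is simple to compute from binary masks; the short sketch below uses synthetic volumes, and the mask shapes are assumptions.

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))

rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.7      # synthetic ground-truth tumor mask
pred = truth.copy()
pred[:8] = ~pred[:8]                        # corrupt part of the "prediction"
print(round(dice_coefficient(pred, truth), 3))
```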
Citations: 0