
Computer Methods and Programs in Biomedicine Update — latest publications

Robust lung segmentation in Chest X-ray images using modified U-Net with deeper network and residual blocks
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100211
Wiley Tam , Paul Babyn , Javad Alirezaie
Lung diseases remain a leading cause of mortality worldwide, as evidenced by statistics from the World Health Organization (WHO). The limited availability of radiologists to interpret Chest X-ray (CXR) images for diagnosing common lung conditions poses a significant challenge, often resulting in delayed diagnosis and treatment. In response, Computer-Aided Diagnostic (CAD) tools can potentially streamline and expedite the diagnostic process. Recently, deep learning techniques have gained prominence in the automated analysis of CXR images, particularly in segmenting lung regions as a critical preliminary step. This study aims to develop and evaluate a lung segmentation model based on a modified U-Net architecture. The architecture leverages techniques such as transfer learning with DenseNet201 as a feature extractor alongside dilated convolutions and residual blocks. An ablation study was conducted to evaluate these architectural components, along with additional elements like augmented data, alternative backbones, and attention mechanisms. Extensive experiments were performed on two publicly available datasets, the Montgomery County (MC) and Shenzhen Hospital (SH) datasets, to validate the efficacy of these techniques on segmentation performance. Outperforming other state-of-the-art methods on the MC dataset, the proposed model achieved a Jaccard Index (IoU) of 97.77 and a Dice Similarity Coefficient (DSC) of 98.87. These results represent a significant improvement over the baseline U-Net, with gains of 3.37% and 1.75% in IoU and DSC, respectively. These findings highlight the importance of architectural enhancements in deep learning-based lung segmentation models, contributing to more efficient, accurate, and reliable CAD systems for lung disease assessment.
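The segmentation metrics reported above can be reproduced from binary lung masks. A minimal NumPy sketch (the 4×4 toy masks are illustrative, not the paper's data or model):

```python
import numpy as np

def iou_and_dice(pred, truth):
    """Jaccard Index (IoU) and Dice Similarity Coefficient for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

# Toy 4x4 "lung" masks: the prediction recovers 3 of 4 ground-truth pixels.
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
iou, dice = iou_and_dice(pred, truth)
print(round(iou, 3), round(dice, 3))  # intersection=3, union=4 -> IoU=0.75, Dice=2*3/(3+4)~=0.857
```

Since Dice weights the overlap twice against the combined mask sizes, Dice is always at least as high as IoU on the same prediction, consistent with the paper's 98.87 DSC versus 97.77 IoU.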
Citations: 0
A novel deep learning-based spider wasp optimization approach for enhancing brain tumor detection and physical therapy prediction
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100193
Suleiman Daoud , Ahmad Nasayreh , Khalid M.O. Nahar , Wlla k. Abedalaziz , Salem M. Alayasreh , Hasan Gharaibeh , Ayah Bashkami , Amer Jaradat , Sultan Jarrar , Hammam Al-Hawamdeh , Absalom E. Ezugwu , Raed Abu Zitar , Aseel Smerat , Vaclav Snasel , Laith Abualigah
A brain tumor, one of the deadliest disorders, is characterized by the abnormal growth of synapses in the brain. Early detection can improve brain tumor diagnosis, and accurate diagnosis is essential for effective treatment. Researchers have developed several deep-learning classification methods to diagnose brain tumors. Moreover, these types of tumors can significantly impair physical activity, presenting a broad spectrum of symptoms. As a result, each patient requires an individualized physical therapy treatment plan tailored to their specific needs. However, some challenges remain, including the need for a competent expert in classifying brain tumors using deep learning models, as well as the challenge of creating the most accurate deep learning model for brain tumor classification. To address these challenges, we present a highly accurate and efficient methodology based on advanced metaheuristic algorithms and deep learning. To identify different types of pediatric brain tumors, we specifically develop an optimal residual learning architecture. We also present the Spider Wasp Optimization (SWO) algorithm, which aims to improve performance by feature selection. The algorithm enhances the effectiveness of optimization by balancing the speed of convergence and diversity of solutions. We first convert the algorithm from continuous to binary, combine it with the K-Nearest Neighbor (KNN) algorithm for classification, and evaluate it on a dataset of brain MRI images collected from King Abdullah Hospital. Our analysis revealed that in terms of metrics such as accuracy, sensitivity, specificity, and f1-score, it outperformed other conventional algorithms. We demonstrate the overall effectiveness of the proposed model by using it to select the optimal features extracted from the Resnet50V2 model for pediatric brain tumor detection.
We compared the proposed SWO+KNN model with other deep learning architectures such as MobileNetV2 and Resnet50V2, and machine learning algorithms such as KNN, Support Vector Machine (SVM), and Random Forest (RF). The experimental results indicate that the proposed SWO+KNN model outperforms other well-established deep learning models and previous studies. SWO+KNN achieved accuracy rates of 97.5 % and 95.5 % for binary and multiclass classification, respectively. The results clearly demonstrate the ability of the proposed SWO+KNN model to accurately classify brain tumors.
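To illustrate the continuous-to-binary wrapper selection described above, here is a minimal sketch of its two ingredients: a sigmoid transfer function that turns a continuous optimizer position into a binary feature mask, and a nearest-neighbour fitness that scores the mask. The toy data, the specific transfer function, and the 1-NN fitness are illustrative assumptions, not the paper's exact SWO procedure:

```python
import numpy as np

def binarize(position):
    """Sigmoid transfer function: continuous optimizer position -> binary feature mask."""
    return (1.0 / (1.0 + np.exp(-position))) > 0.5

def knn_fitness(mask, X_tr, y_tr, X_te, y_te):
    """Fitness of a candidate mask: 1-NN accuracy using only the selected features."""
    if not mask.any():  # an empty mask selects no features and is worthless
        return 0.0
    # pairwise Euclidean distances on the selected columns, then nearest label
    d = np.linalg.norm(X_te[:, mask][:, None, :] - X_tr[:, mask][None, :, :], axis=2)
    return float((y_tr[d.argmin(axis=1)] == y_te).mean())

# Toy data: feature 0 separates the classes, feature 1 is a constant (no signal).
X_tr = np.array([[-2.0, 5.0], [-1.0, 5.0], [1.0, 5.0], [2.0, 5.0]])
y_tr = np.array([0, 0, 1, 1])
X_te = np.array([[-1.5, 5.0], [1.5, 5.0]])
y_te = np.array([0, 1])

good = knn_fitness(binarize(np.array([3.0, -3.0])), X_tr, y_tr, X_te, y_te)  # mask = [True, False]
bad = knn_fitness(binarize(np.array([-3.0, 3.0])), X_tr, y_tr, X_te, y_te)   # mask = [False, True]
print(good, bad)  # 1.0 0.5
```

The optimizer's job is then to search the continuous position space so that the induced masks maximize this fitness.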
Citations: 0
Enhancing stroke prediction models: A mixing of data augmentation and transfer learning for small-scale dataset in machine learning
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100198
Imam Tahyudin , Ade Nurhopipah , Ades Tikaningsih , Puji Lestari , Yaya Suryana , Edi Winarko , Eko Winarto , Nazwan Haza , Hidetaka Nambo
Machine learning is a powerful technique for analysing datasets and making data-driven recommendations. However, in general, the performance of machine learning in recognising patterns scales with the size of the dataset. In some domains, such as medicine, collecting each additional data instance demands considerable effort and expense. Therefore, additional data acquisition techniques are needed to increase data size and improve model quality.
This study applied Data Augmentation and Transfer Learning to solve small-scale dataset problems in analyzing stroke patient information in the Banyumas Regional General Hospital (RSUD Banyumas). The information is utilized to predict the patient's status when discharged from the hospital. The research compared the prediction accuracy from three solutions: Data Augmentation, Transfer Learning, and the mixing of both methods. The classification models employed in this study were four algorithms: Random Forest, Support Vector Machine, Gradient Boosting, and Extreme Gradient Boosting. We implemented the Synthetic Minority Over-sampling Technique for Nominal and Continuous (SMOTE-NC) to generate the artificial dataset. In the Transfer Learning process, we used a benchmark stroke dataset with a different target than ours, so we labelled it based on the nearest neighbours of the original dataset. Applying Data Augmentation proved worthwhile in this study, as it led to better performance than using only the original dataset. However, implementing the Transfer Learning technique does not give a satisfying result for XGBoost and SVM. Mixing Data Augmentation and Transfer Learning provides the best performance with accuracy and recall, both 0.813, the precision of 0.853497, and the F-1 score of 0.826628 given by the Random Forest model. The research can contribute significantly to developing better classification models so physicians can obtain more accurate information and help treat stroke cases more effectively and efficiently.
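The SMOTE family used for augmentation interpolates new minority samples between existing ones. A simplified sketch for continuous features only (SMOTE-NC additionally handles nominal columns by neighbour majority vote, omitted here; the toy points and neighbour count are assumptions):

```python
import numpy as np

def smote_like(X_min, n_new, k=2, rng=None):
    """Generate n_new synthetic minority samples by interpolating each sampled
    minority point toward one of its k nearest minority neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    # pairwise distances among minority points; a point is never its own neighbour
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per point
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))           # pick a minority sample
        j = nn[i, rng.integers(k)]             # pick one of its neighbours
        lam = rng.random()                     # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Three toy minority-class points; synthesize four new ones between them.
X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synthetic = smote_like(X_minority, n_new=4)
print(synthetic.shape)  # (4, 2)
```

Because every synthetic point lies on a segment between two real minority points, the generated data stays inside the minority region rather than duplicating existing rows.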
Citations: 0
Retraction notice to ``Can digital vaccine passports potentially bring life back to “true-normal”?'' [Computer Methods and Programs in Biomedicine Update, Volume 1, (2021) 100011]
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100203
Fauzi Budi Satria , Mohamed Khalifa , Mihajlo Rabrenovic , Usman Iqbal
Citations: 0
Validating and updating GRASP: An evidence-based framework for grading and assessment of clinical predictive tools
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2024.100161
Mohamed Khalifa , Farah Magrabi , Blanca Gallego

Background

When selecting clinical predictive tools, clinicians are challenged with an overwhelming and ever-growing number, most of which have never been implemented or evaluated for effectiveness. The authors developed an evidence-based framework for grading and assessment of predictive tools (GRASP). The objective of this study is to refine and validate GRASP, and to assess its reliability for consistent application.

Methods

A mixed-methods study was conducted, involving an initial web-based survey for feedback from a wide group of international experts in clinical prediction to refine the GRASP framework, followed by reliability testing with two independent researchers assessing eight predictive tools. The survey involved 81 experts who rated agreement with the framework's criteria on a five-point Likert scale and provided qualitative feedback. The reliability of the GRASP framework was evaluated through interrater reliability testing using Spearman's rank correlation coefficient.

Results

The survey showed strong expert agreement with the framework's evaluation criteria (overall average score: 4.35/5), highlighting the importance of predictive performance, usability, potential effect, and post-implementation impact in grading clinical predictive tools. Qualitative feedback led to significant refinements, including detailed categorisation of evidence levels and clearer representation of evaluation criteria. Interrater reliability testing showed high agreement between researchers and authors (0.994) and among researchers (0.988), indicating strong consistency in tool grading.

Conclusion

The GRASP framework provides a high-level, evidence-based, and comprehensive, yet simple and feasible, approach to evaluate, compare, and select the best clinical predictive tools, with strong expert agreement and high interrater reliability. It assists clinicians in selecting effective tools by grading them on the level of validation of predictive performance before implementation, usability and potential effect during planning for implementation, and post-implementation impact on healthcare processes and clinical outcomes. Future studies should focus on the framework's application in clinical settings and its impact on decision-making and guideline development.
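Interrater agreement of the kind reported here can be computed as Spearman's rank correlation, i.e. the Pearson correlation of the rank-transformed grades. A small self-contained sketch (the eight example grades are invented, not the study's data):

```python
import numpy as np

def spearman(a, b):
    """Spearman's rank correlation: Pearson correlation of the ranks,
    with average ranks assigned over ties."""
    def ranks(x):
        order = np.argsort(x)
        r = np.empty(len(x))
        r[order] = np.arange(1, len(x) + 1)
        for v in np.unique(x):                 # average the ranks of tied values
            mask = x == v
            r[mask] = r[mask].mean()
        return r
    ra = ranks(np.asarray(a, float))
    rb = ranks(np.asarray(b, float))
    return float(np.corrcoef(ra, rb)[0, 1])

# Two hypothetical raters grading eight tools on a small ordinal scale.
rater1 = [3, 2, 3, 1, 2, 3, 1, 2]
rater2 = [3, 2, 3, 1, 2, 2, 1, 2]
print(round(spearman(rater1, rater2), 3))
```

Identical rankings give a coefficient of 1.0 and fully reversed rankings give -1.0, so values such as the study's 0.994 and 0.988 indicate near-perfect ordinal agreement.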
Citations: 0
A comparative approach of analyzing data uncertainty in parameter estimation for a Lumpy Skin Disease model
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100178
Edwiga Renald , Miracle Amadi , Heikki Haario , Joram Buza , Jean M. Tchuenche , Verdiana G. Masanja
The livestock industry has been economically affected by the emergence and reemergence of infectious diseases such as Lumpy Skin Disease (LSD). This has driven interest in researching efficient mitigation measures for controlling the transmission of LSD. Mathematical models of real-life systems inherently lose information, and consequently the accuracy of their results is often complicated by uncertainties in the data used to estimate parameter values. There is a need for models with knowledge about the confidence of their long-term predictions. This study introduces a novel yet simple technique for analyzing data uncertainties in compartmental models, which is then used to examine the reliability of a deterministic model of the transmission dynamics of LSD in cattle, investigating data-quality scenarios under which the model parameters can be well identified. The assessment of the uncertainties is determined with the help of the Adaptive Metropolis-Hastings algorithm, a standard Markov Chain Monte Carlo (MCMC) statistical method. Simulation results with synthetic cases show that the model parameters are identifiable with a reasonable amount of synthetic noise, and enough data points spanning the model classes. MCMC outcomes derived from synthetic data, generated to mimic the characteristics of the real dataset, significantly surpassed those obtained from actual data in terms of uncertainties in identifying parameters and making predictions. This approach could serve as a guide for obtaining informative real data, and be adapted to target key interventions when using routinely collected data to investigate the long-term transmission dynamics of a disease.
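As a sketch of the MCMC machinery involved, the following implements plain random-walk Metropolis on a toy one-parameter posterior. The Adaptive Metropolis variant used in the study additionally tunes the proposal covariance from the accumulated chain history, which is omitted here; the toy data and step size are assumptions:

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler: propose a Gaussian step, accept with
    probability min(1, posterior ratio), otherwise keep the current state."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    chain = np.empty((n_iter, theta.size))
    lp = log_post(theta)
    for t in range(n_iter):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[t] = theta
    return chain

# Toy "parameter estimation": posterior of a single mean parameter given
# noisy observations (a 1-D Gaussian log-density, flat prior).
data = np.array([1.8, 2.1, 2.0, 1.9, 2.2])
log_post = lambda th: -0.5 * np.sum((data - th[0]) ** 2)
chain = metropolis(log_post, [0.0])
print(chain[1000:].mean())  # posterior-mean estimate after burn-in
```

The spread of the post-burn-in chain, not just its mean, is what quantifies the parameter uncertainty that the study compares between synthetic and real data.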
Citations: 0
ACD-ML: Advanced CKD detection using machine learning: A tri-phase ensemble and multi-layered stacking and blending approach
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2024.100173
Mir Faiyaz Hossain, Shajreen Tabassum Diya, Riasat Khan
Chronic Kidney Disease (CKD), the gradual loss and irreversible damage of the kidney’s functionality, is one of the leading contributors to death and causes about 1.3 million people to die annually. It is extremely important to slow down the progression of kidney deterioration to prevent kidney dialysis or transplant. This study aims to leverage machine learning algorithms and ensemble models for early detection of CKD using the “Chronic Kidney Disease (CKD15)” and “Risk Factor Prediction of Chronic Kidney Disease (CKD21)” datasets from the UCI Machine Learning Repository. Two encoding techniques are introduced to combine the datasets, i.e., Discrete Encoding and Ranged Encoding, resulting in Discrete Merged and Ranged Merged datasets. The preprocessing stage employs normalization, class balancing with synthetic oversampling, and five feature selection techniques, including RFECV and Pearson Correlation. This work proposes a novel Tri-phase Ensemble technique combining Voting, Bagging, and Stacking approaches and two other ensemble models: Multi-layer Stacking and Multi-layer Blending classifiers. The investigation reveals that, for the Discrete Merged dataset, the novel Tri-phase Ensemble and Multi-layer Stacking with layers interchanged achieves an accuracy of 99.5%. For the Ranged Merged dataset, AdaBoost attains an accuracy of 97.5%. Logistic Regression accomplishes an accuracy of 99.5% in validating with the discrete dataset, whereas for validating with the ranged dataset, both Random Forest and SVM achieve 100% accuracy. Finally, to interpret and understand the behavior and prediction of the model, a LIME explainer has been utilized.
Computer methods and programs in biomedicine update, Vol. 7, Article 100173.
Citations: 0
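The Voting phase of the Tri-phase Ensemble described above can be sketched in a few lines. This is an illustrative hard-voting sketch, not the paper's pipeline: the three "base classifiers" are hypothetical threshold rules on one synthetic feature, standing in for trained models.

```python
import numpy as np

def hard_vote(predictions):
    """Majority vote across classifiers: `predictions` has shape
    (n_classifiers, n_samples) with binary labels 0/1."""
    predictions = np.asarray(predictions)
    return (predictions.mean(axis=0) >= 0.5).astype(int)

# Three hypothetical base classifiers as simple threshold rules on one
# synthetic feature (stand-ins for trained base learners).
X = np.array([0.1, 0.4, 0.45, 0.6, 0.9])
clf_a = (X > 0.5).astype(int)
clf_b = (X > 0.4).astype(int)
clf_c = (X > 0.3).astype(int)

ensemble_pred = hard_vote([clf_a, clf_b, clf_c])   # → [0, 0, 1, 1, 1]
```

In a stacking or blending layer, the per-classifier predictions would instead be fed as features to a meta-learner rather than averaged.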
Numerical modelling and stability analysis of fractional smoking model
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100201
Zafar Iqbal , Nauman Ahmed , Abid Ali , Ali Raza , Muhammad Rafiq , Ilyas Khan
In this work, the effects and propagation of smoking in society are studied through a fractional tobacco-smoking model, which is investigated both analytically and numerically. The system has two equilibrium points, the tobacco-free and the endemic equilibrium. The stability of the model is established by applying the Jacobian matrix technique. For the numerical study, the non-standard finite difference (NSFD) scheme is hybridized with the Grunwald-Letnikov (GL) approximation of the Caputo differential operator, and the key features of the continuous model are examined for the resulting GL-NSFD scheme. Numerically simulated graphs confirm positivity, boundedness, and convergence towards the exact steady states. An integer-order epidemic model cannot accurately capture this nonlinear real phenomenon, nor can it predict the future state exactly, because the integer-order derivatives involved are local by nature and carry no memory of the system's history. The fractional-order model, by contrast, captures all the necessary features of the continuous model, and the proposed numerical method preserves the structure of the continuous system: positivity, boundedness, and convergence toward the exact steady states. It is worth mentioning that the projected numerical scheme is consistent with the continuous system.
Computer methods and programs in biomedicine update, Vol. 8, Article 100201.
Citations: 0
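The Grunwald-Letnikov approximation used in the GL-NSFD scheme above can be demonstrated on a scalar test problem. The sketch below is not the authors' scheme for the full smoking model; it applies an explicit GL discretization to a hypothetical fractional decay equation D^alpha y = -lam * y, which shows the memory term the abstract emphasizes (every step sums over the whole history).

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grunwald-Letnikov coefficients c_j = (-1)^j * C(alpha, j),
    via the standard recurrence c_j = c_{j-1} * (1 - (alpha + 1) / j)."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def gl_decay(alpha, lam, y0, h, steps):
    """Explicit GL scheme for D^alpha y = -lam * y (a stand-in for a
    single model compartment): the update carries the full memory sum."""
    c = gl_coeffs(alpha, steps)
    y = np.empty(steps + 1)
    y[0] = y0
    for n in range(1, steps + 1):
        memory = sum(c[j] * y[n - j] for j in range(1, n + 1))
        y[n] = -memory - h ** alpha * lam * y[n - 1]
    return y

y = gl_decay(alpha=0.9, lam=0.5, y0=1.0, h=0.05, steps=200)
```

For 0 < alpha < 1 the coefficients c_j (j >= 1) are all negative, which is what keeps the iterates positive here; an NSFD variant would further replace h ** alpha by a denominator function chosen to preserve the steady states.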
Leveraging multilingual RAG for breast cancer RCPs: AI-driven speech transcription and compliance in Darija-French clinical discussions
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100221
Ilyass Emssaad , Fatima-Ezzahraa Ben-Bouazzaa , Idriss Tafala , Manal Chakour El Mezali , Bassma Jioudi
The integration of artificial intelligence (AI) into clinical decision-making has introduced new opportunities for automating and enhancing medical documentation, particularly in oncology, where multidisciplinary meetings are central to treatment planning. However, existing speech-to-text and retrieval-augmented generation (RAG) systems are not equipped to operate effectively in multilingual, dialect-rich environments such as those in North African hospitals where Moroccan Darija, Arabic, and French are frequently interwoven. These linguistic complexities, combined with the high-stakes nature of clinical dialogue, challenge transcription accuracy, contextual information retrieval, and regulatory compliance. This study presents a multilingual RAG system tailored to clinical meetings, integrating a fine-tuned Whisper ASR model with a sentence-level semantic retrieval pipeline and a compliance-aware generation framework. Evaluated on real-world clinical queries, the system demonstrates improved transcription quality and retrieval precision over standard pipelines, while enforcing factual grounding and safety through multi-stage output validation. These results highlight the potential of multilingual, speech-driven AI to support decision-making and compliance in linguistically diverse healthcare environments, offering a deployable foundation for clinical NLP in underserved regions.
Computer methods and programs in biomedicine update, Vol. 8, Article 100221.
Citations: 0
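The sentence-level semantic retrieval step of a RAG pipeline like the one described above can be sketched with a bag-of-words TF-IDF ranker. This is a deliberately simplified stand-in (the paper's system would use multilingual sentence embeddings); the clinical-note strings are invented for illustration.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors for a list of tokenized sentences."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                     # document frequency per term
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query, sentences, k=1):
    """Return the k sentences most similar to the query."""
    corpus = [s.lower().split() for s in sentences]
    vecs = tfidf_vectors(corpus + [query.lower().split()])
    qv = vecs[-1]
    ranked = sorted(range(len(sentences)),
                    key=lambda i: cosine(vecs[i], qv), reverse=True)
    return [sentences[i] for i in ranked[:k]]

# Hypothetical transcript sentences from a tumor-board discussion.
notes = [
    "patient presents with a breast mass on the left side",
    "treatment plan discussed with the oncology board",
    "follow-up imaging scheduled in three months",
]
top = retrieve("imaging follow-up schedule", notes)
```

The retrieved sentences would then be passed, with the query, to the generation stage, where the compliance layer validates the output against them.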
Unifying heterogeneous hyperspectral databases for in vivo human brain cancer classification: Towards robust algorithm development
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100183
Alberto Martín-Pérez , Beatriz Martinez-Vega , Manuel Villa , Raquel Leon , Alejandro Martinez de Ternero , Himar Fabelo , Samuel Ortega , Eduardo Quevedo , Gustavo M. Callico , Eduardo Juarez , César Sanz

Background and objective

Cancer is one of the leading causes of death worldwide, and early and accurate detection is crucial to improve patient outcomes. Differentiating between healthy and diseased brain tissue during surgery is particularly challenging. Hyperspectral imaging, combined with machine and deep learning algorithms, has shown promise for detecting brain cancer in vivo. The present study is distinguished by an analysis and comparison of the performance of various algorithms, with the objective of evaluating their efficacy in unifying hyperspectral databases obtained from different cameras. These databases include data collected from various hospitals using different hyperspectral instruments, which vary in spectral ranges, spatial and spectral resolution, as well as illumination conditions. The primary aim is to assess the performance of models that respond to the limited availability of in vivo human brain hyperspectral data. The classification of healthy tissue, tumors and blood vessels is achieved through the utilisation of different algorithms in two databases: HELICoiD and SLIMBRAIN.

Methods

This study evaluated conventional and deep learning methods (KNN, RF, SVM, 1D-DNN, 2D-CNN, Fast 3D-CNN, and a DRNN), and advanced classification frameworks (LIBRA and HELICoiD) using cross-validation on 16 and 26 patients from each database, respectively.

Results

For individual datasets, LIBRA achieved the highest sensitivity for tumor classification, with values of 38 %, 72 %, and 80 % on the SLIMBRAIN, HELICoiD (20 bands), and HELICoiD (128 bands) datasets, respectively. The HELICoiD framework yielded the best F1 Scores for tumor tissue, with values of 11 %, 45 %, and 53 % for the same datasets. For the Unified dataset, LIBRA obtained the best results in identifying the tumor, with 40 % sensitivity and a 30 % F1 Score.
Computer methods and programs in biomedicine update, Vol. 7, Article 100183.
Citations: 0
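Among the conventional baselines this study evaluates, KNN is the simplest to illustrate: each pixel's spectral signature is assigned the majority label of its nearest training signatures. The sketch below uses synthetic two-class "spectra" with invented reflectance statistics, not the HELICoiD or SLIMBRAIN data.

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """k-nearest-neighbour classification over spectral signatures."""
    preds = []
    for x in test_X:
        d = np.linalg.norm(train_X - x, axis=1)        # Euclidean distances
        nearest = train_y[np.argsort(d)[:k]]           # k closest labels
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])          # majority vote
    return np.array(preds)

# Synthetic "spectra": two tissue classes with different mean reflectance
# across 20 hypothetical bands.
rng = np.random.default_rng(0)
healthy = rng.normal(0.3, 0.05, size=(30, 20))         # class 0
tumor = rng.normal(0.7, 0.05, size=(30, 20))           # class 1
X = np.vstack([healthy, tumor])
y = np.array([0] * 30 + [1] * 30)
test = np.vstack([rng.normal(0.3, 0.05, size=(5, 20)),
                  rng.normal(0.7, 0.05, size=(5, 20))])
pred = knn_predict(X, y, test)
```

Unifying heterogeneous databases, as the paper proposes, would first require resampling signatures from the different cameras to a common band grid before any such per-pixel classifier applies.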