
Computer Methods and Programs in Biomedicine: Latest Publications

Prey capture enhanced Harris hawks optimizer for wrapper-based feature selection in high-dimensional medical data
IF 4.8 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2026-01-04 · DOI: 10.1016/j.cmpb.2026.109237
Mohammed Batis, Yi Chen, Lei Liu, Ali Asghar Heidari, Huiling Chen

Background and Objective

While the Harris Hawks Optimizer (HHO) is widely utilized for wrapper-based Feature Selection (FS) due to its efficiency and ease of implementation, existing HHO-based FS approaches encounter challenges when handling high-dimensional datasets, such as falling into local optima and high computational costs. In the HHO algorithm, the Harris hawks engage in surprise attacks on the identified prey according to the prey's escape energy. However, there may be scenarios where the prey could escape due to the algorithm's limitations. To enhance the algorithm's prey-capture ability, this article introduces an enhanced HHO algorithm termed Prey Capture Harris Hawks Optimizer (PCHHO).

Methods

The prey capture strategy incorporates crossover and mutation operators to enhance the algorithm's exploratory and exploitative capabilities. The performance of PCHHO is evaluated on the CEC2017 benchmark suite, where it is compared with the original HHO, three enhanced HHO variants, nine classical metaheuristic algorithms, and nine improved metaheuristic algorithms. The experimental comparison results are synthesized using the Wilcoxon signed-rank and Friedman tests. Finally, a binary form of PCHHO (bPCHHO) is designed for wrapper-based FS and compared with six competitive binary metaheuristics on 15 high-dimensional medical datasets.
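As context for how such a wrapper evaluates candidate feature subsets, the sketch below shows a typical fitness function that a binary metaheuristic like bPCHHO could minimize. It is our own illustration, not the authors' code: the KNN classifier, the 5-fold cross-validation, and the 0.99/0.01 weighting between error rate and feature ratio are assumptions.

```python
# Illustrative wrapper-based FS fitness (not the authors' implementation):
# lower is better, combining classification error with the fraction of selected features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fs_fitness(mask, X, y, alpha=0.99):
    """alpha weights classification error against the selected-feature ratio."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():                       # empty subsets are maximally penalized
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask], y, cv=5).mean()
    error = 1.0 - acc
    feature_ratio = mask.sum() / mask.size
    return alpha * error + (1.0 - alpha) * feature_ratio

# Example: evaluate one random binary "hawk position" on toy high-dimensional data
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)
print(fs_fitness(rng.integers(0, 2, size=200), X, y))
```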

Results

The results demonstrate the strong performance of the proposed algorithm on the CEC2017 benchmark suite relative to the compared algorithms, as well as the effectiveness of bPCHHO in evolving feature subsets: compared with bHHO, it achieves a 77% reduction in classification error, an 8% reduction in computational time, and selects 73% fewer features.

Conclusions

The proposed PCHHO and its binary variant bPCHHO exhibit superior performance in both benchmark optimization and wrapper-based FS for high-dimensional medical data, highlighting their potential for practical applications.
Citations: 0
Liver cancer segmentator: Metadata-guided confidence scoring for reliable segmentation of colorectal liver metastases in CT
IF 4.8 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2026-01-03 · DOI: 10.1016/j.cmpb.2026.109233
Mohammad Hamghalam, Jacob J. Peoples, Kaitlyn S.M. Kobayashi, Grace Park, Erin Kwak, E. Claire Bunker, Natalie Gangai, Mithat Gonen, Yun Shin Chun, HyunSeon Christine Kang, Richard K.G. Do, Amber L. Simpson

Background and Objective:

This study introduces the liver cancer segmentator (LCS), a deep learning model designed for automatic and robust segmentation of liver parenchyma and tumors in abdominal contrast-enhanced computed tomography images from patients with colorectal liver metastases. The primary aim was to enhance confidence scoring for more reliable clinical segmentation assessment.

Methods:

In this retrospective study, 446 abdominal contrast-enhanced computed tomography examinations were collected; 355 (80%) were used for training and 91 for testing. Data originated from routine clinical cases at two institutions, representing diverse disease stages and treatment settings. A state-of-the-art neural network segmentation framework was trained on these cases, with performance evaluated using the Dice score and the normalized surface distance. An iterative training process, supported by an integrated annotation workflow, was employed to refine the training set. The final model was applied to the 91 test examinations to assess the impact of tumor volume and slice thickness on confidence scoring. Reliability was quantified through pairwise Dice score for failure detection and the area under the risk coverage curve.
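Since both evaluation quantities are named but not defined here, the following minimal sketch (our own; the per-case error definition and variable names are assumptions) shows the Dice score between two binary masks and the area under a risk–coverage curve for confidence-ranked cases.

```python
# Illustrative definitions (ours) of the Dice score and the area under the
# risk-coverage curve (AURC) used for reliability assessment.
import numpy as np

def dice(a, b, eps=1e-8):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

def aurc(confidences, errors):
    """Average risk over coverage levels, with cases ranked by decreasing confidence."""
    order = np.argsort(-np.asarray(confidences))
    errors = np.asarray(errors, float)[order]
    risks = np.cumsum(errors) / np.arange(1, len(errors) + 1)
    return risks.mean()

pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True
truth = np.zeros((4, 4), bool); truth[1:4, 1:4] = True
print(dice(pred, truth))                      # 8/13, about 0.615
print(aurc([0.9, 0.7, 0.4], [0.0, 0.1, 0.5]))  # lower is better
```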

Results:

The LCS achieved a Dice score of 0.9707 (95% CI: 0.9663–0.9751) for liver parenchyma and 0.7695 (95% CI: 0.7166–0.8224) for tumors. Normalized surface distance values at a 3-millimeter tolerance were 0.9605 (95% CI: 0.9539–0.9671) for parenchyma and 0.8412 (95% CI: 0.7928–0.8896) for tumors. Confidence scoring analysis demonstrated strong correlations between tumor volume, slice thickness, and segmentation reliability, reducing the area under the risk coverage curve from 16.7 to 10.3.

Conclusions:

The LCS achieved high segmentation accuracy in patients with colorectal liver metastases. Incorporating tumor volume and slice thickness into the confidence scoring process improved failure detection, enhanced reliability, and provided valuable insights for refining clinical deployment of automated segmentation algorithms.
Citations: 0
SegRenal: AI-Driven segmentation of frozen sections in transplant kidney biopsies — A comparative analysis of deep learning models
IF 4.8 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2026-01-02 · DOI: 10.1016/j.cmpb.2025.109216
Ibrahim Yilmaz, Heba M. Alazab, Fatih Doganay, Bryan Dangott, Sam Albadri, Aziza Nassar, Fadi Salem, Zeynettin Akkus

Background and Objective:

Frozen section evaluation of donor kidney biopsies is vital for determining transplant suitability, yet remains challenging due to interobserver variability and freezing-related artifacts. While deep learning (DL) has been used for permanent sections, its application to frozen tissue is limited. We developed SegRenal, an artificial intelligence (AI)–based segmentation model for automated identification of glomeruli (non-sclerotic and sclerotic), arteries, and interstitial fibrosis and tubular atrophy (IFTA) in hematoxylin and eosin–stained frozen whole-slide images (WSIs). This study focuses on rigorous model adaptation, dataset development, cross-scanner performance evaluation, and integration into a clinical digital pathology workflow.

Methods:

A total of 183 frozen WSIs were collected from two scanners (GT450 and Grundium) and manually annotated by expert renal pathologists. Three encoder–decoder architectures (UNet, ResNet–UNet, and DenseNet–UNet) were trained on patch-level data to compare binary and multiclass segmentation strategies. The best-performing configuration — pre-trained DenseNet with multiclass output — was further evaluated on 21 unseen WSIs scanned with both platforms. Preprocessing included downsampling, patch extraction, and annotation refinement. Performance was assessed using Dice score, precision, recall, and intraclass correlation coefficient (ICC). Bland–Altman analysis and scanner variability experiments were conducted.
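As an illustration of what a "pre-trained DenseNet–UNet" multiclass configuration can look like in code, here is a minimal sketch using the segmentation_models_pytorch library. The library choice, the densenet121 encoder, the patch size, and the five-class layout (background plus the four annotated compartments) are our assumptions, not the authors' published configuration.

```python
# Sketch only: a DenseNet-encoder UNet with ImageNet pre-training and multiclass output.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="densenet121",      # DenseNet encoder pre-trained on ImageNet
    encoder_weights="imagenet",
    in_channels=3,                   # H&E patches in RGB
    classes=5,                       # background + 4 tissue compartments (assumed)
)

patch = torch.randn(2, 3, 512, 512)  # a toy batch of 512x512 patches
logits = model(patch)                # -> (2, 5, 512, 512) per-class logits
print(logits.shape)
```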

Results:

Pre-trained DenseNet multiclass segmentation yielded the best overall performance: Dice scores of 0.95 (glomeruli), 0.90 (sclerotic glomeruli), 0.80 (arteries), and 0.88 (IFTA). Recall reached 99.8% for glomeruli and 100% for arteries. Performance remained consistent across scanners. In several cases, the model detected structures initially missed by manual annotation, later confirmed by pathologists.

Conclusions:

SegRenal accurately segments key renal compartments in frozen biopsies and demonstrates robust cross-scanner performance. By automating tissue quantification, the model reduces variability and turnaround time, supporting fast and consistent intraoperative kidney transplant assessments.
Citations: 0
Beyond a single mode: GAN ensembles for diverse medical data generation
IF 4.8 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2026-01-02 · DOI: 10.1016/j.cmpb.2026.109234
Lorenzo Tronchin, Tommy Löfstedt, Paolo Soda, Valerio Guarrasi

Background and Objective:

The advancement of generative AI in medical imaging faces the trilemma of simultaneously achieving high fidelity, diversity, and efficiency in synthetic data generation. Although Generative Adversarial Networks (GANs) have demonstrated significant potential, they are often hindered by limitations such as mode collapse and poor coverage of real data distributions. This study investigates the use of GAN ensembles as a solution to these challenges, with the goal of enhancing the quality and utility of synthetic medical images.

Methods:

We formulate a multi-objective optimisation problem to select an optimal ensemble of GANs that balances fidelity and diversity. The ensemble comprises models that contribute uniquely to the synthetic data space, ensuring minimal redundancy. A comprehensive evaluation was conducted using three distinct medical imaging datasets. We tested 22 GAN architectures, incorporating various loss functions and regularisation techniques. By sampling models at different training epochs, we crafted 110 unique configurations for ensemble selection.
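To make the selection step concrete, the sketch below shows one simple greedy way to pick an ensemble that balances per-model fidelity and diversity while penalizing redundancy. The scoring inputs, the lambda and mu weights, and the greedy scheme itself are illustrative stand-ins for the paper's multi-objective formulation, not its actual algorithm.

```python
# Conceptual sketch (ours): greedy ensemble selection trading off fidelity,
# diversity, and pairwise redundancy between candidate GAN checkpoints.
import numpy as np

def select_ensemble(fidelity, diversity, overlap, k=5, lam=0.5, mu=0.3):
    """fidelity, diversity: per-model scores; overlap[i, j]: redundancy between models."""
    chosen, candidates = [], list(range(len(fidelity)))
    while candidates and len(chosen) < k:
        def gain(i):
            redundancy = max((overlap[i, j] for j in chosen), default=0.0)
            return lam * fidelity[i] + (1 - lam) * diversity[i] - mu * redundancy
        best = max(candidates, key=gain)
        chosen.append(best)
        candidates.remove(best)
    return chosen

rng = np.random.default_rng(1)
n = 110                                   # e.g. 22 architectures x 5 sampled epochs
fid, div = rng.random(n), rng.random(n)   # placeholder fidelity/diversity scores
ov = rng.random((n, n)); ov = (ov + ov.T) / 2
print(select_ensemble(fid, div, ov))
```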

Results:

The selected GAN ensembles demonstrated improved performance in generating synthetic medical images that closely resemble real data distributions. These ensembles preserved image fidelity while increasing diversity. In some settings, downstream models trained on synthetic data achieved slightly higher accuracy than those trained on real data alone. This effect arises because the synthetic images act as a targeted data augmentation mechanism that enhances class balance and diversity rather than replacing real data.

Conclusions:

GAN ensembles offer a robust solution to the fidelity–diversity–efficiency trade-off in medical image synthesis. By integrating multiple complementary models, the proposed approach improves the representativeness and utility of synthetic medical data, potentially advancing a wide range of clinical and research applications in diagnostic AI.
Citations: 0
ADHTransNet-based radiomics on multimodal pituitary MRI for non-invasive hormone prediction in children
IF 4.8 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2026-01-02 · DOI: 10.1016/j.cmpb.2026.109235
Qiang Zheng, Xiaolin Jiang, Jianzheng Sun, Limei Song, Lin Zhang, Jungang Liu

Background and Objective

Growth hormone deficiency (GHD) and idiopathic central precocious puberty (ICPP) are typically diagnosed through invasive stimulation tests that require multiple blood samples collected over time. To reduce the need for such procedures, the study aims to establish an adjunctive tool by devising a fully automated pipeline for adenohypophysis segmentation and radiomics-based prediction of growth hormone (arg-pGH and ins-pGH in GHD) and gonadotropin (pLH and pLH/FSH in ICPP) levels in children.

Methods

A total of 274 subjects with 548 scans (T1-weighted and T2-weighted images, T1WI and T2WI) were identified, including GHD, ICPP, and normal control groups. MRI acquisition was performed 1 day prior to the hormone stimulation tests. The automated segmentation of adenohypophysis (ADH) on pituitary MRI was first achieved by the proposed ADHTransNet. Then, the radiomics features were extracted, and the consistency was assessed between manual and automated segmentations. Lastly, using a full-search feature selection strategy, we developed radiomics-based models to predict arginine-stimulated growth hormone (arg-pGH) and insulin-stimulated growth hormone (ins-pGH) levels in patients with GHD, as well as luteinizing hormone (pLH) levels and the pLH/FSH ratio in patients with ICPP.
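The final correlation check of the pipeline can be reproduced in a few lines; the sketch below uses toy numbers of our own and only illustrates the agreement measure, whereas in the study the predictions come from a regression model on radiomics features of the segmented ADH region.

```python
# Toy illustration (ours) of measuring agreement between stimulation-test
# measurements and model predictions with a Pearson correlation.
from scipy.stats import pearsonr

measured  = [12.4, 3.1, 7.8, 15.2, 4.6, 9.9]   # toy peak-GH values (ng/mL)
predicted = [10.9, 4.0, 8.3, 13.7, 5.1, 9.2]   # toy model outputs
r, p = pearsonr(measured, predicted)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```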

Results

ADHTransNet achieved superior ADH segmentation compared with the other deep learning methods evaluated. The radiomics features were validated with high measurement consistency between manual and automated segmentations and consistent statistical T-values on both T1WI and T2WI images. Significant correlations were observed between measured and predicted hormone levels for the peak GH of the arginine stimulation test in the GHD group (r=0.422, p<0.001), the peak GH of the insulin stimulation test in the GHD group (r=0.359, p<0.001), the peak luteinizing hormone (LH) in the ICPP group (r=0.680, p<0.001), and the ratio of peak LH to peak follicle-stimulating hormone (FSH) in the ICPP group (r=0.766, p<0.001).

Conclusions

This fully automated, multimodal, reproducible, and non-invasive pipeline shows promise in predicting GH and gonadotropin levels from MRI, reducing reliance on repeated blood tests, and enhancing assessment of hormone-related disorders.
Citations: 0
Fitting high-dimensional mixture cure models using the hdcuremodels R package
IF 4.8 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-12-31 · DOI: 10.1016/j.cmpb.2025.109212
Kellie J. Archer, Han Fu

Background and Objective:

Time-to-event outcomes are often of interest in biomedical studies. When the dataset includes long-term survivors or subjects who will not experience the event of interest, mixture cure models (MCMs) should be fit. Further, it is clinically relevant to identify molecular features from high-throughput assays that are associated with time-to-event outcomes, both to elucidate important pathways and to identify molecular features that may be therapeutic targets or for developing improved risk stratification systems. Herein, we describe our hdcuremodels R package that can be used to model right-censored time-to-event data when a cured fraction is present and the predictor space is high-dimensional.

Methods:

We implemented two different optimization methods, the expectation–maximization and generalized monotone incremental forward stagewise algorithms, for fitting high-dimensional penalized Weibull, exponential, and Cox mixture cure models. Cross-validation functions for each optimization method are provided and can be run with or without controlling the false discovery rate. The modeling functions are flexible in that the predictors are not required to be the same in the incidence and latency components of the model. The package also includes functions for testing mixture cure modeling assumptions and evaluating performance, as well as generic functions for extracting meaningful results.
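For readers less familiar with this model class, the two-component mixture cure model that such fitting functions target can be written (in our notation, following the standard formulation rather than the package documentation) as

\[
S_{\mathrm{pop}}(t \mid \mathbf{x}, \mathbf{w}) = \pi(\mathbf{w})\, S_u(t \mid \mathbf{x}) + 1 - \pi(\mathbf{w}),
\qquad
\pi(\mathbf{w}) = \frac{\exp\!\left(b_0 + \mathbf{w}^{\top}\mathbf{b}\right)}{1 + \exp\!\left(b_0 + \mathbf{w}^{\top}\mathbf{b}\right)},
\]

where the incidence component π(w) is the probability of being susceptible (uncured) given the incidence covariates w, and the latency component S_u(t | x) is the survival function of susceptible subjects given the latency covariates x, modeled as Weibull, exponential, or Cox in hdcuremodels.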

Results:

We demonstrate fitting a high-dimensional penalized mixture cure model to an acute myeloid leukemia dataset; the fitted model showed strong predictive performance on an independent test set.

Conclusion:

Our hdcuremodels package fits penalized mixture cure models that can accommodate datasets where the number of predictors exceeds the sample size.
Citations: 0
Data-driven bifurcation handling in physics-based reduced-order vascular hemodynamic models
IF 4.8 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-12-30 · DOI: 10.1016/j.cmpb.2025.109230
Natalia L. Rubio, Eric F. Darve, Alison L. Marsden

Background and Objective:

Three-dimensional (3D) computational fluid dynamics simulations of cardiovascular flows provide high-fidelity hemodynamic predictions to support cardiovascular medicine, but require substantial computational resources, limiting their clinical applicability. Reduced-order models (ROMs) offer computationally efficient alternatives but suffer from significant accuracy losses, particularly at vessel bifurcations where complex flow physics are inadequately captured by standard Poiseuille flow assumptions. This work presents an enhanced numerical framework that integrates machine learning-predicted bifurcation coefficients into 0D hemodynamic solvers to improve accuracy while maintaining computational efficiency.

Methods:

We develop a resistor–resistor–inductor (RRI) model that uses neural networks to predict pressure-flow relationships from bifurcation geometry, incorporating both linear and quadratic resistance terms along with inductive effects. The method employs physics-based non-dimensionalization to reduce training data requirements and includes flow split prediction for improved geometric characterization. We incorporate the RRI model into a zero-dimensional (0D) cardiovascular flow model using an optimization-based solution strategy. We validate the approach in isolated bifurcations and vascular trees containing up to 40 junctions across Reynolds numbers ranging from 0 to 5500, defining ROM accuracy by comparison to high-fidelity 3D finite element simulation results.
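For intuition, a junction pressure–flow relation with linear and quadratic resistance terms plus an inductive term, of the kind described above, can be sketched (our generic notation, not the paper's fitted model) as

\[
\Delta P(t) = R_1\, Q(t) + R_2\, Q(t)\,\lvert Q(t)\rvert + L\,\frac{\mathrm{d}Q(t)}{\mathrm{d}t},
\]

where ΔP is the pressure drop from the junction inlet to a given outlet, Q the outlet flow rate, R1 and R2 the linear and quadratic resistance coefficients, and L the inductance; in the proposed framework these coefficients would be supplied by the neural network from the bifurcation geometry.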

Results:

Results demonstrate substantial accuracy improvements: averaged across all trees and all Reynolds numbers, the RRI method reduces inlet pressure errors from 54 mmHg (45%) for standard 0D models to 25 mmHg (17%), while a simplified resistor-inductor (RI) variant achieves 31 mmHg (26%) error. The enhanced 0D models show particular effectiveness at high Reynolds numbers and in extensive vascular networks.

Conclusions:

This hybrid numerical approach enables accurate, real-time hemodynamic modeling suitable for clinical decision support, uncertainty quantification, and digital twin applications in cardiovascular biomedical engineering.
Citations: 0
Machine learning-assisted prognosis of multiple myeloma side population cells via SRGs and OCLR stemness index
IF 4.8 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-12-29 · DOI: 10.1016/j.cmpb.2025.109211
Xufei Xiang, Ruiyi Yang, Jicong Li, Brian Su, Zefang Xu, Yang An

Background and Objective:

Relapse in Multiple Myeloma, driven by therapy-resistant cancer stem cells, necessitates the development of more specific and accurate prognostic models. Existing stemness indices often lack specificity for the unique biology of Multiple Myeloma. This study aimed to develop, validate, and optimize a novel prognostic gene signature derived from Side Population cells, a well-defined cancer stem cell-enriched subpopulation in Multiple Myeloma.

Methods:

A core stemness gene module was identified from Side Population cell transcriptomes (GSE109651) using Weighted Gene Co-expression Network Analysis, guided by a One-Class Logistic Regression-based stemness index. Stemness-Related Gene scores were computed from this module’s key pathways via single-sample Gene Set Enrichment Analysis. A nonlinear programming algorithm was then employed to create an optimally weighted prognostic model. The model’s performance was validated in independent cohorts (The Cancer Genome Atlas – Multiple Myeloma Research Foundation, GSE24080, GSE57317) using Cox proportional hazards modeling, and its clinical relevance was assessed via drug sensitivity (OncoPredict) and immunotherapy response (Tumor Immune Dysfunction and Exclusion) prediction.
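As a rough illustration of what "optimally weighted" can mean operationally, the sketch below poses the weighting of pathway-level scores as a constrained optimization with SciPy, using the concordance index from lifelines as the objective. The objective, the simplex constraint, and the toy data are our assumptions, not the paper's exact nonlinear program.

```python
# Conceptual sketch (ours): choose non-negative weights summing to one that
# maximize the concordance of a weighted pathway score with survival outcomes.
import numpy as np
from scipy.optimize import minimize
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n, k = 120, 4                                  # toy cohort, four pathway scores
scores = rng.normal(size=(n, k))               # e.g. ssGSEA pathway scores
time = rng.exponential(scale=24, size=n)       # toy follow-up times (months)
event = rng.integers(0, 2, size=n)             # 1 = event observed

def neg_cindex(w):
    risk = scores @ w                          # weighted prognostic score
    # higher risk should correspond to shorter survival, hence the sign flip
    return -concordance_index(time, -risk, event)

w0 = np.full(k, 1.0 / k)
res = minimize(neg_cindex, w0,
               bounds=[(0.0, 1.0)] * k,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("weights:", np.round(res.x, 3), " c-index:", round(-res.fun, 3))
```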

Results:

The resulting Stemness-Related Gene score strongly correlated with the established mRNA stemness index (r=0.62, p<1×10⁻⁸²). The hsa05222 pathway was identified as the dominant prognostic component (HR=12.765, p<0.0001) and was found to specifically modulate chemoresistance. In contrast, the composite Stemness-Related Gene score better predicted immune evasion potential. The final optimally weighted model, integrating these distinct facets, demonstrated superior prognostic accuracy, consistently outperforming existing benchmarks and simpler models across all validation cohorts.

Conclusions:

This Side Population cell-derived, optimally weighted signature is a robust and multifaceted independent prognostic biomarker for Multiple Myeloma. By distinguishing between chemoresistance and immune evasion profiles, this framework provides a valuable tool to guide personalized, cancer stem cell-targeted therapeutic strategies.
Citations: 0
Application of blockchain-based digital twin technology in healthcare: A scoping review
IF 4.8 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-12-29 · DOI: 10.1016/j.cmpb.2025.109231
You Yang, Mengying Liu, Haiying Chen, Li Chen

Background

The integration of blockchain and digital twin (DT) technologies is expected to transform the healthcare sector. DTs are virtual representations of physical entities that enable real-time monitoring of assets and support predictive analytics of equipment performance, while blockchain technology strengthens data integrity, security, and trust. However, the practical applicability and effectiveness of this combined approach in healthcare systems have not been fully established.

Objective

The aim of the present scoping review was to assess the practical applications and synergistic advantages of blockchain-based DT technology in healthcare, evaluate relevant implementation challenges, and provide a research agenda for future studies.

Methods

A scoping review was conducted. PubMed, Web of Science, Scopus, CINAHL, Embase, and OVID were searched systematically using Boolean operators and targeted keywords, supplemented by manual searches. Relevant studies were retrieved from database inception to May 20, 2025.

Results

Narrative findings were categorized into three main domains: (1) the technical foundations and core mechanisms for integrating blockchain and DT technologies were described; (2) application scenarios of blockchain-based DT technology in healthcare were summarized; and (3) implementation challenges and corresponding solutions for blockchain-based DT technology in healthcare were identified.

Conclusion

The innovative integration of blockchain and DT technologies has advanced the healthcare sector by reshaping the management, interaction, and security of medical data in the digital environment. This convergence establishes a strategic foundation for ongoing digital transformation within healthcare. Future research should prioritize the translation of these developed systems into real-world clinical applications and focus on optimizing their performance to better elucidate how emerging technologies can effectively address practical healthcare challenges.
Citations: 0
Rethinking the value of dynamic and static feature planes in 4D reconstruction of deformable tissues
IF 4.8 · Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-12-27 · DOI: 10.1016/j.cmpb.2025.109232
Ran Bu, Chenwei Xu, Runyi Liu, Yanzi Miao

Background and Objective:

Reconstructing deformable tissues is crucial for medical image computing and robotic surgery, as it enhances the safety and efficacy of surgical procedures. However, current methods face significant challenges, including reconstruction errors in occluded regions and a limited ability to observe complex structures accurately in real time.

Methods:

In this paper, we present a novel method called Rethink Plane (RPlane), an efficient framework based on Neural Radiance Fields (NeRF) for reconstructing global, high-fidelity deformable tissues from binocular endoscopic videos. Our main contribution lies in rethinking the value of the dynamic and static features that existing methods often overlook, and in developing a Depth Uncertainty Filter, which serves as a foundational component throughout this work. Building on it, a Dynamic Feature Enhancement module is proposed to address the depth distortion caused by the occlusion of surgical instruments. Additionally, a Color Recurrent Refinement strategy is proposed to reduce dynamic blurring caused by instrument contact or tissue self-motion. We validate the effectiveness of RPlane on two datasets (ENDONERF and StereoMIS).

Results:

In all cases, RPlane achieves state-of-the-art (SOTA) performance in terms of tissue reconstruction quality and detail clarity, with a PSNR of 40.527 dB on ENDONERF and 36.267 dB on StereoMIS. Furthermore, RPlane demonstrates a 53.3% improvement in robustness without increasing training time or computational resources.
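For reference, PSNR figures like those above are derived from the mean squared error between rendered and ground-truth frames, PSNR = 10 * log10(peak^2 / MSE). A minimal sketch in NumPy, assuming images normalized to [0, 1], is shown below; the toy data is illustrative only.

import numpy as np

def psnr(rendered: np.ndarray, reference: np.ndarray, peak: float = 1.0) -> float:
    # Peak signal-to-noise ratio in dB between two images with values in [0, peak].
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy usage: a reference frame and a slightly perturbed rendering of it.
ref = np.random.default_rng(0).random((64, 64, 3))
out = np.clip(ref + 0.005, 0.0, 1.0)
print(round(psnr(out, ref), 2))   # roughly 46 dB for this perturbation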

Conclusions:

RPlane addresses the depth reconstruction errors caused by surgical instrument occlusion, which are common in existing algorithms. The Dynamic Feature Enhancement module strengthens geometric modeling in occluded areas, while the Dynamic Weight Generation & Fusion and Color Recurrent Refinement strategies improve the texture details of tissues. These performance gains make RPlane a promising solution for intraoperative applications.
{"title":"Rethinking the value of dynamic and static feature planes in 4D reconstruction of deformable tissues","authors":"Ran Bu ,&nbsp;Chenwei Xu ,&nbsp;Runyi Liu,&nbsp;Yanzi Miao","doi":"10.1016/j.cmpb.2025.109232","DOIUrl":"10.1016/j.cmpb.2025.109232","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>Reconstructing deformable tissues is crucial for medical image computing and robotic surgery, as it enhances the safety and efficacy of surgical procedures. However, current methods face significant challenges, including errors in tissue reconstruction at occluded regions and limitations in real-time accurate observation of complex structures.</div></div><div><h3>Methods:</h3><div>In this paper, we present a novel method called Rethink Plane (RPlane), an efficient framework based on Neural Radiance Fields (NeRF), designed to reconstruct global high-fidelity deformable tissues from binocular endoscopic videos efficiently. Our main contribution lies in rethinking the value of dynamic and static features that existing methods often overlook, and developing a Depth Uncertainty Filter. Throughout this work, the dynamic filter is an extremely important foundational component. Based on this, a Dynamic Feature Enhancement module is proposed to address the depth distortion problem caused by the occlusion of surgical instruments. Additionally, a Color Recurrent Refinement strategy is proposed to reduce dynamic blurring caused by instrument contact or tissue self-motion. We validate the effectiveness of RPlane on two datasets (ENDONERF and StereoMIS).</div></div><div><h3>Results:</h3><div>In all cases, RPlane achieves state-of-the-art (SOTA) performance in terms of tissue reconstruction quality and detail clarity (with a PSNR of 40.527 in ENDONERF and 36.267 in StereoMIS). Furthermore, RPlane demonstrates a 53.3% improvement in robustness without increasing training time or computational resources.</div></div><div><h3>Conclusions:</h3><div>RPlane addresses depth reconstruction errors caused by surgical instrument occlusion, which are common in existing algorithms. The Dynamic Feature Enhancement module is used to enhance geometric modeling in occluded areas, while the Dynamic Weight Generation &amp; Fusion and Color Recurrent Refinement strategies improve the texture details of tissues. This significant performance improvement promises to be an innovative solution for intraoperative applications.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"276 ","pages":"Article 109232"},"PeriodicalIF":4.8,"publicationDate":"2025-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145910852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0