
Computer methods and programs in biomedicine: Latest Articles

Dual-path neural network extracts tumor microenvironment information from whole slide images to predict molecular typing and prognosis of Glioma
IF 4.9 | Medicine (Tier 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-01-04 | DOI: 10.1016/j.cmpb.2024.108580
Zehang Ning , Bojie Yang , Yuanyuan Wang , Zhifeng Shi , Jinhua Yu , Guoqing Wu

Background and Objective:

Utilizing AI to mine tumor microenvironment information in whole slide images (WSIs) for glioma molecular subtype and prognosis prediction is significant for treatment. Existing weakly-supervised learning frameworks based on multi-instance learning have potential in WSI analysis, but the large number of patches in a WSI makes it difficult to effectively extract key local-patch and neighboring-patch microenvironment information. Therefore, this paper aims to develop an automatic neural network that effectively extracts tumor microenvironment information from WSIs to predict molecular typing and prognosis of glioma.

Methods:

In this paper, we propose a dual-path pathology analysis (DPPA) framework to enhance the analysis ability of WSIs for glioma diagnosis. Firstly, to mitigate the impact of redundant patches and enhance the integration of salient patch information within a multi-instance learning context, we propose a two-stage attention-based dynamic multi-instance learning network. In the network, two-stage attention and dynamic random sampling are designed to integrate diverse image patch information in pivotal regions adaptively. Secondly, to unearth the wealth of spatial context inherent in WSIs, we build a spatial relationship information quantification module. This module captures the spatial distribution of patches that encompass a variety of tissue structures, shedding light on the tumor microenvironment.
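For readers unfamiliar with attention-based multi-instance learning, the sketch below shows a minimal gated-attention pooling step over patch embeddings, in the spirit of the attention mechanism described above. It is an illustrative simplification (single attention stage, random weights, hypothetical dimensions), not the authors' two-stage implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_attention_pool(H, V, U, w):
    """Aggregate a bag of patch embeddings H (n_patches, d) into one
    slide-level vector using gated attention weights (MIL pooling)."""
    gate = np.tanh(H @ V) * (1.0 / (1.0 + np.exp(-(H @ U))))  # (n_patches, d_attn)
    scores = gate @ w                                          # (n_patches,)
    a = np.exp(scores - scores.max())
    a = a / a.sum()                                            # attention weights sum to 1
    return a @ H, a                                            # slide embedding, per-patch weights

# Hypothetical sizes: 500 sampled patches, 512-dim features, 128-dim attention space.
H = rng.normal(size=(500, 512))
V = rng.normal(size=(512, 128)) * 0.02
U = rng.normal(size=(512, 128)) * 0.02
w = rng.normal(size=(128,)) * 0.02

slide_vec, weights = gated_attention_pool(H, V, U, w)
print(slide_vec.shape, weights.shape)  # (512,) (500,)
```

The per-patch weights also hint at how such models can highlight pivotal regions, which relates to the interpretability work mentioned in the conclusions.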

Results:

Extensive experiments on three datasets (two in-house and one public) totaling 1,795 WSIs demonstrate the encouraging performance of DPPA, with mean areas under the curve of 0.94, 0.85, and 0.88 in predicting Isocitrate Dehydrogenase 1, Telomerase Reverse Transcriptase, and 1p/19q status, respectively, and a mean C-index of 0.82 in prognosis prediction. The proposed model can also stratify tumors within existing tumor subgroups into good and poor prognoses, with P < 0.05 on the log-rank test.

Conclusions:

The results of multi-center experiments demonstrate that the proposed DPPA surpasses state-of-the-art models across multiple metrics. Ablation experiments and survival analysis further validate the model's analytical ability, and the interpretability analyses additionally confirm its reliability and validity. All source code is released at: https://github.com/nzehang97/DPPA.
Leveraging Transformers-based models and linked data for deep phenotyping in radiology
IF 4.9 | Medicine (Tier 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-01-03 | DOI: 10.1016/j.cmpb.2024.108567
Lluís-F. Hurtado , Luis Marco-Ruiz , Encarna Segarra , Maria Jose Castro-Bleda , Aurelia Bustos-Moreno , Maria de la Iglesia-Vayá , Juan Francisco Vallalta-Rueda

Background and Objective:

Despite significant investments in the normalization and the standardization of Electronic Health Records (EHRs), free text is still the rule rather than the exception in clinical notes. The use of free text has implications in data reuse methods used for supporting clinical research since the query mechanisms used in cohort definition and patient matching are mainly based on structured data and clinical terminologies. This study aims to develop a method for the secondary use of clinical text by: (a) using Natural Language Processing (NLP) for tagging clinical notes with biomedical terminology; and (b) designing an ontology that maps and classifies all the identified tags to various terminologies and allows for running phenotyping queries.

Methods and Results:

Transformers-based NLP models, specifically pre-trained RoBERTa language models, were used to process radiology reports and annotate them, identifying elements that match UMLS Concept Unique Identifier (CUI) definitions. CUIs were mapped to several biomedical ontologies useful for phenotyping (e.g., SNOMED-CT, HPO, ICD-10, FMA, LOINC, and ICPC2, among others) and represented as a lightweight ontology using OWL (Web Ontology Language) constructs. This process resulted in a Linked Knowledge Base (LKB), which allows expressive queries to be run that retrieve, via automatic reasoning, the reports complying with specific criteria.
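To make the linked-data side of this pipeline concrete, the sketch below builds a tiny RDF graph with rdflib from one annotated report and runs a SPARQL query over it. The namespace, property names, and the mapping triple are hypothetical placeholders for illustration, not the ontology published by the authors; the CUI and SNOMED-CT code are given only as examples.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/lkb/")     # hypothetical LKB namespace
UMLS = Namespace("http://example.org/umls/")  # placeholder prefix for UMLS CUIs

g = Graph()
report = EX["report_001"]                     # one annotated radiology report
finding = UMLS["C0032285"]                    # illustrative CUI (Pneumonia)

g.add((report, RDF.type, EX.RadiologyReport))
g.add((report, EX.mentions, finding))
g.add((finding, EX.mappedTo, Literal("SNOMED-CT:233604007")))  # example cross-terminology mapping

# Phenotyping-style query: which reports mention a concept mapped to a terminology code?
q = """
SELECT ?report ?code WHERE {
    ?report a ex:RadiologyReport ;
            ex:mentions ?concept .
    ?concept ex:mappedTo ?code .
}
"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.report, row.code)
```

In a full system the reasoning would be driven by the OWL class hierarchy rather than a single query, but the pattern of annotating free text, asserting triples, and querying them is the same.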

Conclusion:

Although phenotyping tools mostly rely on relational databases, the combination of NLP and Linked Data technologies allows us to build scalable knowledge bases using standard ontologies from the Web of data. Our approach enables us to execute a pipeline whose input is free text and which automatically maps identified entities to an LKB that allows answering phenotyping queries. In this work, we have only used Spanish radiology reports, although the approach is extensible to other languages for which suitable corpora are available. This is particularly valuable in regional and national systems dealing with large research databases from different registries and cohorts and plays an essential role in the scalability of large data reuse infrastructures that require indexing and governing distributed data sources.
BGCSL: An unsupervised framework reveals the underlying structure of large-scale whole-brain functional connectivity networks
IF 4.9 | Medicine (Tier 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-01-02 | DOI: 10.1016/j.cmpb.2024.108573
Hua Zhang , Weiming Zeng , Ying Li , Jin Deng , Boyang Wei

Background and Objective:

Inferring large-scale brain networks from functional magnetic resonance imaging (fMRI) provides more detailed and richer connectivity information, which is critical for gaining insight into brain structure and function and for predicting clinical phenotypes. However, as the number of network nodes increases, most existing methods suffer from the following limitations: (1) Traditional shallow models often struggle to estimate large-scale brain networks. (2) Existing deep graph structure learning models rely on downstream tasks and labels. (3) They rely on sparse postprocessing operations. To overcome these limitations, this paper proposes a novel framework for revealing large-scale functional brain connectivity networks through graph contrastive structure learning, called BGCSL.

Methods:

Unlike traditional supervised graph structure learning methods, this framework does not rely on labeled information. It consists of two key modules: a sparse graph structure learner and graph contrastive learning (GCL). Dynamic augmentation in GCL is used to train the sparse graph structure learner, enabling it to capture the intrinsic structure of the data.
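The contrastive objective at the heart of GCL treats two augmented views of the same graph as positives and the other graphs in the batch as negatives. Below is a minimal NumPy sketch of an InfoNCE/NT-Xent-style loss over embeddings from two views; the batch size, embedding dimension, and temperature are assumptions for illustration, not the exact loss used in BGCSL.

```python
import numpy as np

def contrastive_loss(z1, z2, tau=0.5):
    """Cross-view InfoNCE loss between two views z1, z2 of shape (n, d)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # (n, n) scaled cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives sit on the diagonal

rng = np.random.default_rng(0)
z_view1 = rng.normal(size=(32, 64))              # hypothetical: 32 graphs, 64-dim embeddings
z_view2 = z_view1 + 0.1 * rng.normal(size=(32, 64))
print(round(float(contrastive_loss(z_view1, z_view2)), 4))
```

Minimizing such a loss pushes the two views of each brain graph together without any labels, which is what lets the structure learner be trained in an unsupervised way.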

Results:

We conducted extensive experiments on 12 synthetic datasets and 2 public functional magnetic resonance imaging datasets, demonstrating the effectiveness of our proposed framework. In the synthetic datasets, particularly in cases where node features are insufficient, BGCSL still maintains state-of-the-art performance. More importantly, on the ABIDE-I and HCP-rest datasets, BGCSL improved the downstream task performance of GCN-based models, including the original GCN, dGCN, and ContrastPool, to varying degrees.

Conclusion:

Our proposed method holds significant potential as a reference for future large-scale brain network estimation and representation and can support the exploration of more fine-grained biomarkers.
Fully automated segmentation of brain and scalp blood vessels on multi-parametric magnetic resonance imaging using multi-view cascaded networks
IF 4.9 | Medicine (Tier 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-01-02 | DOI: 10.1016/j.cmpb.2025.108584
Songxiong Wu , Zilong Huang , Mingyu Wang , Ping Zeng , Biwen Tan , Panying Wang , Bin Huang , Naiwen Zhang , Nashan Wu , Ruodai Wu , Yong Chen , Guangyao Wu , Fuyong Chen , Jian Zhang , Bingsheng Huang

Background and Objective

Neurosurgical navigation is a critical element of brain surgery, and accurate segmentation of brain and scalp blood vessels is crucial for surgical planning and treatment. However, conventional methods for segmenting blood vessels based on statistical or thresholding techniques have limitations. In recent years, deep learning-based methods have emerged as a promising solution for blood vessel segmentation, but the segmentation of small blood vessels and scalp blood vessels remains challenging. This study aimed to explore a solution to overcome these challenges.

Methods

This study proposes a multi-view cascaded deep learning network (MVPCNet) that combines multiple refinements, including multi-view learning, multi-parameter input, and a multi-view ensemble module. We evaluated the proposed method on a dataset of 155 patients, which included annotations for brain and scalp blood vessels. Five-fold cross-validation was conducted on the dataset to assess the performance of the network.
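A simple way to picture the multi-view ensemble idea is to average per-voxel probability maps predicted from different viewing planes before thresholding. The sketch below does this with random placeholders standing in for the view-specific predictions; the view names, volume size, and threshold are assumptions for illustration, not the paper's module.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64, 64)  # hypothetical MRI volume size

# Placeholders for probability maps that would come from view-specific segmentation networks.
prob_axial = rng.random(shape)
prob_coronal = rng.random(shape)
prob_sagittal = rng.random(shape)

# Ensemble by averaging the view-wise probabilities, then threshold to a binary vessel mask.
prob_ensemble = np.mean([prob_axial, prob_coronal, prob_sagittal], axis=0)
vessel_mask = prob_ensemble > 0.5
print(vessel_mask.shape, vessel_mask.mean())
```

Averaging across views is only one possible fusion rule; a learned ensemble module, as described above, can weight the views adaptively.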

Results

Ablation experiments showed that the proposed refinements in MVPCNet significantly improved segmentation performance for small and low-contrast blood vessels. When segmenting scalp blood vessels from the original images, the Dice coefficient increased from 0.865 to 0.922 and the 95% Hausdorff distance (HD) decreased from 1.28 mm to 0.47 mm compared to the baseline model.

Conclusions

The proposed method in this study provided a fully automated and accurate segmentation of brain and scalp blood vessels, which is essential for neurosurgical navigation and has potential for clinical applications.
{"title":"Fully automated segmentation of brain and scalp blood vessels on multi-parametric magnetic resonance imaging using multi-view cascaded networks","authors":"Songxiong Wu ,&nbsp;Zilong Huang ,&nbsp;Mingyu Wang ,&nbsp;Ping Zeng ,&nbsp;Biwen Tan ,&nbsp;Panying Wang ,&nbsp;Bin Huang ,&nbsp;Naiwen Zhang ,&nbsp;Nashan Wu ,&nbsp;Ruodai Wu ,&nbsp;Yong Chen ,&nbsp;Guangyao Wu ,&nbsp;Fuyong Chen ,&nbsp;Jian Zhang ,&nbsp;Bingsheng Huang","doi":"10.1016/j.cmpb.2025.108584","DOIUrl":"10.1016/j.cmpb.2025.108584","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Neurosurgical navigation is a critical element of brain surgery, and accurate segmentation of brain and scalp blood vessels is crucial for surgical planning and treatment. However, conventional methods for segmenting blood vessels based on statistical or thresholding techniques have limitations. In recent years, deep learning-based methods have emerged as a promising solution for blood vessel segmentation, but the segmentation of small blood vessels and scalp blood vessels remains challenging. This study aimed to explore a solution to overcoming the challenges.</div></div><div><h3>Methods</h3><div>This study proposes a multi-view cascaded deep learning network (MVPC<img>Net) that combines multiple refinements, including multi-view learning, multi-parameter input, and a multi-view ensemble module. We evaluated the proposed method on a dataset of 155 patients, which included annotations for brain and scalp blood vessels. Five-fold cross-validation was conducted on the dataset to assess the performance of the network.</div></div><div><h3>Results</h3><div>Ablation experiments showed that the proposed refinements in MVPC<img>Net significantly improved the segmentation of small blood and low-contrast vessel performance, which segmented scalp blood vessels from the original images, increasing the Dice and the 95 % Hausdorf distance (HD), from 0.865 to 0.922 and from 1.28 mm to 0.47 mm, respectively, compared to the baseline model.</div></div><div><h3>Conclusions</h3><div>The proposed method in this study provided a fully automated and accurate segmentation of brain and scalp blood vessels, which is essential for neurosurgical navigation and has potential for clinical applications.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"260 ","pages":"Article 108584"},"PeriodicalIF":4.9,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143133314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
HistoColAi: An open-source web platform for collaborative digital histology image annotation with AI-driven predictive integration
IF 4.9 | Medicine (Tier 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-01-01 | DOI: 10.1016/j.cmpb.2024.108577
Cristian Camilo Pulgarín-Ospina , Rocío del Amor , Julio José Silva-Rodríguez , Adrián Colomer , Valery Naranjo
Digital pathology is now a standard component of the pathology workflow, offering numerous benefits such as high-detail whole slide images and the capability for immediate case sharing between hospitals. Recent advances in deep learning-based methods for image analysis make them a potential aid in digital pathology. However, a significant challenge in developing computer-aided diagnostic systems for pathology is the lack of intuitive, open-source web applications for data annotation. This paper proposes a web service that efficiently provides a tool to visualize and annotate digitized histological images, integrating AI-driven predictive insights. While the tool is capable of handling various image formats, its primary use case is for Whole Slide Imaging (WSI) in the TIFF format, specifically tailored for histopathology applications. This innovative integration not only revolutionizes accessibility but also democratizes the utilization of complex deep-learning models for pathologists unfamiliar with such tools. Moreover, to demonstrate the effectiveness of this approach, we present a use case centered on the diagnosis of spindle cell skin neoplasm involving multiple annotators. Additionally, we conduct a usability study, showing the feasibility of the developed tool.
A novel lightweight deep learning based approaches for the automatic diagnosis of gastrointestinal disease using image processing and knowledge distillation techniques
IF 4.9 | Medicine (Tier 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-12-30 | DOI: 10.1016/j.cmpb.2024.108579
Zafran Waheed , Jinsong Gui , Md Belal Bin Heyat , Saba Parveen , Mohd Ammar Bin Hayat , Muhammad Shahid Iqbal , Zouheir Aya , Awais Khan Nawabi , Mohamad Sawan

Background

Gastrointestinal (GI) diseases pose significant challenges for healthcare systems, largely due to the complexities involved in their detection and treatment. Despite the advancements in deep neural networks, their high computational demands hinder their practical use in clinical environments.

Objective

This study aims to address the computational inefficiencies of deep neural networks by proposing a lightweight model that integrates model compression techniques, ConvLSTM layers, and ConvNext Blocks, all optimized through Knowledge Distillation (KD).

Methods

A dataset of 6000 endoscopic images of various GI diseases was utilized. Advanced image preprocessing techniques, including adaptive noise reduction and image detail enhancement, were employed to improve accuracy and interpretability. The model's performance was assessed in terms of accuracy, computational cost, and disk space usage.
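Knowledge distillation trains the lightweight student to match the teacher's temperature-softened output distribution in addition to the ground-truth labels. The sketch below shows the standard combined KD loss in NumPy with hypothetical logits, class count, temperature, and weighting; it illustrates the general recipe rather than the exact configuration of the proposed model.

```python
import numpy as np

def softmax(x, T=1.0):
    e = np.exp((x - x.max(axis=1, keepdims=True)) / T)
    return e / e.sum(axis=1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Combined distillation loss: soft-target KL term plus hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)), axis=1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return alpha * (T ** 2) * kl.mean() + (1 - alpha) * ce.mean()

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 6))            # hypothetical: batch of 8, 6 GI classes
student = teacher + rng.normal(size=(8, 6))  # imperfect student predictions
labels = rng.integers(0, 6, size=8)
print(round(float(kd_loss(student, teacher, labels)), 4))
```

The temperature controls how much of the teacher's "dark knowledge" about non-target classes the student sees, which is what lets a very small model approach the teacher's accuracy.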

Results

The proposed lightweight model achieved an exceptional overall accuracy of 99.38 %. It operates efficiently with a computational cost of 0.61 GFLOPs and occupies only 3.09 MB of disk space. Additionally, Grad-CAM visualizations demonstrated enhanced model saliency and interpretability, offering insights into the decision-making process of the model post-KD.

Conclusion

The proposed model represents a significant advancement in the diagnosis of GI diseases. It provides a cost-effective and efficient alternative to traditional deep neural network methods, overcoming their computational limitations and contributing valuable insights for improved clinical application.
Machine learning-based forecast of Helmet-CPAP therapy failure in Acute Respiratory Distress Syndrome patients
IF 4.9 | Medicine (Tier 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-12-30 | DOI: 10.1016/j.cmpb.2024.108574
Riccardo Campi , Antonio De Santis , Paolo Colombo , Paolo Scarpazza , Marco Masseroli

Background and Objective:

Helmet-Continuous Positive Airway Pressure (H-CPAP) is a non-invasive respiratory support that is used for the treatment of Acute Respiratory Distress Syndrome (ARDS), a severe medical condition diagnosed when symptoms like profound hypoxemia, pulmonary opacities on radiography, or unexplained respiratory failure are present. It can be classified as mild, moderate or severe. H-CPAP therapy is recommended as the initial treatment approach for mild ARDS. Even though the efficacy of H-CPAP in managing patients with moderate-to-severe hypoxemia remains unclear, its use has increased for these cases in response to the emergence of the COVID-19 Pandemic. Using the electronic medical records (EMR) from the Pulmonology Department of Vimercate Hospital, in this study we develop and evaluate a Machine Learning (ML) system able to predict the failure of H-CPAP therapy on ARDS patients.

Methods:

The Vimercate Hospital EMR provides demographic information, blood tests, and vital parameters of all hospitalizations of patients who are treated with H-CPAP and diagnosed with ARDS. This data is used to create a dataset of 622 records and 38 features, with 70%–30% split between training and test sets. Different ML models such as SVM, XGBoost, Neural Network, Random Forest, and Logistic Regression are iteratively trained in a cross-validation fashion. We also apply a feature selection algorithm to improve predictions quality and reduce the number of features.
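To make the evaluation protocol concrete, here is a minimal scikit-learn sketch of the kind of split-plus-cross-validation workflow the abstract describes, using synthetic stand-in data. The random features, SVM settings, and scoring choice are assumptions for illustration, not the study's exact configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(622, 38))        # stand-in for the 622 records x 38 features
y = rng.integers(0, 2, size=622)      # stand-in for H-CPAP failure labels

# 70%-30% stratified split between training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

# Cross-validated model fitting on the training set.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv_f1 = cross_val_score(model, X_train, y_train, cv=5, scoring="f1")
print("CV F1:", cv_f1.mean().round(3))

model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test).round(3))
```

The other candidate models (XGBoost, Neural Network, Random Forest, Logistic Regression) and the feature selection step would slot into the same pipeline before the final comparison on the held-out test set.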

Results and Conclusions:

The SVM and Neural Network models proved to be the most effective, achieving final accuracies of 95.19% and 94.65%, respectively. In terms of F1-score, the models scored 88.61% and 87.18%, respectively. Additionally, the SVM and XGBoost models performed well with a reduced number of features (23 and 13, respectively). The PaO2/FiO2 Ratio, C-Reactive Protein, and O2 Saturation emerged as the most important features, followed by Heartbeats, White Blood Cells, and D-Dimer, in accordance with the clinical scientific literature.
Preserving privacy in healthcare: A systematic review of deep learning approaches for synthetic data generation
IF 4.9 | Medicine (Tier 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-12-28 | DOI: 10.1016/j.cmpb.2024.108571
Yintong Liu , U. Rajendra Acharya , Jen Hong Tan

Background:

Data sharing in healthcare is vital for advancing research and personalized medicine. However, the process is hindered by privacy, ethical, and legal challenges associated with patient data. Synthetic data generation emerges as a promising solution, replicating statistical properties of real data while enhancing privacy protection.

Methods:

This systematic review examines deep learning techniques for synthetic data generation in healthcare, focusing on their ability to maintain data utility and enhance privacy. Studies from Scopus, Web of Science, PubMed, and IEEE databases published between 2019 and 2023 were analyzed. Key methods explored include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Diffusion Models. Evaluation metrics encompass data resemblance, utility, and privacy preservation, with special attention to privacy-enhancing methods like differential privacy and federated learning.

Results:

GANs and VAEs demonstrated robust capabilities in generating realistic synthetic data for tabular, signal, image, and multi-modal datasets. Privacy-preserving approaches such as differential privacy and adversarial training significantly reduced re-identification risks while maintaining data fidelity. However, challenges persist in preserving temporal correlations, reducing biases, and aligning with regulatory frameworks, particularly for longitudinal and high-dimensional data.

Conclusion:

Synthetic data generation holds significant potential for privacy-preserving data sharing in healthcare. Ongoing research is required to develop advanced algorithms and evaluation frameworks, ensuring synthetic data’s quality and privacy. Collaboration between technologists and policymakers is essential to create comprehensive guidelines, fostering secure and effective data sharing in healthcare.
TD-STrans: Tri-domain sparse-view CT reconstruction based on sparse transformer
IF 4.9 | Medicine (Tier 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-12-25 | DOI: 10.1016/j.cmpb.2024.108575
Yu Li , Xueqin Sun , Sukai Wang , Lina Guo , Yingwei Qin , Jinxiao Pan , Ping Chen

Background and objective

Sparse-view computed tomography (CT) speeds up scanning and reduces radiation exposure in medical diagnosis. However, when the projection views are severely under-sampled, deep learning-based reconstruction methods often suffer from over-smoothing of the reconstructed images due to the lack of high-frequency information. To address this issue, we introduce frequency domain information into the popular projection-image domain reconstruction, proposing a Tri-Domain sparse-view CT reconstruction model based on Sparse Transformer (TD-STrans).

Methods

TD-STrans integrates three essential modules: the projection recovery module completes the sparse-view projection; the Fourier domain filling module mitigates artifacts and over-smoothing by filling in missing high-frequency details; and the image refinement module further enhances and preserves image details. Additionally, a multi-domain joint loss function is designed to simultaneously enhance the reconstruction quality in the projection domain, image domain, and frequency domain, thereby further improving the preservation of image details.
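A multi-domain joint objective of the kind described above is typically written as a weighted sum of per-domain terms. The formula below is a hedged sketch with generic loss terms and weighting coefficients, since the paper's exact terms and weights are not stated in the abstract:

\[
\mathcal{L}_{\mathrm{joint}}
= \lambda_{p}\,\mathcal{L}_{\mathrm{proj}}(\hat{P},P)
+ \lambda_{i}\,\mathcal{L}_{\mathrm{img}}(\hat{I},I)
+ \lambda_{f}\,\mathcal{L}_{\mathrm{freq}}\big(\mathcal{F}(\hat{I}),\mathcal{F}(I)\big),
\]

where \(\hat{P}\) and \(\hat{I}\) denote the recovered projection and reconstructed image, \(P\) and \(I\) the corresponding references, \(\mathcal{F}\) the 2D Fourier transform, and \(\lambda_{p},\lambda_{i},\lambda_{f}\) the domain weights.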

Results

The results of simulation experiments on the lymph node dataset and real experiments on the walnut dataset consistently demonstrate the effectiveness of TD-STrans in artifact removal, suppression of over-smoothing, and preservation of structural fidelity.

Conclusion

The reconstruction results of TD-STrans indicate that sparse transformer across multiple domains can alleviate over-smoothing and detail loss caused by reduced views, offering a novel solution for ultra-sparse-view CT imaging.
Dynamic analysis of fractal–fractional cancer model under chemotherapy drug with generalized Mittag-Leffler kernel
IF 4.9 | Medicine (Tier 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-12-24 | DOI: 10.1016/j.cmpb.2024.108565
Hardik Joshi , Mehmet Yavuz , Osman Taylan , Abdulaziz Alkabaa

Background and Objective:

Cancer’s complex and multifaceted nature makes it challenging to identify unique molecular and pathophysiological signatures, thereby hindering the development of effective therapies. This paper presents a novel fractal–fractional cancer model to study the complex interplay among stem cells, effectors cells, and tumor cells in the presence and absence of chemotherapy. The cancer model with effective treatment through chemotherapy drugs is considered and discussed in detail.

Methods:

The numerical method for the fractal–fractional cancer model with a generalized Mittag-Leffler kernel is presented. The Routh–Hurwitz stability criteria are applied to confirm the local asymptotic stability of an endemic equilibrium point of the cancer model without treatment and with effective treatment under some conditions. The existence and uniqueness criteria of the fractal–fractional cancer model are derived. Furthermore, the stability analysis of the fractal–fractional cancer model is performed.
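For context, a commonly used form of the fractal–fractional derivative with a Mittag-Leffler kernel (of Atangana–Baleanu type) is given below; it is stated here only as background, and the paper's exact operator and notation may differ:

\[
{}^{\mathrm{FF}}\!D^{\alpha,\beta}_{0,t} f(t)
= \frac{AB(\alpha)}{1-\alpha}\,\frac{\mathrm{d}}{\mathrm{d}t^{\beta}}
\int_{0}^{t} f(s)\,E_{\alpha}\!\Big(-\tfrac{\alpha}{1-\alpha}\,(t-s)^{\alpha}\Big)\,\mathrm{d}s,
\qquad
E_{\alpha}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+1)},
\]

where \(0<\alpha\le 1\) is the fractional order, \(0<\beta\le 1\) the fractal dimension, \(E_{\alpha}\) the Mittag-Leffler function, and \(AB(\alpha)\) a normalization function with \(AB(0)=AB(1)=1\). The memory effect enters through the non-local kernel, and the fractal dimension \(\beta\) accounts for heterogeneity, which is what the conclusions below refer to.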

Results:

The temporal concentration patterns of stem cells, effector cells, tumor cells, and the chemotherapy drug are obtained. The usage of chemotherapy drugs kills the tumor cells or decreases their density over time, and as a consequence the system takes longer to reach the equilibrium point. The decay rates of stem cells and tumor cells play a crucial role in cancer dynamics. The notable role of the fractal dimension, along with the fractional order, is observed in capturing the cancer cell concentration.

Conclusion:

A dynamic analysis of the fractal–fractional cancer model is presented to examine the impact of chemotherapy drugs with a generalized Mittag-Leffler kernel. The model possesses three equilibrium points; two of them correspond to the cancer model without treatment, namely the tumor-free equilibrium point and the endemic equilibrium point. One additional endemic equilibrium point exists in the case of effective treatment through chemotherapy drugs. The Routh–Hurwitz stability criteria are applied to confirm the local asymptotic stability of an endemic equilibrium point of the cancer model with and without treatment under some conditions. The chemotherapy drug plays a crucial role in controlling the growth of tumor cells. The fractal–fractional operator provides robustness for studying cancer dynamics through the inclusion of memory and heterogeneity.