
Latest Articles in Computerized Medical Imaging and Graphics

Active learning based on multi-enhanced views for classification of multiple patterns in lung ultrasound images
IF 5.4 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-24 DOI: 10.1016/j.compmedimag.2024.102454
Yuanlu Ni , Yang Cong , Chengqian Zhao , Jinhua Yu , Yin Wang , Guohui Zhou , Mengjun Shen
There are several main patterns in lung ultrasound (LUS) images, including A-lines, B-lines, consolidation, and pleural effusion. LUS images of healthy lungs typically exhibit only A-lines, while other patterns may emerge or coexist in LUS images associated with different lung diseases. The accurate categorization of these primary patterns is pivotal for effective lung disease screening. However, two challenges complicate the classification task: the first is the inherent blurring of feature differences between main patterns due to ultrasound imaging properties; the second is the potential coexistence of multiple patterns in a single case, with only the most dominant pattern being clinically annotated. To address these challenges, we propose the active learning based on multi-enhanced views (MEVAL) method to achieve more precise pattern classification in LUS. To accentuate feature differences between multiple patterns, we introduce a feature enhancement module that applies vertical linear fitting and k-means clustering. The multi-enhanced views are then employed in parallel with the original images, enhancing MEVAL's awareness of feature differences between multiple patterns. To tackle the pattern-coexistence issue, we propose an active learning strategy based on confidence sets and misclassified sets. This strategy enables the network to simultaneously recognize multiple patterns through selective labeling of a small number of images. Our dataset comprises 5075 LUS images, with approximately 4% exhibiting multiple patterns. Experimental results showcase the effectiveness of the proposed method in the classification task, with an accuracy of 98.72%, an AUC of 0.9989, a sensitivity of 98.76%, and a specificity of 98.16%, outperforming state-of-the-art deep learning-based methods. A series of comprehensive ablation studies confirms the effectiveness of each proposed component and shows great potential for clinical application.
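The enhancement step is described only at a high level in the abstract; as a rough illustration, the sketch below builds a k-means-based enhanced view from a grayscale LUS frame. The function name, the choice of k, and the pixel-intensity clustering target are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_enhanced_view(image: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Quantize a grayscale ultrasound frame into intensity clusters.

    Replacing each pixel with its cluster centroid sharpens the contrast
    between bright artifacts (A-/B-lines) and darker tissue -- one plausible
    reading of the paper's k-means enhancement step, not its actual code.
    """
    h, w = image.shape
    pixels = image.reshape(-1, 1).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    centroids = km.cluster_centers_.ravel()
    return centroids[km.labels_].reshape(h, w).astype(image.dtype)
```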
Citations: 0
MRI-based vector radiomics for predicting breast cancer HER2 status and its changes after neoadjuvant therapy
IF 5.4 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-17 DOI: 10.1016/j.compmedimag.2024.102443
Lan Zhang , Quan-Xiang Cui , Liang-Qin Zhou , Xin-Yi Wang , Hong-Xia Zhang , Yue-Min Zhu , Xi-Qiao Sang , Zi-Xiang Kuai

Purpose

To develop a novel MRI-based vector radiomic approach to predict breast cancer (BC) human epidermal growth factor receptor 2 (HER2) status (zero, low, and positive; task 1) and its changes after neoadjuvant therapy (NAT) (positive-to-positive, positive-to-negative, and positive-to-pathologic complete response; task 2).

Materials and Methods

Both dynamic contrast-enhanced (DCE) MRI data and multi-b-value (MBV) diffusion-weighted imaging (DWI) data were acquired in BC patients at two centers. Vector-radiomic and conventional-radiomic features were extracted from both DCE-MRI and MBV-DWI. After feature selection, the following models were built using the retained features and logistic regression: a vector model, a conventional model, and a combined model that integrates the vector-radiomic and conventional-radiomic features. The models' performances were quantified by the area under the receiver-operating characteristic curve (AUC).
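As a hedged sketch of this modeling step, the snippet below concatenates the two feature families, applies a generic univariate feature selector, and fits a logistic regression. The selector, its k, and the binary-task simplification are stand-in assumptions; the paper's exact feature-selection procedure is not given in this listing.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_combined_model(X_vector, X_conv, y, k=20):
    """Fit a 'combined' model on vector-radiomic + conventional features."""
    X = np.hstack([X_vector, X_conv])            # feature-level integration
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=min(k, X.shape[1])),
        LogisticRegression(max_iter=1000),
    )
    return model.fit(X, y)

# binary-case usage (the paper's tasks are actually multi-class):
#   from sklearn.metrics import roc_auc_score
#   auc = roc_auc_score(y_te, model.predict_proba(np.hstack([Xv_te, Xc_te]))[:, 1])
```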

Results

The training/external test set (center 1/2) included 483/361 women. For task 1, the vector model (AUCs=0.73∼0.86) was superior to (p<.05) the conventional model (AUCs=0.68∼0.81), and the addition of vector-radiomic features to conventional-radiomic features yielded an incremental predictive value (AUCs=0.80∼0.90, p<.05). For task 2, the combined MBV-DWI model (AUCs=0.85∼0.89) performed better than (p<.05) the conventional MBV-DWI model (AUCs=0.73∼0.82). In addition, for the combined DCE-MRI model and the combined MBV-DWI model, the former (AUCs=0.85∼0.90) outperformed (p<.05) the latter (AUCs=0.80∼0.85) in task 1, whereas the latter (AUCs=0.85∼0.89) outperformed (p<.05) the former (AUCs=0.76∼0.81) in task 2. The above results hold for both the training and external test sets.

Conclusions

MRI-based vector radiomics may predict BC HER2 status and its changes after NAT and provide significant incremental prediction over and above conventional radiomics.
Citations: 0
A review of AutoML optimization techniques for medical image applications
IF 5.4 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-16 DOI: 10.1016/j.compmedimag.2024.102441
Muhammad Junaid Ali, Mokhtar Essaid, Laurent Moalic, Lhassane Idoumghar
Automatic analysis of medical images using machine learning techniques has gained significant importance over the years. A large number of approaches have been proposed for solving different medical image analysis tasks using machine learning and deep learning. These approaches are quite effective thanks to their ability to analyze large volumes of medical imaging data. Moreover, they can also identify patterns that may be difficult for human experts to detect. Manually designing and tuning the parameters of these algorithms is a challenging and time-consuming task. Furthermore, designing a generalized model that can handle different imaging modalities is difficult, as each modality has specific characteristics. To solve these problems and automate the whole pipeline of different medical image analysis tasks, numerous Automatic Machine Learning (AutoML) techniques have been proposed. These techniques include Hyper-parameter Optimization (HPO), Neural Architecture Search (NAS), and Automatic Data Augmentation (ADA). This study provides an overview of several AutoML-based approaches for different medical imaging tasks in terms of optimization search strategies. The use of optimization techniques (evolutionary, gradient-based, Bayesian optimization, etc.) is of significant importance for these AutoML approaches. We comprehensively reviewed existing AutoML approaches, categorized them, and performed a detailed analysis of the different proposed approaches. Furthermore, current challenges and possible future research directions are also discussed.
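For readers new to the HPO family the review covers, here is a minimal, generic example of hyperparameter optimization via random search in scikit-learn; the estimator, search space, and toy dataset are illustrative choices, not drawn from any reviewed paper.

```python
from scipy.stats import loguniform
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Toy HPO run: random search over an SVM's hyperparameters; a medical-imaging
# pipeline would swap in its own estimator, data, and search space.
X, y = load_breast_cancer(return_X_y=True)
search = RandomizedSearchCV(
    SVC(),
    param_distributions={"C": loguniform(1e-2, 1e2),
                         "gamma": loguniform(1e-4, 1e0)},
    n_iter=25,
    cv=5,
    scoring="roc_auc",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```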
Citations: 0
Prior knowledge-guided vision-transformer-based unsupervised domain adaptation for intubation prediction in lung disease at one week
IF 5.4 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-15 DOI: 10.1016/j.compmedimag.2024.102442
Junlin Yang , John Anderson Garcia Henao , Nicha Dvornek , Jianchun He , Danielle V. Bower , Arno Depotter , Herkus Bajercius , Aurélie Pahud de Mortanges , Chenyu You , Christopher Gange , Roberta Eufrasia Ledda , Mario Silva , Charles S. Dela Cruz , Wolf Hautz , Harald M. Bonel , Mauricio Reyes , Lawrence H. Staib , Alexander Poellinger , James S. Duncan
Data-driven approaches have achieved great success in various medical image analysis tasks. However, fully-supervised data-driven approaches require unprecedentedly large amounts of labeled data and often suffer from poor generalization to unseen new data due to domain shifts. Various unsupervised domain adaptation (UDA) methods have been actively explored to solve these problems. Anatomical and spatial priors in medical imaging are common and have been incorporated into data-driven approaches to ease the need for labeled data as well as to achieve better generalization and interpretation. Inspired by the effectiveness of recent transformer-based methods in medical image analysis, the adaptability of transformer-based models has been investigated. How to incorporate prior knowledge into transformer-based UDA models remains under-explored. In this paper, we introduce a prior knowledge-guided and transformer-based unsupervised domain adaptation (PUDA) pipeline. It regularizes the vision transformer attention heads using anatomical and spatial prior information that is shared by both the source and target domain, which provides additional insight into the similarity between the underlying data distributions across domains. Besides the global alignment of class tokens, it assigns local weights to guide the token distribution alignment via adversarial training. We evaluate our proposed method on a clinical outcome prediction task, where Computed Tomography (CT) and Chest X-ray (CXR) data are collected and used to predict the intubation status of patients at one week. Abnormal lesions are regarded as anatomical and spatial prior information for this task and are annotated in the source domain scans. Extensive experiments show the effectiveness of the proposed PUDA method.
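One plausible form of the attention regularization described above is a penalty on class-token attention mass that falls outside a prior mask, as in this PyTorch sketch; the tensor layout and the specific loss are assumptions, not the paper's published formulation.

```python
import torch

def attention_prior_loss(attn: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
    """Penalize attention that ignores an anatomical/spatial prior.

    attn  : (B, heads, N) class-token attention over N patch tokens
    prior : (B, N) binary mask, 1 where prior knowledge (e.g. a lesion) lies
    """
    attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)  # normalize heads
    outside = attn * (1.0 - prior).unsqueeze(1)  # attention mass on non-prior patches
    return outside.sum(dim=-1).mean()
```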
Citations: 0
Distance guided generative adversarial network for explainable medical image classifications
IF 5.4 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-15 DOI: 10.1016/j.compmedimag.2024.102444
Xiangyu Xiong , Yue Sun , Xiaohong Liu , Wei Ke , Chan-Tong Lam , Jiangang Chen , Mingfeng Jiang , Mingwei Wang , Hui Xie , Tong Tong , Qinquan Gao , Hao Chen , Tao Tan
Despite the potential benefits of data augmentation for mitigating data insufficiency, traditional augmentation methods primarily rely on prior intra-domain knowledge. On the other hand, advanced generative adversarial networks (GANs) generate inter-domain samples with limited variety. These previous methods make limited contributions to describing the decision boundaries for binary classification. In this paper, we propose a distance-guided GAN (DisGAN) that controls the variation degrees of generated samples in the hyperplane space. Specifically, we instantiate the idea of DisGAN in two complementary ways. The first is vertical distance GAN (VerDisGAN), where inter-domain generation is conditioned on vertical distances. The second is horizontal distance GAN (HorDisGAN), where intra-domain generation is conditioned on horizontal distances. Furthermore, VerDisGAN can produce class-specific regions by mapping the source images to the hyperplane. Experimental results show that DisGAN consistently outperforms GAN-based augmentation methods with explainable binary classification. The proposed method can apply to different classification architectures and has the potential to extend to multi-class classification. We provide the code at https://github.com/yXiangXiong/DisGAN.
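To make the hyperplane conditioning concrete, the sketch below computes a signed distance from a sample's features to a trained linear classification head; the GAN itself is omitted, and the module is an illustrative reading of "distance in the hyperplane space" rather than the released DisGAN code.

```python
import torch
import torch.nn as nn

class HyperplaneDistance(nn.Module):
    """Signed distance of feature vectors to a binary linear head's hyperplane."""

    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.linear = linear  # trained head: weight (1, D), bias (1,)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # distance = (w . x + b) / ||w||, one value per sample
        w = self.linear.weight.view(-1)                   # (D,)
        return (features @ w + self.linear.bias) / w.norm()

# usage idea: feed the resulting (B,) distances to the generator as a
# condition, e.g. concatenated to its latent code.
```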
Citations: 0
An anthropomorphic diagnosis system of pulmonary nodules using weak annotation-based deep learning
IF 5.4 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-10 DOI: 10.1016/j.compmedimag.2024.102438
Lipeng Xie , Yongrui Xu , Mingfeng Zheng , Yundi Chen , Min Sun , Michael A. Archer , Wenjun Mao , Yubing Tong , Yuan Wan
The accurate categorization of lung nodules in CT scans is essential to the prompt detection and diagnosis of lung cancer. The categorization of grade and texture for nodules is particularly significant, since it can aid radiologists and clinicians in making better-informed decisions concerning the management of nodules. However, currently existing techniques perform only nodule classification and rely on an extensive amount of high-quality annotation data, which does not meet the requirements of clinical practice. To address this issue, we develop an anthropomorphic diagnosis system of pulmonary nodules (PN) based on deep learning (DL) that is trained on weak annotation data and has performance comparable to full-annotation based diagnosis systems. The proposed system uses DL models to classify PNs (benign vs. malignant) with weak annotations, which eliminates the need for time-consuming and labor-intensive manual annotation of PNs. Moreover, the PN classification networks, augmented with handcrafted shape features acquired through the ball-scale transform technique, demonstrate the capability to differentiate PNs with diverse labels, including pure ground-glass opacities, part-solid nodules, and solid nodules. Through 5-fold cross-validation on two datasets, the system achieved the following results: (1) an Area Under Curve (AUC) of 0.938 for PN localization and an AUC of 0.912 for PN differential diagnosis on the LIDC-IDRI dataset of 814 testing cases, and (2) an AUC of 0.943 for PN localization and an AUC of 0.815 for PN differential diagnosis on the in-house dataset of 822 testing cases. In summary, our system demonstrates efficient localization and differential diagnosis of PNs in a resource-limited environment, and thus could be translated into clinical use in the future.
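The 5-fold protocol used above is standard; a minimal sketch of it for a generic binary classifier follows. NumPy arrays and a scikit-learn-style estimator are assumed, with the paper's DL models as stand-ins.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def cv_auc(model_factory, X, y, n_splits=5, seed=0):
    """Mean/std AUC over stratified k-fold CV for a binary classifier.

    model_factory: callable returning a fresh estimator with fit/predict_proba.
    """
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = []
    for tr, te in skf.split(X, y):
        clf = model_factory().fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
    return float(np.mean(aucs)), float(np.std(aucs))
```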
Citations: 0
Corrigendum to 'Development and evaluation of an integrated model based on a deep segmentation network and demography-added radiomics algorithm for segmentation and diagnosis of early lung adenocarcinoma' [Computerized Medical Imaging and Graphics Volume 109 (2023) 102299]
IF 5.4 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-02 DOI: 10.1016/j.compmedimag.2024.102428
Juyoung Lee , Jaehee Chun , Hojin Kim , Jin Sung Kim , Seong Yong Park
{"title":"Corrigendum to ‘Development and evaluation of an integrated model based on a deep segmentation network and demography-added radiomics algorithm for segmentation and diagnosis of early lung adenocarcinoma’ [Computerized Medical Imaging and Graphics Volume 109 (2023) 102299]","authors":"Juyoung Lee ,&nbsp;Jaehee Chun ,&nbsp;Hojin Kim ,&nbsp;Jin Sung Kim ,&nbsp;Seong Yong Park","doi":"10.1016/j.compmedimag.2024.102428","DOIUrl":"10.1016/j.compmedimag.2024.102428","url":null,"abstract":"","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"120 ","pages":"Article 102428"},"PeriodicalIF":5.4,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142373462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MultiNet 2.0: A lightweight attention-based deep learning network for stenosis measurement in carotid ultrasound scans and cardiovascular risk assessment
IF 5.4 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-01 DOI: 10.1016/j.compmedimag.2024.102437
Mainak Biswas , Luca Saba , Mannudeep Kalra , Rajesh Singh , J. Fernandes e Fernandes , Vijay Viswanathan , John R. Laird , Laura E. Mantella , Amer M. Johri , Mostafa M. Fouda , Jasjit S. Suri

Background

Cardiovascular diseases (CVD) cause 19 million fatalities each year and cost nations billions of dollars. Surrogate biomarkers are established methods for CVD risk stratification; however, manual inspection is costly, cumbersome, and error-prone. Contemporary artificial intelligence (AI) tools for segmentation and risk prediction, including older deep learning (DL) networks, employ simple merge connections, which may result in semantic loss of information and hence low accuracy.

Methodology

We hypothesize that DL networks enhanced with attention mechanisms can segment better than older DL models. The attention mechanism can concentrate on relevant features, aiding the model in better understanding and interpreting images. This study proposes MultiNet 2.0 (AtheroPoint, Roseville, CA, USA), in which two attention networks are used to segment the lumen from common carotid artery (CCA) ultrasound images and predict CVD risk.
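As a generic illustration of the attention mechanism invoked here, this PyTorch sketch implements an additive attention gate that weights skip-connection features before merging; MultiNet 2.0's actual blocks are not public in this listing, so the module and its names are assumptions.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate over skip-connection features (illustrative)."""

    def __init__(self, in_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(in_ch, inter_ch, kernel_size=1)   # skip branch
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)   # gating branch
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)         # attention map

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # skip and gate are assumed to share spatial size here
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # suppress irrelevant regions in the skip path
```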

Results

The database consisted of 407 ultrasound CCA images of both the left and right sides taken from 204 patients. Two experts were hired to delineate borders on the 407 images, generating two ground truths (GT1 and GT2). The results were far better than those of contemporary models. The lumen dimension (LD) errors for GT1 and GT2 were 0.13±0.08 and 0.16±0.07 mm, respectively, the best on the market. The AUCs for detecting low-, moderate-, and high-risk patients from stenosis data with GT1 were 0.88, 0.98, and 1.00, respectively. Similarly, for GT2, the AUC values for low-, moderate-, and high-risk patient detection were 0.93, 0.97, and 1.00, respectively.
The system can be fully adopted for clinical practice within the AtheroEdge™ model by AtheroPoint, Roseville, CA, USA.
Citations: 0
Computational modeling of tumor invasion from limited and diverse data in Glioblastoma
IF 5.4 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-01 DOI: 10.1016/j.compmedimag.2024.102436
Padmaja Jonnalagedda , Brent Weinberg , Taejin L. Min , Shiv Bhanu , Bir Bhanu
For diseases with high morbidity rates such as Glioblastoma Multiforme, the prognostic and treatment planning pipeline requires a comprehensive analysis of imaging, clinical, and molecular data. Many mutations have been shown to correlate strongly with the median survival rate and response to therapy of patients. Studies have demonstrated that these mutations manifest as specific visual biomarkers in tumor imaging modalities such as MRI. To minimize the number of invasive procedures on a patient and for the overall resource optimization for the prognostic and treatment planning process, the correlation of imaging and molecular features has garnered much interest. While the tumor mass is the most significant feature, the impacted tissue surrounding the tumor is also a significant biomarker contributing to the visual manifestation of mutations — which has not been studied as extensively. The pattern of tumor growth impacts the surrounding tissue accordingly, which is a reflection of tumor properties as well. Modeling how the tumor growth impacts the surrounding tissue can reveal important information about the patterns of tumor enhancement, which in turn has significant diagnostic and prognostic value. This paper presents the first work to automate the computational modeling of the impacted tissue surrounding the tumor using generative deep learning. The paper isolates and quantifies the impact of the Tumor Invasion (TI) on surrounding tissue based on change in mutation status, subsequently assessing its prognostic value. Furthermore, a TI Generative Adversarial Network (TI-GAN) is proposed to model the tumor invasion properties. Extensive qualitative and quantitative analyses, cross-dataset testing, and radiologist blind tests are carried out to demonstrate that TI-GAN can realistically model the tumor invasion under practical challenges of medical datasets such as limited data and high intra-class heterogeneity.
Citations: 0
Detecting thyroid nodules along with surrounding tissues and tracking nodules using motion prior in ultrasound videos
IF 5.4 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-01 DOI: 10.1016/j.compmedimag.2024.102439
Song Gao , Yueyang Li , Haichi Luo
Ultrasound examination plays a crucial role in the clinical diagnosis of thyroid nodules. Although deep learning technology has been applied to thyroid nodule examinations, existing methods all overlook the prior knowledge that nodules move along a straight line in the video. We propose a new detection model, DiffusionVID-Line, and design a novel tracking algorithm, ByteTrack-Line, both of which fully leverage the prior knowledge of linear motion of nodules in thyroid ultrasound videos. Among them, ByteTrack-Line groups detected nodules, further reducing the workload of doctors and significantly improving their diagnostic speed and accuracy. In DiffusionVID-Line, we propose two new modules: Freq-FPN and Attn-Line. The Freq-FPN module extracts frequency features and uses them to reduce the impact of image blur in ultrasound videos. Based on the standard practice of segmented scanning by doctors, the Attn-Line module enhances attention on targets moving along a straight line, thus improving detection accuracy. In ByteTrack-Line, considering the linear motion of nodules, we propose the Match-Line association module, which reduces the number of nodule ID switches. On the detection and tracking test sets, DiffusionVID-Line achieved a mean Average Precision (mAP50) of 74.2 for multiple tissues and 85.6 for nodules, while ByteTrack-Line achieved a Multiple Object Tracking Accuracy (MOTA) of 83.4. Both nodule detection and tracking achieve state-of-the-art performance.
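The linear-motion prior lends itself to a very small association step: fit each coordinate of a nodule's past centers as a line in time and extrapolate one frame ahead, then match detections to the nearest prediction. The sketch below shows that prediction step only; the paper's Match-Line module is not reproduced here.

```python
import numpy as np

def predict_next_center(history: np.ndarray) -> np.ndarray:
    """Predict the next (x, y) center of a nodule assuming straight-line motion.

    history: (T, 2) array of past centers, T >= 2, one row per frame.
    """
    t = np.arange(len(history))
    kx, bx = np.polyfit(t, history[:, 0], 1)   # x(t) = kx*t + bx
    ky, by = np.polyfit(t, history[:, 1], 1)   # y(t) = ky*t + by
    t_next = len(history)
    return np.array([kx * t_next + bx, ky * t_next + by])

# association idea: assign each new detection to the track whose predicted
# center is nearest, e.g. np.linalg.norm(dets - pred, axis=1).argmin()
```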
Citations: 0