
Latest publications: Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

EchoGLAD: Hierarchical Graph Neural Networks for Left Ventricle Landmark Detection on Echocardiograms
Masoud Mokhtari, M. Mahdavi, H. Vaseli, C. Luong, P. Abolmaesumi, T. Tsang, Renjie Liao
The functional assessment of the left ventricle chamber of the heart requires detecting four landmark locations and measuring the internal dimension of the left ventricle and the approximate mass of the surrounding muscle. The key challenge of automating this task with machine learning is the sparsity of clinical labels, i.e., only a few landmark pixels in a high-dimensional image are annotated, leading many prior works to heavily rely on isotropic label smoothing. However, such a label smoothing strategy ignores the anatomical information of the image and induces some bias. To address this challenge, we introduce an echocardiogram-based, hierarchical graph neural network (GNN) for left ventricle landmark detection (EchoGLAD). Our main contributions are: 1) a hierarchical graph representation learning framework for multi-resolution landmark detection via GNNs; 2) induced hierarchical supervision at different levels of granularity using a multi-level loss. We evaluate our model on a public and a private dataset under the in-distribution (ID) and out-of-distribution (OOD) settings. For the ID setting, we achieve the state-of-the-art mean absolute errors (MAEs) of 1.46 mm and 1.86 mm on the two datasets. Our model also shows better OOD generalization than prior works with a testing MAE of 4.3 mm.
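The hierarchical supervision in contribution 2) can be sketched numerically: build one binary label grid per hierarchy level (a cell is positive iff it contains the landmark) and average a cross-entropy over the levels, so coarse grids provide dense supervision even though fine-grained positives are extremely sparse. The function names and the toy grid construction below are ours, not the paper's.

```python
import numpy as np

def hierarchical_labels(landmark, size, levels):
    """Binary label grid per hierarchy level: a cell is positive iff it
    contains the landmark. Level 0 is full resolution; each level above
    halves the grid side. (Toy construction for illustration.)"""
    labels = []
    for lvl in range(levels):
        side = size // (2 ** lvl)
        cell = size // side
        grid = np.zeros((side, side))
        grid[landmark[0] // cell, landmark[1] // cell] = 1.0
        labels.append(grid)
    return labels

def multi_level_bce(preds, labels, eps=1e-7):
    """Average binary cross-entropy across hierarchy levels."""
    losses = []
    for p, y in zip(preds, labels):
        p = np.clip(p, eps, 1.0 - eps)
        losses.append(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)).mean())
    return float(np.mean(losses))
```

A perfect prediction drives the loss toward zero at every level, while an uninformative uniform prediction pays the full cost at all granularities.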
DOI: 10.48550/arXiv.2307.12229 | Published: 2023-07-23 | Pages: 227-237
Citations: 0
Simulation of Arbitrary Level Contrast Dose in MRI Using an Iterative Global Transformer Model
Dayang Wang, Srivathsa Pasumarthi, G. Zaharchuk, R. Chamberlain
Deep learning (DL) based contrast dose reduction and elimination in MRI is gaining traction, given the detrimental effects of Gadolinium-based Contrast Agents (GBCAs). However, these DL algorithms are limited by the availability of high-quality low-dose datasets. Additionally, different types of GBCAs and pathologies require different dose levels for the DL algorithms to work reliably. In this work, we formulate a novel transformer (Gformer) based iterative modelling approach for the synthesis of images with arbitrary contrast enhancement corresponding to different dose levels. The proposed Gformer incorporates a sub-sampling based attention mechanism and a rotational shift module that captures the various contrast-related features. Quantitative evaluation indicates that the proposed model performs better than other state-of-the-art methods. We further perform quantitative evaluation on downstream tasks such as dose reduction and tumor segmentation to demonstrate the clinical utility.
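The iterative modelling idea — arbitrary dose levels obtained by stopping an enhancement process early — can be sketched as follows. The trained Gformer plays the role of `step_model`; here a toy linear interpolator stands in for it, and all names are illustrative, not from the paper's code.

```python
import numpy as np

def simulate_dose(zero_dose, full_dose, n_steps, stop_at, step_model=None):
    """Iteratively push a pre-contrast image toward full enhancement;
    stopping after `stop_at` of `n_steps` iterations yields an intermediate
    (arbitrary) dose level. The default step is a toy linear interpolator
    standing in for the trained transformer."""
    if step_model is None:
        delta = (full_dose - zero_dose) / n_steps
        step_model = lambda img, t: img + delta
    img = zero_dose.astype(float)
    for t in range(stop_at):
        img = step_model(img, t)
    return img
```

Stopping halfway yields a 50%-dose image under the linear stand-in; a learned step model would instead produce realistic intermediate enhancement.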
DOI: 10.48550/arXiv.2307.11980 | Published: 2023-07-22 | Pages: 88-98
Citations: 0
Pick the Best Pre-trained Model: Towards Transferability Estimation for Medical Image Segmentation
Yuncheng Yang, Meng Wei, Junjun He, J. Yang, Jin Ye, Yun Gu
Transfer learning is a critical technique for training deep neural networks on the challenging medical image segmentation task, which requires enormous resources. With the abundance of medical image data, many research institutions release models trained on various datasets, forming a huge pool of candidate source models to choose from. Hence, it is vital to estimate the source models' transferability (i.e., the ability to generalize across different downstream tasks) for proper and efficient model reuse. To address the shortcomings of existing estimators when transfer learning is applied to medical image segmentation, we propose a new Transferability Estimation (TE) method in this paper. We first analyze the drawbacks of using the existing TE algorithms for medical image segmentation and then design a source-free TE framework that considers both class consistency and feature variety for better estimation. Extensive experiments show that our method surpasses all current algorithms for transferability estimation in medical image segmentation. Code is available at https://github.com/EndoluminalSurgicalVision-IMR/CCFV
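The two criteria named in the abstract — class consistency and feature variety — can be illustrated with a generic Fisher-style ratio over a source model's features on target data: tight per-class clusters and well-spread class centroids score higher. This is a hedged stand-in, not the paper's exact CCFV formulation.

```python
import numpy as np

def transferability_score(features, labels):
    """Toy source-free transferability proxy: reward low intra-class scatter
    (class consistency) and high inter-class scatter (feature variety).
    A generic Fisher ratio, not the paper's CCFV criterion."""
    global_mean = features.mean(axis=0)
    intra, inter = 0.0, 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        mu = fc.mean(axis=0)
        intra += ((fc - mu) ** 2).sum()
        inter += len(fc) * ((mu - global_mean) ** 2).sum()
    return inter / (intra + 1e-8)
```

A source model whose embeddings separate the target classes cleanly receives a much higher score than one whose embeddings mix them.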
DOI: 10.48550/arXiv.2307.11958 | Published: 2023-07-22 | Pages: 674-683
Citations: 0
DHC: Dual-debiased Heterogeneous Co-training Framework for Class-imbalanced Semi-supervised Medical Image Segmentation
Hong Wang, X. Li
The volume-wise labeling of 3D medical images demands expertise and is time-consuming; hence semi-supervised learning (SSL) is highly desirable for training with limited labeled data. Imbalanced class distribution is a severe problem that bottlenecks the real-world application of these methods but has received little attention. To solve this issue, we present a novel Dual-debiased Heterogeneous Co-training (DHC) framework for semi-supervised 3D medical image segmentation. Specifically, we propose two loss weighting strategies, namely Distribution-aware Debiased Weighting (DistDW) and Difficulty-aware Debiased Weighting (DiffDW), which leverage the pseudo labels dynamically to guide the model to solve data and learning biases. The framework improves significantly by co-training these two diverse and accurate sub-models. We also introduce more representative benchmarks for class-imbalanced semi-supervised medical image segmentation, which fully demonstrate the efficacy of the class-imbalance designs. Experiments show that our proposed framework brings significant improvements by using pseudo labels for debiasing and alleviating the class imbalance problem. More importantly, our method outperforms the state-of-the-art SSL methods, demonstrating the potential of our framework for the more challenging SSL setting. Code and models are available at: https://github.com/xmed-lab/DHC.
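The core intuition behind distribution-aware weighting — rare classes in the current pseudo labels get larger loss weights — can be sketched with a normalized inverse-frequency rule. This toy captures only the spirit of DistDW; the paper's DistDW/DiffDW formulations are more involved.

```python
import numpy as np

def distribution_aware_weights(pseudo_labels, n_classes):
    """Toy debiased class weighting in the spirit of DistDW: classes that are
    rare in the current pseudo labels receive larger loss weights
    (normalized inverse frequency, averaging to 1)."""
    counts = np.bincount(pseudo_labels.ravel(), minlength=n_classes).astype(float)
    counts = np.maximum(counts, 1.0)        # guard against absent classes
    w = 1.0 / counts
    return w * n_classes / w.sum()          # weights average to 1
```

Recomputing these weights from the pseudo labels each iteration is what makes the debiasing dynamic rather than a fixed prior.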
DOI: 10.48550/arXiv.2307.11960 | Published: 2023-07-22 | Pages: 582-591
Citations: 3
COLosSAL: A Benchmark for Cold-start Active Learning for 3D Medical Image Segmentation
Han Liu, Hao Li, Xing Yao, Yubo Fan, Dewei Hu, B. Dawant, V. Nath, Zhoubing Xu, I. Oguz
Medical image segmentation is a critical task in medical image analysis. In recent years, deep learning based approaches have shown exceptional performance when trained on a fully-annotated dataset. However, data annotation is often a significant bottleneck, especially for 3D medical images. Active learning (AL) is a promising solution for efficient annotation but requires an initial set of labeled samples to start active selection. When the entire data pool is unlabeled, how do we select the samples to annotate as our initial set? This is also known as the cold-start AL, which permits only one chance to request annotations from experts without access to previously annotated data. Cold-start AL is highly relevant in many practical scenarios but has been under-explored, especially for 3D medical segmentation tasks requiring substantial annotation effort. In this paper, we present a benchmark named COLosSAL by evaluating six cold-start AL strategies on five 3D medical image segmentation tasks from the public Medical Segmentation Decathlon collection. We perform a thorough performance analysis and explore important open questions for cold-start AL, such as the impact of budget on different strategies. Our results show that cold-start AL is still an unsolved problem for 3D segmentation tasks but some important trends have been observed. The code repository, data partitions, and baseline results for the complete benchmark are publicly available at https://github.com/MedICL-VU/COLosSAL.
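One representative strategy from the family such a benchmark evaluates is diversity-based cold-start selection: with no labels to guide the choice, greedily pick samples that are farthest in feature space from everything selected so far. The sketch below is the classic k-center greedy heuristic, not code from the COLosSAL repository.

```python
import numpy as np

def k_center_greedy(features, k, first=0):
    """Diversity-based cold-start selection (k-center greedy): repeatedly add
    the sample farthest from the current selection, so the initial annotation
    set covers the unlabeled pool."""
    chosen = [first]
    dists = np.linalg.norm(features - features[first], axis=1)
    while len(chosen) < k:
        nxt = int(dists.argmax())
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return chosen
```

On data with distinct clusters the heuristic naturally spends its budget one pick per cluster, which is exactly the coverage a cold-start set needs.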
DOI: 10.48550/arXiv.2307.12004 | Published: 2023-07-22 | Pages: 25-34
Citations: 0
Morphology-inspired Unsupervised Gland Segmentation via Selective Semantic Grouping
Qixiang Zhang, Yi Li, Cheng Xue, X. Li
Designing deep learning algorithms for gland segmentation is crucial for automatic cancer diagnosis and prognosis, yet the expensive annotation cost hinders the development and application of this technology. In this paper, we make a first attempt to explore a deep learning method for unsupervised gland segmentation, where no manual annotations are required. Existing unsupervised semantic segmentation methods encounter a huge challenge on gland images: They either over-segment a gland into many fractions or under-segment the gland regions by confusing many of them with the background. To overcome this challenge, our key insight is to introduce an empirical cue about gland morphology as extra knowledge to guide the segmentation process. To this end, we propose a novel Morphology-inspired method via Selective Semantic Grouping. We first leverage the empirical cue to selectively mine out proposals for gland sub-regions with variant appearances. Then, a Morphology-aware Semantic Grouping module is employed to summarize the overall information about the gland by explicitly grouping the semantics of its sub-region proposals. In this way, the final segmentation network could learn comprehensive knowledge about glands and produce well-delineated, complete predictions. We conduct experiments on GlaS dataset and CRAG dataset. Our method exceeds the second-best counterpart by over 10.56% in mIoU.
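An "empirical cue about gland morphology" can be made concrete with a simple filter: gland proposals should be reasonably large and not extremely elongated, so fragments from over-segmentation and background-like slivers are rejected. The region format (`area`, `height`, `width` dicts) and thresholds below are invented for illustration; they are not the paper's actual cue.

```python
def gland_like(regions, min_area=50.0, max_elongation=3.0):
    """Toy morphology cue: keep region proposals whose size and bounding-box
    elongation look gland-like. `regions` is a list of dicts with 'area',
    'height', 'width' (a hypothetical format for this sketch)."""
    keep = []
    for r in regions:
        lo, hi = sorted((r['height'], r['width']))
        if r['area'] >= min_area and hi / max(lo, 1.0) <= max_elongation:
            keep.append(r)
    return keep
```

Only proposals passing such a cue would be fed to the semantic grouping stage, which is the "selective" part of the method's name.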
DOI: 10.48550/arXiv.2307.11989 | Published: 2023-07-22 | Pages: 281-291
Citations: 0
FEDD - Fair, Efficient, and Diverse Diffusion-based Lesion Segmentation and Malignancy Classification
Héctor Carrión, Narges Norouzi
Skin diseases affect millions of people worldwide, across all ethnicities. Increasing diagnosis accessibility requires fair and accurate segmentation and classification of dermatology images. However, the scarcity of annotated medical images, especially for rare diseases and underrepresented skin tones, poses a challenge to the development of fair and accurate models. In this study, we introduce a Fair, Efficient, and Diverse Diffusion-based framework for skin lesion segmentation and malignancy classification. FEDD leverages semantically meaningful feature embeddings learned through a denoising diffusion probabilistic backbone and processes them via linear probes to achieve state-of-the-art performance on Diverse Dermatology Images (DDI). We achieve an improvement in intersection over union of 0.18, 0.13, 0.06, and 0.07 while using only 5%, 10%, 15%, and 20% labeled samples, respectively. Additionally, FEDD trained on 10% of DDI demonstrates malignancy classification accuracy of 81%, 14% higher compared to the state-of-the-art. We showcase high efficiency in data-constrained scenarios while providing fair performance for diverse skin tones and rare malignancy conditions. Our newly annotated DDI segmentation masks and training code can be found on https://github.com/hectorcarrion/fedd.
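The probe-on-frozen-backbone recipe — linear probes over semantically meaningful embeddings — has a minimal instance: a ridge-regression readout from frozen per-pixel features to one-hot class targets. The features here are random stand-ins for the denoising-diffusion embeddings, and the closed-form solver is our simplification, not FEDD's training procedure.

```python
import numpy as np

def fit_linear_probe(feats, labels, l2=1e-3):
    """Toy linear probe: ridge-regression readout over frozen embeddings
    to one-hot targets, solved in closed form."""
    X = np.hstack([feats, np.ones((len(feats), 1))])   # append bias column
    Y = np.eye(int(labels.max()) + 1)[labels]          # one-hot targets
    return np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ Y)

def probe_predict(feats, W):
    X = np.hstack([feats, np.ones((len(feats), 1))])
    return (X @ W).argmax(axis=1)
```

Because only the readout is trained, such probes stay cheap and data-efficient, which is what makes the low-label regimes in the abstract (5-20% of samples) feasible.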
DOI: 10.48550/arXiv.2307.11654 | Published: 2023-07-21 | Pages: 270-279
Citations: 0
Consistency-guided Meta-Learning for Bootstrapping Semi-Supervised Medical Image Segmentation
Qingyue Wei, Lequan Yu, Xianhang Li, Wei Shao, Cihang Xie, Lei Xing, Yuyin Zhou
Medical imaging has witnessed remarkable progress but usually requires a large amount of high-quality annotated data which is time-consuming and costly to obtain. To alleviate this burden, semi-supervised learning has garnered attention as a potential solution. In this paper, we present Meta-Learning for Bootstrapping Medical Image Segmentation (MLB-Seg), a novel method for tackling the challenge of semi-supervised medical image segmentation. Specifically, our approach first involves training a segmentation model on a small set of clean labeled images to generate initial labels for unlabeled data. To further optimize this bootstrapping process, we introduce a per-pixel weight mapping system that dynamically assigns weights to both the initialized labels and the model's own predictions. These weights are determined using a meta-process that prioritizes pixels with loss gradient directions closer to those of clean data, which is based on a small set of precisely annotated images. To facilitate the meta-learning process, we additionally introduce a consistency-based Pseudo Label Enhancement (PLE) scheme that improves the quality of the model's own predictions by ensembling predictions from various augmented versions of the same input. In order to improve the quality of the weight maps obtained through multiple augmentations of a single input, we introduce a mean teacher into the PLE scheme. This method helps to reduce noise in the weight maps and stabilize its generation process. Our extensive experimental results on public atrial and prostate segmentation datasets demonstrate that our proposed method achieves state-of-the-art results under semi-supervision. Our code is available at https://github.com/aijinrjinr/MLB-Seg.
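The per-pixel weighting rule — prioritize pixels whose loss-gradient directions agree with those of the clean meta set — can be isolated in a few lines: compare each pixel's gradient with the clean-set gradient and up-weight aligned ones. The paper runs this inside a meta-learning loop; this sketch shows only the weighting rule, with invented names.

```python
import numpy as np

def meta_pixel_weights(pixel_grads, clean_grad, eps=1e-8):
    """Toy per-pixel weight mapping: clipped cosine similarity between each
    pixel's loss gradient and the gradient from a small clean (meta) set,
    renormalized so the weights average to 1."""
    g = clean_grad / (np.linalg.norm(clean_grad) + eps)
    sims = pixel_grads @ g / (np.linalg.norm(pixel_grads, axis=1) + eps)
    w = np.maximum(sims, 0.0)
    return w * len(w) / (w.sum() + eps)
```

Pixels whose gradients oppose the clean direction (likely noisy pseudo labels) are zeroed out, so they no longer steer the bootstrapped model.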
DOI: 10.48550/arXiv.2307.11604 | Published: 2023-07-21 | Pages: 183-193
引用次数: 2
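The two consistency mechanisms the abstract describes — ensembling predictions over augmented views (PLE) and a mean teacher that tracks the student — can be sketched in a few lines. The following is a minimal numpy sketch, assuming a standard exponential-moving-average teacher and plain softmax averaging; the function names and toy probability maps are illustrative, not taken from the paper.

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.99):
    """Mean-teacher update: teacher parameters track an exponential
    moving average of the student parameters (names are illustrative)."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

def ensemble_pseudo_label(prob_maps):
    """PLE-style ensembling: average the softmax maps predicted for
    several augmented views of the same input, then take the per-pixel
    argmax as the enhanced pseudo label."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return np.argmax(mean_prob, axis=-1), mean_prob

# Toy example: two augmented views of a 2x2 image with 2 classes.
view_a = np.array([[[0.9, 0.1], [0.4, 0.6]],
                   [[0.2, 0.8], [0.7, 0.3]]])
view_b = np.array([[[0.8, 0.2], [0.3, 0.7]],
                   [[0.4, 0.6], [0.6, 0.4]]])
labels, probs = ensemble_pseudo_label([view_a, view_b])
```

In the paper's setting the averaged map would come from the mean teacher rather than the student, which is what stabilizes the per-pixel weight maps; here the EMA update is shown on a scalar "weight" only to keep the sketch self-contained.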
EndoSurf: Neural Surface Reconstruction of Deformable Tissues with Stereo Endoscope Videos
Ruyi Zha, Xuelian Cheng, Hongdong Li, Mehrtash Harandi, ZongYuan Ge
Reconstructing soft tissues from stereo endoscope videos is an essential prerequisite for many medical applications. Previous methods struggle to produce high-quality geometry and appearance due to their inadequate representations of 3D scenes. To address this issue, we propose a novel neural-field-based method, called EndoSurf, which effectively learns to represent a deforming surface from an RGBD sequence. In EndoSurf, we model surface dynamics, shape, and texture with three neural fields. First, 3D points are transformed from the observed space to the canonical space using the deformation field. The signed distance function (SDF) field and radiance field then predict their SDFs and colors, respectively, with which RGBD images can be synthesized via differentiable volume rendering. We constrain the learned shape by tailoring multiple regularization strategies and disentangling geometry and appearance. Experiments on public endoscope datasets demonstrate that EndoSurf significantly outperforms existing solutions, particularly in reconstructing high-fidelity shapes. Code is available at https://github.com/Ruyi-Zha/endosurf.git.
DOI: 10.48550/arXiv.2307.11307 | Published: 2023-07-21 | Pages: 13-23 | Citations: 2
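The rendering step the abstract mentions — predicting SDFs and colors along a ray and compositing them into an image — follows the general recipe of differentiable volume rendering over an SDF field. Below is a toy numpy sketch along a single ray, assuming a NeuS-style sigmoid conversion from SDF samples to per-interval opacities; this conversion, the sharpness parameter `s`, and the toy ray are illustrative assumptions, not EndoSurf's exact formulation.

```python
import numpy as np

def sdf_to_alpha(sdf_vals, s=10.0):
    """Convert consecutive SDF samples along a ray into per-interval
    opacities: alpha is the normalized drop of a sigmoid of the SDF
    between neighboring samples (NeuS-style; s is an assumed sharpness)."""
    phi = 1.0 / (1.0 + np.exp(-s * sdf_vals))          # sigmoid of SDF
    return np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0, 1.0)

def composite(alpha, colors):
    """Standard front-to-back alpha compositing: transmittance
    T_i = prod_{j<i} (1 - alpha_j), weight_i = T_i * alpha_i."""
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0), weights

# A ray crossing a surface where the SDF changes sign; one color per interval.
sdf = np.array([1.0, 0.5, 0.0, -0.5, -1.0])
colors = np.tile(np.array([0.2, 0.6, 0.9]), (4, 1))
rgb, w = composite(sdf_to_alpha(sdf), colors)
```

Because every step is differentiable, gradients from an image-space loss flow back into the SDF and radiance predictions, which is what lets the surface be trained directly from RGBD supervision.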
SLPD: Slide-level Prototypical Distillation for WSIs
Zhimiao Yu, Tiancheng Lin, Yi Xu
Improving the feature representation ability is the foundation of many whole slide pathological image (WSI) tasks. Recent works have achieved great success in pathological-specific self-supervised learning (SSL). However, most of them only focus on learning patch-level representations, thus there is still a gap between pretext and slide-level downstream tasks, e.g., subtyping, grading and staging. Aiming towards slide-level representations, we propose Slide-Level Prototypical Distillation (SLPD) to explore intra- and inter-slide semantic structures for context modeling on WSIs. Specifically, we iteratively perform intra-slide clustering for the regions (4096x4096 patches) within each WSI to yield the prototypes and encourage the region representations to be closer to the assigned prototypes. By representing each slide with its prototypes, we further select similar slides by the set distance of prototypes and assign the regions by cross-slide prototypes for distillation. SLPD achieves state-of-the-art results on multiple slide-level benchmarks and demonstrates that representation learning of semantic structures of slides can make a suitable proxy task for WSI analysis. Code will be available at https://github.com/Carboxy/SLPD.
DOI: 10.48550/arXiv.2307.10696 | Published: 2023-07-20 | Pages: 259-269 | Citations: 1
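The intra-slide step — clustering a slide's region embeddings into prototypes, then comparing slides by a set distance over their prototype sets — can be sketched with plain k-means. In this sketch the function names, the Chamfer-style set distance, and the toy 2-D features are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def kmeans_prototypes(feats, k, iters=20, seed=0):
    """Toy intra-slide clustering: run k-means over one slide's region
    embeddings; the centroids act as that slide's prototypes."""
    rng = np.random.default_rng(seed)
    protos = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each region to its nearest prototype, then recompute means.
        assign = np.argmin(((feats[:, None] - protos[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                protos[j] = feats[assign == j].mean(axis=0)
    return protos, assign

def slide_set_distance(protos_a, protos_b):
    """Symmetric Chamfer-style set distance between two slides'
    prototype sets (one illustrative choice of set distance)."""
    d = ((protos_a[:, None] - protos_b[None]) ** 2).sum(-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Toy slide with two well-separated groups of region embeddings.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
protos, assign = kmeans_prototypes(feats, k=2)
```

The distillation signal in SLPD would then pull each region representation toward its assigned prototype, and the set distance would select which slides exchange cross-slide prototypes.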
Journal: Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention