Deep superpixel generation and clustering for weakly supervised segmentation of brain tumors in MR images.

IF 2.9 | CAS Tier 3 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
BMC Medical Imaging | Pub Date: 2024-12-18 | DOI: 10.1186/s12880-024-01523-x
Jay J Yoo, Khashayar Namdar, Farzad Khalvati
{"title":"基于弱监督分割的MR图像中脑肿瘤的深度超像素生成与聚类。","authors":"Jay J Yoo, Khashayar Namdar, Farzad Khalvati","doi":"10.1186/s12880-024-01523-x","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Training machine learning models to segment tumors and other anomalies in medical images is an important step for developing diagnostic tools but generally requires manually annotated ground truth segmentations, which necessitates significant time and resources. We aim to develop a pipeline that can be trained using readily accessible binary image-level classification labels, to effectively segment regions of interest without requiring ground truth annotations.</p><p><strong>Methods: </strong>This work proposes the use of a deep superpixel generation model and a deep superpixel clustering model trained simultaneously to output weakly supervised brain tumor segmentations. The superpixel generation model's output is selected and clustered together by the superpixel clustering model. Additionally, we train a classifier using binary image-level labels (i.e., labels indicating whether an image contains a tumor), which is used to guide the training by localizing undersegmented seeds as a loss term. The proposed simultaneous use of superpixel generation and clustering models, and the guided localization approach allow for the output weakly supervised tumor segmentations to capture contextual information that is propagated to both models during training, resulting in superpixels that specifically contour the tumors. We evaluate the performance of the pipeline using Dice coefficient and 95% Hausdorff distance (HD95) and compare the performance to state-of-the-art baselines. These baselines include the state-of-the-art weakly supervised segmentation method using both seeds and superpixels (CAM-S), and the Segment Anything Model (SAM).</p><p><strong>Results: </strong>We used 2D slices of magnetic resonance brain scans from the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset and labels indicating the presence of tumors to train and evaluate the pipeline. On an external test cohort from the BraTS 2023 dataset, our method achieved a mean Dice coefficient of 0.745 and a mean HD95 of 20.8, outperforming all baselines, including CAM-S and SAM, which resulted in mean Dice coefficients of 0.646 and 0.641, and mean HD95 of 21.2 and 27.3, respectively.</p><p><strong>Conclusion: </strong>The proposed combination of deep superpixel generation, deep superpixel clustering, and the incorporation of undersegmented seeds as a loss term improves weakly supervised segmentation.</p>","PeriodicalId":9020,"journal":{"name":"BMC Medical Imaging","volume":"24 1","pages":"335"},"PeriodicalIF":2.9000,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11657002/pdf/","citationCount":"0","resultStr":"{\"title\":\"Deep superpixel generation and clustering for weakly supervised segmentation of brain tumors in MR images.\",\"authors\":\"Jay J Yoo, Khashayar Namdar, Farzad Khalvati\",\"doi\":\"10.1186/s12880-024-01523-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>Training machine learning models to segment tumors and other anomalies in medical images is an important step for developing diagnostic tools but generally requires manually annotated ground truth segmentations, which necessitates significant time and resources. 
We aim to develop a pipeline that can be trained using readily accessible binary image-level classification labels, to effectively segment regions of interest without requiring ground truth annotations.</p><p><strong>Methods: </strong>This work proposes the use of a deep superpixel generation model and a deep superpixel clustering model trained simultaneously to output weakly supervised brain tumor segmentations. The superpixel generation model's output is selected and clustered together by the superpixel clustering model. Additionally, we train a classifier using binary image-level labels (i.e., labels indicating whether an image contains a tumor), which is used to guide the training by localizing undersegmented seeds as a loss term. The proposed simultaneous use of superpixel generation and clustering models, and the guided localization approach allow for the output weakly supervised tumor segmentations to capture contextual information that is propagated to both models during training, resulting in superpixels that specifically contour the tumors. We evaluate the performance of the pipeline using Dice coefficient and 95% Hausdorff distance (HD95) and compare the performance to state-of-the-art baselines. These baselines include the state-of-the-art weakly supervised segmentation method using both seeds and superpixels (CAM-S), and the Segment Anything Model (SAM).</p><p><strong>Results: </strong>We used 2D slices of magnetic resonance brain scans from the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset and labels indicating the presence of tumors to train and evaluate the pipeline. On an external test cohort from the BraTS 2023 dataset, our method achieved a mean Dice coefficient of 0.745 and a mean HD95 of 20.8, outperforming all baselines, including CAM-S and SAM, which resulted in mean Dice coefficients of 0.646 and 0.641, and mean HD95 of 21.2 and 27.3, respectively.</p><p><strong>Conclusion: </strong>The proposed combination of deep superpixel generation, deep superpixel clustering, and the incorporation of undersegmented seeds as a loss term improves weakly supervised segmentation.</p>\",\"PeriodicalId\":9020,\"journal\":{\"name\":\"BMC Medical Imaging\",\"volume\":\"24 1\",\"pages\":\"335\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-12-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11657002/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BMC Medical Imaging\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1186/s12880-024-01523-x\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC Medical Imaging","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s12880-024-01523-x","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract


Purpose: Training machine learning models to segment tumors and other anomalies in medical images is an important step for developing diagnostic tools but generally requires manually annotated ground truth segmentations, which necessitates significant time and resources. We aim to develop a pipeline that can be trained using readily accessible binary image-level classification labels, to effectively segment regions of interest without requiring ground truth annotations.

Methods: This work proposes the use of a deep superpixel generation model and a deep superpixel clustering model trained simultaneously to output weakly supervised brain tumor segmentations. The superpixel generation model's output is selected and clustered together by the superpixel clustering model. Additionally, we train a classifier using binary image-level labels (i.e., labels indicating whether an image contains a tumor), which is used to guide the training by localizing undersegmented seeds as a loss term. The proposed simultaneous use of superpixel generation and clustering models, and the guided localization approach allow for the output weakly supervised tumor segmentations to capture contextual information that is propagated to both models during training, resulting in superpixels that specifically contour the tumors. We evaluate the performance of the pipeline using Dice coefficient and 95% Hausdorff distance (HD95) and compare the performance to state-of-the-art baselines. These baselines include the state-of-the-art weakly supervised segmentation method using both seeds and superpixels (CAM-S), and the Segment Anything Model (SAM).
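
To make the two-model design concrete, the sketch below composes a toy superpixel-generation network and a superpixel-clustering network into one soft segmentation in PyTorch. Everything here is an illustrative assumption (the number of superpixels K, the layer choices, the soft-assignment pooling); the paper's actual architectures, training losses, and classifier-guided seed localization are not reproduced.

```python
# Toy sketch: composing a superpixel-generation network and a
# superpixel-clustering network into one soft segmentation.
# Shapes, layers, and K are assumptions for illustration only.
import torch
import torch.nn as nn

K = 16  # assumed number of candidate superpixels


class SuperpixelGenerator(nn.Module):
    """Maps an image to K soft superpixel-assignment maps (softmax over K)."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, K, 1),
        )

    def forward(self, x):                  # x: (B, 1, H, W)
        return self.net(x).softmax(dim=1)  # (B, K, H, W), sums to 1 per pixel


class SuperpixelClusterer(nn.Module):
    """Scores each superpixel as tumor vs. background from pooled features."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.feat = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(32, 1)

    def forward(self, x, assign):          # assign: (B, K, H, W)
        f = self.feat(x)                   # (B, 32, H, W)
        # Average-pool features inside each soft superpixel.
        area = assign.sum(dim=(2, 3)).clamp(min=1e-6)               # (B, K)
        pooled = torch.einsum("bkhw,bchw->bkc", assign, f) / area[..., None]
        return torch.sigmoid(self.head(pooled)).squeeze(-1)         # (B, K)


gen, clu = SuperpixelGenerator(), SuperpixelClusterer()
x = torch.randn(2, 1, 64, 64)              # dummy batch of MR slices
assign = gen(x)                            # soft superpixel assignments
scores = clu(x, assign)                    # per-superpixel tumor scores
# Soft tumor mask: weight each superpixel map by its tumor score.
seg = torch.einsum("bk,bkhw->bhw", scores, assign)   # (B, H, W)
```

Even in this toy form, the design point the abstract describes survives: because the final mask is a product of both models' outputs, gradients from any segmentation-level loss reach the generator and the clusterer simultaneously, which is what lets contextual information propagate to both during training.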

Results: We used 2D slices of magnetic resonance brain scans from the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset and labels indicating the presence of tumors to train and evaluate the pipeline. On an external test cohort from the BraTS 2023 dataset, our method achieved a mean Dice coefficient of 0.745 and a mean HD95 of 20.8, outperforming all baselines, including CAM-S and SAM, which resulted in mean Dice coefficients of 0.646 and 0.641, and mean HD95 of 21.2 and 27.3, respectively.
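
For reference, the two reported metrics can be computed from binary 2D masks as in this minimal NumPy/SciPy sketch; the function names are illustrative and not taken from the authors' code.

```python
# Minimal sketch of the two evaluation metrics, assuming binary 2D masks.
import numpy as np
from scipy import ndimage


def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:  # both masks empty: define Dice as 1
        return 1.0
    return float(2.0 * np.logical_and(pred, target).sum() / denom)


def _boundary(mask: np.ndarray) -> np.ndarray:
    """Boundary pixels: mask minus its binary erosion."""
    return mask & ~ndimage.binary_erosion(mask)


def hd95(pred: np.ndarray, target: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between mask boundaries."""
    pred, target = pred.astype(bool), target.astype(bool)
    bp, bt = _boundary(pred), _boundary(target)
    if not bp.any() or not bt.any():
        return np.inf  # undefined when either boundary is empty
    # Distance from every pixel to the nearest boundary pixel of the other mask.
    dt_to_target = ndimage.distance_transform_edt(~bt)
    dt_to_pred = ndimage.distance_transform_edt(~bp)
    distances = np.concatenate([dt_to_target[bp], dt_to_pred[bt]])
    return float(np.percentile(distances, 95))
```

Under this reading, a lower HD95 (e.g., the reported 20.8 vs. 27.3 for SAM) means the predicted tumor boundary stays closer to the ground-truth boundary for 95% of boundary points, making the metric robust to a few outlier pixels.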

Conclusion: The proposed combination of deep superpixel generation, deep superpixel clustering, and the incorporation of undersegmented seeds as a loss term improves weakly supervised segmentation.

Source journal
BMC Medical Imaging (RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 4.60
Self-citation rate: 3.70%
Articles published: 198
Review time: 27 weeks
Journal description: BMC Medical Imaging is an open access journal publishing original peer-reviewed research articles in the development, evaluation, and use of imaging techniques and image processing tools to diagnose and manage disease.