Multi-structure Segmentation from Partially Labeled Datasets. Application to Body Composition Measurements on CT Scans.

Germán González, George R Washko, Raúl San José Estépar
DOI: 10.1007/978-3-030-00946-5_22
Journal: Image analysis for moving organ, breast, and thoracic images: third International Workshop, RAMBO 2018, fourth International Workshop, BIA 2018, and first International Workshop, TIA 2018, held in conjunction with MICCAI 2018, Granada. Volume 11040, pages 215-224.
Published: 2018-09-01 (epub 2018-09-12)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7269188/pdf/nihms-1590104.pdf
Citations: 0

Abstract

Labeled data is the current bottleneck of medical image research. Substantial efforts are made to generate segmentation masks to characterize a given organ. The community ends up with multiple label maps of individual structures in different cases, not suitable for current multi-organ segmentation frameworks. Our objective is to leverage segmentations of multiple organs from different cases to train a robust multi-organ deep learning segmentation network. We propose a modified cost function that takes into account only the voxels labeled in the image, ignoring unlabeled structures. We evaluate the proposed methodology in the context of pectoralis muscle and subcutaneous fat segmentation on chest CT scans. Six different structures are segmented from an axial slice centered on the transversal aorta. We compare the performance of a network trained on 3,000 images where only one structure has been annotated (PUNet) against six UNets (one per structure) and a multi-class UNet trained on 500 completely annotated images, showing equivalence between the three methods (Dice coefficients of 0.909, 0.906 and 0.909 respectively). We further propose a modification of the architecture by adding convolutions to the skip connections (CUNet). When trained with partially labeled images, it statistically significantly outperforms the other three methods (Dice 0.916, p < 0.0001). We therefore show that (a) when keeping the number of organ annotations constant, training with partially labeled images is equivalent to training with wholly labeled data, and (b) adding convolutions to the skip connections improves performance.
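The core idea of the modified cost function — compute the loss only over voxels that carry an annotation and ignore unlabeled structures — can be sketched as a masked cross-entropy. This is a minimal NumPy illustration of the principle, not the authors' implementation; the `ignore_label` sentinel and the function name are assumptions for the example.

```python
import numpy as np

def masked_cross_entropy(probs, labels, ignore_label=-1):
    """Cross-entropy averaged only over annotated voxels.

    probs:  (N, C) array of per-voxel softmax probabilities.
    labels: (N,) integer class per voxel; ignore_label marks
            voxels with no annotation (hypothetical sentinel).
    """
    mask = labels != ignore_label          # keep only labeled voxels
    if not mask.any():                     # image contributes no loss
        return 0.0
    idx = labels[mask]                     # true class of each kept voxel
    p = probs[mask, idx]                   # predicted prob. of that class
    return float(-np.mean(np.log(np.clip(p, 1e-12, None))))
```

Because unlabeled voxels simply drop out of the average, images where only one of the six structures is annotated can still drive the gradient for that structure, which is what lets a single multi-class network train on the partially labeled pool.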

Latest articles from this journal:
- On the Relevance of the Loss Function in the Agatston Score Regression from Non-ECG Gated CT Scans.
- Accurate Measurement of Airway Morphology on Chest CT Images.
- Diffeomorphic Lung Registration Using Deep CNNs and Reinforced Learning.
- A CT Scan Harmonization Technique to Detect Emphysema and Small Airway Diseases.
- Multi-structure Segmentation from Partially Labeled Datasets. Application to Body Composition Measurements on CT Scans.