Learning robust medical image segmentation from multi-source annotations

IF 11.8 · CAS Zone 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Medical Image Analysis · Pub Date: 2025-02-08 · DOI: 10.1016/j.media.2025.103489
Yifeng Wang, Luyang Luo, Mingxiang Wu, Qiong Wang, Hao Chen
{"title":"Learning robust medical image segmentation from multi-source annotations","authors":"Yifeng Wang ,&nbsp;Luyang Luo ,&nbsp;Mingxiang Wu ,&nbsp;Qiong Wang ,&nbsp;Hao Chen","doi":"10.1016/j.media.2025.103489","DOIUrl":null,"url":null,"abstract":"<div><div>Collecting annotations from multiple independent sources could mitigate the impact of potential noises and biases from a single source, which is a common practice in medical image segmentation. However, learning segmentation networks from multi-source annotations remains a challenge due to the uncertainties brought by the variance of the annotations. In this paper, we proposed an Uncertainty-guided Multi-source Annotation Network (UMA-Net), which guided the training process by uncertainty estimation at both the pixel and the image levels. First, we developed an annotation uncertainty estimation module (AUEM) to estimate the pixel-wise uncertainty of each annotation, which then guided the network to learn from reliable pixels by a weighted segmentation loss. Second, a quality assessment module (QAM) was proposed to assess the image-level quality of the input samples based on the former estimated annotation uncertainties. Furthermore, instead of discarding the low-quality samples, we introduced an auxiliary predictor to learn from them and thus ensured the preservation of their representation knowledge in the backbone without directly accumulating errors within the primary predictor. Extensive experiments demonstrated the effectiveness and feasibility of our proposed UMA-Net on various datasets, including 2D chest X-ray segmentation dataset, 2D fundus image segmentation dataset, 3D breast DCE-MRI segmentation dataset, and the QUBIQ multi-task segmentation dataset. Code will be released at <span><span>https://github.com/wangjin2945/UMA-Net</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"Article 103489"},"PeriodicalIF":11.8000,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical image analysis","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1361841525000374","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Collecting annotations from multiple independent sources can mitigate the impact of potential noise and biases from a single source, and is a common practice in medical image segmentation. However, learning segmentation networks from multi-source annotations remains challenging due to the uncertainty introduced by the variance among annotations. In this paper, we propose an Uncertainty-guided Multi-source Annotation Network (UMA-Net), which guides the training process with uncertainty estimation at both the pixel and image levels. First, we develop an annotation uncertainty estimation module (AUEM) to estimate the pixel-wise uncertainty of each annotation, which then guides the network to learn from reliable pixels through a weighted segmentation loss. Second, we propose a quality assessment module (QAM) to assess the image-level quality of input samples based on the previously estimated annotation uncertainties. Furthermore, instead of discarding low-quality samples, we introduce an auxiliary predictor to learn from them, preserving their representation knowledge in the backbone without directly accumulating errors within the primary predictor. Extensive experiments demonstrate the effectiveness and feasibility of UMA-Net on various datasets, including a 2D chest X-ray segmentation dataset, a 2D fundus image segmentation dataset, a 3D breast DCE-MRI segmentation dataset, and the QUBIQ multi-task segmentation dataset. Code will be released at https://github.com/wangjin2945/UMA-Net.
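The two mechanisms described in the abstract lend themselves to a short illustration. The PyTorch sketch below shows one plausible way to implement an uncertainty-weighted segmentation loss and an image-level quality gate. It is a minimal sketch under assumed conventions, not the authors' released implementation (their code is not yet available at the GitHub link above); the function names, tensor shapes, and the threshold `tau` are all hypothetical.

```python
# Hypothetical sketch only: every name and threshold here is an assumption,
# not the authors' released UMA-Net code.
import torch
import torch.nn.functional as F


def weighted_seg_loss(logits, annotation, uncertainty):
    """Pixel-wise BCE down-weighted by estimated annotation uncertainty.

    logits      : (B, 1, H, W) raw network outputs
    annotation  : (B, 1, H, W) binary mask from one annotation source
    uncertainty : (B, 1, H, W) values in [0, 1]; 1 = least reliable pixel
    """
    per_pixel = F.binary_cross_entropy_with_logits(
        logits, annotation.float(), reduction="none")
    weights = 1.0 - uncertainty  # trust reliable pixels more
    return (weights * per_pixel).sum() / weights.sum().clamp(min=1e-8)


def is_high_quality(uncertainty, tau=0.3):
    """Image-level quality gate (hypothetical threshold rule): samples whose
    mean pixel uncertainty is below tau train the primary predictor; the
    rest go to an auxiliary predictor instead of being discarded, so the
    shared backbone still learns from their representations."""
    return uncertainty.mean(dim=(1, 2, 3)) < tau  # (B,) boolean mask


# Toy usage with random tensors standing in for real data.
logits = torch.randn(2, 1, 64, 64)
annotation = (torch.rand(2, 1, 64, 64) > 0.5).float()
uncertainty = torch.rand(2, 1, 64, 64)
loss = weighted_seg_loss(logits, annotation, uncertainty)
primary_mask = is_high_quality(uncertainty)
```

In this reading, the QAM score only decides which predictor head a sample trains, so no sample is wasted; the gradient from noisy annotations is confined to the auxiliary head rather than the primary one.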
Source journal: Medical Image Analysis (Engineering & Technology, Biomedical Engineering)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles per year: 309
Review time: 6.6 months
Journal introduction: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.
Latest articles from this journal:
Artifact-suppressed 3D Retinal Microvascular Segmentation via Multi-scale Topology Regulation
An Efficient, Scalable, and Adaptable Plug-and-Play Temporal Attention Module for Motion-Guided Cardiac Segmentation with Sparse Temporal Labels
MICCAI STS 2024 Challenge: Semi-Supervised Instance-Level Tooth Segmentation in Panoramic X-ray and CBCT Images
Physics-informed graph neural networks for flow field estimation in carotid arteries
Diversity-Driven MG-MAE: Multi-Granularity Representation Learning for Non-Salient Object Segmentation