Uncertainty-Driven Edge Prompt Generation Network for Medical Image Segmentation

Junyong Zhao; Liang Sun; Dingwei Fan; Kun Wang; Haipeng Si; Huazhu Fu; Daoqiang Zhang

IEEE Transactions on Medical Imaging, vol. 44, no. 10, pp. 3950-3961. Published online 2025-01-27.
DOI: 10.1109/TMI.2025.3535478
URL: https://ieeexplore.ieee.org/document/10855574/
Abstract
The Segment Anything Model (SAM) is a foundation model for image segmentation that shows superior performance on natural image segmentation tasks. Several SAM-based medical image segmentation methods have been proposed. However, these methods depend heavily on prior manual guidance (points, boxes, and coarse-grained masks), which lacks adaptability and flexibility. Moreover, edge blurring, an inherent challenge in medical images, is critical because it directly affects segmentation quality. To address these challenges, we propose an uncertainty-driven edge prompt generation network for medical image segmentation, called UDEG-Net. Specifically, to better adapt to medical image segmentation, we fine-tune the encoder using Low-Rank Adaptation (LoRA) to enhance its learning capability and capture richer medical image features. Furthermore, to overcome the limitations of interactive prompts, we develop an auto edge prompt generator that produces edge prompt information and further enhances the structural representation. Finally, to focus on high-uncertainty edge areas, we introduce evidence-based uncertainty estimation and a progressive uncertainty-driven loss that drive the auto edge prompt generator to yield robust edge prompts and reliable segmentation results. Experimental results on three public datasets and one private dataset show that UDEG-Net outperforms state-of-the-art medical image segmentation methods.
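The abstract's "evidence-based uncertainty estimation" is not specified further here; a minimal sketch of the standard evidential (subjective-logic) formulation, in which a network predicts non-negative class evidence per pixel and per-pixel uncertainty falls as total Dirichlet strength grows, is shown below. This is an illustration of the generic technique, not the paper's implementation; the tensor layout `(K, H, W)` and function name are assumptions.

```python
import numpy as np

def evidential_uncertainty(evidence: np.ndarray) -> np.ndarray:
    """Per-pixel uncertainty from non-negative class evidence.

    Generic subjective-logic formulation (an assumption here, not taken
    from the paper): evidence e_k >= 0 for K classes gives Dirichlet
    parameters alpha_k = e_k + 1, strength S = sum_k alpha_k, and
    uncertainty mass u = K / S, which lies in (0, 1].

    evidence: array of shape (K, H, W), one evidence map per class.
    returns:  array of shape (H, W) with per-pixel uncertainty.
    """
    K = evidence.shape[0]
    alpha = evidence + 1.0        # Dirichlet parameters per class
    S = alpha.sum(axis=0)         # Dirichlet strength per pixel
    return K / S                  # high when total evidence is low
```

With zero evidence everywhere the uncertainty is exactly 1 (total ignorance), and it shrinks toward 0 as evidence accumulates; a high-uncertainty map like this could then be used to weight edge regions in a loss.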