{"title":"Semi-supervised Medical Image Segmentation with Low-Confidence Consistency and Class Separation","authors":"Zhimin Gao, Tianyou Yu","doi":"10.1109/ISBP57705.2023.10061306","DOIUrl":null,"url":null,"abstract":"Deep learning has achieved a great success in various fields, such as image classification, semantic segmentation and so on. But its excellent performance tends to rely on a large amount of data annotations that are hard to collect, especially in dense prediction tasks, like medical image segmentation. Semi-supervised learning (SSL), as a popular solution, relieves the burden of labeling. However, most of current semi-supervised medical image segmentation methods treat each pixel equally and underestimate the importance of indistinguishable and low-proportion pixels which are drowned in easily distinguishable but high-proportion pixels. We believe that these regions with less attention tend to contain crucial and indispensable information to obtain better segmentation performance. Therefore, we propose a simple but effective method for semi-supervised medical image segmentation task via enforcing low-confidence consistency and applying low-confidence class separation. Concretely, we separate low- and high-confidence pixels via the maximum probability values of model’s predictions and only low-confidence pixels are kept. For these remaining pixels, in the mean teacher framework, consistency is enforced for invariant predictions between student and teacher in the output level, and class separation is applied for promoting representations close to corresponding class prototypes in the feature level. We evaluated the proposed approach on two public datasets of cardiac, achieving a higher performance than the state-of-the-art semi-supervised methods on both datasets.","PeriodicalId":309634,"journal":{"name":"2023 International Conference on Intelligent Supercomputing and BioPharma (ISBP)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Intelligent Supercomputing and BioPharma (ISBP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISBP57705.2023.10061306","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep learning has achieved great success in various fields such as image classification and semantic segmentation. However, its strong performance tends to rely on large amounts of annotated data that are hard to collect, especially for dense prediction tasks such as medical image segmentation. Semi-supervised learning (SSL) is a popular solution that relieves the labeling burden. However, most current semi-supervised medical image segmentation methods treat every pixel equally and underestimate the importance of hard-to-distinguish, low-proportion pixels, which are drowned out by easily distinguishable, high-proportion pixels. We believe that these less-attended regions tend to contain crucial and indispensable information for better segmentation performance. Therefore, we propose a simple but effective method for semi-supervised medical image segmentation that enforces low-confidence consistency and applies low-confidence class separation. Concretely, we separate low- and high-confidence pixels by the maximum probability values of the model's predictions and keep only the low-confidence pixels. For these remaining pixels, within the mean teacher framework, consistency is enforced at the output level so that student and teacher produce invariant predictions, and class separation is applied at the feature level to pull representations toward their corresponding class prototypes. We evaluated the proposed approach on two public cardiac datasets, achieving higher performance than state-of-the-art semi-supervised methods on both.
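The abstract describes two losses computed only on low-confidence pixels: an output-level mean-teacher consistency term and a feature-level class-separation term based on class prototypes. Below is a minimal PyTorch-style sketch of how such losses could be assembled; the threshold `tau`, the MSE consistency form, the prototype-contrastive formulation, and all tensor names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F


def low_confidence_mask(teacher_logits, tau=0.8):
    """Mark pixels whose maximum predicted probability is below tau (assumed threshold)."""
    probs = F.softmax(teacher_logits, dim=1)      # (B, C, H, W)
    max_prob, _ = probs.max(dim=1)                # (B, H, W)
    return (max_prob < tau).float()               # 1 = low-confidence pixel


def low_confidence_consistency(student_logits, teacher_logits, mask):
    """Output-level consistency (here: MSE on softmax outputs), restricted to low-confidence pixels."""
    s = F.softmax(student_logits, dim=1)
    t = F.softmax(teacher_logits, dim=1).detach()  # teacher is not updated by this loss
    per_pixel = ((s - t) ** 2).mean(dim=1)         # (B, H, W)
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1.0)


def class_separation(features, pseudo_labels, prototypes, mask, temperature=0.1):
    """Feature-level class separation: pull each low-confidence feature toward its
    (pseudo-)class prototype and away from the other prototypes."""
    B, D, H, W = features.shape
    feats = F.normalize(features.permute(0, 2, 3, 1).reshape(-1, D), dim=1)  # (B*H*W, D)
    protos = F.normalize(prototypes, dim=1)                                   # (C, D)
    logits = feats @ protos.t() / temperature                                 # (B*H*W, C)
    loss = F.cross_entropy(logits, pseudo_labels.reshape(-1), reduction="none")
    m = mask.reshape(-1)
    return (loss * m).sum() / m.sum().clamp(min=1.0)
```

In a typical mean-teacher setup, `pseudo_labels` would be the argmax of the teacher's predictions and `prototypes` a running mean of per-class features; both choices are assumptions here, since the paper's exact prototype construction is not specified in the abstract.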