Adaptive Transfer Learning To Enhance Domain Transfer In Brain Tumor Segmentation
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434100
Yuan Liqiang, Marius Erdt, Wang Lipo
Supervised deep learning has greatly catalyzed the development of medical image processing. However, reliable predictions require a large amount of labeled data, which is hard to attain because of the expensive manual effort involved. Transfer learning is a potential remedy for this data insufficiency, but to date most transfer learning strategies for medical image segmentation either fine-tune only the last few layers of a network or treat the encoder or decoder as a whole. Improving transfer learning strategies is therefore of critical importance for supervised deep learning and, in turn, for medical image processing. In this work, we propose a novel strategy that adaptively fine-tunes the network based on policy values. Specifically, the encoder layers are fine-tuned to extract latent features, which are fed to a fully connected layer that generates policy values; the decoder is then adaptively fine-tuned according to these policy values. The proposed approach has been applied to segment human brain tumors in MRI and evaluated on 769 volumes from public databases. Domain transfer from T2 to T1, T1ce, and FLAIR shows state-of-the-art segmentation accuracy.
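The abstract does not give implementation details, but the flow it describes (pooled encoder features feed a fully connected layer that emits one policy value per decoder block, and each block's fine-tuning is gated by its policy value) could look roughly like the PyTorch sketch below. The class name, block layout, and gradient-gating rule are all illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of policy-gated decoder fine-tuning, assuming the
# abstract's description; shapes and the gating rule are illustrative only.
import torch
import torch.nn as nn

class PolicyGatedDecoder(nn.Module):
    def __init__(self, feat_ch=256, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
            for _ in range(num_blocks)
        )
        # FC head mapping pooled encoder features to one policy value per block.
        self.policy_head = nn.Linear(feat_ch, num_blocks)

    def forward(self, latent):
        # Policy values in (0, 1), one per decoder block.
        pooled = latent.mean(dim=(2, 3))
        policy = torch.sigmoid(self.policy_head(pooled)).mean(dim=0)
        x = latent
        for k, block in enumerate(self.blocks):
            out = block(x)
            # Gate the fine-tuning signal: gradients reaching this block are
            # scaled by policy[k]; with policy near 0 the block stays frozen.
            x = policy[k] * out + (1 - policy[k]) * out.detach()
        return x
```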
{"title":"Adaptive Transfer Learning To Enhance Domain Transfer In Brain Tumor Segmentation","authors":"Yuan Liqiang, Marius Erdt, Wang Lipo","doi":"10.1109/ISBI48211.2021.9434100","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434100","url":null,"abstract":"Supervised deep learning has greatly catalyzed the development of medical image processing. However, reliable predictions require a large amount of labeled data, which is hard to attain due to the required expensive manual efforts. Transfer learning serves as a potential solution for mitigating the issue of data insufficiency. But up till now, most transfer learning strategies for medical image segmentation either fine-tune only the last few layers of a network or focus on the decoder or encoder parts as a whole. Thus, improving transfer learning strategies is of critical importance for developing supervised deep learning, further benefits medical image processing. In this work, we propose a novel strategy that adaptively fine-tunes the network based on policy value. Specifically, the encoder layers are fine-tuned to extract latent feature followed by a fully connected layer that generates policy value. The decoder is then adaptively fine-tuned according to these policy value. The proposed approach has been applied to segment human brain tumors in MRI. The evaluation has been performed using 769 volumes from public databases. Domain transfer from T2 to T1, T1ce, and Flair shows state-of-the-art segmentation accuracy.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127749478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Style Normalization In Histology With Federated Learning
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434078
Jing Ke, Yiqing Shen, Yizhou Lu
The global cancer burden is on the rise, and Artificial Intelligence (AI) has become increasingly crucial for more objective and efficient diagnosis in digital pathology. Current AI-assisted histopathology analysis methods must address two issues. First, color variations caused by the use of different stains need to be tackled, for example with stain style transfer techniques. Second, datasets from individual clinical institutions are not only heterogeneous but also subject to privacy regulations, and thus call for robust, data-private collaborative training. In this paper, to address the color heterogeneity problem, we propose a novel generative adversarial network with one orchestrating generator and multiple distributed discriminators for stain style transfer. We also incorporate Federated Learning (FL) to preserve data privacy and security across multiple data centers. We use a large cohort of histopathology datasets as a case study.
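As a rough illustration of the setup described above (one orchestrating generator, one local discriminator per data center, weights rather than images exchanged), here is a hedged FedAvg-style sketch. The `adv_loss` callable, the update schedule, and the aggregation rule are assumptions, not the paper's protocol.

```python
# Hypothetical one communication round: each center trains its private
# discriminator locally and only generator weights are averaged.
import copy
import torch

def federated_gan_round(generator, discriminators, loaders, adv_loss, lr=1e-4):
    """adv_loss(logits, is_real) is an assumed user-supplied GAN criterion."""
    local_states = []
    for disc, loader in zip(discriminators, loaders):
        g = copy.deepcopy(generator)          # local copy of the shared generator
        opt_g = torch.optim.Adam(g.parameters(), lr=lr)
        opt_d = torch.optim.Adam(disc.parameters(), lr=lr)
        for src, ref in loader:               # private images never leave the site
            fake = g(src)
            # Local discriminator step: reference-style real vs. transferred fake.
            d_loss = adv_loss(disc(ref), True) + adv_loss(disc(fake.detach()), False)
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
            # Local generator step against this center's discriminator.
            g_loss = adv_loss(disc(fake), True)
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        local_states.append(g.state_dict())
    # FedAvg: average the locally updated generator copies.
    avg = {k: (sum(s[k].float() for s in local_states) / len(local_states))
              .to(local_states[0][k].dtype)
           for k in local_states[0]}
    generator.load_state_dict(avg)
```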
{"title":"Style Normalization In Histology With Federated Learning","authors":"Jing Ke, Yiqing Shen, Yizhou Lu","doi":"10.1109/ISBI48211.2021.9434078","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434078","url":null,"abstract":"The global cancer burden is on the rise, and Artificial Intelligence (AI) has become increasingly crucial to achieve more objective and efficient diagnosis in digital pathology. Current AI-assisted histopathology analysis methods need to address the following two issues. First, the color variations due to use of different stains need to be tackled such as with stain style transfer technique. Second, in parallel with heterogeneity, datasets from individual clinical institutions are characterized by privacy regulations, and thus need to be addressed such as with robust data-private collaborative training. In this paper, to address the color heterogeneity problem, we propose a novel generative adversarial network with one orchestrating generator and multiple distributed discriminators for stain style transfer. We also incorporate Federated Learning (FL) to further preserve data privacy and security from multiple data centers. We use a large cohort of histopathology datasets as a case study.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126365617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AdaSAN: Adaptive Cosine Similarity Self-Attention Network For Gastrointestinal Endoscopy Image Classification
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434084
Qian Zhao, Wenming Yang, Q. Liao
Wireless capsule endoscopy plays an important role in the examination of gastrointestinal diseases. However, the large number of images produced by endoscopy makes examination a time-consuming and labor-intensive task for doctors. Clinically, the detection rate of small ulcers and superficial lesions is low, and if these minor lesions are not screened and treated in a timely manner, they are likely to develop into cancer. It is therefore of great significance to develop computer-aided diagnostic algorithms that help doctors analyze gastrointestinal images. In this paper, we propose AdaSAN, an adaptive cosine similarity network with a self-attention module, for automatic classification of gastrointestinal wireless capsule endoscopy images. Experimental results on a clinical gastrointestinal image analysis dataset show that the proposed method outperforms state-of-the-art algorithms in classifying inflammatory lesions, vascular lesions, polyps, and normal images, with an average accuracy of 95.7%.
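One plausible reading of "adaptive cosine similarity self-attention" is an attention block whose affinities are cosine similarities of normalized queries and keys under a learnable scale; the sketch below shows that reading in PyTorch and is our interpretation, not the authors' architecture.

```python
# Hedged sketch: self-attention with cosine-similarity affinities and a
# learnable (adaptive) temperature tau; names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineSelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.tau = nn.Parameter(torch.tensor(10.0))  # learnable scale

    def forward(self, x):                      # x: (batch, tokens, dim)
        q = F.normalize(self.q(x), dim=-1)     # unit-norm queries
        k = F.normalize(self.k(x), dim=-1)     # unit-norm keys
        attn = torch.softmax(self.tau * q @ k.transpose(1, 2), dim=-1)
        return x + attn @ self.v(x)            # residual connection
```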
{"title":"Adasan: Adaptive Cosine Similarity Self-Attention Network For Gastrointestinal Endoscopy Image Classification","authors":"Qian Zhao, Wenming Yang, Q. Liao","doi":"10.1109/ISBI48211.2021.9434084","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434084","url":null,"abstract":"Wireless capsule endoscopy plays an important role in the examination of gastrointestinal diseases. However, the large number of medical images produced by endoscopy makes it a time-consuming and labor-intensive work for doctors to examine. Clinically, the detection rate of small ulcers and superficial lesions is low. If these minor lesions are not screened and treated timely, they are likely to develop into cancer. Therefore, it is of great significance to develop computer-aided diagnostic algorithms to help doctors perform gastrointestinal image analysis. In this paper, we propose an adaptive cosine similarity network with self-attention module — AdaSAN, for automatic classification of gastrointestinal wireless capsule endoscope images. The experimental results on the clinical gastrointestinal image analysis dataset illustrate that our proposed method outperforms the state-of-the-art algorithms in the classification of inflammatory lesions, vascular lesions, polyps and normal images, with an average accuracy rate of 95.7%.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128071834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DeepACC: Automated Chromosome Classification Based on Metaphase Images Using a Deep Learning Framework Fused with Prior Knowledge
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433943
Li Xiao, Chunlong Luo
Chromosome classification is an important but difficult and tedious task in karyotyping. Previous methods classify only manually segmented single chromosomes, which is far from clinical practice. In this work, we propose a detection-based method, DeepACC, that locates and finely classifies chromosomes simultaneously on whole metaphase images. We first introduce the Additive Angular Margin Loss to enhance the discriminative power of the model. To alleviate batch effects, we transform the decision boundary of each class case by case through a Siamese network, which makes full use of the prior knowledge that chromosomes usually appear in pairs. Furthermore, we take the clinical seven-group criterion as prior knowledge and design an additional Group Inner-Adjacency Loss to further reduce inter-class similarities. A private metaphase image dataset from a clinical laboratory was collected and labeled to evaluate performance. Results show that the new design brings encouraging performance gains over state-of-the-art baseline models.
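The Additive Angular Margin Loss cited here is commonly known as the ArcFace loss (Deng et al., 2019): logits are cosines of angles between embeddings and class weight vectors, and a margin is added to the ground-truth angle. A standard sketch follows; the scale `s` and margin `m` values are conventional defaults, not the paper's settings.

```python
# Standard ArcFace-style additive angular margin loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginLoss(nn.Module):
    def __init__(self, feat_dim, num_classes, s=30.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m                   # scale and angular margin

    def forward(self, feats, labels):
        # cos(theta) via cosine similarity of normalized features and weights.
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))
        cos = cos.clamp(-1 + 1e-7, 1 - 1e-7)    # keep acos numerically safe
        theta = torch.acos(cos)
        # Add the margin m only to the ground-truth class angle.
        one_hot = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(one_hot, torch.cos(theta + self.m), cos)
        return F.cross_entropy(self.s * logits, labels)
```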
{"title":"DEEPACC:Automate Chromosome Classification Based On Metaphase Images Using Deep Learning Framework Fused With Priori Knowledge","authors":"Li Xiao, Chunlong Luo","doi":"10.1109/ISBI48211.2021.9433943","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433943","url":null,"abstract":"Chromosome classification is an important but difficult and tedious task in karyotyping. Previous methods only classify manually segmented single chromosome, which is far from clinical practice. In this work, we propose a detection based method, DeepACC, to locate and fine classify chromosomes simultaneously based on the whole metaphase image. We firstly introduce the Additive Angular Margin Loss to enhance the discriminative power of the model. To alleviate batch effects, we transform decision boundary of each class case-by-case through a siamese network which make full use of priori knowledges that chromosomes usually appear in pairs. Furthermore, we take the clinically seven group criteria as a prior-knowledge and design an additional Group Inner-Adjacency Loss to further reduce inter-class similarities. A private metaphase image dataset from clinical laboratory are collected and labelled to evaluate the performance. Results show that the new design brings encouraging performance gains comparing to the state-of-the-art baseline models.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"113 1-3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131448906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SSIQA: Multi-Task Learning For No-Reference CT Image Quality Assessment With Self-Supervised Noise Level Prediction
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434044
A. Imran, D. Pal, B. Patel, Adam S. Wang
Reducing CT radiation dose is important because of its potential effects on patients, but lowering the dose degrades reconstructed image quality, which in turn compromises diagnostic and image-based analysis performance. Given the health risks to patients, high-quality reference images cannot be easily obtained, which makes quality assessment challenging; automatic no-reference image quality assessment is therefore desirable. Leveraging an innovative self-supervised regularization in a convolutional neural network, we propose a novel, fully automated, no-reference CT image quality quantification method named self-supervised image quality assessment (SSIQA). Extensive in-domain (abdominal CT) and cross-domain (chest CT) evaluations demonstrate that SSIQA is accurate in quantifying CT image quality, generalizes across scan types, and is consistent with established metrics across different relative dose levels.
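A minimal sketch of the multi-task idea in the title, assuming a shared encoder with one head regressing a quality score and a second head predicting the (known) level of noise synthetically injected into the input, which supplies free labels. The architecture and the noise model below are our assumptions, not the paper's design.

```python
# Hedged sketch: quality regression plus self-supervised noise-level head.
import torch
import torch.nn as nn

class SSIQANet(nn.Module):
    def __init__(self, num_noise_levels=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.quality_head = nn.Linear(64, 1)                # no-reference score
        self.noise_head = nn.Linear(64, num_noise_levels)   # self-supervised task

    def forward(self, x):
        z = self.encoder(x)
        return self.quality_head(z), self.noise_head(z)

def self_supervised_batch(images, num_levels=5):
    """Inject a random noise level into each CT slice; the level index is the
    free label for the auxiliary head, acting as the regularizer."""
    levels = torch.randint(0, num_levels, (images.size(0),))
    sigma = 0.02 * (levels + 1).float().view(-1, 1, 1, 1)   # assumed noise model
    return images + sigma * torch.randn_like(images), levels
```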
{"title":"Ssiqa: Multi-Task Learning For Non-Reference Ct Image Quality Assessment With Self-Supervised Noise Level Prediction","authors":"A. Imran, D. Pal, B. Patel, Adam S. Wang","doi":"10.1109/ISBI48211.2021.9434044","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434044","url":null,"abstract":"Reduction of CT radiation dose is important due to the potential effects on patients. But lowering dose incurs degradation in the reconstructed image quality, furthering compromise in the diagnostic and image-based analyses performance. Considering the patient health risks, high quality reference images cannot be easily obtained, making the assessment challenging. Therefore, automatic no-reference image quality assessment is desirable. Leveraging an innovative self-supervised regularization in a convolutional neural network, we propose a novel, fully automated, no-reference CT image quantification method namely self-supervised image quality assessment (SSIQA). Extensive experimentation via in-domain (abdomen CT) and cross-domain (chest CT) evaluations demonstrates SSIQA is accurate in quantifying CT image quality, generalized across the scan types, and consistent with the established metrics and different relative dose levels.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132217244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beam Stack Search-Based Reconstruction Of Unhealthy Coronary Artery Wall Segmentations In CCTA-CPR Scans
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434171
Antonio Tejero-de-Pablos, Hiroaki Yamane, Y. Kurose, Junichi Iho, Youji Tokunaga, M. Horie, Keisuke Nishizawa, Yusaku Hayashi, Y. Koyama, T. Harada
Estimating the coronary artery wall boundaries in CCTA scans is a costly but essential task in the diagnosis of cardiac diseases. To automate this task, deep learning-based image segmentation methods are commonly used. However, for the coronary artery wall, even state-of-the-art segmentation methods fail to produce an accurate boundary in the presence of plaques and bifurcations. Post-processing reconstruction methods have been proposed to refine segmentation results further, but when general-purpose reconstruction is applied to artery wall segmentations, it fails to reproduce the wide variety of boundary shapes. In this paper, we propose a novel method for reconstructing coronary artery wall segmentations, the Tube Beam Stack Search (TBSS). By leveraging the voxel shape of adjacent slices in a CPR volume, TBSS finds the most plausible path of the artery wall. Like the original Beam Stack Search, TBSS navigates the voxel probabilities output by the segmentation method, reconstructing the inner and outer artery walls. Finally, skeletonization is applied to the TBSS reconstructions to eliminate noise and produce more refined segmentations. Since our method does not require learning a model, the lack of annotated data is not a limitation. We evaluated our method on a dataset of coronary CT angiography with curved planar reconstruction (CCTA-CPR) covering 92 arteries. Experimental results show that our method outperforms the state of the art in reconstruction.
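To make the beam-search idea concrete: keep the top-k partial wall paths while walking slice by slice through the probability volume output by the segmentation network. The scoring and neighborhood rules in this sketch are simplified stand-ins for the paper's tube logic, not the actual TBSS algorithm.

```python
# Hedged sketch of beam search over per-slice boundary probabilities.
import numpy as np

def beam_stack_path(prob, beam_width=8, max_step=2):
    """prob: (num_slices, width) wall-boundary probabilities per CPR slice.
    Returns the boundary column chosen in each slice."""
    num_slices, width = prob.shape
    # Each beam entry: (cumulative log-score, path of column indices so far).
    beams = [(np.log(prob[0, c] + 1e-9), [c]) for c in range(width)]
    beams.sort(key=lambda t: t[0], reverse=True)
    beams = beams[:beam_width]
    for s in range(1, num_slices):
        candidates = []
        for score, path in beams:
            # Adjacent slices share voxel shape, so only nearby columns are
            # plausible continuations of the wall.
            for c in range(max(0, path[-1] - max_step),
                           min(width, path[-1] + max_step + 1)):
                candidates.append((score + np.log(prob[s, c] + 1e-9), path + [c]))
        candidates.sort(key=lambda t: t[0], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][1]
```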
{"title":"Beam Stack Search-Based Reconstruction Of Unhealthy Coronary Artery Wall Segmentations In CCTA-CPR Scans","authors":"Antonio Tejero-de-Pablos, Hiroaki Yamane, Y. Kurose, Junichi Iho, Youji Tokunaga, M. Horie, Keisuke Nishizawa, Yusaku Hayashi, Y. Koyama, T. Harada","doi":"10.1109/ISBI48211.2021.9434171","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434171","url":null,"abstract":"The estimation of the coronary artery wall boundaries in CCTA scans is a costly but essential task in the diagnosis of cardiac diseases. To automatize this task, deep learning-based image segmentation methods are commonly used. However, in the case of coronary artery wall, even state-of-the-art segmentation methods fail to produce an accurate boundary in the presence of plaques and bifurcations. Post-processing reconstruction methods have been proposed to further refine segmentation results, but when applying general-purpose reconstruction to artery wall segmentations, they fail to reproduce the wide variety of boundary shapes. In this paper, we propose a novel method for reconstructing coronary artery wall segmentations, the Tube Beam Stack Search (TBSS). By leveraging the voxel shape of adjacent slices in a CPR volume, our TBSS is capable of finding the most plausible path of the artery wall. Similarly to the original Beam Stack Search, TBSS navigates along the voxel probabilities output by the segmentation method, reconstructing the inner and outer artery walls. Finally, skeletonization is applied on the TBSS reconstructions to eliminate noise and produce more refined segmentations. Also, since our method does not require learning a model, the lack of annotated data is not a limitation. We evaluated our method on a dataset of coronary CT angiography with curved planar reconstruction (CCTA-CPR) of 92 arteries. Experimental results show that our method outperforms the state-of-the-art work in reconstruction.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132467167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Learned Representation For Multi-Variable Ultrasonic Lesion Quantification
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433783
SeokHwan Oh, Myeong-Gee Kim, Youngmin Kim, Hyeon-Min Bae
In this paper, a single-probe ultrasonic imaging system that captures multi-variable quantitative profiles is presented. Since pathological changes cause variations in biomechanical properties, quantitative imaging has great potential for lesion characterization. The proposed system simultaneously extracts four clinically informative quantitative biomarkers, namely the speed of sound, attenuation, effective scatter density, and effective scatter radius, in real time using a single scalable neural network. The performance of the proposed system was evaluated through numerical simulations as well as phantom and ex vivo measurements. The simulation results demonstrate that the proposed SQI-Net reconstructs the four quantitative images with a PSNR of 19.52 dB and an SSIM of 0.8251, while reducing parameters by 75% compared with four parallel networks, each dedicated to a single parameter. In the phantom and ex vivo experiments, SQI-Net classified cysts and benign- and malignant-like inclusions through a comprehensive analysis of the four reconstructed images.
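The 75% parameter saving over four parallel networks suggests a shared backbone with one lightweight head per quantitative map. The sketch below illustrates that sharing pattern under our assumptions; the actual SQI-Net topology is not given in the abstract.

```python
# Hedged sketch: shared encoder, one cheap head per quantitative biomarker.
import torch.nn as nn

class SQINetSketch(nn.Module):
    def __init__(self, maps=("speed_of_sound", "attenuation",
                             "scatter_density", "scatter_radius")):
        super().__init__()
        self.backbone = nn.Sequential(           # shared across all four maps
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.heads = nn.ModuleDict(
            {name: nn.Conv2d(64, 1, 1) for name in maps})  # per-map 1x1 head

    def forward(self, frame):
        feats = self.backbone(frame)
        return {name: head(feats) for name, head in self.heads.items()}
```

Because the convolutional trunk dominates the parameter count, sharing it across the four tasks is what yields the bulk of the reduction relative to four independent networks.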
{"title":"A Learned Representation For Multi-Variable Ultrasonic Lesion Quantification","authors":"SeokHwan Oh, Myeong-Gee Kim, Youngmin Kim, Hyeon-Min Bae","doi":"10.1109/ISBI48211.2021.9433783","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433783","url":null,"abstract":"In this paper, a single-probe ultrasonic imaging system that captures multi-variable quantitative profiles is presented. As pathological changes cause biomechanical property variation, quantitative imaging has great potential for lesion characterization. The proposed system simultaneously extracts four clinically informative quantitative biomarkers, such as the speed of sound, attenuation, effective scatter density, and effective scatter radius, in real-time using a single scalable neural network. The performance of the proposed system was evaluated through numerical simulations, and phantom and ex vivo measurements. The simulation results demonstrated that the proposed SQI-Net reconstructs four quantitative images with PSNR and SSIM of 19.52 dB and 0.8251, respectively, while achieving a parameter reduction of 75% compared to the design of four parallel networks, each of which was dedicated to a single parameter. In the phantom and ex vivo experiments, the SQI-Net demonstrated the classification of cyst, and benign- and malignant-like inclusions through a comprehensive analysis of four reconstructed images.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134438125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pneumoperitoneum Detection In Chest X-Ray By A Deep Learning Ensemble With Model Explainability
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434122
M. V. S. D. Cea, D. Gruen, David Richmond
Pneumoperitoneum (free air in the peritoneal cavity) is a rare condition that can be life threatening and require emergency surgery. It can be detected in chest X-rays, but the detection has challenges, such as small amounts of air that a radiologist may miss, or pseudo-pneumoperitoneum (air in the abdomen that can look like pneumoperitoneum). In this work, we propose an ensemble of deep learning models trained on different subsets of the data to boost the classification and generalization performance, together with hard-negative mining to mitigate the effect of pseudo-pneumoperitoneum. We demonstrate superior performance when the model ensemble is used, as well as good localization of the finding with multiple model explainability techniques.
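A minimal sketch of the two ingredients named above: probability averaging across the ensemble, and mining the negatives (including pseudo-pneumoperitoneum) that the current ensemble calls positive so they can be oversampled in the next training round. The threshold and weighting scheme are illustrative assumptions.

```python
# Hedged sketch: ensemble voting and hard-negative mining.
import torch

def ensemble_predict(models, x):
    """Average per-model probabilities for one chest X-ray batch."""
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(m(x)) for m in models])
    return probs.mean(dim=0)

def mine_hard_negatives(models, loader, threshold=0.5):
    """Collect negatives the ensemble misclassifies as positive; these are
    the hard cases (often pseudo-pneumoperitoneum) to re-emphasize."""
    hard = []
    for x, y in loader:                       # y: 0 = negative, 1 = positive
        pred = ensemble_predict(models, x).squeeze(-1)
        hard.extend(x[(y == 0) & (pred > threshold)])
    return hard
```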
{"title":"Pneumoperitoneum Detection In Chest X-Ray By A Deep Learning Ensemble With Model Explainability","authors":"M. V. S. D. Cea, D. Gruen, David Richmond","doi":"10.1109/ISBI48211.2021.9434122","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434122","url":null,"abstract":"Pneumoperitoneum (free air in the peritoneal cavity) is a rare condition that can be life threatening and require emergency surgery. It can be detected in chest X-ray but there are some challenges associated to this detection, such as small amounts of air that may be missed by a radiologist, or pseudo-pneumoperitoneum (air in the abdomen that may look like pneumoperitoneum). In this work, we propose using an ensemble of deep learning models trained on different subsets of data to boost the classification and generalization performance of the model as well as hard-negative mining to mitigate the effect of pseudo-pneumoperitoneum. We demonstrate superior performance when the model ensemble is utilized as well as good localization of the finding with multiple model explainability techniques.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130716738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncertainty-Guided Robust Training For Medical Image Segmentation
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433954
Yan Li, Xiaoyi Chen, Li Quan, N. Zhang
In medical image segmentation tasks, some foreground objects are more ambiguous than other areas because of their confusing appearance. It is critical to find a proper way to measure this per-pixel ambiguity and use it for robust model training. To this end, we design a Bayesian uncertainty estimation layer and propose uncertainty-guided training for standard convolutional segmentation models. In particular, the proposed layer provides a confidence for each pixel's prediction independently, and this confidence, combined with prediction correctness, yields per-pixel rescaling weights for the training loss. Through this mechanism, regions with different degrees of ambiguity are given different learning importance. We validate our proposal by comparing it with other loss-rescaling approaches on medical image datasets. The results consistently show that uncertainty-guided training brings a significant improvement in lesion segmentation accuracy.
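One common way to realize such per-pixel confidence weighting is the heteroscedastic loss of Kendall and Gal (2017); the sketch below follows that recipe and omits the abstract's correctness term, so it approximates the proposed layer rather than implementing it.

```python
# Hedged sketch: per-pixel uncertainty-weighted cross-entropy.
import torch
import torch.nn.functional as F

def uncertainty_guided_loss(logits, log_var, target):
    """logits: (B, C, H, W) class scores; log_var: (B, 1, H, W) predicted
    log-variance from the uncertainty layer; target: (B, H, W) labels."""
    ce = F.cross_entropy(logits, target, reduction="none")   # (B, H, W)
    precision = torch.exp(-log_var).squeeze(1)               # per-pixel confidence
    # Confident pixels contribute more; the additive log-variance term keeps
    # the network from driving all confidences to zero.
    return (precision * ce + log_var.squeeze(1)).mean()
```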
{"title":"Uncertainty-Guided Robust Training For Medical Image Segmentation","authors":"Yan Li, Xiaoyi Chen, Li Quan, N. Zhang","doi":"10.1109/ISBI48211.2021.9433954","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433954","url":null,"abstract":"For medical image segmentation tasks, some of foreground objects have more ambiguities than other areas because of confusing appearances. It is critical to seek a proper method to measure such ambiguity of each pixel and use it for robust model training. To this end, we design a Bayesian uncertainty estimate layer, and propose an uncertainty-guided training for standard convolutional segmentation models. In particular, the proposed Bayesian uncertainty estimate layer provides the confidence on each pixel’s prediction independently, and works with prediction correctness to obtain the rescaling weights of training loss for each pixel. Through this mechanism, the learning importance of the regions with different ambiguities can be distinguished. We validate our proposal by comparing it with other loss rescaling approaches on medical image datasets. The results consistently show that the uncertainty-guided training brings significant improvement on lesion segmentation accuracy.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132742935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generative Adversarial Semi-Supervised Network For Medical Image Segmentation
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434135
Chuchen Li, Huafeng Liu
Because of ethical constraints and the limited number of professional annotators, pixel-wise annotations for medical images are hard to obtain. How to exploit limited annotations while maintaining performance is thus an important yet challenging problem. In this paper, we propose the Generative Adversarial Semi-supervised Network (GASNet) to tackle this problem in a self-learning manner. Only limited labels are available during training, and unlabeled images are exploited as auxiliary information to boost segmentation performance. We use the segmentation network as a generator that produces pseudo labels, whose reliability is judged by an uncertainty discriminator. A feature mapping loss enforces consistency between the statistical distributions of the generated labels and the real ones to further ensure credibility. On a right ventricle dataset, we obtain Dice coefficients of 0.8348 to 0.9131 with 1/32 to 1/2 of the annotations, respectively, up to 28.6 points higher than the corresponding fully supervised baseline.
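The feature mapping loss described above appears to play the role of the standard feature matching loss (Salimans et al., 2016), where statistics of intermediate discriminator features on generated pseudo labels are matched to those on real labels. A minimal sketch under that assumption, matching first moments of one (assumed) discriminator layer:

```python
# Hedged sketch: first-moment feature matching between real and pseudo labels.
import torch

def feature_matching_loss(feats_real, feats_fake):
    """feats_real / feats_fake: intermediate discriminator features computed
    on real labels and on generated pseudo labels, shape (N, ...)."""
    mu_real = feats_real.mean(dim=0)          # batch statistics on real labels
    mu_fake = feats_fake.mean(dim=0)          # batch statistics on pseudo labels
    return torch.mean((mu_real - mu_fake) ** 2)
```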
{"title":"Generative Adversarial Semi-Supervised Network For Medical Image Segmentation","authors":"Chuchen Li, Huafeng Liu","doi":"10.1109/ISBI48211.2021.9434135","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434135","url":null,"abstract":"Due to the limitation of ethics and the number of professional annotators, pixel-wise annotations for medical images are hard to obtain. Thus, how to exploit limited annotations and maintain the performance is an important yet challenging problem. In this paper, we propose Generative Adversarial Semi-supervised Network(GASNet) to tackle this problem in a self-learning manner. Only limited labels are available during the training procedure and the unlabeled images are exploited as auxiliary information to boost segmentation performance. We modulate segmentation network as a generator to produce pseudo labels whose reliability will be judged by an uncertainty discriminator. Feature mapping loss will obtain statistic distribution consistency between the generated labels and the real ones to further ensure the credibility. We obtain 0.8348 to 0.9131 dice coefficient with 1/32 to 1/2 proportion of annotations respectively on right ventricle dataset. Improvements are up to 28.6 points higher than the corresponding fully supervised baseline.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"28 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132747008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}