Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434128
D. Mandache, E. B. Á. L. Guillaume, J. Olivo-Marin, V. Meas-Yedid
We propose a method to fully exploit the dynamic signal produced by a recently developed non-invasive imaging modality, Dynamic Cell Imaging based on Full-Field Optical Coherence Tomography, towards fast extemporaneous tissue assessment. Non-negative matrix factorization is used in an interpretable and quantifiable fashion to extract the signals coming from different structures of breast tissue in order to characterize cancerous tissue.
Title: Blind Source Separation In Dynamic Cell Imaging Using Non-Negative Matrix Factorization Applied To Breast Cancer Biopsies (2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI))
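As a rough sketch of the core technique (synthetic data, not the authors' pipeline): arranging the dynamic signal as a non-negative matrix with one row per pixel and one column per time frame, NMF separates it into a few temporal source signatures and their per-pixel abundance maps.

```python
import numpy as np

# Hypothetical setup: X has one row per pixel, one column per time frame.
rng = np.random.default_rng(0)
n_pixels, n_frames, k = 100, 50, 3
X = rng.random((n_pixels, k)) @ rng.random((k, n_frames))  # exact rank-k, non-negative

# Lee-Seung multiplicative updates for NMF (Frobenius objective): X ~= W @ H.
# Rows of H are candidate source dynamics; columns of W are abundance maps.
W = rng.random((n_pixels, k))
H = rng.random((k, n_frames))
eps = 1e-12
for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(rel_err < 0.05)
```

Because the synthetic data here are exactly low-rank, the factorization fits almost perfectly; on real dynamic cell imaging data the residual carries noise and unmodeled structure.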
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433927
John P. Bryan, B. Cleary, Samouil L. Farhi, Yonina C. Eldar
Imaging transcriptomics (IT) techniques enable characterization of gene expression in cells in their native context by imaging barcoded mRNA probes with single-molecule resolution. However, the need to acquire many rounds of high-magnification imaging data limits the throughput and impact of existing methods. We propose an algorithm for decoding lower-magnification IT data than that used in standard experimental workflows. Our approach, the Joint Sparse method for Imaging Transcriptomics (JSIT), incorporates codebook knowledge and sparsity assumptions into an optimization problem. Using simulated low-magnification data, we demonstrate that JSIT enables improved throughput and recovery performance over standard decoding methods.
Title: Sparse Recovery Of Imaging Transcriptomics Data
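A generic stand-in for the kind of optimization involved (not JSIT itself, whose formulation is not given here): with a known dictionary playing the role of the codebook and a sparsity assumption on the active barcodes, an l1-regularized least-squares solve recovers the sparse code from a low-dimensional measurement.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(D, y, lam=0.01, n_iter=500):
    # FISTA for the lasso: min_x 0.5*||D x - y||^2 + lam*||x||_1
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - (D.T @ (D @ z - y)) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((40, 120))             # stand-in "codebook" dictionary
x_true = np.zeros(120)
x_true[[5, 40, 99]] = [1.0, -0.7, 0.5]         # a few active entries
y = D @ x_true                                 # noiseless low-dimensional measurement
x_hat = fista(D, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))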
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434102
Chanmin Park, Kanggeun Lee, Su Yeon Kim, Fatma Sema Canbakis Cecen, Seok-Kyu Kwon, Won-Ki Jeong
Recent advances in machine learning have shown significant success in biomedical image segmentation. Most existing high-quality segmentation algorithms rely on supervised learning with full training labels. However, such methods are sensitive to label quality, and generating accurate labels in biomedical data is a labor- and time-intensive task. In this paper, we propose a novel neuron segmentation method that uses only incomplete and noisy labels. The proposed method employs a noise-tolerant adaptive loss that handles partially annotated labels. Moreover, the proposed reconstruction loss leverages prior knowledge of neuronal cell structures to reduce false segmentation near noisy labels. The proposed loss function outperforms several widely used state-of-the-art noise-tolerant losses, such as the reverse cross-entropy, normalized cross-entropy, and noise-robust Dice losses.
Title: Neuron Segmentation using Incomplete and Noisy Labels via Adaptive Learning with Structure Priors
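For context on the baselines mentioned, a noise-robust Dice-style loss can be sketched as follows (a common formulation from the literature, used here only as an illustration, not the paper's adaptive loss): the numerator uses |p - g|^gamma with gamma between 1 and 2, which behaves more like a mean-absolute error near label mismatches and is therefore less dominated by a few flipped labels.

```python
import numpy as np

def noise_robust_dice_loss(pred, target, gamma=1.5, eps=1e-8):
    # Noise-robust Dice-style loss: sum |p - g|^gamma / (sum p^2 + sum g^2).
    # gamma in (1, 2) trades off between MAE-like robustness and Dice-like shape.
    num = np.sum(np.abs(pred - target) ** gamma)
    den = np.sum(pred ** 2) + np.sum(target ** 2) + eps
    return num / den

pred  = np.array([0.9, 0.8, 0.1, 0.2])   # network probabilities for 4 pixels
clean = np.array([1.0, 1.0, 0.0, 0.0])   # correct annotation
noisy = np.array([1.0, 0.0, 0.0, 1.0])   # two labels flipped by annotation noise
print(noise_robust_dice_loss(pred, clean) < noise_robust_dice_loss(pred, noisy))  # True
```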
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434016
Jingyang Zhang, Ran Gu, Guotai Wang, Hongzhi Xie, Lixu Gu
The segmentation of coronary arteries by convolutional neural networks is promising yet requires a large amount of labor-intensive manual annotation. Transferring knowledge from retinal vessels in widely available, publicly labeled fundus images (FIs) has the potential to reduce the annotation requirement for coronary artery segmentation in X-ray angiograms (XAs), owing to their common tubular structures. However, this is challenged by the cross-anatomy domain shift caused by the intrinsically different vesselness characteristics of different anatomical regions, imaged under different protocols. To solve this problem, we propose a Semi-Supervised Cross-Anatomy Domain Adaptation (SS-CADA) method which requires only limited annotations for coronary arteries in XAs. With supervision from a small number of labeled XAs and publicly available labeled FIs, we propose a vesselness-specific batch normalization (VSBN) to normalize their feature maps individually, accounting for their different cross-anatomy vesselness characteristics. In addition, to further improve annotation efficiency, we employ a self-ensembling mean teacher (SE-MT) to exploit abundant unlabeled XAs by imposing a prediction consistency constraint. Extensive experiments show that SS-CADA is able to overcome the challenging cross-anatomy domain shift, achieving accurate segmentation of coronary arteries given only a small number of labeled XAs.
Title: SS-CADA: A Semi-Supervised Cross-Anatomy Domain Adaptation for Coronary Artery Segmentation
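The idea behind domain-specific normalization can be sketched minimally (this mirrors the general concept of per-domain batch normalization statistics; the class name and training-mode simplification are assumptions, not the paper's VSBN implementation): each domain keeps its own normalization statistics while the rest of the network is shared.

```python
import numpy as np

class DomainSpecificBN:
    # Minimal sketch: one set of normalization statistics per domain
    # (e.g. "XA" vs "FI"); affine parameters and momentum omitted for brevity.
    def __init__(self, n_features, domains=("XA", "FI"), eps=1e-5):
        self.eps = eps
        self.stats = {d: {"mean": np.zeros(n_features),
                          "var": np.ones(n_features)} for d in domains}

    def __call__(self, x, domain):
        mean, var = x.mean(axis=0), x.var(axis=0)
        self.stats[domain] = {"mean": mean, "var": var}  # training-mode update
        return (x - mean) / np.sqrt(var + self.eps)

bn = DomainSpecificBN(4)
xa_batch = np.random.default_rng(0).normal(5.0, 2.0, size=(8, 4))
out = bn(xa_batch, "XA")
print(np.allclose(out.mean(axis=0), 0.0, atol=1e-6))  # normalized within its domain
```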
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433772
J. Sprenger, T. Saathoff, A. Schlaefer
Optical coherence tomography (OCT) is a near-infrared imaging modality that enables depth scans with high spatial resolution. By scanning along the lateral dimensions, high-resolution volumes can be acquired. This makes it possible to characterize tissue and precisely detect abnormal structures in medical scenarios. However, the small field of view (FOV) limits the applicability of OCT for medical examinations. We therefore present an automated setup that moves an OCT scan head over arbitrary surfaces. By mounting the scan head on a highly accurate robot arm, we obtain precise information about the position of the acquired volumes. We implement a geometric approach to stitch the volumes and generate the surface scans. Our results show that precise stitching of the volumes is achieved, with mean absolute errors of 0.078 mm and 0.098 mm in the lateral directions and 0.037 mm in the axial direction. Our setup thus provides automated OCT surface scanning of samples and phantoms larger than the usual FOV.
Title: Automated Robotic Surface Scanning With Optical Coherence Tomography
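The geometric stitching step can be illustrated with a toy example (the poses and coordinates below are made up): given the rigid pose (R, t) reported by the robot for each acquisition, every voxel coordinate in a local OCT volume maps into one common global frame, where the volumes can be accumulated.

```python
import numpy as np

def to_global(points_local, R, t):
    # Rigid transform of local voxel coordinates (rows) into the global frame.
    return points_local @ R.T + t

# Two scan positions translated 5 mm apart along x, with no rotation.
R = np.eye(3)
t1 = np.array([0.0, 0.0, 0.0])
t2 = np.array([5.0, 0.0, 0.0])
local = np.array([[0.0, 0.0, 0.0],
                  [1.0, 2.0, 0.5]])   # voxel coordinates in mm

g1 = to_global(local, R, t1)
g2 = to_global(local, R, t2)
print(g2 - g1)   # every point shifted by the 5 mm stage offset
```

In practice the residual errors reported above (sub-0.1 mm) come from calibration and pose accuracy, not from this transform itself.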
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433827
Brahim Belaoucha, T. Papadopoulo
Electroencephalography (EEG) distributed source reconstruction methods can be improved by using spatio-temporal constraints. Few methods use structural connectivity (SC), obtained from diffusion MRI, to constrain the EEG source space. In this work, we present a source reconstruction algorithm that uses SC and constrains the source dynamics with a multivariate autoregressive (MAR) model to estimate both the effective connectivity (EC) between brain regions and their activation. To obtain an asymmetric EC, we add a sparse prior to the MAR model. We call this algorithm Elasticnet iterative Source and Dynamics reconstruction (eiSDR). This paper presents our approach and shows how the proposed model recovers both brain activation and interactions. Its accuracy is demonstrated on synthetic data and tested on real data from a face recognition task. The results are consistent with other works that used the same data, showing that the choice of a MAR model with suitable priors gives relevant results.
Title: Elasticnetisdr to Reconstruct Both Sparse Brain Activity and Effective Connectivity
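A first-order MAR model over regional activity can be sketched as follows (synthetic data; plain least squares in place of the paper's elastic-net-regularized estimation): x_t = A x_{t-1} + noise, where the generally asymmetric matrix A plays the role of directed effective connectivity between regions.

```python
import numpy as np

rng = np.random.default_rng(2)
A_true = np.array([[0.5, 0.3, 0.0],
                   [0.0, 0.4, 0.0],
                   [0.2, 0.0, 0.6]])    # asymmetric: directed influences only
T, n = 2000, 3
x = np.zeros((T, n))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(n)

# Ordinary least-squares MAR fit; eiSDR would add sparsity priors on top.
X_past, X_next = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X_past, X_next, rcond=None)[0].T
print(np.round(A_hat, 2))
```

With enough samples the estimate is close to the true connectivity; the sparse prior matters precisely when data are short and noisy, which is the EEG regime.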
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433823
Hongzhi Wang, Vaishnavi Subramanian, T. Syeda-Mahmood
Fusion of multimodal data is important for disease understanding. In this paper, we propose a new fusion method that exploits the uncertainty in the predictions produced by individual modality learners. Specifically, we extend the joint label fusion method by taking model uncertainty into account when estimating correlations among predictions produced by different modalities. Through an experimental study of survival prediction for non-small cell lung cancer patients who underwent surgical resection, we demonstrate the promising performance of the proposed method.
Title: Modeling Uncertainty in Multi-Modal Fusion for Lung Cancer Survival Analysis
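A toy version of uncertainty-aware fusion (inverse-variance weighting, not the paper's joint-label-fusion estimator; the modalities and numbers are invented): each modality's prediction is weighted by the inverse of its estimated predictive variance, so confident learners dominate the fused estimate.

```python
import numpy as np

preds = np.array([0.80, 0.30, 0.75])      # per-modality risk predictions (hypothetical)
variances = np.array([0.01, 0.20, 0.02])  # model-uncertainty estimates per modality

w = 1.0 / variances
w /= w.sum()                              # normalized inverse-variance weights
fused = float(w @ preds)
print(round(fused, 3))                    # the low-variance modalities dominate
```

Joint label fusion goes further by modeling correlations between the learners' errors rather than treating them as independent, which is where the uncertainty estimates enter in the paper.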
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433774
Suraj Mishra, D. Chen, Sharon Hu
From diagnosing neovascular diseases to detecting white matter lesions, accurate tiny-vessel segmentation in fundus images is critical. Promising results for overall vessel segmentation have been reported; however, effectiveness in segmenting tiny vessels remains limited. In this paper, we study retinal vessel segmentation by incorporating tiny-vessel segmentation into our framework for overall accurate vessel segmentation. To achieve this, we propose a new deep convolutional neural network (CNN) which divides vessel segmentation into two separate objectives: overall accurate vessel segmentation and tiny-vessel segmentation. Then, by exploiting the objective-dependent (homoscedastic) uncertainty, we enable the network to learn both objectives simultaneously. Further, to improve the individual objectives, we propose: (a) a vessel-weight-map-based auxiliary loss for enhancing tiny-vessel connectivity (i.e., improving tiny-vessel segmentation), and (b) an enhanced encoder-decoder architecture for improved localization (i.e., for accurate vessel segmentation). Using three public retinal vessel segmentation datasets (CHASE DB1, DRIVE, and STARE), we verify the superiority of our framework in segmenting tiny vessels (8.3% average improvement in sensitivity) while achieving a better area under the receiver operating characteristic curve (AUC) compared to state-of-the-art methods.
Title: Objective-Dependent Uncertainty Driven Retinal Vessel Segmentation
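Homoscedastic-uncertainty weighting of multiple objectives is commonly written in the Kendall-et-al. style; the sketch below illustrates that generic formulation (an assumption about the mechanism, not the paper's exact loss): each task loss L_i is scaled by exp(-s_i) with a learned log-variance s_i, plus s_i as a regularizer so the network cannot ignore a task by inflating its variance.

```python
import numpy as np

def multi_task_loss(losses, log_vars):
    # total = sum_i exp(-s_i) * L_i + s_i, with s_i = log(sigma_i^2) learnable.
    losses, log_vars = np.asarray(losses), np.asarray(log_vars)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))

# Two objectives: overall vessel segmentation and tiny-vessel segmentation.
print(multi_task_loss([0.6, 1.4], [0.0, 0.0]))   # equal weighting: 2.0
print(multi_task_loss([0.6, 1.4], [0.0, 0.5]))   # larger s_2 down-weights task 2
```

During training the s_i are optimized jointly with the network weights, so the relative weighting of the two objectives is learned rather than hand-tuned.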
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434003
Duc Duy Pham, S. M. Koesnadi, Gurbandurdy Dovletov, J. Pauli
In this paper we address unsupervised domain adaptation for multi-label classification with convolutional neural networks, particularly considering the domain shift between X-ray data sets. Domain adaptation between different X-ray data sets is of practical and clinical importance to guarantee applicability across hospitals and clinics, which may use different machines for image acquisition. In contrast to the usual multi-class setting, multi-label classification tasks can assign multiple labels to an input instance instead of just one. While most related work focuses on domain adaptation for multi-class tasks, we consider the more general case of multi-label classification across domains. We propose an adversarial domain adaptation approach in which the discriminator is equipped with additional conditional information regarding the current classification output. Our experiments show promising and competitive results on publicly available data sets compared to state-of-the-art approaches.
Title: Unsupervised Adversarial Domain Adaptation for Multi-Label Classification of Chest X-Ray
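The conditioning idea can be sketched at the data-flow level (function and dimensions are hypothetical, not the paper's architecture): instead of feeding the domain discriminator features alone, the classifier's multi-label sigmoid outputs are concatenated in, so the real/target decision is conditioned on what the classifier currently predicts.

```python
import numpy as np

def discriminator_input(features, label_probs):
    # Condition the discriminator on the current multi-label prediction by
    # concatenating it to the feature vector of each sample.
    return np.concatenate([features, label_probs], axis=1)

feats = np.random.default_rng(0).standard_normal((4, 16))  # batch of 4 feature vectors
probs = np.full((4, 5), 0.5)                               # 5 findings, sigmoid outputs
d_in = discriminator_input(feats, probs)
print(d_in.shape)   # (4, 21)
```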
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433851
Kaiping Wang, Bo Zhan, Yanmei Luo, Jiliu Zhou, Xi Wu, Yan Wang
The lack of annotated data is a common problem in medical image segmentation tasks. In this paper, we present a novel multi-task semi-supervised segmentation algorithm with a curriculum-style learning strategy. The proposed method includes a segmentation task and an auxiliary regression task. Concretely, the auxiliary regression task learns image-level properties, such as the size and centroid position of the target region, to regularize the segmentation network, enforcing that the pixel-level segmentation result matches the distributions of these regressions. In addition, these regressions are treated as pseudo labels for learning from unlabeled data. To reduce the noise caused by deviations in the inferred labels, we adopt an inequality constraint for learning from unlabeled data, which defines a tolerance interval within which predictions are not penalized, reducing the impact of the regression network's prediction deviation. Experimental results on both the ACDC 2017 dataset and the PROMISE12 dataset demonstrate the effectiveness of our method.
Title: Multi-Task Curriculum Learning For Semi-Supervised Medical Image Segmentation
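A simplified reading of the tolerance-interval idea (the quadratic form and the numbers are illustrative assumptions): an image-level prediction, such as target-region size, incurs no penalty while it lies inside an interval around the regressed pseudo label, and a penalty growing with the violation outside it.

```python
import numpy as np

def tolerance_penalty(pred, low, high):
    # Zero penalty inside [low, high]; quadratic penalty on the violation outside.
    pred = np.asarray(pred, dtype=float)
    below = np.maximum(low - pred, 0.0)
    above = np.maximum(pred - high, 0.0)
    return below ** 2 + above ** 2

# Regressed target size ~100 px with a [90, 110] tolerance interval:
# violations of 0, 10, and 5 px give penalties 0, 100, and 25.
print(tolerance_penalty([95.0, 120.0, 85.0], 90.0, 110.0))
```

This keeps small pseudo-label errors from back-propagating noise while still constraining grossly wrong segmentations.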