Pub Date: 2019-07-11 | DOI: 10.1109/ISBI.2019.8759185
N. Anantrasirichai, M. Allinovi, W. Hayes, D. Bull, A. Achim
Lines and boundaries are important structures in medical ultrasound images as they can help differentiate between tissue types, organs, and membranes. A typical example is in lung ultrasonography, where the presence of so-called B-lines is indicative of lung status in ventilated critically ill patients or of fluid overload in patients on dialysis. To quantify such linear features, deconvolution is typically necessary to enhance the generally poor ultrasound image quality. This paper presents a novel deconvolution technique for restoring ultrasound images. Our approach employs a standard inverse problem formulation involving a penalty term for enforcing a sparse solution, augmented with an additional term aimed at promoting linear features. Specifically, we regularise our solution using the Radon transform, which effectively acts as a dictionary of lines. The resulting optimisation problem can then be addressed using both convex and non-convex techniques. We evaluated our approach on real B-mode ultrasound images, and our results show that the proposed method outperforms existing techniques by up to 30% in terms of contrast-to-noise ratio.
Title: Regularisation With a Dictionary of Lines for Medical Ultrasound Image Deconvolution
Published in: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
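As a toy illustration of why the Radon transform acts as a dictionary of lines (a minimal rotate-and-sum sketch, not the operator or solver used in the paper): a single line in the image collapses into one dominant sinogram peak, so a sparsity penalty in the Radon domain promotes linear structures.

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles_deg):
    """Discrete Radon transform: rotate the image and sum along columns."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

img = np.zeros((64, 64))
img[:, 32] = 1.0                          # a single vertical line
sino = radon(img, np.arange(0, 180, 15))
# At 0 degrees the whole line projects onto one sinogram bin, so an
# l1 penalty on the sinogram favours solutions made of few lines.
```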
Pub Date: 2019-05-08 | DOI: 10.1109/ISBI.2019.8759404
E. Villain, H. Wendt, A. Basarab, D. Kouamé
Tissue characterization based on ultrasound (US) images is an extensively explored research field. Most of the existing techniques focus on the estimation of statistical or acoustic parameters from the backscattered radio-frequency signals, thus complementing the visual inspection of the conventional B-mode images. Additionally, a few studies have shown the value of analyzing the fractal or multifractal behavior of human tissues, in particular of tumors. While biological experiments support such multifractal behavior, the observations on US images are rather empirical. To our knowledge, there is no theoretical or practical study relating the fractal or multifractal parameters extracted from US images to those of the imaged tissues. The aim of this paper is to investigate how the multifractal properties of a tissue correlate with those estimated from a simulated US image of the same tissue. To this end, an original simulation pipeline for multifractal tissues and their corresponding US images is proposed. Simulation results are compared to those of an in vivo experiment.
Title: On Multifractal Tissue Characterization in Ultrasound Imaging
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759248
Y. Farouj, F. I. Karahanoğlu, D. Ville
The investigation of spontaneous and evoked neuronal activity from functional Magnetic Resonance Imaging (fMRI) data has come to play a significant role in deepening our understanding of brain function. As this research trend continues, activity detection methods that can adapt to different activation scenarios must be developed. The present work describes a new method for temporal semi-blind deconvolution of fMRI data; i.e., removing the effect of the Hæmodynamic Response Function (HRF) from temporal signals in the absence of information about the timing and duration of neuronal events and under uncertain characterization of cerebral hæmodynamics. A sequential minimization of two functionals is deployed: the first functional recovers activity signals with sparse transients, while the second exploits the retrieved activity moments to estimate the Taylor expansion coefficients of the HRF. These coefficients are inherently linked to two values of interest that characterize the hæmodynamics: the time-to-peak and the width of the response. We evaluate the performance of the method on synthetic signals before demonstrating its potential on experimental measurements from the visual cortex.
Title: BOLD Signal Deconvolution Under Uncertain Hæmodynamics: A Semi-Blind Approach
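The two hæmodynamic values of interest can be illustrated with a canonical double-gamma HRF (SPM-style illustrative parameters, not necessarily the paper's parameterisation); the temporal derivative is the first-order Taylor term with respect to a delay in the response.

```python
import numpy as np
from scipy.stats import gamma

t = np.arange(0.0, 30.0, 0.01)          # time in seconds
# Canonical double-gamma HRF: peak gamma minus a scaled undershoot gamma.
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
hrf /= hrf.sum()                         # normalise (scale is arbitrary here)
d_hrf = np.gradient(hrf, t)              # temporal derivative: 1st Taylor term in delay

time_to_peak = t[np.argmax(hrf)]         # first value of interest
half_max = hrf > hrf.max() / 2
width = t[half_max][-1] - t[half_max][0] # second value of interest (FWHM)
```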
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759393
L. Gueddari, P. Ciuciu, É. Chouzenoux, A. Vignaud, J. Pesquet
Reducing acquisition time is a crucial issue in MRI, especially in the high-resolution context. Compressed sensing (CS) has addressed this problem for a decade. However, to maintain a high signal-to-noise ratio (SNR), CS must be combined with parallel imaging. This leads to harder reconstruction problems that usually require knowledge of the coil sensitivity profiles. In this work, we introduce a calibrationless image reconstruction approach that no longer requires this knowledge. The originality of this work lies in using for reconstruction a group sparsity structure (called OSCAR) across channels that handles SNR inhomogeneities across receivers. We compare this reconstruction with other calibrationless approaches based on group-LASSO and its sparse variation, as well as with the auto-calibrated method called $\ell_1$-ESPIRiT. We demonstrate that OSCAR outperforms its calibrationless competitors and provides results similar to $\ell_1$-ESPIRiT. This suggests that sensitivity maps are no longer required to perform combined CS and parallel imaging reconstruction.
Title: Calibrationless OSCAR-Based Image Reconstruction in Compressed Sensing Parallel MRI
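The OSCAR penalty named above, $\lambda_1 \|x\|_1 + \lambda_2 \sum_{i<j} \max(|x_i|, |x_j|)$, admits a sorted weighted-$\ell_1$ closed form, which is how it is usually evaluated in practice. A minimal numpy sketch (values illustrative, unrelated to the MRI solver), checked against the brute-force pairwise definition:

```python
import numpy as np
from itertools import combinations

def oscar_penalty(x, lam1, lam2):
    """OSCAR via its closed form: sorted |x| weighted by lam1 + lam2*(rank)."""
    a = np.sort(np.abs(x))[::-1]                       # |x| in descending order
    # The k-th largest entry is the max in pairs with all smaller entries,
    # so it receives weight lam1 + lam2 * (n - 1 - k).
    weights = lam1 + lam2 * np.arange(a.size - 1, -1, -1)
    return float(weights @ a)

def oscar_bruteforce(x, lam1, lam2):
    """Direct evaluation of the pairwise definition, for checking."""
    a = np.abs(x)
    return lam1 * a.sum() + lam2 * sum(max(a[i], a[j])
                                       for i, j in combinations(range(a.size), 2))
```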
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759209
Sunil Kumar Gaire, C. Zhang, Hongyu Li, Peizhou Huang, R. Liu, Haifeng Wang, D. Liang, L. Ying
Single-molecule-localization-based super-resolution microscopy has enabled the imaging of microscopic objects beyond the diffraction limit. However, these techniques are limited by the requirement of an extremely large number of frames for imaging cell structures, and thus by long acquisition times. Here, we present a computational algorithm that accelerates 3D single-molecule localization microscopy using blind sparse inpainting. The technique reconstructs high-density super-resolution 3D images from low-density ones, maintaining structures similar to those of the high-density images. The low-density images are generated using fewer frames than usually needed for high-density images, thus requiring a shorter acquisition time; the algorithm will therefore accelerate 3D single-molecule imaging. Experimental 3D image reconstruction of microtubules using a reduced number of frames is presented to validate the concept.
Title: Accelerated 3D Localization Microscopy Using Blind Sparse Inpainting
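As a loose stand-in for the idea of sparsity-regularised inpainting (not the authors' blind algorithm), a minimal iterative soft-thresholding inpainter with an assumed DCT sparsity prior: shrink transform coefficients, then re-impose the known pixels, and repeat.

```python
import numpy as np
from scipy.fft import dctn, idctn

def sparse_inpaint(y, mask, lam=0.05, iters=200):
    """Fill missing pixels (mask == False) by iterating:
    soft-threshold DCT coefficients, then restore the observed pixels."""
    x = np.where(mask, y, 0.0)
    for _ in range(iters):
        c = dctn(x, norm='ortho')
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # sparsity step
        x = idctn(c, norm='ortho')
        x = np.where(mask, y, x)                           # data-consistency step
    return x
```

On a smooth test image with 40% of pixels missing, this simple loop already recovers the gaps far better than zero-filling, which is the intuition behind inpainting-based acceleration.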
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759529
Yunhao Ge, Dongming Wei, Z. Xue, Qian Wang, Xiaoping Zhou, Y. Zhan, Shu Liao
In medical imaging applications such as PET-MR attenuation correction and MRI-guided radiation therapy, synthesizing CT images from MR plays an important role in obtaining tissue density properties. Recently, deep-learning-based image synthesis techniques have attracted much attention because of their superior ability for image mapping. However, most current deep-learning-based synthesis methods require large amounts of paired data, which greatly limits their usage. Efforts have been made to relax this restriction, and the cycle-consistent adversarial network (Cycle-GAN) is one example of synthesizing medical images with unpaired data. In Cycle-GAN, the cycle consistency loss is employed as an indirect structural similarity metric between the input and the synthesized images, which often leads to mismatches of anatomical structures in the synthesized results. To overcome this shortcoming, we propose (1) to use a mutual information loss to directly enforce the structural similarity between the input MR and the synthesized CT image and (2) to incorporate shape consistency information to improve the synthesis result. Experimental results demonstrate that the proposed method achieves better performance than Cycle-GAN, both qualitatively and quantitatively, for whole-body MR to CT synthesis with unpaired training images.
Title: Unpaired MR to CT Synthesis with Explicit Structural Constrained Adversarial Learning
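A histogram-based estimate of mutual information between two images illustrates the structural-similarity term; the paper uses MI as a differentiable training loss inside a network, which this plain numpy sketch does not attempt.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images (in nats)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()              # joint intensity distribution
    px = p.sum(axis=1, keepdims=True)    # marginal of a
    py = p.sum(axis=0, keepdims=True)    # marginal of b
    nz = p > 0                           # avoid log(0) on empty bins
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

An image shares maximal MI with itself and much less with a shuffled copy, which is why maximising MI between input MR and synthesized CT encourages anatomically aligned structures.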
In this paper, we propose a scheme to classify different Wireless Capsule Endoscopy (WCE) lesion images for diagnosis. The main contribution is to quantify multi-scale pooled channel-wise information and merge multi-level features together by explicitly modeling the interdependencies between all feature maps of different convolution layers. Firstly, feature maps are resized to multiple scales with bicubic interpolation; then a down-sampling convolution is adopted to obtain pooled feature maps of the same resolution; finally, 1×1 convolution kernels are used to fuse the feature maps after a quantization operation based on a channel-wise attention mechanism, in order to enhance the feature extraction of the proposed architecture. Preliminary experimental results show that our proposed scheme, with fewer model parameters, achieves competitive results compared to state-of-the-art methods on the WCE image classification task.
Title: Lesion Classification of Wireless Capsule Endoscopy Images
Wenming Yang, Yaxing Cao, Qian Zhao, Yong Ren, Q. Liao
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759577
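A squeeze-and-excitation-style block sketches the channel-wise attention idea in numpy (layer sizes and weights are illustrative assumptions, not the authors' architecture): pool each channel to a scalar, pass through a small bottleneck, and gate the channels with the resulting sigmoid scores.

```python
import numpy as np

def channel_attention(fmap, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map."""
    z = fmap.mean(axis=(1, 2))                 # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ z, 0.0)                # excitation: FC + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))        # FC + sigmoid -> per-channel gate
    return fmap * s[:, None, None]             # reweight channels in place

rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 2                      # r is the bottleneck reduction ratio
fmap = rng.standard_normal((C, H, W))
out = channel_attention(fmap,
                        rng.standard_normal((C // r, C)) * 0.1,
                        rng.standard_normal((C, C // r)) * 0.1)
```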
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759165
Tomasz Pieciak, Fabian Bogusz, A. Tristán-Vega, Rodrigo de Luis García, S. Aja‐Fernández
The Ensemble Average Propagator (EAP) provides a compact theoretical framework to explore the underlying microstructural properties of tissues with diffusion magnetic resonance imaging. To model tissue characteristics, it is usually required to fit a functional basis to densely sampled q-space data and then retrieve the EAP-related maps. In this work, we analytically derive a new closed-form formula to calculate one of the EAP features, the Return-To-the-Origin Probability (RTOP) map, directly from the data, leaving aside the EAP estimation step. Our RTOP estimation approach exploits only single-shell data and additionally handles noise-induced bias using non-stationary log-Rician statistics. We validated our proposal using an in vivo Human Connectome Project database, achieving increased accuracy when subsampling of the q-space was considered, and strong correlations with multiple-shell state-of-the-art methods.
Title: Single-Shell Return-to-the-Origin Probability Diffusion MRI Measure Under a Non-Stationary Rician Distributed Noise
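For context only: under free (Gaussian) diffusion, the RTOP integral of the q-space signal, $\int E(\mathbf{q})\,d\mathbf{q}$ with $E(q) = e^{-4\pi^2 q^2 \tau D}$, has the textbook closed form $(4\pi\tau D)^{-3/2}$. The sketch below checks that identity numerically; it is not the paper's single-shell estimator, and the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

tau, D = 0.025, 2.0e-3   # diffusion time (s) and diffusivity (mm^2/s), illustrative

def E(q):
    """Gaussian (free-diffusion) signal attenuation in q-space."""
    return np.exp(-4 * np.pi**2 * q**2 * tau * D)

# Radial 3D integral of the isotropic signal: RTOP = int 4*pi*q^2 E(q) dq.
rtop_numeric, _ = quad(lambda q: 4 * np.pi * q**2 * E(q), 0, np.inf)
rtop_analytic = (4 * np.pi * tau * D) ** -1.5
```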
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759483
Eric Z. Chen, Xu Dong, Xiaoxiao Li, Hongda Jiang, Ruichen Rong, Junyan Wu
Melanoma is the deadliest form of skin cancer worldwide. Many efforts have been made toward early detection of melanoma with deep learning based on dermoscopic images. It is crucial to identify the specific lesion patterns for accurate diagnosis of melanoma. However, the common lesion patterns are not consistently present, which causes sparse-label problems in the data. In this paper, we propose a multi-task U-Net model to automatically detect lesion attributes of melanoma. The network includes two tasks: a classification task that determines whether the lesion attributes are present, and a segmentation task that segments the attributes in the images. Our multi-task U-Net model achieves a Jaccard index of 0.433 on the official test data of ISIC 2018 Challenges task 2, which ranks 5th on the final leaderboard.
Title: Lesion Attributes Segmentation for Melanoma Detection with Multi-Task U-Net
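The challenge metric reported above is the Jaccard index (intersection over union); a minimal sketch for binary masks:

```python
import numpy as np

def jaccard_index(pred, target):
    """Jaccard index (IoU) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0    # convention: two empty masks agree perfectly
    return np.logical_and(pred, target).sum() / union

a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # 8 pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # 8 pixels, 4 overlapping
score = jaccard_index(a, b)                          # 4 / 12
```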
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759498
Yixuan Yuan, Wenjian Qin, Xiaoqing Guo, M. Buyyounouski, S. Hancock, B. Han, L. Xing
Prostate cancer is a leading cause of mortality among men. Prostate segmentation of Magnetic Resonance (MR) images plays a critical role in treatment planning and image-guided interventions. However, manual delineation of the prostate is very time-consuming and subject to large inter-observer variation. To deal with this problem, we propose a novel Encoder-Decoder Densely Connected Convolutional Network (ED-DenseNet) to segment the prostate region automatically. Our model consists of two interconnected pathways: a dense encoder pathway, which learns discriminative high-level image features, and a dense decoder pathway, which predicts the final segmentation at the pixel level. Instead of using a plain convolutional network as the basic unit in the encoder-decoder framework, we utilize the Densely Connected Convolutional Network (DenseNet) to preserve the maximum information flow among layers via a densely connected mechanism. In addition, a novel loss function that jointly considers the encoder-decoder reconstruction error and the prediction error is proposed to optimize the feature learning and segmentation result. Our automatic segmentation results show high agreement (DSC 87.14%) with clinical segmentations by experienced radiation oncologists. Moreover, comparison with state-of-the-art methods shows that our ED-DenseNet model achieves superior segmentation performance.
Title: Prostate Segmentation with Encoder-Decoder Densely Connected Convolutional Network (ED-DenseNet)
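The reported agreement is the Dice similarity coefficient (DSC); a minimal sketch for binary masks, related to the Jaccard index J by DSC = 2J / (1 + J):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0    # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, target).sum() / denom

a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # 8 pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # 8 pixels, 4 overlapping
score = dice_coefficient(a, b)                       # 2*4 / (8 + 8) = 0.5
```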