L1 And L2 Norm Depth-Regularized Estimation Of The Acoustic Attenuation And Backscatter Coefficients Using Dynamic Programming
Z. Vajihi, I. Rosado-Méndez, T. Hall, H. Rivaz
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759099
Quantitative Ultrasound (QUS) techniques aim to quantify backscatter tissue properties to aid in disease diagnosis and treatment monitoring. These techniques rely on accurately compensating for attenuation from intervening tissues. Various methods have been proposed to this end, one of which is based on a Dynamic Programming (DP) approach with a Least Squares (LSq) cost function and L2 norm regularization to simultaneously estimate attenuation and backscatter coefficient parameters. To improve the accuracy and precision of this DP method, we propose to use the L1 norm instead of the L2 norm as the regularization term in the cost function and optimize the function using DP. Our results show that DP with L1 regularization substantially reduces the bias of the attenuation and backscatter parameters compared to DP with the L2 norm. Furthermore, we employ DP to estimate the QUS parameters of two new phantoms with large scatterer size and compare the results of LSq, L2 norm DP, and L1 norm DP. Our results show that L1 norm DP outperforms L2 norm DP, which itself outperforms LSq.
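As an illustration of the underlying idea, a depth-regularized estimate with an L1 smoothness term can be minimized exactly by dynamic programming over a discretized parameter grid. The sketch below is a generic 1D toy, not the paper's actual cost function: the state grid, observations, and the value of lam are hypothetical.

```python
def dp_l1(observations, states, lam):
    """Exact minimizer of sum_i (y_i - s_i)^2 + lam * sum_i |s_i - s_{i-1}|
    over per-depth parameters s_i drawn from a discrete state grid."""
    n, m = len(observations), len(states)
    # cost[j]: best cumulative cost of any path ending in state j at the current depth
    cost = [(observations[0] - s) ** 2 for s in states]
    back = []  # back[i][j]: best predecessor state index for depth i+1, state j
    for i in range(1, n):
        new_cost, ptr = [], []
        for s in states:
            # transition term: L1 penalty on the jump between consecutive depths
            best, bj = min((cost[k] + lam * abs(s - states[k]), k) for k in range(m))
            new_cost.append(best + (observations[i] - s) ** 2)
            ptr.append(bj)
        back.append(ptr)
        cost = new_cost
    # backtrack from the cheapest final state
    j = min(range(m), key=lambda k: cost[k])
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return [states[j] for j in path]
```

Because the L1 transition penalty charges a jump only once regardless of where it happens, the recovered profile tends to be piecewise constant rather than smoothed out, which is consistent with the bias reduction the paper reports.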
SNOW: Semi-Supervised, Noisy And/Or Weak Data For Deep Learning In Digital Pathology
Adrien Foucart, O. Debeir, C. Decaestecker
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759545
Digital pathology produces large volumes of images. For machine learning applications, these images need to be annotated, which can be complex and time-consuming. Therefore, outside of a few benchmark datasets, real-world applications often rely on data with scarce or unreliable annotations. In this paper, we quantitatively analyze how different types of perturbations influence the results of a typical deep learning algorithm by artificially weakening the annotations of a benchmark biomedical dataset. We use classical machine learning paradigms (semi-supervised, noisy, and weak learning) adapted to deep learning to counteract those effects, and analyze the effectiveness of these methods in addressing different types of weakness.
Network Regularization in Imaging Genetics Improves Prediction Performances and Model Interpretability on Alzheimer’s Disease
N. Guigui, C. Philippe, A. Gloaguen, Slim Karkar, V. Guillemot, Tommy Löfstedt, V. Frouin
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759593
Imaging genetics is an increasingly popular research avenue that aims to find genetic variants associated with quantitative phenotypes characterizing a disease. In this work, we combine structural MRI with genetic data structured by prior knowledge of interactions in a Canonical Correlation Analysis (CCA) model with graph regularization. This results in improved prediction performance and yields a more interpretable model.
DIC Image Segmentation of Dense Cell Populations by Combining Deep Learning and Watershed
F. Lux, P. Matula
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759594
Image segmentation of dense cell populations acquired using label-free optical microscopy techniques is a challenging problem. In this paper, we propose a novel approach based on a combination of deep learning and the watershed transform to segment differential interference contrast (DIC) images with high accuracy. The main idea of our approach is to train a convolutional neural network to detect both cellular markers and cellular areas and, based on these predictions, to split the individual cells using the watershed transform. The approach was developed on the images of dense HeLa cell populations included in the Cell Tracking Challenge database. Our approach ranked first in segmentation, detection, and overall performance as evaluated on the challenge datasets.
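The marker-controlled split can be sketched as a priority-flood watershed: the detected markers grow outward in order of increasing "elevation" (for example, the negative of the network's cell-interior probability), restricted to the predicted cellular area. A minimal pure-Python sketch; the toy elevations, markers, and mask below are hypothetical, not values from the paper.

```python
import heapq

def marker_watershed(elevation, markers, mask):
    """Grow labeled markers over a 2D grid in order of increasing elevation.
    elevation: 2D list (e.g. -P(cell interior)); markers: {label: (row, col)};
    mask: 2D 0/1 list of the predicted cellular area (cells outside it stay 0)."""
    rows, cols = len(elevation), len(elevation[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = []
    for lab, (r, c) in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (elevation[r][c], r, c, lab))
    while heap:
        h, r, c, lab = heapq.heappop(heap)  # lowest elevation floods first
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and mask[nr][nc] and not labels[nr][nc]:
                labels[nr][nc] = lab
                heapq.heappush(heap, (max(h, elevation[nr][nc]), nr, nc, lab))
    return labels
```

Touching cells are then separated along the elevation ridge between their markers, without any bounding-box detection step.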
Feature Space Extrapolation for Ulcer Classification in Wireless Capsule Endoscopy Images
Changhoo Lee, J. Min, Jaemyung Cha, Seungkyu Lee
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759101
Deep convolutional neural networks have shown dramatically improved performance not only in computer vision problems but also in various medical imaging tasks. For improved and meaningful results with deep learning approaches, the quality of the training dataset is critical. However, in medical imaging applications, collecting the full range of lesion samples is quite difficult due to the limited number of patients and privacy and rights concerns. In this paper, we propose feature space extrapolation for ulcer data augmentation. We build a dual encoder network that combines two VGG19 nets, integrating them in a fully connected encoded feature space. Ulcer data are extrapolated in the encoded feature space based on the respective closest normal sample. The fully connected layers are then fine-tuned for final ulcer classification. Experimental evaluation shows that our proposed dual encoder network with feature space extrapolation improves ulcer classification.
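The extrapolation step itself is simple to state: push an ulcer embedding further away from its nearest normal embedding, synthesizing a new sample on the "more abnormal" side. A sketch under stated assumptions; the extrapolation factor lam and the toy vectors are hypothetical, and the paper applies this inside the dual encoder's learned feature space rather than on raw vectors.

```python
def extrapolate_feature(ulcer, normals, lam=0.5):
    """Synthesize a new ulcer feature by extrapolating away from the
    closest normal sample: u' = u + lam * (u - nearest_normal)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(normals, key=lambda n: dist2(ulcer, n))
    return [u + lam * (u - n) for u, n in zip(ulcer, nearest)]
```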
Masseter Muscle Segmentation from Cone-Beam CT Images using Generative Adversarial Network
Yungeng Zhang, Yuru Pei, Haifang Qin, Yuke Guo, Gengyu Ma, T. Xu, H. Zha
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759426
Masseter segmentation from noisy and blurry cone-beam CT (CBCT) images is a challenging problem given the device-specific image artefacts. In this paper, we propose a novel approach for noise reduction and masseter muscle segmentation from CBCT images using a generative adversarial network (GAN)-based framework. We adapt the regression model of muscle segmentation from traditional CT (TCT) images to the domain of CBCT images without using paired images. The proposed framework is built upon the unsupervised CycleGAN. We mainly address the shape distortion problem in the unsupervised domain adaptation framework: a structure-aware constraint is introduced to guarantee shape preservation in the feature embedding and image generation processes. We explicitly define a joint embedding space of both the TCT and CBCT images to exploit the intrinsic semantic representation, which is key to the intra- and cross-domain image generation and muscle segmentation. The proposed approach is applied to clinically captured CBCT images. We demonstrate both the effectiveness and efficiency of the proposed approach in noise reduction and muscle segmentation tasks compared with state-of-the-art methods.
ISOOV2 DL - Semantic Instance Segmentation of Touching and Overlapping Objects
Anton Böhm, Maxim Tatarchenko, Thorsten Falk
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759334
We present $\mathrm{ISOO}_{\mathrm{DL}}^{\mathrm{V2}}$, a method for semantic instance segmentation of touching and overlapping objects. We introduce a series of design modifications to the prior framework, including a novel mixed 2D-3D segmentation network and a simplified post-processing procedure that enables segmentation of touching objects without relying on object detection. For the case of overlapping objects, where detection is required, we upgrade the bounding box parametrization and allow for smaller reference point distances. All these novelties lead to substantial performance improvements and enable the method to deal with a wider range of challenging practical situations. Additionally, our framework can handle object sub-part segmentation. We evaluate our approach on both real-world and synthetically generated biological datasets and report state-of-the-art performance.
MGB-NET: Orbital Bone Segmentation from Head and Neck CT Images Using Multi-Graylevel-Bone Convolutional Networks
M. Lee, H. Hong, K. Shim, Seongeun Park
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759424
For reconstruction of the orbital wall in cranio-maxillofacial surgery, segmentation of the orbital bone is necessary to support the eye globe position and restore the volume and shape of the orbit. However, due to the wide range of intensities of the orbital bones, conventional U-Net-based segmentation shows under-segmentation in the low-intensity thin bones of the orbital medial wall and orbital floor. In this paper, we propose a multi-graylevel-bone network (MGB-Net) for orbital bone segmentation that improves segmentation accuracy for high-intensity cortical bone as well as low-intensity thin bone in head-and-neck CT images. To prevent under-segmentation of the thin bones of the orbital medial wall and orbital floor, a single orbital bone mask is converted into two masks, one for cortical bone and one for thin bone. Two SGB-Nets are trained separately on these masks, and the cortical and thin bone segmentation results are integrated to obtain the whole orbital bone segmentation. Experiments show that our MGB-Net achieves improved performance for whole orbital bone segmentation as well as for segmentation of the thin bone of the orbital medial wall and orbital floor.
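The two-mask conversion can be illustrated directly: threshold the CT intensities inside the single bone mask to separate high-intensity cortical bone from low-intensity thin bone. A minimal sketch; the threshold and toy intensity values below are hypothetical, not the paper's actual graylevel cut-off.

```python
def split_bone_mask(image, bone_mask, threshold):
    """Split one bone mask into a cortical mask (intensity >= threshold)
    and a thin-bone mask (intensity < threshold), both restricted to the mask."""
    cortical = [[int(m and v >= threshold) for v, m in zip(img_row, msk_row)]
                for img_row, msk_row in zip(image, bone_mask)]
    thin = [[int(m and v < threshold) for v, m in zip(img_row, msk_row)]
            for img_row, msk_row in zip(image, bone_mask)]
    return cortical, thin
```

After the two networks are trained on these masks, taking the union of their predictions recovers the whole-bone segmentation.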
High Accuracy Patch-Level Classification of Wireless Capsule Endoscopy Images Using a Convolutional Neural Network
Vinu Sankar Sadasivan, C. Seelamantula
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759324
Wireless capsule endoscopy (WCE) is a technology used to record colored internal images of the gastrointestinal (GI) tract for the purpose of medical diagnosis. It transmits a large number of frames in a single examination cycle, which makes the analysis and diagnosis of abnormalities extremely challenging and time-consuming. In this paper, we propose a technique to automate abnormality detection in WCE images following a deep learning approach. The WCE images are split into patches and input to a convolutional neural network (CNN). A trained deep neural network is used to classify patches as either malign or benign. The patches with abnormalities are marked on the WCE image output. We obtained an area under the receiver-operating-characteristic curve (AUROC) value of about 98.65% on a publicly available test dataset containing nine abnormalities.
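The AUROC figure reported here can be computed from patch scores without any library, as the probability that a randomly chosen abnormal patch scores above a randomly chosen normal one (ties counting half). A minimal sketch with hypothetical scores and labels:

```python
def auroc(scores, labels):
    """Mann-Whitney form of AUROC: the fraction of (positive, negative)
    pairs ranked correctly, with ties counted as half. O(P*N) pairwise
    comparison, which is fine for a sketch."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```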
Facilitating Manual Segmentation of 3D Datasets Using Contour And Intensity Guided Interpolation
S. Ravikumar, L. Wisse, Yang Gao, G. Gerig, Paul Yushkevich
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759500
Manual segmentation of anatomical structures in 3D imaging datasets is a highly time-consuming process. It can be sped up using interslice interpolation techniques, which require only a small subset of slices to be manually segmented. In this paper, we propose a two-step interpolation approach that uses a "binary weighted averaging" algorithm to interpolate contour information, and the random forest framework to perform intensity-based label classification. We present the results of experiments performed in the context of hippocampal segmentation in ex vivo MRI scans. Compared to the random walker algorithm and morphology-based interpolation, the proposed method produces more accurate segmentations and smoother 3D reconstructions.
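The paper's "binary weighted averaging" is its own algorithm; as a point of reference, a classical shape-based alternative interpolates the signed distance transforms of two segmented slices and re-thresholds the average. The 1D pure-Python sketch below illustrates that baseline idea only (the toy masks are hypothetical, and the O(n^2) distance computation is for clarity, not efficiency):

```python
def signed_distance(mask):
    """Signed distance to the object boundary: negative inside, positive outside."""
    out = []
    for i, v in enumerate(mask):
        opposite = [abs(i - j) for j, w in enumerate(mask) if w != v]
        d = min(opposite) if opposite else len(mask)
        out.append(-d if v else d)
    return out

def interpolate_slice(mask_a, mask_b):
    """Midway slice between two segmented slices: average the signed
    distances and keep voxels where the average is negative (inside)."""
    da, db = signed_distance(mask_a), signed_distance(mask_b)
    return [int((x + y) / 2 < 0) for x, y in zip(da, db)]
```

The interpolated slice transitions smoothly between the two input shapes, which is the behavior interslice interpolation methods compete on.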