2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019): Latest Publications

L1 And L2 Norm Depth-Regularized Estimation Of The Acoustic Attenuation And Backscatter Coefficients Using Dynamic Programming
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759099
Z. Vajihi, I. Rosado-Méndez, T. Hall, H. Rivaz
Quantitative Ultrasound (QUS) techniques aim at quantifying backscatter tissue properties to aid in disease diagnosis and treatment monitoring. These techniques rely on accurately compensating for attenuation from intervening tissues. Various methods have been proposed to this end, one of which is based on a Dynamic Programming (DP) approach with a Least Squares (LSq) based cost function and L2 norm regularization to simultaneously estimate attenuation and backscatter coefficient parameters. To improve the accuracy and precision of this DP method, we propose to use the L1 norm instead of the L2 norm as the regularization term in the cost function and to optimize the function using DP. Our results show that DP with L1 regularization substantially reduces the bias of attenuation and backscatter parameters compared to DP with the L2 norm. Furthermore, we employ DP to estimate the QUS parameters of two new phantoms with large scatterer size and compare the results of LSq, L2 norm DP, and L1 norm DP. Our results show that L1 norm DP outperforms L2 norm DP, which itself outperforms LSq.
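A minimal sketch of the depth-regularized estimation idea, under assumptions not taken from the paper: a single QUS parameter is estimated per depth from a discrete grid of candidate values, trading a per-depth least-squares data term against an L1 or L2 penalty on changes between adjacent depths, minimized exactly by a Viterbi-style dynamic program. The paper jointly estimates attenuation and backscatter parameters; the data term and names here are illustrative only.

```python
import numpy as np

def dp_depth_regularized(data_cost, grid, reg_weight, norm="l1"):
    """data_cost[d, k]: least-squares misfit of candidate value grid[k] at depth d."""
    n_depth, n_vals = data_cost.shape
    diff = np.abs(grid[:, None] - grid[None, :])       # |v_prev - v_cur| for every pair
    penalty = diff if norm == "l1" else diff ** 2      # L1 or L2 transition cost
    cost = data_cost[0].copy()                         # optimal cost ending at each grid value
    back = np.zeros((n_depth, n_vals), dtype=int)      # backpointers for the optimal path
    for d in range(1, n_depth):
        trans = cost[:, None] + reg_weight * penalty   # trans[k_prev, k_cur]
        back[d] = np.argmin(trans, axis=0)
        cost = data_cost[d] + trans.min(axis=0)
    path = np.empty(n_depth, dtype=int)                # backtrack the best value sequence
    path[-1] = int(np.argmin(cost))
    for d in range(n_depth - 1, 0, -1):
        path[d - 1] = back[d, path[d]]
    return grid[path]
```

With reg_weight set to zero this reduces to independent per-depth least squares (LSq); increasing it trades data fidelity for smoothness of the parameter profile over depth.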
Citations: 7
SNOW: Semi-Supervised, Noisy And/Or Weak Data For Deep Learning In Digital Pathology
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759545
Adrien Foucart, O. Debeir, C. Decaestecker
Digital pathology produces very large numbers of images. For machine learning applications, these images need to be annotated, which can be complex and time-consuming. Therefore, outside of a few benchmark datasets, real-world applications often rely on data with scarce or unreliable annotations. In this paper, we quantitatively analyze how different types of perturbations influence the results of a typical deep learning algorithm by artificially weakening the annotations of a benchmark biomedical dataset. We use classical machine learning paradigms (semi-supervised, noisy and weak learning) adapted to deep learning to try to counteract those effects, and analyze the effectiveness of these methods in addressing different types of weakness.
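As an illustration of what artificially weakening annotations can mean, the following hypothetical sketch degrades a 2D binary segmentation mask by dropping some annotated objects entirely (missing labels) and coarsening others to bounding boxes (weak labels); the actual perturbations studied in the paper may differ.

```python
import numpy as np
from scipy import ndimage

def weaken_annotation(mask, drop_frac=0.3, bbox_frac=0.3, rng=None):
    """mask: 2D binary annotation; returns a weakened/noisy version of it."""
    rng = np.random.default_rng(rng)
    labels, n = ndimage.label(mask > 0)                # connected components = objects
    weak = np.zeros_like(mask)
    for obj in range(1, n + 1):
        obj_mask = labels == obj
        r = rng.random()
        if r < drop_frac:
            continue                                    # annotation of this object is lost
        if r < drop_frac + bbox_frac:
            ys, xs = np.where(obj_mask)                 # coarse bounding-box label
            weak[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = 1
        else:
            weak[obj_mask] = 1                          # keep the precise mask
    return weak
```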
Citations: 11
Network Regularization in Imaging Genetics Improves Prediction Performances and Model Interpretability on Alzheimer’s Disease
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759593
N. Guigui, C. Philippe, A. Gloaguen, Slim Karkar, V. Guillemot, Tommy Löfstedt, V. Frouin
Imaging genetics is an increasingly popular research avenue that aims to find genetic variants associated with quantitative phenotypes characterizing a disease. In this work, we combine structural MRI with genetic data, structured by prior knowledge of interactions, in a Canonical Correlation Analysis (CCA) model with graph regularization. This results in improved prediction performance and yields a more interpretable model.
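The abstract does not state the objective explicitly; as a hedged sketch, a graph-regularized sparse CCA of an imaging block X and a genetic block Y might take a form like the following, where the graph Laplacian L encodes the prior interaction network and the notation is assumed rather than taken from the paper:

```latex
\max_{u,\,v}\; u^{\top} X^{\top} Y\, v
  \;-\; \lambda_1 \lVert u \rVert_1
  \;-\; \lambda_2 \lVert v \rVert_1
  \;-\; \gamma\, v^{\top} L\, v
\qquad \text{s.t. } \lVert u \rVert_2 \le 1,\ \lVert v \rVert_2 \le 1
```

The Laplacian term penalizes genetic weights that differ across connected variants, which is one common way such prior-knowledge structure improves interpretability.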
Citations: 1
DIC Image Segmentation of Dense Cell Populations by Combining Deep Learning and Watershed
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759594
F. Lux, P. Matula
Image segmentation of dense cell populations acquired using label-free optical microscopy techniques is a challenging problem. In this paper, we propose a novel approach based on a combination of deep learning and the watershed transform to segment differential interference contrast (DIC) images with high accuracy. The main idea of our approach is to train a convolutional neural network to detect both cellular markers and cellular areas and, based on these predictions, to split the individual cells using the watershed transform. The approach was developed on the images of dense HeLa cell populations included in the Cell Tracking Challenge database. Our approach ranked best in segmentation, detection, and overall performance as evaluated on the challenge datasets.
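A minimal sketch of the marker-controlled watershed step, assuming a network that outputs per-pixel probabilities for cell markers and for whole cell areas (the CNN itself is omitted, and the elevation image used here is the classic distance-transform choice, which may differ from the paper's exact formulation):

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def split_cells(marker_prob, cell_prob, t_marker=0.5, t_cell=0.5):
    """Return a labeled image with one integer label per cell instance."""
    markers, _ = ndimage.label(marker_prob > t_marker)        # one seed per detected cell
    cell_mask = cell_prob > t_cell                             # foreground support
    distance = ndimage.distance_transform_edt(cell_mask)      # ridges between touching cells
    return watershed(-distance, markers=markers, mask=cell_mask)
```

Touching cells are separated because each detected marker floods its own basin, and the flooding is confined to the predicted cell area.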
Citations: 21
Feature Space Extrapolation for Ulcer Classification in Wireless Capsule Endoscopy Images
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759101
Changhoo Lee, J. Min, Jaemyung Cha, Seungkyu Lee
Deep convolutional neural networks have shown dramatically improved performance not just in computer vision problems but also in various medical imaging tasks. For improved and meaningful results with deep learning approaches, the quality of the training dataset is critical. However, in medical imaging applications, collecting lesion samples that cover the full range of lesion appearances is quite difficult due to the limited number of patients and privacy and rights concerns. In this paper, we propose feature space extrapolation for ulcer data augmentation. We build a dual-encoder network that combines two VGG19 nets, integrating them in a fully connected encoded feature space. Ulcer data are extrapolated in the encoded feature space based on each sample's closest normal sample. Then, the fully connected layers are fine-tuned for the final ulcer classification. Experimental evaluation shows that our proposed dual-encoder network with feature space extrapolation improves ulcer classification.
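A hypothetical numpy sketch of the extrapolation idea: each encoded ulcer feature vector is pushed further away from its closest normal (non-ulcer) feature vector, generating additional synthetic ulcer samples in feature space. The encoder, feature dimensions and step size are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def extrapolate_ulcer_features(ulcer_feats, normal_feats, step=0.5):
    """ulcer_feats: (n_u, d), normal_feats: (n_n, d) encoded feature vectors."""
    # squared pairwise distances to find each ulcer sample's closest normal sample
    d2 = ((ulcer_feats[:, None, :] - normal_feats[None, :, :]) ** 2).sum(-1)
    nearest = normal_feats[d2.argmin(axis=1)]
    # move away from the nearest normal sample along the ulcer-to-normal direction
    return ulcer_feats + step * (ulcer_feats - nearest)
```

The extrapolated vectors can then be added to the training set when fine-tuning the classification layers, enlarging the ulcer class without collecting new patient data.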
Citations: 8
Masseter Muscle Segmentation from Cone-Beam CT Images using Generative Adversarial Network
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759426
Yungeng Zhang, Yuru Pei, Haifang Qin, Yuke Guo, Gengyu Ma, T. Xu, H. Zha
Masseter segmentation from noisy and blurry cone-beam CT (CBCT) images is a challenging issue considering the device-specific image artefacts. In this paper, we propose a novel approach for noise reduction and masseter muscle segmentation from CBCT images using a generative adversarial network (GAN)-based framework. We adapt the regression model of muscle segmentation from traditional CT (TCT) images to the domain of CBCT images without using prior paired images. The proposed framework is built upon the unsupervised CycleGAN. We mainly address the shape distortion problem in the unsupervised domain adaptation framework. A structure-aware constraint is introduced to guarantee shape preservation in the feature embedding and image generation processes. We explicitly define a joint embedding space of both the TCT and CBCT images to exploit the intrinsic semantic representation, which is key to the intra- and cross-domain image generation and muscle segmentation. The proposed approach is applied to clinically captured CBCT images. We demonstrate both the effectiveness and efficiency of the proposed approach in noise reduction and muscle segmentation tasks compared with the state-of-the-art.
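A heavily hedged, schematic sketch of the kind of loss such a framework combines: the usual CycleGAN adversarial and cycle-consistency terms, plus a structure-aware term asking the segmentation of a translated image to agree with the segmentation predicted in the source domain. Module names, loss weights and the exact form of the structure-aware constraint are placeholders, not the authors' code.

```python
import torch
import torch.nn.functional as F

def total_loss(real_tct, real_cbct, G_t2c, G_c2t, D_cbct, D_tct, seg_net,
               w_cyc=10.0, w_struct=1.0):
    fake_cbct = G_t2c(real_tct)                       # TCT -> CBCT translation
    fake_tct = G_c2t(real_cbct)                       # CBCT -> TCT translation
    d_fc, d_ft = D_cbct(fake_cbct), D_tct(fake_tct)
    adv = F.mse_loss(d_fc, torch.ones_like(d_fc)) + F.mse_loss(d_ft, torch.ones_like(d_ft))
    cyc = F.l1_loss(G_c2t(fake_cbct), real_tct) + F.l1_loss(G_t2c(fake_tct), real_cbct)
    # structure-aware term: the muscle shape should survive the domain translation
    struct = F.l1_loss(seg_net(fake_cbct), seg_net(real_tct).detach())
    return adv + w_cyc * cyc + w_struct * struct
```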
Citations: 5
ISOOV2 DL - Semantic Instance Segmentation of Touching and Overlapping Objects
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759334
Anton Böhm, Maxim Tatarchenko, Thorsten Falk
We present $\mathrm{ISOO}_{\mathrm{DL}}^{\mathrm{V2}}$, a method for semantic instance segmentation of touching and overlapping objects. We introduce a series of design modifications to the prior framework, including a novel mixed 2D-3D segmentation network and a simplified post-processing procedure which enables segmentation of touching objects without relying on object detection. For the case of overlapping objects where detection is required, we upgrade the bounding box parametrization and allow for smaller reference point distances. All these novelties lead to substantial performance improvements and enable the method to deal with a wider range of challenging practical situations. Additionally, our framework can handle object sub-part segmentation. We evaluate our approach on both real-world and synthetically generated biological datasets and report state-of-the-art performance.
Citations: 6
MGB-NET: Orbital Bone Segmentation from Head and Neck CT Images Using Multi-Graylevel-Bone Convolutional Networks
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759424
M. Lee, H. Hong, K. Shim, Seongeun Park
For the reconstruction of the orbital wall in cranio-maxillofacial surgery, segmentation of the orbital bone is necessary to support the eye globe position and restore the volume and shape of the orbit. However, due to the wide range of intensities of the orbital bones, conventional U-Net-based segmentation under-segments the low-intensity thin bones of the orbital medial wall and orbital floor. In this paper, we propose a multi-graylevel-bone network (MGB-Net) for orbital bone segmentation that improves segmentation accuracy for high-intensity cortical bone as well as low-intensity thin bone in head-and-neck CT images. To prevent under-segmentation of the thin bones of the orbital medial wall and orbital floor, the single orbital bone mask is converted into two masks, one for cortical bone and one for thin bone. Two SGB-Nets are trained separately on these masks, and the cortical and thin-bone segmentation results are integrated to obtain the whole orbital bone segmentation. Experiments show that our MGB-Net achieves improved performance for whole orbital bone segmentation as well as for segmentation of the thin bone of the orbital medial wall and orbital floor.
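A hypothetical sketch of the mask-splitting step that produces the two training targets: one orbital-bone mask is divided into a cortical (high-intensity) mask and a thin-bone (low-intensity) mask by a gray-level threshold. The HU threshold here is an assumption for illustration, not a value from the paper.

```python
import numpy as np

def split_bone_mask(ct_hu, bone_mask, cortical_thresh_hu=400):
    """ct_hu: CT volume in Hounsfield units; bone_mask: binary orbital-bone mask."""
    bone = bone_mask.astype(bool)
    cortical = bone & (ct_hu >= cortical_thresh_hu)    # dense, bright cortical bone
    thin = bone & (ct_hu < cortical_thresh_hu)         # faint thin bone (medial wall, floor)
    return cortical.astype(np.uint8), thin.astype(np.uint8)
```

Each SGB-Net is then trained on one of the two masks, and their predictions are merged (e.g., by a voxel-wise union) to recover the whole orbital bone.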
Citations: 4
High Accuracy Patch-Level Classification of Wireless Capsule Endoscopy Images Using a Convolutional Neural Network
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759324
Vinu Sankar Sadasivan, C. Seelamantula
Wireless capsule endoscopy (WCE) is a technology used to record color images of the interior of the gastrointestinal (GI) tract for the purpose of medical diagnosis. It transmits a large number of frames in a single examination cycle, which makes analyzing the frames and diagnosing abnormalities extremely challenging and time-consuming. In this paper, we propose a technique to automate abnormality detection in WCE images following a deep learning approach. The WCE images are split into patches and input to a convolutional neural network (CNN). The trained deep neural network classifies each patch as either malignant or benign. Patches with abnormalities are marked on the WCE image output. We obtained an area under the receiver-operating-characteristic curve (AUROC) of about 98.65% on a publicly available test dataset containing nine abnormalities.
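A minimal sketch of the patch-level pipeline, with a placeholder callable standing in for the trained CNN: tile the frame into fixed-size patches, score each patch, and flag those whose abnormality score exceeds a threshold on an output overlay. Patch size and threshold are assumptions for illustration.

```python
import numpy as np

def mark_abnormal_patches(frame, classify_patch, patch=64, thresh=0.5):
    """frame: (H, W, 3) WCE image; classify_patch: callable returning an abnormality probability."""
    h, w = frame.shape[:2]
    overlay = np.zeros((h, w), dtype=bool)
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = frame[y:y + patch, x:x + patch]
            if classify_patch(p) > thresh:
                overlay[y:y + patch, x:x + patch] = True   # flag this patch as abnormal
    return overlay
```

The boolean overlay can then be drawn on the frame so a clinician sees which regions triggered the detection rather than only a frame-level label.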
Citations: 11
Facilitating Manual Segmentation of 3D Datasets Using Contour And Intensity Guided Interpolation
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759500
S. Ravikumar, L. Wisse, Yang Gao, G. Gerig, Paul Yushkevich
Manual segmentation of anatomical structures in 3D imaging datasets is a highly time-consuming process. This process can be sped up using interslice interpolation techniques, which require only a small subset of slices to be manually segmented. In this paper, we propose a two-step interpolation approach that utilizes a “binary weighted averaging” algorithm to interpolate contour information, and the random forest framework to perform intensity-based label classification. We present the results of experiments performed in the context of hippocampal segmentations in ex vivo MRI scans. Compared to the random walker algorithm and morphology-based interpolation, the proposed method produces more accurate segmentations and smoother 3D reconstructions.
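A hedged sketch of contour-guided interslice interpolation using signed distance maps, a common shape-based scheme; the paper's "binary weighted averaging" algorithm and the random-forest intensity classification step are not reproduced here. An intermediate slice is obtained by thresholding a weighted average of the signed distance transforms of two manually segmented slices.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(mask):
    """Positive inside the contour, negative outside."""
    mask = mask.astype(bool)
    return edt(mask) - edt(~mask)

def interpolate_slice(mask_a, mask_b, alpha):
    """alpha in [0, 1]: 0 reproduces mask_a's shape, 1 reproduces mask_b's."""
    d = (1 - alpha) * signed_distance(mask_a) + alpha * signed_distance(mask_b)
    return (d > 0).astype(np.uint8)
```

An intensity-guided step such as the paper's random-forest classifier would then refine this shape-only estimate using the gray values of the unsegmented slice.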
Citations: 4