
Latest publications: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)

Comparison of Different Tensor Encoding Combinations in Microstructural Parameter Estimation
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759100
M. Afzali, C. Tax, C. Chatziantoniou, Derek K. Jones
Diffusion-weighted magnetic resonance imaging is a noninvasive tool for investigating brain white matter microstructure, providing information from which compartmental diffusion parameters can be estimated. Several studies in the literature have shown that the parameters estimated using traditional linear diffusion encoding (Stejskal-Tanner pulsed gradient spin echo) are degenerate. Multiple strategies have been proposed to resolve this degeneracy; however, it is not clear whether those methods solve the problem completely. One such approach is b-tensor encoding. In previous work, combinations of linear-spherical (LTE+STE) and linear-planar (LTE+PTE) tensor encoding have been utilized to stabilize the estimates. In this paper, we compare the results of fitting a two-compartment model using different combinations of b-tensor encoding. Four combinations are compared: linear-spherical (LTE+STE), linear-planar (LTE+PTE), planar-spherical (PTE+STE), and linear-planar-spherical (LTE+PTE+STE). The results show that combinations of tensor encodings lead to lower bias and higher precision in the parameter estimates than a single tensor encoding.
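As a hedged illustration of the encoding shapes compared above (not the authors' two-compartment fitting code), the sketch below builds linear, planar, and spherical b-tensors and evaluates the standard Gaussian-diffusion signal attenuation S/S0 = exp(-B:D). The b-value, diffusivities, and "stick" compartment are illustrative assumptions.

```python
import math

def frobenius_inner(A, B):
    """Full tensor inner product A:B = sum_ij A_ij * B_ij for 3x3 matrices."""
    return sum(A[i][j] * B[i][j] for i in range(3) for j in range(3))

def diag(a, b, c):
    """3x3 diagonal matrix as nested lists."""
    return [[a, 0.0, 0.0], [0.0, b, 0.0], [0.0, 0.0, c]]

def attenuation(b_tensor, D):
    """Gaussian-diffusion signal attenuation S/S0 = exp(-B:D)."""
    return math.exp(-frobenius_inner(b_tensor, D))

b = 1.0                          # total b-value = trace of the b-tensor (ms/um^2)
LTE = diag(b, 0.0, 0.0)          # linear encoding: one nonzero eigenvalue
PTE = diag(b / 2, b / 2, 0.0)    # planar encoding: two equal eigenvalues
STE = diag(b / 3, b / 3, b / 3)  # spherical encoding: isotropic weighting

D_iso = diag(1.0, 1.0, 1.0)      # isotropic diffusion tensor (um^2/ms)
D_stick = diag(2.0, 0.0, 0.0)    # highly anisotropic "stick-like" compartment
```

For the isotropic tensor all three shapes give the same attenuation, while the anisotropic compartment separates them; that extra contrast is what combining encoding shapes contributes to the model fit.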
Citations: 7
Deep Network Anatomy Segmentation with Limited Annotations using Auxiliary Labels
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759488
A. Harouni, Hongzhi Wang, T. Syeda-Mahmood, D. Beymer
Deep convolutional neural networks (CNNs) have shown impressive performance in anatomy segmentation, coming close to state-of-the-art atlas-based segmentation methods while producing predictions roughly 20x faster. However, one of the main obstacles to further progress with CNNs is that their training requires a large amount of annotated data. This is a costly hurdle, as annotation is time-consuming and requires expensive medical expertise. The goal of this work is to reach state-of-the-art segmentation performance using the minimum amount of expensive manual annotation. Recent studies show that auxiliary segmentations can be used together with manual annotations to improve CNN learning. To make this learning scheme more effective, we propose an image selection algorithm that wisely chooses images for manual annotation so as to produce more accurate auxiliary segmentations, and a quality control algorithm that excludes poor-quality auxiliary segmentations from CNN training. We perform extensive experiments on a chest CT dataset, varying both the number of manual annotations used for the atlas-based methods and the number of auxiliary segmentations used to train the CNN. Our results show that a CNN trained with auxiliary segmentations achieves a higher Dice score (0.76 vs. 0.58) than one trained with only a few accurate manual segmentations. Moreover, when trained with 100 or more auxiliary segmentations, the CNN always outperforms the atlas-based method. Finally, when the single atlas used to produce auxiliary segmentations is carefully selected and the quality of the auxiliary segmentations is controlled, the trained CNN achieves a high average Dice of 0.72, versus 0.62 when a randomly selected image is used for manual annotation with all auxiliary segmentations.
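The Dice scores quoted above (0.76 vs. 0.58, 0.72 vs. 0.62) follow the standard overlap metric; a minimal generic sketch, not the authors' evaluation code:

```python
def dice(pred, truth):
    """Dice overlap between two binary masks given as flat 0/1 sequences:
    2 * |pred & truth| / (|pred| + |truth|)."""
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0
```

A score of 1.0 means perfect agreement; disjoint masks score 0.0.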
Citations: 0
Uncertainty-Aware Artery/Vein Classification on Retinal Images
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759380
A. Galdran, Maria Inês Meyer, P. Costa, A. Mendonça, A. Campilho
The automatic differentiation of retinal vessels into arteries and veins (A/V) is a highly relevant task within the field of retinal image analysis. However, due to limitations of retinal image acquisition devices, specialists can find it impossible to label certain vessels in eye fundus images. In this paper, we introduce a method that takes such uncertainty into account by design. To this end, we formulate the A/V classification task as a four-class segmentation problem, and a Convolutional Neural Network is trained to classify pixels into background, artery, vein, or uncertain classes. The resulting technique can directly provide pixelwise uncertainty estimates. In addition, instead of depending on a previously available vessel segmentation, the method automatically segments the vessel tree. Experimental results show performance comparable or superior to several recent A/V classification approaches. The proposed technique also attains state-of-the-art performance when evaluated on the task of vessel segmentation, generalizing to data that was not used during training, even with considerable differences in appearance and resolution.
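One common way to turn per-pixel four-class probabilities like those described above into a pixelwise uncertainty estimate is the entropy of the predicted distribution; this is an illustrative sketch, not necessarily the measure used in the paper:

```python
import math

def pixel_entropy(probs):
    """Shannon entropy (in nats) of one pixel's predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# Four classes: background, artery, vein, uncertain (example probabilities).
confident_pixel = [0.97, 0.01, 0.01, 0.01]   # network is sure: low entropy
ambiguous_pixel = [0.25, 0.25, 0.25, 0.25]   # maximally unsure: entropy = log(4)
```

High-entropy pixels flag locations where even a specialist might decline to commit to artery or vein.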
Citations: 38
Improving 3D MA-TIRF Reconstruction with Deconvolution and Background Estimation
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759103
Emmanuel Soubies, L. Blanc-Féraud, S. Schaub, E. Obberghen-Schilling
Total internal reflection fluorescence microscopy (TIRF) produces 2D images of the fluorescent activity integrated over a very thin layer adjacent to the glass coverslip. By varying the illumination angle (multi-angle TIRF), a stack of 2D images is acquired from which it is possible to estimate the axial position of the observed biological structures. Due to its unique optical sectioning capability, this technique is ideal for observing and studying biological processes in the vicinity of the cell membrane. In this paper, we propose an efficient reconstruction algorithm for multi-angle TIRF microscopy which accounts for both the PSF of the acquisition system (diffraction) and the background signal (e.g., autofluorescence). It jointly performs volume reconstruction, deconvolution, and background estimation. The algorithm, based on the simultaneous-direction method of multipliers (SDMM), relies on a suitable splitting of the optimization problem that yields closed-form solutions at each step. Finally, numerical experiments reveal the importance of incorporating the background signal into the reconstruction process, which reinforces the relevance of the proposed approach.
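The angle dependence that MA-TIRF exploits comes from standard TIRF physics: above the critical angle, the evanescent field decays over a penetration depth d(θ) = λ / (4π √(n₁² sin²θ − n₂²)). The sketch below illustrates that relation only (it is not the paper's reconstruction algorithm), with assumed refractive indices and wavelength:

```python
import math

def penetration_depth(theta_deg, wavelength_nm=488.0, n1=1.52, n2=1.33):
    """Evanescent-field penetration depth (nm) for an illumination angle above
    the critical angle; n1 is the glass coverslip, n2 the aqueous medium."""
    s = n1 * math.sin(math.radians(theta_deg))
    if s <= n2:
        raise ValueError("below the critical angle: no total internal reflection")
    return wavelength_nm / (4.0 * math.pi * math.sqrt(s * s - n2 * n2))
```

Steeper angles probe a thinner layer, which is how varying the angle encodes axial position in the acquired stack.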
Citations: 2
Deformable Registration of Whole Brain Zebrafish Microscopy Using an Implementation of the Flash Algorithm Within Ants
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759549
G. Fleishman, Miaomiao Zhang, N. Tustison, Isabel Espinosa-Medina, Yu Mu, Khaled Khairy, M. Ahrens
Recent advancements in microscopy, protein engineering, and genetics have rendered the larval zebrafish a powerful model system in which whole-brain, real-time functional neuroimaging at cellular resolution is accessible. Supplementing functional data with additional modalities in the same fish, such as structural connectivity and transcriptomics, will enable interpretation of structure-function relationships across the entire brains of individual animals. However, proper identification of corresponding cells in the large image volumes produced depends on accurate and efficient deformable registration. To address this challenge, we implemented the Fourier-approximated Lie Algebras for Shooting (FLASH) algorithm within the well-known Advanced Normalization Tools (ANTs) package. This combines the speed of FLASH with the extensive set of image-matching functionals and the multi-stage, multi-resolution capabilities of ANTs. We registered longitudinal data from nine fish, using a line that uniquely identifies subsets of neurons in an independent channel. We validate our approach by demonstrating accurate cell-to-cell correspondence while requiring significantly less time and memory than the Symmetric Normalization (SyN) implementation in ANTs, and without compromising the theoretical foundations of the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model.
Citations: 0
Deforming Tessellations For The Segmentation Of Cell Aggregates
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759311
A. Badoual, A. Galan, D. Sage, M. Unser
We present a new active contour for segmenting cell aggregates. We describe it by a smooth tessellation that is attracted toward the cell membranes. Our approach relies on subdivision schemes that are tightly linked to the theory of wavelets. The shape is encoded by control points grouped in tiles. The smooth, continuously defined boundary of each tile is generated by recursively applying a refinement process to its control points. We deform the smooth tessellation in a global manner using a ridge-based energy that we have designed for this purpose. By construction, cells are segmented without overlap, and the tessellation structure is maintained even on dim membranes. Leakage, which afflicts common image-processing methods (e.g., watershed), is thus prevented. We validate our framework on both synthetic and real microscopy images, showing that the proposed method is robust to membrane gaps and to high levels of noise.
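A minimal example of the recursive-refinement idea described above (control points generating a smooth, continuously defined closed boundary) is Chaikin's corner-cutting scheme; the paper's wavelet-linked subdivision scheme may differ, so treat this as an illustrative sketch:

```python
def chaikin_step(points):
    """One corner-cutting refinement of a closed control polygon: each edge
    (p, q) is replaced by points at 1/4 and 3/4 along it, doubling the count."""
    refined = []
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return refined

# Repeated refinement of a square converges to a smooth closed curve.
contour = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
for _ in range(3):
    contour = chaikin_step(contour)
```

Deforming the coarse control points and re-running the refinement is what lets an active contour stay smooth while it moves.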
Citations: 0
Ellipse Detection of Optic Disc-and-Cup Boundary in Fundus Images
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759173
Zeya Wang, Nanqing Dong, Sean D. Rosario, Min Xu, P. Xie, E. Xing
Glaucoma is an eye disease that damages the optic nerve and leads to loss of vision. The diagnosis of glaucoma involves measuring the cup-to-disc ratio in retinal fundus images, which makes detecting the optic disc-and-cup boundary a crucial task for glaucoma screening. Most existing computer-aided diagnosis (CAD) systems focus on segmentation approaches and ignore localization approaches, which require less human annotation effort. In this paper, we propose a deep learning-based framework to jointly localize ellipses for the optic disc (OD) and optic cup (OC) regions. Instead of detecting a bounding box as in most object detection approaches, we directly estimate the parameters of an ellipse that suffices to capture the morphology of each OD and OC region for calculating the cup-to-disc ratio. We use two modules to detect the ellipses for the OD and OC regions, where the OD region serves as an attention cue for the OC region. The proposed framework achieves competitive results against state-of-the-art segmentation methods with less supervision. We empirically evaluate our framework against recent state-of-the-art segmentation models in two scenarios, where the training data and test data come from the same and from different domains.
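Once ellipse parameters are available, the cup-to-disc ratio follows directly. The sketch below assumes a hypothetical (semi-axis a, semi-axis b, rotation φ) parameterization and the common vertical-diameter definition of the CDR; the paper's exact parameterization is not specified here:

```python
import math

def vertical_extent(a, b, phi):
    """Full extent along the vertical axis of an ellipse with semi-axes a, b
    rotated by angle phi (radians)."""
    return 2.0 * math.sqrt((a * math.sin(phi)) ** 2 + (b * math.cos(phi)) ** 2)

def cup_to_disc_ratio(cup, disc):
    """Vertical CDR from two (a, b, phi) ellipse parameter triples."""
    return vertical_extent(*cup) / vertical_extent(*disc)
```

This is why estimating a handful of ellipse parameters can replace a full pixelwise segmentation for screening purposes.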
Citations: 21
Cancer Detection in Mass Spectrometry Imaging Data by Recurrent Neural Networks
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759571
F. G. Zanjani, Andreas Panteli, S. Zinger, F. V. D. Sommen, T. Tan, Benjamin Balluff, D. Vos, S. Ellis, R. Heeren, M. Lucas, H. Marquering, Ivo G. H. Jansen, C. D. Savci-Heijink, D. M. Bruin, P. D. With
Mass spectrometry imaging (MSI) reveals the localization of a broad range of compounds, from metabolites to proteins, in biological tissues. This makes MSI an attractive tool in biomedical research for studying diseases. Computer-aided diagnosis (CAD) systems facilitate the analysis of the molecular profile in tumor tissues to provide a distinctive fingerprint for finding biomarkers. In this paper, the performance of recurrent neural networks (RNNs) is studied on MSI data to exploit their capability of learning irregular patterns and dependencies in sequential data. In order to design a better CAD model for tumor detection/classification, several configurations of Long Short-Term Memory (LSTM) networks are examined. The proposed model consists of a 2-layer bidirectional LSTM, each layer containing 100 LSTM units. The proposed RNN model outperforms the state-of-the-art CNN model, with 1.87% and 1.45% higher accuracy in mass spectra classification on the lung and bladder cancer datasets, respectively, and a sixfold faster training time.
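The gating mechanism inside each of the LSTM units mentioned above can be sketched for a scalar toy cell as follows (illustrative only; the paper's model is a 2-layer bidirectional LSTM with 100 units per layer, and the weights here are arbitrary):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h, c, w):
    """One LSTM cell update for scalar input and state. `w` maps each gate
    name ('i', 'f', 'o', 'g') to a (w_x, w_h, bias) triple."""
    gate = lambda k, act: act(w[k][0] * x + w[k][1] * h + w[k][2])
    i = gate('i', sigmoid)    # input gate: how much new content to write
    f = gate('f', sigmoid)    # forget gate: how much old state to keep
    o = gate('o', sigmoid)    # output gate: how much state to expose
    g = gate('g', math.tanh)  # candidate cell content
    c_new = f * c + i * g     # gated cell-state update
    h_new = o * math.tanh(c_new)
    return h_new, c_new
```

The forget gate is what lets the cell carry information across long spectra, which is the dependency-learning capability the paper exploits.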
Citations: 4
Semi-Supervised Learning For Cardiac Left Ventricle Segmentation Using Conditional Deep Generative Models as Prior
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759292
M. Jafari, H. Girgis, A. Abdi, Zhibin Liao, Mehran Pesteie, R. Rohling, K. Gin, T. Tsang, P. Abolmaesumi
Accurate segmentation of the left ventricle (LV) in apical four-chamber echocardiography cine is a key step in cardiac function assessment. As part of their clinical workflow, cardiologists coarsely annotate two frames in the cardiac cycle, namely the end-diastolic and end-systolic frames, limiting the annotated data to less than 5% of the frames in the cycle. In this paper, we propose a semi-supervised learning algorithm that leverages the unlabeled data to improve the performance of LV segmentation algorithms. The approach is based on a generative model that learns an inverse mapping from segmentation masks to their corresponding echo frames. This generator is then used as a critic to assess and improve the LV segmentation mask produced by a given segmentation algorithm such as U-Net. The semi-supervised approach enforces a prior on the segmentation model based on the perceptual similarity of the generated frame to the original frame. This promotes utilization of the unlabeled samples, which, in turn, improves segmentation accuracy.
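The abstract above describes the generator-based prior only informally. A minimal sketch of how such a prior could be combined with a supervised term is given below; the use of pixel-wise MSE as a stand-in for the perceptual term, the Dice form of the supervised loss, and the weight `lam` are all assumptions for illustration, not the authors' formulation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted mask and a ground-truth mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def perceptual_loss(frame, regenerated):
    """Stand-in for the perceptual term: pixel-wise MSE between the original
    echo frame and the frame the generator produces from the predicted mask."""
    return float(np.mean((frame - regenerated) ** 2))

def semi_supervised_loss(pred_mask, gt_mask, frame, regenerated, lam=0.1):
    """Supervised Dice term (labeled frames only) plus the generator-based prior."""
    prior = lam * perceptual_loss(frame, regenerated)
    if gt_mask is None:                 # unlabeled frame: prior term alone
        return prior
    return dice_loss(pred_mask, gt_mask) + prior

pred = np.array([[0.9, 0.1], [0.8, 0.0]])   # toy soft prediction
gt = np.array([[1.0, 0.0], [1.0, 0.0]])     # toy ground-truth mask
frame = np.zeros((2, 2))
regen = np.zeros((2, 2))                    # a perfect regeneration: zero prior
print(round(semi_supervised_loss(pred, gt, frame, regen), 3))
```

For the roughly 95% of frames without annotation, `gt_mask` is `None` and only the prior term contributes, which is how the unlabeled data enters training in this sketch.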
Citations: 27
Accurate Segmentation of Dental Panoramic Radiographs with U-NETS
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759563
T. L. Koch, Mathias Perslev, C. Igel, Sami Sebastian Brandt
Fully convolutional neural networks (FCNs) have proven to be powerful tools for medical image segmentation. We apply an FCN based on the U-Net architecture to the challenging task of semantic segmentation of dental panoramic radiographs and discuss general techniques for improving segmentation performance, among them network ensembling, test-time augmentation, exploitation of data symmetry, and bootstrapping of low-quality annotations. The performance of our approach was tested on a highly variable dataset of 1500 dental panoramic radiographs. A single network trained on 1201 images reached a Dice score of 0.934; forming an ensemble increased the score to 0.936.
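The Dice score and ensembling mentioned above can be illustrated with a short, self-contained sketch. The averaging-then-thresholding ensemble rule and the toy masks are assumptions for illustration; the abstract does not specify how the ensemble combines predictions:

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """Dice overlap between two binary masks (1.0 means perfect agreement)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def ensemble_predict(prob_maps, threshold=0.5):
    """Average the per-model probability maps, then threshold to a binary mask."""
    return np.mean(prob_maps, axis=0) >= threshold

# Toy example: three "models" voting on a 2x2 mask.
maps = np.array([
    [[0.9, 0.2], [0.8, 0.1]],
    [[0.7, 0.4], [0.9, 0.2]],
    [[0.6, 0.1], [0.4, 0.3]],
])
mask = ensemble_predict(maps)
truth = np.array([[1, 0], [1, 0]], dtype=bool)
print(dice_score(mask, truth))
```

Averaging probability maps before thresholding lets confident models outvote uncertain ones per pixel, which is one common way an ensemble can edge out a single network on a metric like Dice.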
Citations: 59