
Latest publications from the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)

Comparison of Different Tensor Encoding Combinations in Microstructural Parameter Estimation
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759100
M. Afzali, C. Tax, C. Chatziantoniou, Derek K. Jones
Diffusion-weighted magnetic resonance imaging is a noninvasive tool for investigating brain white matter microstructure. It provides the information needed to estimate compartmental diffusion parameters. Several studies in the literature have shown that there is degeneracy in the parameters estimated using traditional linear diffusion encoding (Stejskal-Tanner pulsed gradient spin echo). Multiple strategies have been proposed to resolve this degeneracy; however, it is not clear whether those methods solve the problem completely. One such approach is b-tensor encoding. Combinations of linear and spherical tensor encoding (LTE+STE) and of linear and planar tensor encoding (LTE+PTE) have been used in previous work to stabilize the estimates. In this paper, we compare the results of fitting a two-compartment model using different combinations of b-tensor encoding. Four combinations are compared: linear-spherical (LTE+STE), linear-planar (LTE+PTE), planar-spherical (PTE+STE), and linear-planar-spherical (LTE+PTE+STE). The results show that combinations of tensor encodings lead to lower bias and higher precision in the parameter estimates than a single tensor encoding.
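As a concrete sketch of the forward model being compared, the snippet below builds axisymmetric b-tensors for the three encoding shapes and evaluates a two-compartment signal. The diffusion tensors and parameter values are illustrative only, not taken from the paper:

```python
import math

def btensor(b, shape):
    """3x3 b-tensor with total b-value b for a given encoding shape.
    Axis of symmetry is z; shape is 'linear', 'planar', or 'spherical'."""
    if shape == "linear":        # all weighting along one axis (Stejskal-Tanner)
        d = [0.0, 0.0, b]
    elif shape == "planar":      # weighting spread over a plane
        d = [b / 2, b / 2, 0.0]
    elif shape == "spherical":   # isotropic weighting
        d = [b / 3, b / 3, b / 3]
    else:
        raise ValueError("unknown encoding shape")
    return [[d[0], 0, 0], [0, d[1], 0], [0, 0, d[2]]]

def signal(B, f, D_in, D_ex):
    """Two-compartment signal S = f*exp(-B:D_in) + (1-f)*exp(-B:D_ex),
    where B:D is the full tensor contraction."""
    def contract(B, D):
        return sum(B[i][j] * D[j][i] for i in range(3) for j in range(3))
    return f * math.exp(-contract(B, D_in)) + (1 - f) * math.exp(-contract(B, D_ex))

# Illustrative diffusion tensors (units: um^2/ms): a stick-like and an isotropic one
D_stick = [[0, 0, 0], [0, 0, 0], [0, 0, 2.0]]
D_iso = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
```

For an isotropic compartment the three shapes produce identical attenuation at equal b-value, while for an anisotropic (stick-like) compartment they differ; that difference is the extra information a combination of encodings contributes to the fit.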
Citations: 7
Deep Network Anatomy Segmentation with Limited Annotations using Auxiliary Labels
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759488
A. Harouni, Hongzhi Wang, T. Syeda-Mahmood, D. Beymer
Deep convolutional neural networks (CNNs) have shown impressive performance in anatomy segmentation, close to that of state-of-the-art atlas-based segmentation methods, while producing predictions roughly 20x faster. However, one of the main obstacles to the advancement of CNNs is that their training requires a large amount of annotated data. This is a costly hurdle, as annotation is time consuming and requires expensive medical expertise. The goal of this work is to reach state-of-the-art segmentation performance using the minimum amount of expensive manual annotation. Recent studies show that auxiliary segmentations can be used together with manual annotations to improve CNN learning. To make this learning scheme more effective, we propose an image selection algorithm that wisely chooses images for manual annotation so as to produce more accurate auxiliary segmentations, and a quality control algorithm that excludes poor-quality auxiliary segmentations from CNN training. We perform extensive experiments on a chest CT dataset, varying the number of manual annotations used for the atlas-based methods and the number of auxiliary segmentations used to train the CNN. Our results show that a CNN trained with auxiliary segmentations achieves a higher Dice score of 0.76 vs. 0.58 when only a few accurate manual segmentations are available. Moreover, when trained with 100 or more auxiliary segmentations, the CNN always outperforms the atlas-based method. Finally, when a single atlas is carefully selected for producing the auxiliary segmentations and their quality is controlled, the trained CNN achieves a higher average Dice score of 0.72 vs. 0.62 compared with using a randomly selected image for manual annotation with all auxiliary segmentations.
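The paper's exact quality-control criterion is not reproduced here, but its role can be illustrated with a minimal sketch in which each auxiliary segmentation is scored by Dice overlap against a reference mask and discarded below a threshold; the function names and the 0.6 threshold are hypothetical:

```python
def dice(a, b):
    """Dice overlap between two binary masks given as sets of pixel indices."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def filter_auxiliary(aux_masks, reference, threshold=0.6):
    """Keep auxiliary segmentations whose Dice against a reference mask
    reaches `threshold`; the rest are excluded from CNN training."""
    return [m for m in aux_masks if dice(m, reference) >= threshold]
```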
Citations: 0
Uncertainty-Aware Artery/Vein Classification on Retinal Images
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759380
A. Galdran, Maria Inês Meyer, P. Costa, A. Mendonça, A. Campilho
The automatic differentiation of retinal vessels into arteries and veins (A/V) is a highly relevant task within the field of retinal image analysis. However, due to limitations of retinal image acquisition devices, specialists can find it impossible to label certain vessels in eye fundus images. In this paper, we introduce a method that accounts for such uncertainty by design. For this, we formulate the A/V classification task as a four-class segmentation problem, and a Convolutional Neural Network is trained to classify pixels into background, artery, vein, or uncertain classes. The resulting technique can directly provide pixelwise uncertainty estimates. In addition, instead of depending on a previously available vessel segmentation, the method automatically segments the vessel tree. Experimental results show performance comparable or superior to that of several recent A/V classification approaches. Furthermore, the proposed technique attains state-of-the-art performance when evaluated on the task of vessel segmentation, generalizing to data not used during training, even with considerable differences in appearance and resolution.
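A minimal sketch of how a four-class formulation yields pixelwise uncertainty: softmax probabilities per pixel, with normalized entropy as one possible uncertainty score. The entropy choice is illustrative; the paper's exact estimate may differ:

```python
import math

CLASSES = ["background", "artery", "vein", "uncertain"]

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def pixel_prediction(logits):
    """Return (class label, normalized entropy in [0, 1]) for one pixel.
    Entropy near 1 means the network is maximally unsure."""
    p = softmax(logits)
    entropy = -sum(q * math.log(q) for q in p if q > 0)
    return CLASSES[p.index(max(p))], entropy / math.log(len(p))
```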
Citations: 38
Improving 3D MA-TIRF Reconstruction with Deconvolution and Background Estimation
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759103
Emmanuel Soubies, L. Blanc-Féraud, S. Schaub, E. Obberghen-Schilling
Total internal reflection fluorescence microscopy (TIRF) produces 2D images of the fluorescent activity integrated over a very thin layer adjacent to the glass coverslip. By varying the illumination angle (multi-angle TIRF), a stack of 2D images is acquired from which it is possible to estimate the axial position of the observed biological structures. Due to its unique optical-sectioning capability, this technique is ideal for observing and studying biological processes in the vicinity of the cell membrane. In this paper, we propose an efficient reconstruction algorithm for multi-angle TIRF microscopy which accounts for both the PSF of the acquisition system (diffraction) and the background signal (e.g., autofluorescence). It jointly performs volume reconstruction, deconvolution, and background estimation. The algorithm, based on the simultaneous-direction method of multipliers (SDMM), relies on a suitable splitting of the optimization problem which yields closed-form solutions at each step. Finally, numerical experiments reveal the importance of incorporating the background signal into the reconstruction process, which reinforces the relevance of the proposed approach.
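The multi-angle acquisition exploits the evanescent excitation field, whose decay depth depends on the illumination angle. Below is a sketch using the standard TIRF penetration-depth formula d(θ) = λ / (4π √(n₁² sin²θ − n₂²)); the wavelength and refractive indices are typical values (glass/water), not taken from the paper:

```python
import math

def penetration_depth(theta_deg, wavelength_nm=488.0, n1=1.515, n2=1.33):
    """Evanescent-field penetration depth d(theta) in nm for TIRF,
    valid above the critical angle theta_c = asin(n2 / n1)."""
    theta = math.radians(theta_deg)
    s = (n1 * math.sin(theta)) ** 2 - n2 ** 2
    if s <= 0:
        raise ValueError("angle below critical angle: no total internal reflection")
    return wavelength_nm / (4.0 * math.pi * math.sqrt(s))

def intensity(z_nm, theta_deg):
    """Relative excitation intensity at depth z for illumination angle theta:
    an exponential decay with the angle-dependent penetration depth."""
    return math.exp(-z_nm / penetration_depth(theta_deg))
```

Steeper angles give shallower penetration depths, so a stack of images at different angles samples the axial density with different exponential weights; this is the forward model the joint reconstruction inverts.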
Citations: 2
Deformable Registration of Whole Brain Zebrafish Microscopy Using an Implementation of the FLASH Algorithm Within ANTs
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759549
G. Fleishman, Miaomiao Zhang, N. Tustison, Isabel Espinosa-Medina, Yu Mu, Khaled Khairy, M. Ahrens
Recent advancements in microscopy, protein engineering, and genetics have rendered the larval zebrafish a powerful model system in which whole-brain, real-time, functional neuroimaging at cellular resolution is accessible. Supplementing functional data with additional modalities in the same fish, such as structural connectivity and transcriptomics, will enable interpretation of structure-function relationships across the entire brains of individual animals. However, proper identification of corresponding cells in the large image volumes produced depends on accurate and efficient deformable registration. To address this challenge, we implemented the Fourier-approximated Lie Algebras for Shooting (FLASH) algorithm within the well-known Advanced Normalization Tools (ANTs) package. This combines the speed of FLASH with the extensive set of image-matching functionals and the multi-stage, multi-resolution capabilities of ANTs. We registered longitudinal data from nine fish, using a line that uniquely identifies subsets of neurons in an independent channel. We validate our approach by demonstrating accurate cell-to-cell correspondence while requiring significantly less time and memory than the Symmetric Normalization (SyN) implementation in ANTs, and without compromising the theoretical foundations of the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model.
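FLASH gains its speed by representing LDDMM velocity fields in a truncated (band-limited) Fourier basis, so the optimization runs over far fewer coefficients than voxels. The 1D sketch below illustrates that core idea with an explicit DFT; it is a toy reduction for clarity, not the paper's 3D implementation:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def bandlimit(v, keep):
    """Project a sampled velocity field v onto the `keep` lowest frequencies,
    zeroing all higher-frequency Fourier coefficients."""
    V = dft(v)
    n = len(V)
    for k in range(n):
        freq = min(k, n - k)  # unsigned frequency magnitude for index k
        if freq >= keep:
            V[k] = 0.0
    return idft(V)
```

Smooth deformations survive the truncation essentially unchanged, while high-frequency components, which a diffeomorphic model penalizes anyway, are dropped along with their storage and compute cost.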
Citations: 0
Deforming Tessellations for the Segmentation of Cell Aggregates
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759311
A. Badoual, A. Galan, D. Sage, M. Unser
We present a new active contour to segment cell aggregates. We describe it by a smooth tessellation that is attracted toward the cell membranes. Our approach relies on subdivision schemes that are tightly linked to the theory of wavelets. The shape is encoded by control points grouped in tiles. The smooth and continuously defined boundary of each tile is generated by recursively applying a refinement process to its control points. We deform the smooth tessellation in a global manner using a ridge-based energy that we have designed for that purpose. By construction, cells are segmented without overlap and the tessellation structure is maintained even on dim membranes. Leakage, which afflicts usual image-processing methods (e.g., watershed), is thus prevented. We validate our framework on both synthetic and real microscopy images, showing that the proposed method is robust to membrane gaps and to high levels of noise.
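The recursive refinement of tile control points can be illustrated with a classic interpolatory subdivision scheme; the four-point rule below is a generic example, since the specific scheme used in the paper is not reproduced here:

```python
def refine(points):
    """One round of four-point interpolatory subdivision on a closed contour.
    Keeps every control point and inserts a new point on each edge using
    the classic weights 9/16 and -1/16."""
    n = len(points)
    out = []
    for i in range(n):
        p0 = points[(i - 1) % n]
        p1 = points[i]
        p2 = points[(i + 1) % n]
        p3 = points[(i + 2) % n]
        out.append(p1)  # original point is kept (interpolatory scheme)
        new = tuple(9.0 / 16.0 * (a + b) - 1.0 / 16.0 * (c + d)
                    for a, b, c, d in zip(p1, p2, p0, p3))
        out.append(new)
    return out
```

Applying `refine` repeatedly converges to a smooth limit curve that passes through the original control points, which is what lets a coarse tile encode a continuously defined boundary.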
Citations: 0
Fully Automatic Segmentation of Short-Axis Cardiac MRI Using Modified Deep Layer Aggregation
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759516
Zhongyu Li, Yixuan Lou, Zhennan Yan, S. Al’Aref, J. Min, L. Axel, Dimitris N. Metaxas
Delineation of the right ventricular cavity (RVC), left ventricular myocardium (LVM), and left ventricular cavity (LVC) is a common task in the clinical diagnosis of cardiac diseases, especially on the basis of advanced magnetic resonance imaging (MRI) techniques. Although deep learning techniques are widely employed to solve segmentation tasks in a variety of medical images, the sheer volume and complexity of the data in some applications, such as cine cardiac MRI, pose significant challenges for accurate and efficient segmentation. In cine cardiac MRI, both short-axis and long-axis 2D images need to be segmented. In this paper, we focus on the automated segmentation of short-axis cardiac MRI images. We first introduce the deep layer aggregation (DLA) method, which augments the standard deep learning architecture with deeper aggregation to better fuse information across layers; this is particularly suitable for cardiac MRI segmentation, given the complexity of cardiac boundary appearance and acquisition resolution over a cardiac cycle. In our solution, we develop a modified DLA framework by embedding a Refinement Residual Block (RRB) and a Channel Attention Block (CAB). Experimental results validate the superior performance of our proposed method for cardiac structure segmentation in comparison with the state of the art. Moreover, we demonstrate a potential use case in the quantitative analysis of cardiac dyssynchrony.
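As an illustration of the Channel Attention Block idea, the sketch below rescales each feature channel by a sigmoid gate computed from its global average (squeeze-and-excitation style); the paper's CAB may differ in its exact parameterization, and the weights here are hypothetical:

```python
import math

def channel_attention(feature_maps, gate_weights, gate_bias):
    """feature_maps: list of 2D channels (each a list of rows of floats).
    Computes one sigmoid gate per channel from its global average, then
    rescales the channel: out_c = sigmoid(w_c * avg_c + b_c) * x_c."""
    out = []
    for c, fmap in enumerate(feature_maps):
        n = sum(len(row) for row in fmap)
        avg = sum(v for row in fmap for v in row) / n  # global average pool
        gate = 1.0 / (1.0 + math.exp(-(gate_weights[c] * avg + gate_bias[c])))
        out.append([[gate * v for v in row] for row in fmap])
    return out
```

The gate lets the network learn to amplify channels carrying boundary-relevant responses and suppress the rest before the aggregated features are fused.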
Citations: 8
Semi-Automatic Cell Segmentation from Noisy Image Data for Quantification of Microtubule Organization on Single Cell Level
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759145
B. Möller, K. Bürstenbinder
The structure of the microtubule cytoskeleton provides valuable information related to the morphogenesis of cells. The cytoskeleton organizes into diverse patterns that vary across cells of different types and tissues, but also within a single tissue. To assess differences in cytoskeleton organization, methods are needed that quantify cytoskeleton patterns within a complete cell and that are suitable for large data sets. A major bottleneck in most approaches, however, is the lack of techniques for automatic extraction of cell contours. Here, we present a semi-automatic pipeline for cell segmentation and quantification of microtubule organization. Automatic methods are applied to extract major parts of the contours, and a handy image editor is provided to add missing information manually and efficiently. Experimental results show that our approach yields high-quality contour data with minimal user intervention and provides a suitable basis for subsequent quantitative studies.
引用次数: 1
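The abstract describes automatic contour extraction followed by manual editing, but gives no algorithmic detail. As an illustration only, here is a minimal sketch of one plausible boundary-extraction step (thresholding plus 4-neighbour erosion); the function name and parameters are hypothetical, not the authors' method.

```python
import numpy as np

# Hypothetical sketch of an automatic contour-extraction step:
# threshold a noisy intensity image, then mark boundary pixels of the
# resulting mask as "mask minus its 4-neighbour erosion".

def extract_contour(image, threshold):
    mask = image > threshold
    # Pad so that border pixels have out-of-image neighbours set to False.
    padded = np.pad(mask, 1, constant_values=False)
    # A pixel survives erosion only if it and all 4 neighbours are in the mask.
    eroded = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
              & padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~eroded

img = np.zeros((6, 6))
img[1:5, 1:5] = 1.0               # a bright 4x4 "cell" on a dark background
contour = extract_contour(img, 0.5)
print(int(contour.sum()))         # number of boundary pixels
```

For the 4x4 square above, only the inner 2x2 block survives erosion, so the 12 outline pixels are returned as the contour.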
JOint Shape Matching for Overlapping Cytoplasm Segmentation in Cervical Smear Images
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759259
Youyi Song, J. Qin, Baiying Lei, Shengfeng He, K. Choi
We present a novel and effective approach to segmenting overlapping cytoplasm of cells in cervical smear images. Instead of simply combining individual cytoplasm shape information with the intensity or color information for the segmentation, our approach aims at simultaneously matching an accurate shape template for each cytoplasm in a whole clump. There are two main technical contributions. First, we present a novel shape similarity measure that supports shape template matching without clump splitting, allowing us to leverage more shape information, not only from the cytoplasm itself but also from the whole clump. Second, we propose an effective objective function for joint shape template matching based on our shape similarity measure; unlike individual matching, our method is able to exploit more shape constraints. We extensively evaluate our method on two typical cervical smear data sets. Experimental results show that our method outperforms the state-of-the-art methods in terms of segmentation accuracy.
Citations: 5
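The paper's shape similarity measure and joint objective are not specified in the abstract. As a stand-in only, the sketch below scores a shape template against a clump mask by intersection-over-union over a grid of translations; it is not the authors' measure, and all names are hypothetical.

```python
import numpy as np

# Minimal stand-in for shape-template matching against a clump mask:
# slide the template over the clump and keep the translation with the
# best intersection-over-union (IoU) score.

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def best_match(clump, template):
    """Slide `template` over `clump`; return the best IoU score and offset."""
    H, W = clump.shape
    h, w = template.shape
    best = (0.0, (0, 0))
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            window = clump[dy:dy + h, dx:dx + w]
            score = iou(window, template)
            if score > best[0]:
                best = (score, (dy, dx))
    return best

clump = np.zeros((8, 8), dtype=bool)
clump[2:6, 2:6] = True            # a 4x4 square "clump"
template = np.ones((4, 4), dtype=bool)
score, offset = best_match(clump, template)
print(score, offset)
```

A joint formulation in the spirit of the paper would optimize such scores over all cytoplasms in a clump simultaneously, rather than matching each template independently as here.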
Spreading Model for Patients with Parkinson’s Disease Based on Connectivity Differences
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759542
A. Crimi, E. Kara
Parkinson’s disease is a neurodegenerative disease characterized by the progressive development of α-synuclein pathology across the brain. To better understand the disruption of neuronal networks in Parkinson’s disease and its relation to the spread of α-synuclein, advanced descriptors from neuroimaging can be used to complement histopathological analyses and in vitro and mouse experimental models. It is yet to be understood whether the course of Parkinson’s disease affects the structural brain network, or, conversely, whether some subjects have specific structural connections which facilitate the transmission of the pathology. In this paper we investigate whether there are differences between the connectomes of Parkinson’s disease patients and healthy controls. Moreover, we evaluate a computational model to simulate the spread of α-synuclein across neuronal networks in patients with Parkinson’s disease, quantifying which areas could be the most affected by the disease.
Citations: 0
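The abstract does not specify the paper's computational model. A common choice for simulating pathology spread over a connectome is network diffusion via the graph Laplacian (in the spirit of Raj et al.'s network-diffusion model); the sketch below uses that formulation with a toy adjacency matrix, and all parameters are hypothetical.

```python
import numpy as np

# Hypothetical network-diffusion sketch of pathology spread over a connectome.
# Regional pathology load x evolves by dx/dt = -beta * L @ x, where L is the
# graph Laplacian of the structural connectivity matrix; integrated here with
# simple Euler steps.

def spread(adjacency, seed, beta=0.1, dt=0.01, steps=1000):
    """Simulate diffusion of pathology from a seed region over a weighted graph."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    x = np.zeros(adjacency.shape[0])
    x[seed] = 1.0  # initial pathology load concentrated in the seed region
    for _ in range(steps):
        x = x - dt * beta * (laplacian @ x)
    return x

# Toy 3-region connectome: region 0 strongly connected to 1, weakly to 2.
A = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.5],
              [0.1, 0.5, 0.0]])
load = spread(A, seed=0)
print(load)
```

Because the Laplacian's columns sum to zero, the total pathology load is conserved, and strongly connected regions (here region 1) accumulate more load than weakly connected ones (region 2), which is the kind of region-level ranking the paper's quantification suggests.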
Journal: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)