Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759100
M. Afzali, C. Tax, C. Chatziantoniou, Derek K. Jones
Diffusion-weighted magnetic resonance imaging is a noninvasive tool for investigating brain white matter microstructure, providing information from which compartmental diffusion parameters can be estimated. Several studies have shown that the parameters estimated with traditional linear diffusion encoding (Stejskal-Tanner pulsed gradient spin echo) are degenerate. Multiple strategies have been proposed to resolve this degeneracy, but it is not clear whether they solve the problem completely. One such approach is b-tensor encoding: combinations of linear-spherical (LTE+STE) and linear-planar (LTE+PTE) tensor encoding have previously been used to stabilize the estimates. In this paper, we compare the results of fitting a two-compartment model using four different combinations of b-tensor encoding: linear-spherical (LTE+STE), linear-planar (LTE+PTE), planar-spherical (PTE+STE), and linear-planar-spherical (LTE+PTE+STE). The results show that combining tensor encodings yields lower bias and higher precision in the parameter estimates than a single tensor encoding.
Title: Comparison of Different Tensor Encoding Combinations in Microstructural Parameter Estimation
Published in: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
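For a compartment with Gaussian diffusion tensor D, the b-tensor-encoded signal is S = exp(-B:D), where B:D is the elementwise (Frobenius) inner product, so a two-compartment model sums two such terms. A minimal sketch of how the three encoding shapes enter the signal; the diffusivities and signal fractions below are made-up illustrative values, not the paper's:

```python
import numpy as np

def b_tensor(b, shape):
    """Axisymmetric b-tensor (symmetry axis = z) with total b-value b.
    shape: 'LTE' (linear), 'PTE' (planar), or 'STE' (spherical)."""
    if shape == 'LTE':
        return b * np.diag([0.0, 0.0, 1.0])
    if shape == 'PTE':
        return b / 2 * np.diag([1.0, 1.0, 0.0])
    if shape == 'STE':
        return b / 3 * np.eye(3)
    raise ValueError(shape)

def signal(B, compartments):
    """Multi-compartment Gaussian signal S = sum_i f_i * exp(-B:D_i)."""
    return sum(f * np.exp(-np.sum(B * D)) for f, D in compartments)

# Illustrative two-compartment model: a 'stick' (zero radial diffusivity)
# and a 'zeppelin', both aligned with z; diffusivities in um^2/ms.
stick    = np.diag([0.0, 0.0, 2.0])
zeppelin = np.diag([0.5, 0.5, 2.0])
model = [(0.6, stick), (0.4, zeppelin)]

for shape in ('LTE', 'PTE', 'STE'):
    print(shape, signal(b_tensor(b=1.0, shape=shape), model))
```

Note that the spherical encoding depends only on the trace of each diffusion tensor, which is why adding STE (or PTE) measurements constrains parameter combinations that LTE alone cannot disentangle.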
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759488
A. Harouni, Hongzhi Wang, T. Syeda-Mahmood, D. Beymer
Deep convolutional neural networks (CNNs) have shown impressive performance in anatomy segmentation, close to that of state-of-the-art atlas-based segmentation methods, while producing predictions roughly 20x faster. However, one of the main obstacles to CNNs' advancement is that their training requires a large amount of annotated data. This is a costly hurdle, as annotation is time-consuming and requires expensive medical expertise. The goal of this work is to reach state-of-the-art segmentation performance using the minimum amount of expensive manual annotation. Recent studies show that auxiliary segmentations can be used together with manual annotations to improve CNN learning. To make this learning scheme more effective, we propose an image selection algorithm that wisely chooses images for manual annotation so as to produce more accurate auxiliary segmentations, and a quality control algorithm that excludes poor-quality auxiliary segmentations from CNN training. We perform extensive experiments on a chest CT dataset, varying both the number of manual annotations used by the atlas-based methods and the number of auxiliary segmentations used to train the CNN. Our results show that a CNN trained with auxiliary segmentations achieves a higher Dice score (0.76 vs. 0.58) than one trained with only a few accurate manual segmentations. Moreover, when trained with 100 or more auxiliary segmentations, the CNN always outperforms the atlas-based method. Finally, when a single atlas is carefully selected for producing auxiliary segmentations and their quality is controlled, the trained CNN achieves a high average Dice score of 0.72, vs. 0.62 when the image for manual annotation is selected at random and all auxiliary segmentations are used.
Title: Deep Network Anatomy Segmentation with Limited Annotations using Auxiliary Labels
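The Dice scores quoted above measure overlap between two segmentations. A minimal sketch of the metric, together with a hypothetical threshold-based filter of the kind a quality control step could apply; the threshold value and the use of a reference mask are assumptions for illustration, not the authors' actual algorithm:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

def quality_filter(auxiliary, reference, threshold=0.5):
    """Hypothetical QC step: keep only auxiliary segmentations whose Dice
    overlap with a reference mask exceeds a threshold."""
    return [m for m in auxiliary if dice(m, reference) >= threshold]
```

Usage: with `ref` a trusted mask and `[good, bad]` two candidate auxiliary masks, `quality_filter([good, bad], ref)` would drop any candidate whose overlap with `ref` falls below the threshold before CNN training.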
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759380
A. Galdran, Maria Inês Meyer, P. Costa, A. Mendonça, A. Campilho
The automatic differentiation of retinal vessels into arteries and veins (A/V) is a highly relevant task in retinal image analysis. However, due to limitations of retinal image acquisition devices, specialists can find it impossible to label certain vessels in eye fundus images. In this paper, we introduce a method that takes such uncertainty into account by design. To this end, we formulate A/V classification as a four-class segmentation problem, and a Convolutional Neural Network is trained to classify pixels into background, artery, vein, or uncertain classes. The resulting technique directly provides pixelwise uncertainty estimates. In addition, instead of depending on a previously available vessel segmentation, the method segments the vessel tree automatically. Experimental results show performance comparable or superior to several recent A/V classification approaches. The proposed technique also attains state-of-the-art performance when evaluated on the vessel segmentation task, generalizing to data not used during training even when it differs considerably in appearance and resolution.
Title: Uncertainty-Aware Artery/Vein Classification on Retinal Images
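The four-class output can be decoded into an A/V label map, a vessel-tree mask, and a pixelwise uncertainty map with a simple argmax. A sketch assuming an (H, W, 4) softmax output; the class ordering is an assumption for illustration:

```python
import numpy as np

# Assumed class indices for the four-way segmentation.
BACKGROUND, ARTERY, VEIN, UNCERTAIN = 0, 1, 2, 3

def decode(probs):
    """Turn an (H, W, 4) softmax map into per-pixel labels, a vessel-tree
    mask, and a pixelwise uncertainty estimate."""
    labels = np.argmax(probs, axis=-1)
    uncertainty = probs[..., UNCERTAIN]   # direct pixelwise uncertainty
    vessel = labels != BACKGROUND         # vessel tree = any non-background class
    return labels, vessel, uncertainty
```

This is why the method needs no separate vessel segmentation as input: the union of the artery, vein, and uncertain classes already yields the vessel tree.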
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759103
Emmanuel Soubies, L. Blanc-Féraud, S. Schaub, E. Obberghen-Schilling
Total internal reflection fluorescence microscopy (TIRF) produces 2D images of the fluorescent activity integrated over a very thin layer adjacent to the glass coverslip. By varying the illumination angle (multi-angle TIRF), a stack of 2D images is acquired from which it is possible to estimate the axial position of the observed biological structures. Due to its unique optical sectioning capability, this technique is ideal for observing and studying biological processes in the vicinity of the cell membrane. In this paper, we propose an efficient reconstruction algorithm for multi-angle TIRF microscopy that accounts for both the PSF of the acquisition system (diffraction) and the background signal (e.g., autofluorescence). It jointly performs volume reconstruction, deconvolution, and background estimation. The algorithm, based on the simultaneous-direction method of multipliers (SDMM), relies on a suitable splitting of the optimization problem that yields closed-form solutions at each step. Finally, numerical experiments show the importance of incorporating the background signal into the reconstruction process, which reinforces the relevance of the proposed approach.
Title: Improving 3D MA-TIRF Reconstruction with Deconvolution and Background Estimation
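Per pixel, a multi-angle TIRF acquisition can be modeled as a small linear system: each image integrates the fluorophore density against an exponentially decaying excitation profile whose penetration depth depends on the illumination angle, plus a constant background. A toy sketch that recovers density and background jointly by least squares, standing in for the paper's SDMM reconstruction; the depths, axial grid, and test profile are illustrative values:

```python
import numpy as np

# Penetration depths d_k (nm) for each illumination angle -- illustrative.
depths = np.array([80.0, 120.0, 180.0, 300.0])
z = np.arange(0.0, 500.0, 10.0)      # axial sampling grid (nm)
dz = z[1] - z[0]

# Forward model: I_k = sum_j exp(-z_j / d_k) * f(z_j) * dz + background.
A = np.exp(-z[None, :] / depths[:, None]) * dz      # (n_angles, n_z)
A_bg = np.hstack([A, np.ones((len(depths), 1))])    # last column = background

# Simulate a fluorophore layer centered near z = 100 nm with background 0.05,
# then jointly estimate density and background by least squares.
f_true = np.exp(-0.5 * ((z - 100.0) / 20.0) ** 2)
I = A @ f_true + 0.05
sol, *_ = np.linalg.lstsq(A_bg, I, rcond=None)
f_hat, bg_hat = sol[:-1], sol[-1]
```

With only a handful of angles the system is heavily underdetermined, which is exactly why the paper's regularized splitting formulation (rather than plain least squares) is needed in practice.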
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759549
G. Fleishman, Miaomiao Zhang, N. Tustison, Isabel Espinosa-Medina, Yu Mu, Khaled Khairy, M. Ahrens
Recent advances in microscopy, protein engineering, and genetics have made the larval zebrafish a powerful model system in which whole-brain, real-time functional neuroimaging at cellular resolution is accessible. Supplementing functional data with additional modalities in the same fish, such as structural connectivity and transcriptomics, will enable interpretation of structure-function relationships across the entire brains of individual animals. However, proper identification of corresponding cells in the large image volumes produced depends on accurate and efficient deformable registration. To address this challenge, we implemented the Fourier-approximated Lie Algebras for Shooting (FLASH) algorithm within the well-known Advanced Normalization Tools (ANTs) package. This combines the speed of FLASH with the extensive set of image-matching functionals and the multi-stage, multi-resolution capabilities of ANTs. We registered longitudinal data from nine fish, using a line that uniquely identifies subsets of neurons in an independent channel. We validate our approach by demonstrating accurate cell-to-cell correspondence while requiring significantly less time and memory than the Symmetric Normalization (SyN) implementation in ANTs, without compromising the theoretical foundations of the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model.
Title: Deformable Registration of Whole Brain Zebrafish Microscopy Using an Implementation of the Flash Algorithm Within Ants
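The core idea behind FLASH is to represent the LDDMM velocity fields in a truncated (bandlimited) Fourier basis, so shooting operates on a few low-frequency coefficients instead of a dense field, which is where the time and memory savings come from. A 1-D toy illustration of that truncation, not the actual FLASH implementation:

```python
import numpy as np

def bandlimit(v, k_keep):
    """Project a periodic 1-D velocity field onto its k_keep lowest
    frequencies -- the low-dimensional representation FLASH-style methods
    compute with."""
    V = np.fft.rfft(v)
    V[k_keep:] = 0.0
    return np.fft.irfft(V, n=len(v))

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
v = np.sin(x) + 0.3 * np.sin(3 * x)   # smooth field: only frequencies 1 and 3
v_lowdim = bandlimit(v, k_keep=8)     # 8 coefficients already represent it exactly
```

Because diffeomorphic registration regularizes velocities to be smooth, almost all of their energy sits in these low frequencies, so the truncation loses little accuracy.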
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759311
A. Badoual, A. Galan, D. Sage, M. Unser
We present a new active contour to segment cell aggregates. We describe it by a smooth tessellation that is attracted toward the cell membranes. Our approach relies on subdivision schemes that are tightly linked to the theory of wavelets. The shape is encoded by control points grouped in tiles. The smooth and continuously defined boundary of each tile is generated by recursively applying a refinement process to its control points. We deform the smooth tessellation in a global manner using a ridge-based energy that we have designed for that purpose. By construction, cells are segmented without overlap and the tessellation structure is maintained even on dim membranes. Leakage, which afflicts usual image-processing methods (e.g., watershed), is thus prevented. We validate our framework on both synthetic and real microscopy images, showing that the proposed method is robust to membrane gaps and to high levels of noise.
Title: Deforming Tessellations For The Segmentation Of Cell Aggregates
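Subdivision schemes of the kind the tessellation is built on turn a small set of control points into a smooth limit curve by repeated refinement. As a simple stand-in for the paper's wavelet-linked scheme, here is Chaikin's corner-cutting subdivision applied to a closed control polygon:

```python
import numpy as np

def chaikin(points, iterations=3):
    """Corner-cutting subdivision of a closed control polygon: each edge is
    replaced by two points at its 1/4 and 3/4 marks. Repeated refinement
    converges to a smooth limit curve."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        nxt = np.roll(pts, -1, axis=0)           # next control point (cyclic)
        q = 0.75 * pts + 0.25 * nxt
        r = 0.25 * pts + 0.75 * nxt
        pts = np.empty((2 * len(pts), 2))
        pts[0::2], pts[1::2] = q, r              # interleave the new points
    return pts

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
curve = chaikin(square, iterations=4)
```

Deforming only the coarse control points while rendering the refined curve is what lets such active contours stay smooth and keep their tile structure even over dim membranes.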
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759516
Zhongyu Li, Yixuan Lou, Zhennan Yan, S. Al’Aref, J. Min, L. Axel, Dimitris N. Metaxas
Delineation of the right ventricular cavity (RVC), left ventricular myocardium (LVM), and left ventricular cavity (LVC) is a common task in the clinical diagnosis of cardiac diseases, especially with advanced magnetic resonance imaging (MRI) techniques. Although deep learning techniques are now widely employed for segmentation tasks in a variety of medical images, the sheer volume and complexity of the data in applications such as cine cardiac MRI, where both short- and long-axis 2D images must be segmented, pose significant challenges for accurate and efficient segmentation. In this paper, we focus on automated segmentation of short-axis cardiac MRI images. We first introduce the deep layer aggregation (DLA) method, which augments a standard deep learning architecture with deeper aggregation to better fuse information across layers; this is particularly suitable for cardiac MRI segmentation because of the complexity of the cardiac boundaries' appearance and the acquisition resolution over a cardiac cycle. In our solution, we develop a modified DLA framework by embedding a Refinement Residual Block (RRB) and a Channel Attention Block (CAB). Experimental results validate the superior performance of our method for cardiac structure segmentation in comparison with the state of the art. Moreover, we demonstrate its potential use in the quantitative analysis of cardiac dyssynchrony.
Title: Fully Automatic Segmentation Of Short-Axis Cardiac MRI Using Modified Deep Layer Aggregation
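Channel attention blocks are commonly implemented squeeze-and-excitation style: global average pooling over space, a learned per-channel gating, then rescaling of the feature maps. A NumPy sketch of that pattern as a plausible reading of the paper's CAB, not its exact architecture; the single weight matrix stands in for the learned layers:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w):
    """Squeeze-and-excitation style channel attention.
    x: (C, H, W) feature maps; w: (C, C) stand-in for the learned weights."""
    squeeze = x.mean(axis=(1, 2))      # (C,) global average pooling
    gate = sigmoid(w @ squeeze)        # (C,) per-channel weights in (0, 1)
    return x * gate[:, None, None]     # rescale each channel
```

The gate lets the network emphasize channels that respond to, e.g., myocardial boundaries and suppress the rest, which is the "better fuse information across layers" role the abstract describes.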
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759145
B. Möller, K. Bürstenbinder
The structure of the microtubule cytoskeleton provides valuable information about cell morphogenesis. The cytoskeleton organizes into diverse patterns that vary between cells of different types and tissues, but also within a single tissue. To assess differences in cytoskeleton organization, methods are needed that quantify cytoskeleton patterns within a complete cell and that are suitable for large data sets. A major bottleneck in most approaches, however, is the lack of techniques for automatic extraction of cell contours. Here, we present a semi-automatic pipeline for cell segmentation and quantification of microtubule organization. Automatic methods extract major parts of the contours, and a handy image editor is provided to add missing information manually and efficiently. Experimental results show that our approach yields high-quality contour data with minimal user intervention and serves as a suitable basis for subsequent quantitative studies.
Title: Semi-Automatic Cell Segmentation from Noisy Image Data for Quantification of Microtubule Organization on Single Cell Level
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759259
Youyi Song, J. Qin, Baiying Lei, Shengfeng He, K. Choi
We present a novel and effective approach to segmenting the overlapping cytoplasm of cells in cervical smear images. Instead of simply combining individual cytoplasm shape information with intensity or color information for the segmentation, our approach simultaneously matches an accurate shape template to each cytoplasm in a whole clump. There are two main technical contributions. First, we present a novel shape similarity measure that supports shape template matching without clump splitting, allowing us to leverage more shape information, not only from the cytoplasm itself but also from the whole clump. Second, we propose an effective objective function for joint shape template matching based on our shape similarity measure; unlike individual matching, our method can exploit more shape constraints. We extensively evaluate our method on two typical cervical smear data sets. Experimental results show that our method outperforms state-of-the-art methods in terms of segmentation accuracy.
Title: Joint Shape Matching for Overlapping Cytoplasm Segmentation in Cervical Smear Images
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759542
A. Crimi, E. Kara
Parkinson's disease is a neurodegenerative disease characterized by the progressive development of α-synuclein pathology across the brain. To better understand the disruption of neuronal networks in Parkinson's disease and its relation to the spread of α-synuclein, advanced descriptors from neuroimaging can be used to complement histopathological analyses and in vitro and mouse experimental models. It is not yet understood whether the course of Parkinson's disease affects the structural brain network or, conversely, whether some subjects have specific structural connections that facilitate the transmission of the pathology. In this paper we investigate whether there are differences between the connectomes of Parkinson's disease patients and healthy controls. Moreover, we evaluate a computational model that simulates the spread of α-synuclein across neuronal networks in patients with Parkinson's disease, quantifying which areas could be most affected by the disease.
Title: Spreading Model for Patients with Parkinson's Disease Based on Connectivity Differences
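A standard way to model spreading over a connectome is network diffusion: pathology evolves as x(t) = exp(-βLt) x(0), with L the graph Laplacian of the structural network. A sketch under that assumption; the paper's actual model may differ, and the connectome and parameter values below are toy data:

```python
import numpy as np

def spread(connectome, seed, beta, t):
    """Network-diffusion spreading: x(t) = expm(-beta * L * t) @ seed,
    with L the graph Laplacian. Computed via eigendecomposition, valid
    because L is symmetric for an undirected connectome."""
    L = np.diag(connectome.sum(axis=1)) - connectome
    lam, V = np.linalg.eigh(L)
    return V @ (np.exp(-beta * t * lam) * (V.T @ seed))

# Toy 4-region symmetric connectome; all pathology seeded in region 0.
C = np.array([[0, 2, 1, 0],
              [2, 0, 1, 1],
              [1, 1, 0, 2],
              [0, 1, 2, 0]], dtype=float)
x0 = np.array([1.0, 0.0, 0.0, 0.0])
x = spread(C, x0, beta=0.5, t=1.0)
```

In such a model, the regions most affected at a given time are determined by connectivity to the seed rather than by spatial distance, which is the kind of area-level prediction the abstract describes quantifying.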