Comparison of Different Tensor Encoding Combinations in Microstructural Parameter Estimation
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759100
M. Afzali, C. Tax, C. Chatziantoniou, Derek K. Jones
Diffusion-weighted magnetic resonance imaging is a noninvasive tool for investigating brain white matter microstructure, providing information from which compartmental diffusion parameters can be estimated. Several studies in the literature have shown that the parameters estimated using traditional linear diffusion encoding (Stejskal-Tanner pulsed gradient spin echo) are degenerate. Multiple strategies have been proposed to resolve this degeneracy; however, it is not clear whether those methods solve the problem completely. One such approach is b-tensor encoding. In previous works, combinations of linear and spherical tensor encoding (LTE+STE) and of linear and planar tensor encoding (LTE+PTE) have been utilized to stabilize the estimates. In this paper, we compare the results of fitting a two-compartment model using different combinations of b-tensor encoding. Four combinations are compared: linear-spherical (LTE+STE), linear-planar (LTE+PTE), planar-spherical (PTE+STE), and linear-planar-spherical (LTE+PTE+STE). The results show that combinations of tensor encodings lead to lower bias and higher precision in the parameter estimates than a single tensor encoding.
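For intuition, here is a minimal Python sketch of how b-tensor shape enters a two-compartment signal model: for a Gaussian compartment with diffusion tensor D and b-tensor B, the attenuation is exp(-B:D) with B:D = trace(B·D). The tensor shapes, parameter values, and stick/zeppelin compartments below are illustrative assumptions, not the paper's fitting setup.

```python
import numpy as np

def b_tensor(b, shape):
    """Return a 3x3 b-tensor with trace b: 'LTE', 'PTE', or 'STE'."""
    if shape == "LTE":   # all diffusion weighting along one axis
        return b * np.diag([0.0, 0.0, 1.0])
    if shape == "PTE":   # weighting spread over a plane
        return b * np.diag([0.5, 0.5, 0.0])
    if shape == "STE":   # isotropic weighting
        return b * np.eye(3) / 3.0
    raise ValueError(shape)

def two_compartment_signal(B, f, D_intra, D_extra):
    """S = f*exp(-B:D_i) + (1-f)*exp(-B:D_e), with B:D = trace(B @ D)."""
    return (f * np.exp(-np.trace(B @ D_intra))
            + (1 - f) * np.exp(-np.trace(B @ D_extra)))

# Stick-like intra-axonal and zeppelin-like extra-axonal tensors (mm^2/s)
D_i = np.diag([0.0, 0.0, 2.0]) * 1e-3
D_e = np.diag([0.5, 0.5, 2.0]) * 1e-3
for shape in ("LTE", "PTE", "STE"):
    B = b_tensor(b=2000.0, shape=shape)  # b-value in s/mm^2
    print(shape, two_compartment_signal(B, f=0.6, D_intra=D_i, D_extra=D_e))
```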
Deep Network Anatomy Segmentation with Limited Annotations using Auxiliary Labels
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759488
A. Harouni, Hongzhi Wang, T. Syeda-Mahmood, D. Beymer
Deep convolutional neural networks (CNNs) have shown impressive performance in anatomy segmentation, close to that of state-of-the-art atlas-based segmentation methods, while predicting roughly 20x faster. However, one of the main obstacles to CNN adoption is that training requires a large amount of annotated data. This is a costly hurdle, as annotation is time-consuming and requires expensive medical expertise. The goal of this work is to reach state-of-the-art segmentation performance using the minimum amount of expensive manual annotation. Recent studies show that auxiliary segmentations can be used together with manual annotations to improve CNN learning. To make this learning scheme more effective, we propose an image selection algorithm that chooses which images to annotate manually so as to produce more accurate auxiliary segmentations, and a quality control algorithm that excludes poor-quality auxiliary segmentations from CNN training. We perform extensive experiments on a chest CT dataset, varying the number of manual annotations used for the atlas-based method and the number of auxiliary segmentations used to train the CNN. Our results show that a CNN trained with auxiliary segmentations achieves a higher Dice score than one trained with only a few accurate manual segmentations (0.76 vs. 0.58). Moreover, when trained with 100 or more auxiliary segmentations, the CNN always outperforms the atlas-based method. Finally, when the single atlas used to produce the auxiliary segmentations is selected carefully and their quality is controlled, the trained CNN achieves a higher average Dice score than when the manually annotated image is selected at random and all auxiliary segmentations are used (0.72 vs. 0.62).
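The abstract does not spell out the quality-control rule, so the Python sketch below shows one plausible stand-in: reject an auxiliary segmentation when the atlas-propagated candidates it was derived from disagree with each other (mean pairwise Dice below a threshold). The `keep_auxiliary` helper and the threshold value are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

def keep_auxiliary(candidate_masks, threshold=0.7):
    """Accept an auxiliary segmentation only if the atlas-propagated
    candidates agree with each other (mean pairwise Dice >= threshold)."""
    n = len(candidate_masks)
    if n < 2:
        return True  # nothing to compare against
    scores = [dice(candidate_masks[i], candidate_masks[j])
              for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(scores)) >= threshold
```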
Uncertainty-Aware Artery/Vein Classification on Retinal Images
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759380
A. Galdran, Maria Inês Meyer, P. Costa, A. Mendonça, A. Campilho
The automatic differentiation of retinal vessels into arteries and veins (A/V) is a highly relevant task within the field of retinal image analysis. However, due to limitations of retinal image acquisition devices, specialists may find it impossible to label certain vessels in eye fundus images. In this paper, we introduce a method that takes such uncertainty into account by design. For this, we formulate the A/V classification task as a four-class segmentation problem, and a convolutional neural network is trained to classify pixels into background, artery, vein, or uncertain classes. The resulting technique can directly provide pixelwise uncertainty estimates. In addition, instead of depending on a previously available vessel segmentation, the method automatically segments the vessel tree. Experimental results show performance comparable or superior to several recent A/V classification approaches. The proposed technique also attains state-of-the-art performance when evaluated on the task of vessel segmentation, generalizing to data that was not used during training, even with considerable differences in appearance and resolution.
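As a minimal sketch of the uncertainty output, the four-class softmax can be reduced to a label map plus a pixelwise entropy map. The class ordering and the use of entropy as the uncertainty score are assumptions for illustration; the paper may report the per-class probabilities directly.

```python
import numpy as np

def softmax(logits, axis=0):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

logits = np.random.randn(4, 256, 256)   # CNN output: 4 classes x H x W
probs = softmax(logits, axis=0)
label = probs.argmax(axis=0)             # 0=background, 1=artery, 2=vein, 3=uncertain
uncertainty = -(probs * np.log(probs + 1e-12)).sum(axis=0)  # pixelwise entropy
```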
Improving 3D MA-TIRF Reconstruction with Deconvolution and Background Estimation
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759103
Emmanuel Soubies, L. Blanc-Féraud, S. Schaub, E. Obberghen-Schilling
Total internal reflection fluorescence microscopy (TIRF) produces 2D images of the fluorescent activity integrated over a very thin layer adjacent to the glass coverslip. By varying the illumination angle (multi-angle TIRF), a stack of 2D images is acquired from which it is possible to estimate the axial position of the observed biological structures. Due to its unique optical sectioning capability, this technique is ideal for observing and studying biological processes in the vicinity of the cell membrane. In this paper, we propose an efficient reconstruction algorithm for multi-angle TIRF microscopy that accounts for both the PSF of the acquisition system (diffraction) and the background signal (e.g., autofluorescence). It jointly performs volume reconstruction, deconvolution, and background estimation. This algorithm, based on the simultaneous-direction method of multipliers (SDMM), relies on a suitable splitting of the optimization problem that yields closed-form solutions at each step of the algorithm. Finally, numerical experiments reveal the importance of incorporating the background signal into the reconstruction process, which reinforces the relevance of the proposed approach.
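For background, a minimal sketch of the multi-angle TIRF forward model (omitting the PSF and background terms the paper also models): each incidence angle weights the fluorophore density along z by an evanescent decay exp(-z/d(θ)), with penetration depth d(θ) = λ / (4π √(n₁² sin²θ − n₂²)). The wavelength, refractive indices, and angles below are illustrative assumptions.

```python
import numpy as np

def penetration_depth(theta, wavelength=0.488, n1=1.518, n2=1.33):
    """Evanescent depth d = lambda / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2)),
    valid above the critical angle arcsin(n2/n1) (~61.2 deg here)."""
    return wavelength / (4 * np.pi * np.sqrt((n1 * np.sin(theta))**2 - n2**2))

def tirf_image(density, z, theta):
    """Integrate density(x, y, z) against exp(-z/d(theta)) along z."""
    w = np.exp(-z / penetration_depth(theta))         # (Nz,)
    return np.tensordot(density, w, axes=([2], [0]))  # (Nx, Ny)

z = np.linspace(0.0, 0.5, 64)           # depth samples in microns
density = np.random.rand(32, 32, 64)    # synthetic fluorophore volume
stack = [tirf_image(density, z, t) for t in np.deg2rad([62, 64, 66, 68])]
```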
Deformable Registration of Whole-Brain Zebrafish Microscopy Using an Implementation of the FLASH Algorithm Within ANTs
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759549
G. Fleishman, Miaomiao Zhang, N. Tustison, Isabel Espinosa-Medina, Yu Mu, Khaled Khairy, M. Ahrens
Recent advances in microscopy, protein engineering, and genetics have rendered the larval zebrafish a powerful model system in which whole-brain, real-time functional neuroimaging at cellular resolution is accessible. Supplementing functional data with additional modalities in the same fish, such as structural connectivity and transcriptomics, will enable interpretation of structure-function relationships across the entire brains of individual animals. However, proper identification of corresponding cells in the large image volumes produced depends on accurate and efficient deformable registration. To address this challenge, we implemented the Fourier-approximated Lie Algebras for Shooting (FLASH) algorithm within the well-known Advanced Normalization Tools (ANTs) package. This combines the speed of FLASH with the extensive set of image-matching functionals and the multi-stage, multi-resolution capabilities of ANTs. We registered longitudinal data from nine fish, using a line that uniquely identifies subsets of neurons in an independent channel. We validate our approach by demonstrating accurate cell-to-cell correspondence while requiring significantly less time and memory than the Symmetric Normalization (SyN) implementation in ANTs, and without compromising the theoretical foundations of the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model.
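The speed and memory savings of FLASH come from representing velocity fields with only a small band of low Fourier frequencies. The sketch below illustrates just that band-limiting idea on a 3D field; in FLASH proper, the geodesic-shooting updates are carried out directly on the truncated coefficients, and the grid size and band limit here are arbitrary assumptions.

```python
import numpy as np

def truncate_band(v, keep=16):
    """Zero all but a centered cube of keep^3 low frequencies."""
    V = np.fft.fftshift(np.fft.fftn(v))
    out = np.zeros_like(V)
    sl = tuple(slice(s // 2 - keep // 2, s // 2 + keep // 2) for s in v.shape)
    out[sl] = V[sl]                     # retain only the low-frequency cube
    return np.fft.ifftn(np.fft.ifftshift(out)).real

v = np.random.randn(64, 64, 64)         # one component of a velocity field
v_band = truncate_band(v)               # smooth, band-limited version
fraction_kept = 16**3 / 64**3           # ~0.4% of the coefficients remain
```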
Deforming Tessellations for the Segmentation of Cell Aggregates
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759311
A. Badoual, A. Galan, D. Sage, M. Unser
We present a new active contour for segmenting cell aggregates. We describe it as a smooth tessellation that is attracted toward the cell membranes. Our approach relies on subdivision schemes that are tightly linked to the theory of wavelets. The shape is encoded by control points grouped in tiles. The smooth, continuously defined boundary of each tile is generated by recursively applying a refinement process to its control points. We deform the smooth tessellation in a global manner using a ridge-based energy that we have designed for this purpose. By construction, cells are segmented without overlap, and the tessellation structure is maintained even across dim membranes. Leakage, which afflicts usual image-processing methods (e.g., watershed), is thus prevented. We validate our framework on both synthetic and real microscopy images, showing that the proposed method is robust to membrane gaps and to high levels of noise.
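As an illustration of boundary generation by subdivision, the sketch below uses Chaikin corner cutting, a classic refinement scheme that converges to a smooth closed (quadratic B-spline) curve from a tile's control points; the paper's specific wavelet-linked scheme may differ.

```python
import numpy as np

def chaikin(points, iterations=4):
    """Corner-cutting subdivision of a closed polygon: each edge spawns
    new control points at its 1/4 and 3/4 positions."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        nxt = np.roll(pts, -1, axis=0)          # closed curve: wrap around
        refined = np.empty((2 * len(pts), pts.shape[1]))
        refined[0::2] = 0.75 * pts + 0.25 * nxt
        refined[1::2] = 0.25 * pts + 0.75 * nxt
        pts = refined
    return pts

tile = [(0, 0), (1, 0), (1, 1), (0, 1)]     # a tile's control points
boundary = chaikin(tile)                     # 64 points on a smooth closed curve
```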
Ellipse Detection of Optic Disc-and-Cup Boundary in Fundus Images
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759173
Zeya Wang, Nanqing Dong, Sean D. Rosario, Min Xu, P. Xie, E. Xing
Glaucoma is an eye disease that damages the optic nerve and leads to loss of vision. Its diagnosis involves measuring the cup-to-disc ratio in retinal fundus images, which makes detecting the optic disc-and-cup boundary a crucial task for glaucoma screening. Most existing computer-aided diagnosis (CAD) systems focus on segmentation approaches and ignore localization approaches, which require less human annotation effort. In this paper, we propose a deep learning-based framework that jointly localizes ellipses for the optic disc (OD) and optic cup (OC) regions. Instead of detecting a bounding box as in most object detection approaches, we directly estimate the parameters of an ellipse, which suffice to capture the morphology of each OD and OC region for calculating the cup-to-disc ratio. We use two modules to detect the ellipses for the OD and OC regions, where the OD region serves as attention for the OC region. The proposed framework achieves competitive results against state-of-the-art segmentation methods with less supervision. We empirically evaluate our framework against recent state-of-the-art segmentation models in two scenarios, where the training and test data come from the same and from different domains.
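Once ellipse parameters are available for the disc and cup, the cup-to-disc ratio follows in closed form. The sketch below assumes a (cx, cy, a, b, angle) parameterization, which is an assumption for illustration, and computes the vertical cup-to-disc ratio from each ellipse's vertical extent.

```python
import math

def vertical_extent(a, b, angle):
    """Half the vertical extent of an ellipse with semi-axes a, b
    rotated by `angle` radians: sqrt(a^2 sin^2 + b^2 cos^2)."""
    return math.sqrt((a * math.sin(angle)) ** 2 + (b * math.cos(angle)) ** 2)

def vertical_cup_to_disc_ratio(cup, disc):
    """cup, disc: (cx, cy, a, b, angle) tuples regressed by a network."""
    return vertical_extent(*cup[2:]) / vertical_extent(*disc[2:])

print(vertical_cup_to_disc_ratio((120, 130, 30, 25, 0.10),
                                 (118, 128, 80, 70, 0.05)))  # ~0.36
```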
Cancer Detection in Mass Spectrometry Imaging Data by Recurrent Neural Networks
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759571
F. G. Zanjani, Andreas Panteli, S. Zinger, F. V. D. Sommen, T. Tan, Benjamin Balluff, D. Vos, S. Ellis, R. Heeren, M. Lucas, H. Marquering, Ivo G. H. Jansen, C. D. Savci-Heijink, D. M. Bruin, P. D. With
Mass spectrometry imaging (MSI) reveals the localization of a broad range of compounds, from metabolites to proteins, in biological tissues. This makes MSI an attractive tool in biomedical research for studying diseases. Computer-aided diagnosis (CAD) systems facilitate the analysis of the molecular profile of tumor tissues to provide a distinctive fingerprint for finding biomarkers. In this paper, the performance of recurrent neural networks (RNNs) on MSI data is studied, exploiting their ability to learn irregular patterns and dependencies in sequential data. To design a better CAD model for tumor detection/classification, several configurations of Long Short-Term Memory (LSTM) networks are examined. The proposed model consists of a 2-layer bidirectional LSTM, each layer containing 100 LSTM units. The proposed RNN model outperforms the state-of-the-art CNN model in mass spectra classification, with 1.87% and 1.45% higher accuracy on lung and bladder cancer datasets, respectively, and a sixfold faster training time.
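A sketch of the stated configuration, a 2-layer bidirectional LSTM with 100 units per layer, reading a spectrum as a sequence of intensity values; the input binning, the classifier head, and classifying from the final time step are assumptions, not details given in the abstract.

```python
import torch
import torch.nn as nn

class SpectrumLSTM(nn.Module):
    def __init__(self, n_classes=2, hidden=100):
        super().__init__()
        # 2-layer bidirectional LSTM, 100 units per layer, as stated
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # both directions concatenated

    def forward(self, x):                  # x: (batch, seq_len) of intensities
        out, _ = self.lstm(x.unsqueeze(-1))
        return self.head(out[:, -1, :])    # classify from the last time step

logits = SpectrumLSTM()(torch.randn(4, 512))   # 4 spectra, 512 m/z bins
```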
Semi-Supervised Learning for Cardiac Left Ventricle Segmentation Using Conditional Deep Generative Models as Prior
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759292
M. Jafari, H. Girgis, A. Abdi, Zhibin Liao, Mehran Pesteie, R. Rohling, K. Gin, T. Tsang, P. Abolmaesumi
Accurate segmentation of the left ventricle (LV) in apical four-chamber echocardiography cine is a key step in assessing cardiac function. As part of their clinical workflow, cardiologists roughly annotate two frames in the cardiac cycle, namely the end-diastolic and end-systolic frames, limiting the annotated data to less than 5% of the frames in the cycle. In this paper, we propose a semi-supervised learning algorithm that leverages the unlabeled data to improve the performance of LV segmentation algorithms. The approach is based on a generative model that learns an inverse mapping from segmentation masks to their corresponding echo frames. This generator is then used as a critic to assess and improve the LV segmentation masks produced by a given segmentation algorithm such as U-Net. The semi-supervised approach enforces a prior on the segmentation model based on the perceptual similarity of the generated frame to the original frame. This promotes utilization of the unlabeled samples, which in turn improves segmentation accuracy.
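A hedged sketch of how such a generative prior can enter the training objective: a frozen generator maps the predicted mask back to an image, and a perceptual term compares it to the input frame, which applies even on unlabeled data. `seg_net`, `G`, and `features` (a pretrained feature extractor) are assumed modules, and the loss weighting is illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(seg_net, G, features, frame, mask=None, w=0.1):
    """seg_net outputs sigmoid probabilities; G maps masks to frames;
    features is a frozen pretrained feature extractor (all assumed)."""
    pred = seg_net(frame)                      # predicted LV mask in [0, 1]
    loss = torch.tensor(0.0)
    if mask is not None:                       # supervised term, when labeled
        loss = F.binary_cross_entropy(pred, mask)
    recon = G(pred)                            # mask -> synthetic echo frame
    perceptual = F.mse_loss(features(recon), features(frame))
    return loss + w * perceptual               # prior applies even unlabeled
```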
Accurate Segmentation of Dental Panoramic Radiographs with U-Nets
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759563
T. L. Koch, Mathias Perslev, C. Igel, Sami Sebastian Brandt
Fully convolutional neural networks (FCNs) have proven to be powerful tools for medical image segmentation. We apply an FCN based on the U-Net architecture to the challenging task of semantic segmentation of dental panoramic radiographs and discuss general tricks for improving segmentation performance. Among these are network ensembling, test-time augmentation, exploitation of data symmetry, and bootstrapping of low-quality annotations. The performance of our approach was tested on a highly variable dataset of 1500 dental panoramic radiographs. A single network trained on 1201 images reached a Dice score of 0.934; forming an ensemble increased the score to 0.936.
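Two of the listed tricks, test-time augmentation and ensembling, compose naturally. A minimal sketch, assuming a `predict` function that maps an image to a per-pixel probability map and using horizontal flips as the symmetry (panoramic radiographs are roughly left-right symmetric; the paper's exact augmentations may differ):

```python
import numpy as np

def tta_predict(predict, image):
    """Average predictions over a horizontal flip and its inverse."""
    p = predict(image)
    p_flip = predict(image[:, ::-1])[:, ::-1]   # flip in, flip back out
    return 0.5 * (p + p_flip)

def ensemble_predict(predictors, image):
    """Average TTA outputs of several independently trained networks."""
    return np.mean([tta_predict(p, image) for p in predictors], axis=0)
```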