Semi-Automatic Cell Segmentation from Noisy Image Data for Quantification of Microtubule Organization on Single Cell Level
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759145
B. Möller, K. Bürstenbinder
The structure of the microtubule cytoskeleton provides valuable information about the morphogenesis of cells. The cytoskeleton organizes into diverse patterns that vary between cell types and tissues, but also within a single tissue. To assess differences in cytoskeleton organization, methods are needed that quantify cytoskeleton patterns within a complete cell and that are suitable for large data sets. A major bottleneck in most approaches, however, is the lack of techniques for automatic extraction of cell contours. Here, we present a semi-automatic pipeline for cell segmentation and quantification of microtubule organization. Automatic methods are applied to extract major parts of the contours, and a convenient image editor is provided for efficiently adding missing contour information by hand. Experimental results show that our approach yields high-quality contour data with minimal user intervention and serves as a suitable basis for subsequent quantitative studies.
Fully Automatic Segmentation Of Short-Axis Cardiac MRI Using Modified Deep Layer Aggregation
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759516
Zhongyu Li, Yixuan Lou, Zhennan Yan, S. Al’Aref, J. Min, L. Axel, Dimitris N. Metaxas
Delineation of the right ventricular cavity (RVC), left ventricular myocardium (LVM), and left ventricular cavity (LVC) is a common task in the clinical diagnosis of cardiac diseases, especially on the basis of advanced magnetic resonance imaging (MRI) techniques. Although deep learning techniques have recently been widely employed for segmentation tasks in a variety of medical images, the sheer volume and complexity of the data in applications such as cine cardiac MRI pose significant challenges for accurate and efficient segmentation. In cine cardiac MRI, both short-axis and long-axis 2D images need to be segmented. In this paper, we focus on the automated segmentation of short-axis cardiac MRI images. We first introduce the deep layer aggregation (DLA) method, which augments the standard deep learning architecture with deeper aggregation to better fuse information across layers; this is particularly suitable for cardiac MRI segmentation because of the complex appearance of the cardiac boundaries and the varying acquisition resolution over a cardiac cycle. In our solution, we develop a modified DLA framework by embedding a Refinement Residual Block (RRB) and a Channel Attention Block (CAB). Experimental results validate the superior performance of our proposed method for cardiac structure segmentation in comparison with the state of the art. Moreover, we demonstrate its potential use in the quantitative analysis of cardiac dyssynchrony.
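The two building blocks named in the abstract, the Refinement Residual Block (RRB) and the Channel Attention Block (CAB), come from discriminative-feature-network style segmentation designs. The paper does not spell out its exact layer configuration, so the PyTorch sketch below only illustrates how these blocks are commonly formulated; the channel counts and the fusion `high + w * low` are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class RefinementResidualBlock(nn.Module):
    """RRB: unify the channel count with a 1x1 conv, then refine the feature
    map with a small residual branch."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.unify = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.refine = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.unify(x)
        return self.relu(x + self.refine(x))

class ChannelAttentionBlock(nn.Module):
    """CAB: reweight the channels of a lower-level feature map using global
    statistics of the concatenated high- and low-level features.
    Assumes both inputs were already brought to the same channel count
    (e.g. by RRBs)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, high, low):
        w = self.gate(torch.cat([high, low], dim=1))  # (N, C, 1, 1), values in (0, 1)
        return high + w * low
```

In a DLA-style decoder, such blocks would typically sit at the aggregation nodes: an RRB refining each merged feature map and a CAB gating how much low-level detail is injected at each fusion step.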
Vessel Extraction Using Crossing-Adaptive Minimal Path Model With Anisotropic Enhancement And Curvature Constraint
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759435
Li Liu, Da Chen, L. Cohen, H. Shu, M. Pâques
In this work, we propose a new minimal path model with a dynamic Riemannian metric to overcome the shortcut problem in vessel extraction. The invoked metric consists of a crossing-adaptive anisotropic radius-lifted tensor field and a front-freezing indicator. It is able to reduce the anisotropy of the metric at crossing points and to steer the front evolution by freezing points that would cause high geodesic curvature. We validate our model on the DRIVE and IOSTAR datasets, where the segmentation accuracy is 0.861 and 0.881, respectively. The proposed method can extract the centreline position and vessel width efficiently and accurately.
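The metric described here is anisotropic, radius-lifted, and updated dynamically as the front propagates, which requires a dedicated anisotropic fast-marching solver. As a much simpler illustration of the underlying minimal-path machinery, the sketch below solves an isotropic eikonal equation with scikit-fmm and backtracks a geodesic by gradient descent on the arrival-time map; the `vesselness` input, the seed handling, and the function names are placeholders, not the authors' code.

```python
import numpy as np
import skfmm  # scikit-fmm: isotropic fast marching (pip install scikit-fmm)

def backtrack(time_map, end, step=0.5, max_iter=100000):
    """Trace a minimal path by descending the arrival-time map from `end`
    back towards the seed (the minimum of `time_map`)."""
    gy, gx = np.gradient(time_map)
    path = [np.asarray(end, dtype=float)]
    for _ in range(max_iter):
        y, x = np.round(path[-1]).astype(int)
        g = np.array([gy[y, x], gx[y, x]])
        norm = np.linalg.norm(g)
        if norm < 1e-8:                      # reached the seed neighbourhood
            break
        path.append(path[-1] - step * g / norm)
    return np.array(path)

def minimal_path(vesselness, seed, end):
    """vesselness: 2-D array in [0, 1], large inside vessels (e.g. a tubularity
    filter response on a DRIVE image); seed/end: (row, col) points on a vessel."""
    phi = np.ones_like(vesselness)
    phi[seed] = -1.0                         # zero level set around the seed
    speed = 1e-3 + vesselness                # the front moves faster inside vessels
    arrival = np.asarray(skfmm.travel_time(phi, speed))
    return backtrack(arrival, end)
```

In the paper's model, the scalar speed is replaced by the crossing-adaptive radius-lifted tensor field, and the freezing indicator stops the front at points that would produce high geodesic curvature, which is what prevents the shortcut problem this simplified version still suffers from.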
Spreading Model for Patients with Parkinson’s Disease Based on Connectivity Differences
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759542
A. Crimi, E. Kara
Parkinson’s disease is a neurodegenerative disease characterized by the progressive development of α-synuclein pathology across the brain. To better understand the disruption of neuronal networks in Parkinson’s disease and its relation to the spread of α-synuclein, advanced descriptors from neuroimaging can be used to complement histopathological analyses as well as in vitro and mouse experimental models. It is yet to be understood whether the course of Parkinson’s disease affects the structural brain network or, conversely, whether some subjects have specific structural connections that facilitate the transmission of the pathology. In this paper we investigate whether there are differences between the connectomes of Parkinson’s disease patients and healthy controls. Moreover, we evaluate a computational model that simulates the spread of α-synuclein across neuronal networks in patients with Parkinson’s disease, quantifying which areas could be most affected by the disease.
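The abstract does not state which spreading model is evaluated. A common choice for simulating pathology propagation over a structural connectome is a network-diffusion model driven by the graph Laplacian; the sketch below is that generic formulation, with the connectome matrix, seeding vector, and diffusivity constant chosen purely for illustration.

```python
import numpy as np
from scipy.linalg import expm

def simulate_spread(connectome, x0, beta=1.0, times=(0.5, 1.0, 2.0)):
    """Network-diffusion style spreading of pathology on a connectome.

    connectome : (N, N) symmetric matrix of connection strengths between regions
    x0         : (N,) initial pathology load (e.g. seeded in one region)
    beta       : diffusivity constant controlling the spreading rate

    Solves dx/dt = -beta * L x, whose solution is x(t) = expm(-beta * L * t) @ x0.
    """
    laplacian = np.diag(connectome.sum(axis=1)) - connectome
    return {t: expm(-beta * laplacian * t) @ x0 for t in times}

# Toy example: four regions, pathology seeded in region 0.
W = np.array([[0, 2, 1, 0],
              [2, 0, 1, 1],
              [1, 1, 0, 3],
              [0, 1, 3, 0]], dtype=float)
print(simulate_spread(W, np.array([1.0, 0.0, 0.0, 0.0]))[1.0])
```

Ranking the regional loads x(t) then gives a straightforward way to quantify which areas could be most affected, as described in the abstract.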
Joint Shape Matching for Overlapping Cytoplasm Segmentation in Cervical Smear Images
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759259
Youyi Song, J. Qin, Baiying Lei, Shengfeng He, K. Choi
We present a novel and effective approach to segmenting the overlapping cytoplasm of cells in cervical smear images. Instead of simply combining individual cytoplasm shape information with intensity or color information for the segmentation, our approach aims at simultaneously matching an accurate shape template to each cytoplasm in a whole clump. There are two main technical contributions. First, we present a novel shape similarity measure that supports shape template matching without clump splitting, allowing us to leverage more shape information, not only from the cytoplasm itself but also from the whole clump. Second, we propose an effective objective function for joint shape template matching based on our shape similarity measure; unlike individual matching, our method is able to exploit more shape constraints. We extensively evaluate our method on two typical cervical smear data sets. Experimental results show that our method outperforms state-of-the-art methods in terms of segmentation accuracy.
GPU Acceleration of Wave Based Transmission Tomography
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759453
Hongjian Wang, T. Huynh, H. Gemmeke, T. Hopp, J. Hesser
To accelerate 3D ultrasound computed tomography, we parallelize the most time-consuming part of a paraxial forward model on the GPU, where massive numbers of complex multiplications and 2D Fourier transforms have to be performed iteratively. We test our GPU implementation on a synthesized symmetric breast phantom of different sizes. In the best case, for a single emitter position, the speedup of a desktop GPU reaches 23 times when the data transfer time is included, and 100 times when only the GPU parallel computing time is considered. In the worst case, the speedup of a less powerful laptop GPU is still 2.5 times over a six-core desktop CPU when the data transfer time is included. Regarding the correctness of the values computed on the GPU, the maximum percentage deviation of the L2 norm is only 0.014%.
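The computational pattern being accelerated, repeated 2D FFTs interleaved with element-wise complex multiplications, maps directly onto GPU array libraries. The CuPy sketch below shows that pattern only; the propagation filter and phase screen are placeholders, not the paper's actual paraxial operators.

```python
import cupy as cp  # GPU arrays with a NumPy-compatible API

def propagate(field, steps, freq_filter, medium_phase):
    """Toy split-step style loop illustrating the GPU workload.

    field        : (ny, nx) complex wavefield at the entry plane
    freq_filter  : (ny, nx) complex factor applied in the frequency domain
    medium_phase : (ny, nx) complex phase screen applied in the spatial domain
    """
    f = cp.asarray(field)
    H = cp.asarray(freq_filter)
    P = cp.asarray(medium_phase)
    for _ in range(steps):
        f = cp.fft.ifft2(H * cp.fft.fft2(f))  # diffraction step in k-space
        f = P * f                             # medium interaction in real space
    return cp.asnumpy(f)                      # copy the result back to the host
```

The host-to-device and device-to-host copies at the boundaries of such a function are the kind of data-transfer overhead that separates the 23x and 100x speedup figures quoted above.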
3D Convolutional Neural Network Segmentation of White Matter Tract Masks from MR Diffusion Anisotropy Maps
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759575
Kristofer Pomiecko, Carson D. Sestili, K. Fissell, S. Pathak, D. Okonkwo, W. Schneider
This paper presents an application of 3D convolutional neural network (CNN) techniques to compute the white matter region spanned by a fiber tract (the tract mask) from whole-brain MRI diffusion anisotropy maps. The DeepMedic CNN platform was used, allowing training directly on 3D volumes. The dataset consisted of 240 subjects, both controls and traumatic brain injury (TBI) patients, scanned with a multi-shell diffusion protocol with a high number of angular directions and high b-values. Twelve tract masks per subject were learned. A median Dice score of 0.72 was achieved over the 720 test masks when comparing learned tract masks to manually created masks. This work demonstrates the ability to learn complex spatial regions in control and patient populations and contributes a new application of CNNs as a fast pre-selection tool in automated white matter tract segmentation methods.
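The reported figure (a median Dice score of 0.72 over 720 test masks) uses the standard Dice overlap between a predicted binary tract mask and its manually created counterpart; a minimal NumPy version, with illustrative names:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Aggregation over a hypothetical test set of (predicted, manual) mask pairs:
# median_dice = np.median([dice_score(p, m) for p, m in mask_pairs])
```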
A Deep Learning Approach To Identify mRNA Localization Patterns
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759235
Rémy Dubois, Arthur Imbert, Aubin Samacoïts, M. Peter, E. Bertrand, Florian Müller, Thomas Walter
The localization of messenger RNA (mRNA) molecules inside cells plays an important role in the local control of gene expression. However, the localization patterns of many mRNAs remain unknown and poorly understood. Single Molecule Fluorescence in Situ Hybridization (smFISH) allows the visualization of individual mRNA molecules in cells. This method is now scalable and can be applied in High Content Screening (HCS) mode. Here, we propose a computational workflow based on deep convolutional neural networks trained on simulated data to identify different localization patterns from large-scale smFISH data.
Towards Extreme-Resolution Image Registration with Deep Learning
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759291
Abdullah Nazib, C. Fookes, Dimitri Perrin
Image registration plays an important role in comparing images. It is particularly important in analysing medical images such as CT, MRI and PET, in order to quantify different biological samples, to monitor disease progression, and to fuse different modalities to support better diagnosis. The recent emergence of tissue clearing protocols enables us to acquire images at cellular-level resolution, but image registration tools developed for other modalities are currently unable to handle images of entire organs at such resolution. The popularity of deep learning based methods in the computer vision community justifies a rigorous investigation of deep learning based methods on tissue-cleared images alongside their traditional counterparts. In this paper, we investigate and compare the performance of a deep learning based registration method with traditional optimization based methods on samples from tissue-clearing methods. The comparative results show that the deep learning based method outperforms all traditional registration tools in terms of registration time and achieves promising registration accuracy.
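One operation every learning-based registration method shares with its traditional counterparts is warping the moving image with a dense displacement field; the similarity between the warped and fixed images then drives training or evaluation. The SciPy sketch below shows only that warping step, assuming a displacement field given in voxel units; it is a generic illustration, not the specific method compared in the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(moving, displacement, order=1):
    """Warp a 3-D volume with a dense displacement field.

    moving       : (D, H, W) intensity volume
    displacement : (3, D, H, W) per-voxel displacements in voxel units,
                   e.g. the output of a registration network
    """
    identity = np.indices(moving.shape).astype(float)  # identity sampling grid
    coords = identity + displacement                   # where to sample from
    return map_coordinates(moving, coords, order=order, mode='nearest')
```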
Real-Time Informative Laryngoscopic Frame Classification with Pre-Trained Convolutional Neural Networks
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759511
A. Galdran, P. Costa, A. Campilho
Visual exploration of the larynx is a relevant technique for the early diagnosis of laryngeal disorders. However, reviewing an endoscopy to find abnormalities is a time-consuming process, and for this reason much research has been dedicated to the automatic analysis of endoscopic video data. In this work we address the particular task of discriminating between informative laryngoscopic frames and those that carry insufficient diagnostic information; in the latter case, the goal is also to determine the reason for this lack of information. To this end, we analyze the possibility of training three different state-of-the-art Convolutional Neural Networks whose weights are initialized from configurations previously optimized for natural image classification problems. Our findings show that the simplest of these three architectures is not only the most accurate (outperforming previously proposed techniques), but also the fastest and most efficient, with the lowest inference time and minimal memory requirements, enabling real-time application and deployment on portable devices.
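The transfer-learning recipe described, initializing from weights optimized for natural image classification and retraining for frame classification, is the standard fine-tuning setup. The abstract does not name the three architectures that were compared, so the ResNet-18 backbone and the four-class label set in the sketch below are placeholders chosen for illustration.

```python
import torch.nn as nn
from torchvision import models

# Number of frame classes: "informative" plus the possible reasons a frame is
# uninformative. The value 4 is illustrative, not taken from the paper.
NUM_CLASSES = 4

# Start from ImageNet-pretrained weights and replace the classification head so
# the network predicts laryngoscopic frame classes instead of ImageNet labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# The backbone can be fine-tuned end-to-end, or frozen so that only `model.fc`
# is trained when the labelled laryngoscopy data set is small.
```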