Automatic detection of aortic dissection in contrast-enhanced CT
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950582 | Pages: 557-560
E. Dehghan, Hongzhi Wang, T. Syeda-Mahmood
Aortic dissection is a condition in which a tear in the inner wall of the aorta allows blood to flow between two layers of the aortic wall. Aortic dissection is associated with severe chest pain and can be deadly. Contrast-enhanced CT is the main modality for detection of aortic dissection, which is one of the target abnormalities in the evaluation of a triple rule-out CT in emergency cases. In this paper, we present a method for automatic patient-level detection of aortic dissection. Our algorithm starts with an atlas-based segmentation of the aorta, which is used to produce cross-sectional images of the organ. Segmentation refinement, flap detection and shape analysis are employed to detect aortic dissection in these cross-sectional slices. The slice-level results are then aggregated to render a patient-level detection result. We tested our algorithm on a data set of 37 contrast-enhanced CT volumes, including 13 cases of aortic dissection, and achieved an accuracy of 83.8%, a sensitivity of 84.6% and a specificity of 83.3%.
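The abstract does not specify how the slice-level detections are combined; a minimal sketch of one plausible aggregation rule (thresholding the number of positive cross-sectional slices, with a hypothetical `min_positive` parameter) follows:

```python
import numpy as np

def patient_level_detection(slice_flags, min_positive=3):
    """Aggregate per-slice dissection flags into a patient-level decision.

    slice_flags  : 1D array of 0/1 detections, one per cross-sectional slice.
    min_positive : hypothetical threshold; the paper's actual aggregation
                   rule is not stated in the abstract.
    """
    slice_flags = np.asarray(slice_flags)
    return int(slice_flags.sum() >= min_positive)

# Example: 40 slices, 5 flagged positive -> patient flagged as dissection.
flags = np.zeros(40, dtype=int)
flags[10:15] = 1
print(patient_level_detection(flags))  # 1
```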
{"title":"Automatic detection of aortic dissection in contrast-enhanced CT","authors":"E. Dehghan, Hongzhi Wang, T. Syeda-Mahmood","doi":"10.1109/ISBI.2017.7950582","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950582","url":null,"abstract":"Aortic dissection is a condition in which a tear in the inner wall of the aorta allows blood to flow between two layers of the aortic wall. Aortic dissection is associated with severe chest pain and can be deadly. Contrast-enhanced CT is the main modality for detection of aortic dissection. Aortic dissection is one of the target abnormalities during evaluation of a triple rule-out CT in emergency cases. In this paper, we present a method for automatic patient-level detection of aortic dissection. Our algorithm starts by an atlas-based segmentation of the aorta which is used to produce cross-sectional images of the organ. Segmentation refinement, flap detection and shape analysis are employed to detect aortic dissection in these cross-sectional slices. Then, the slice-level results are aggregated to render a patient-level detection result. We tested our algorithm on a data set of 37 contrast-enhanced CT volumes, with 13 cases of aortic dissection. We achieved an accuracy of 83.8%, a sensitivity of 84.6% and a specificity of 83.3%.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"35 1","pages":"557-560"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87070594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated vesicle fusion detection using Convolutional Neural Networks
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950497 | Pages: 183-187
Haohan Li, Zhaozheng Yin, Yingke Xu
Quantitative analysis of vesicle-plasma membrane fusion events in fluorescence microscopy has proven important in the study of vesicle exocytosis. In this paper, we present a framework to automatically detect fusion events. First, an iterative search algorithm is developed to extract image patch sequences containing potential events. Then, we propose an event image that integrates the critical image patches of a candidate event into a single-image joint representation used as input to Convolutional Neural Networks (CNNs). According to the duration of candidate events, we design three CNN architectures to automatically learn features for fusion event classification. Compared on 9 challenging datasets, our proposed method showed very competitive performance and outperformed two state-of-the-art methods.
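As a rough illustration of the "event image" idea, the sketch below tiles the critical patches of one candidate event into a single joint image; the actual layout used in the paper is not given in the abstract, so horizontal concatenation is an assumption:

```python
import numpy as np

def make_event_image(patches):
    """Tile the image patches of one candidate event side by side into a
    single 'event image' for CNN input.

    patches : list of equally sized 2D arrays (time-ordered patches).
    """
    patches = [np.asarray(p, dtype=np.float32) for p in patches]
    return np.concatenate(patches, axis=1)

# Example: 5 patches of 11x11 pixels -> one 11x55 event image.
event = make_event_image([np.random.rand(11, 11) for _ in range(5)])
print(event.shape)  # (11, 55)
```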
{"title":"Automated vesicle fusion detection using Convolutional Neural Networks","authors":"Haohan Li, Zhaozheng Yin, Yingke Xu","doi":"10.1109/ISBI.2017.7950497","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950497","url":null,"abstract":"Quantitative analysis of vesicle-plasma membrane fusion events in the fluorescence microscopy, has been proven to be important in the vesicle exocytosis study. In this paper, we present a framework to automatically detect fusion events. First, an iterative searching algorithm is developed to extract image patch sequences containing potential events. Then, we propose an event image to integrate the critical image patches of a candidate event into a single-image joint representation as the input to Convolutional Neural Networks (CNNs). According to the duration of candidate events, we design three CNN architectures to automatically learn features for the fusion event classification. Compared on 9 challenging datasets, our proposed method showed very competitive performance and outperformed two state-of-the-arts.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"17 1","pages":"183-187"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86373864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simple respiratory motion analysis method for chest tomosynthesis
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950569 | Pages: 498-501
Hua Zhang, X. Tao, G. Qin, Jianhua Ma, Qianjin Feng, Wufan Chen
Chest tomosynthesis (CTS) is a newly developed imaging technique that provides pseudo-3D anatomical information of the thorax from limited-angle projections, and therefore improves the visibility of anatomy without much increase in radiation dose compared to chest radiography (CXR). However, one of the relatively common problems in CTS is the respiratory motion of the patient during image acquisition, which negatively impacts detectability. In this paper, we propose a sin-quadratic model to analyze respiratory motion during CTS scanning: a real-time method that generates the respiratory signal by directly extracting the motion of the diaphragm during data acquisition. Based on the extracted respiratory signal, physicians can re-scan the patient immediately or perform motion-free CTS image reconstruction for patients who cannot hold their breath perfectly during the scan. The effectiveness of the proposed model was demonstrated with both simulated phantom data and real patient data.
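The exact parameterization of the sin-quadratic model is not stated in the abstract; a plausible reading is a sinusoidal breathing term plus a quadratic drift, which can be fitted to an extracted diaphragm trace with `scipy.optimize.curve_fit`:

```python
import numpy as np
from scipy.optimize import curve_fit

def sin_quadratic(t, a, w, phi, b, c, d):
    """One reading of a 'sin-quadratic' motion model: a sinusoidal breathing
    term plus a quadratic drift. The paper's exact parameterization is an
    assumption here."""
    return a * np.sin(w * t + phi) + b * t**2 + c * t + d

# Synthetic diaphragm trace: 0.25 Hz breathing with a slow drift and noise.
t = np.linspace(0.0, 10.0, 200)
signal = 5.0 * np.sin(2 * np.pi * 0.25 * t + 0.3) + 0.05 * t**2 + 0.2 * t
signal += np.random.normal(scale=0.3, size=t.size)

p0 = [4.0, 2 * np.pi * 0.3, 0.0, 0.0, 0.0, 0.0]  # rough initial guess
params, _ = curve_fit(sin_quadratic, t, signal, p0=p0)
print("fitted amplitude %.2f, frequency %.2f Hz"
      % (params[0], params[1] / (2 * np.pi)))
```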
{"title":"A simple respiratory motion analysis method for chest tomosynthesis","authors":"Hua Zhang, X. Tao, G. Qin, Jianhua Ma, Qianjin Feng, Wufan Chen","doi":"10.1109/ISBI.2017.7950569","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950569","url":null,"abstract":"Chest tomosynthesis (CTS) is a newly developed imaging technique which provides pseudo-3D volume anatomical information of thorax from limited angle projections and therefore improves the visibility of anatomy without so much increase on radiation dose compared to the chest radiography (CXR). However, one of the relatively common problems in CTS is the respiratory motion of patient during image acquisition, which negatively impacts the detectability. In this paper, we propose a sin-quadratic model to analyze the respiratory motion during CTS scanning, which is a real time method that generates the respiratory signal by directly extracting the motion of diaphragm during data acquisition. According to the extracted respiratory signal, physicians could re-scan the patient immediately or conduct motion free CTS image reconstruction for patients that could not hold their breath perfectly during the scan time. The effectiveness of the proposed model was demonstrated with both the simulated phantom data and the real patient data.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"26 6 1","pages":"498-501"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83678352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancement of 250-MHz quantitative acoustic-microscopy data using a single-image super-resolution method
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950645 | Pages: 827-830
A. Basarab, D. Rohrbach, Ningning Zhao, J. Tourneret, D. Kouamé, J. Mamou
Scanning acoustic microscopy (SAM) is a well-accepted imaging modality for forming quantitative, two-dimensional maps of acoustic properties of soft tissues at microscopic scales. The quantitative maps formed by our custom SAM system, which uses a 250-MHz single-element transducer, have a nominal resolution of 7 µm, which is insufficient for some investigations. To enhance spatial resolution, a SAM system operating at even higher frequencies could be designed, but the associated costs and experimental difficulties are challenging. Therefore, the objective of this study is to evaluate the potential of super-resolution (SR) image processing to enhance the spatial resolution of quantitative maps in SAM. To the best of our knowledge, this is the first attempt at using post-processing, image-enhancement techniques in SAM. Results on realistic simulations and experimental data acquired from a standard resolution test pattern confirm the improved spatial resolution and the potential value of using SR in SAM.
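The specific SR method is not described in the abstract, so the sketch below shows only a generic single-image SR baseline: inverting an assumed forward model y = downsample(blur(x)) by gradient descent on the least-squares data term:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def sr_least_squares(y, factor=2, sigma=1.0, iters=200, step=1.0):
    """Minimal single-image super-resolution sketch: gradient descent on
    ||A x - y||^2 with A = subsample(Gaussian blur). This is a generic
    baseline, not the specific method of the paper."""
    x = zoom(y, factor, order=3)                 # bicubic upsample as init
    for _ in range(iters):
        Ax = gaussian_filter(x, sigma)[::factor, ::factor]
        r = Ax - y                               # residual on low-res grid
        # adjoint: zero-fill upsample, then blur (Gaussian is symmetric)
        up = np.zeros_like(x)
        up[::factor, ::factor] = r
        x -= step * gaussian_filter(up, sigma)
    return x

lowres = np.random.rand(32, 32)
highres = sr_least_squares(lowres, factor=2)
print(highres.shape)  # (64, 64)
```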
{"title":"Enhancement of 250-MHz quantitative acoustic-microscopy data using a single-image super-resolution method","authors":"A. Basarab, D. Rohrbach, Ningning Zhao, J. Tourneret, D. Kouamé, J. Mamou","doi":"10.1109/ISBI.2017.7950645","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950645","url":null,"abstract":"Scanning acoustic microscopy (SAM) is a well-accepted imaging modality for forming quantitative, two-dimensional maps of acoustic properties of soft tissues at microscopic scales. The quantitative maps formed using our custom SAM system using a 250-MHz single-element transducer have a nominal resolution of 7 µm, which is insufficient for some investigations. To enhance spatial resolution, a SAM system operating at even higher frequencies could be designed, but associated costs and experimental difficulties are challenging. Therefore, the objective of this study is to evaluate the potential of super-resolution (SR) image processing to enhance the spatial resolution of quantitative maps in SAM. To the best of our knowledge, this is the first attempt at using post-processing, image-enhancement techniques in SAM. Results of realistic simulations and experimental data acquired from a standard resolution test pattern confirm the improved spatial resolution and the potential value of using SR in SAM.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"150 1","pages":"827-830"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79459530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The structural disconnectome: A pathology-sensitive extension of the structural connectome
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950539 | Pages: 366-370
C. Langen, M. Vernooij, L. Cremers, Wyke Huizinga, M. Groot, M. Ikram, T. White, W. Niessen
Brain connectivity is increasingly being studied using connectomes. Typical structural connectome definitions do not directly take white matter pathology into account. Presumably, pathology impedes signal transmission along fibres, leading to a reduction in function. In order to directly study disconnection and localize pathology within the connectome, we present the disconnectome, which considers only fibres that intersect with white matter pathology. To demonstrate the potential of the disconnectome in brain studies, we showed in a cohort of 4199 adults with varying loads of white matter lesions (WMLs) that: (1) disconnection is not a function of streamline density; (2) hubs are more affected by WMLs than peripheral nodes; (3) connections between hubs are more severely and frequently affected by WMLs than other connection types; and (4) connections between region clusters are often more severely affected than those within clusters.
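A minimal sketch of the disconnectome construction, under the assumption that streamline endpoints are mapped to parcellation regions and a fibre is kept only if any of its points falls inside the WML mask (details the abstract leaves open):

```python
import numpy as np

def disconnectome(streamlines, lesion_mask, labels, n_regions):
    """Connectivity matrix counting only lesion-intersecting streamlines.

    streamlines : list of (N_i, 3) integer voxel-coordinate arrays
    lesion_mask : 3D boolean array of WML voxels
    labels      : 3D integer array, 0 = background, 1..n_regions = regions
    """
    D = np.zeros((n_regions, n_regions), dtype=int)
    for sl in streamlines:
        i, j, k = sl[:, 0], sl[:, 1], sl[:, 2]
        if not lesion_mask[i, j, k].any():
            continue                      # keep only lesioned fibres
        a = labels[tuple(sl[0])]          # region at each endpoint
        b = labels[tuple(sl[-1])]
        if a > 0 and b > 0:
            D[a - 1, b - 1] += 1
            D[b - 1, a - 1] += 1
    return D

# Toy example: one streamline crossing a lesion, linking regions 1 and 2.
mask = np.zeros((10, 10, 10), bool); mask[5, 5, 5] = True
labels = np.zeros((10, 10, 10), int); labels[0, 0, 0] = 1; labels[9, 9, 9] = 2
sl = np.array([[0, 0, 0], [5, 5, 5], [9, 9, 9]])
print(disconnectome([sl], mask, labels, 2))
```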
{"title":"The structural disconnectome: A pathology-sensitive extension of the structural connectome","authors":"C. Langen, M. Vernooij, L. Cremers, Wyke Huizinga, M. Groot, M. Ikram, T. White, W. Niessen","doi":"10.1109/ISBI.2017.7950539","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950539","url":null,"abstract":"Brain connectivity is increasingly being studied using connectomes. Typical structural connectome definitions do not directly take white matter pathology into account. Presumably, pathology impedes signal transmission along fibres, leading to a reduction in function. In order to directly study disconnection and localize pathology within the connectome, we present the disconnectome, which only considers fibres that intersect with white matter pathology. To show the potential of the disconnectome in brain studies, we showed in a cohort of 4199 adults with varying loads of white matter lesions (WMLs) that: (1) Disconnection is not a function of streamline density; (2) Hubs are more affected by WMLs than peripheral nodes; (3) Connections between hubs are more severely and frequently affected by WMLs than other connection types; and (4) Connections between region clusters are often more severely affected than those within clusters.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"25 1","pages":"366-370"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89297445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An easy-to-use image labeling platform for automatic magnetic resonance image quality assessment
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950628 | Pages: 754-757
Thomas Kustner, Philipp Wolf, Martin Schwartz, Annika Liebgott, F. Schick, S. Gatidis, Bin Yang
In medical imaging, images are usually evaluated by a human observer (HO) with respect to the underlying diagnostic question, which can be a time-demanding and cost-intensive process. Model observers (MOs) that mimic the human visual system can support the HO during this reading process, or can provide feedback to the MR scanner and/or HO about the derived image quality. For this purpose, MOs are trained on HO-derived image labels with respect to a certain diagnostic task. We propose a non-reference image quality assessment system based on a machine-learning approach with a deep neural network and active learning to keep the amount of labeled training data needed small. A labeling platform was developed as a web application, with data security and confidentiality accounted for, to facilitate the HO labeling procedure. The platform is made publicly available.
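The abstract does not name the active learning query strategy; a common choice, shown below as an assumption, is pool-based uncertainty sampling, here with a stand-in scikit-learn classifier rather than the paper's deep network:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_round(X_labeled, y_labeled, X_pool, batch_size=10):
    """One round of pool-based active learning with uncertainty sampling:
    fit on the current labels, then query the pool samples the classifier
    is least certain about. Strategy and classifier are assumptions."""
    clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    proba = clf.predict_proba(X_pool)
    margin = np.abs(proba[:, 1] - 0.5)        # small margin = uncertain
    query_idx = np.argsort(margin)[:batch_size]
    return query_idx                          # send these to the HO to label

# Example with random features: 2 classes, 50 labeled, 500 unlabeled.
rng = np.random.default_rng(0)
Xl, yl = rng.normal(size=(50, 8)), rng.integers(0, 2, 50)
Xp = rng.normal(size=(500, 8))
print(active_learning_round(Xl, yl, Xp, batch_size=5))
```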
{"title":"An easy-to-use image labeling platform for automatic magnetic resonance image quality assessment","authors":"Thomas Kustner, Philipp Wolf, Martin Schwartz, Annika Liebgott, F. Schick, S. Gatidis, Bin Yang","doi":"10.1109/ISBI.2017.7950628","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950628","url":null,"abstract":"In medical imaging, images are usually evaluated by a human observer (HO) depending on the underlying diagnostic question which can be a time-demanding and cost-intensive process. Model observers (MO) which mimic the human visual system can help to support the HO during this reading process or can provide feedback to the MR scanner and/or HO about the derived image quality. For this purpose MOs are trained on HO-derived image labels with respect to a certain diagnostic task. We propose a non-reference image quality assessment system based on a machine-learning approach with a deep neural network and active learning to keep the amount of needed labeled training data small. A labeling platform is developed as a web application with accounted data security and confidentiality to facilitate the HO labeling procedure. The platform is made publicly available.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"23 4 1","pages":"754-757"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91233132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-angle TOF MR brain angiography of the common marmoset
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950714 | Pages: 1125-1128
M. Mescam, J. Brossard, N. Vayssiere, C. Fonta
The relation between normal and pathological aging and the cerebrovascular component is still unclear. In this context, the common marmoset, which has the advantage of enabling longitudinal studies over a reasonable timeframe, appears to be a good pre-clinical model. However, quantitative information on the macrovascular structure of the marmoset brain is still lacking. In this paper, we investigate the potential of multi-angle TOF MR angiography on a 3T MRI scanner for morphometric analysis of the marmoset brain vasculature. Our image processing pipeline relies heavily on multiscale vesselness enhancement filters to extract the 3D macrovasculature and perform subsequent morphometric calculations. Although multi-angle acquisition does not significantly improve the morphometric analysis compared to single-angle acquisition, it improves network extraction by increasing the robustness of the image processing algorithms.
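Multiscale vesselness enhancement of this kind is available off the shelf; the sketch below applies scikit-image's Frangi filter to a synthetic volume (the scales and parameters are illustrative, not the paper's):

```python
import numpy as np
from skimage.filters import frangi

# Multiscale vesselness enhancement on a synthetic 3D angiography volume.
# frangi implements the classic Hessian-based vesselness; black_ridges=False
# targets bright vessels on a dark background, as in TOF angiography.
volume = np.zeros((40, 40, 40))
volume[20, 20, :] = 1.0                        # a bright 1-voxel "vessel"
volume = volume + 0.05 * np.random.rand(*volume.shape)

vesselness = frangi(volume, sigmas=(1, 2, 3), black_ridges=False)
print(vesselness.shape, vesselness.max())
```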
{"title":"Multi-angle TOF MR brain angiography of the common marmoset","authors":"M. Mescam, J. Brossard, N. Vayssiere, C. Fonta","doi":"10.1109/ISBI.2017.7950714","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950714","url":null,"abstract":"The relation between normal and pathological aging and the cerebrovascular component is still unclear. In this context, the common marmoset, which has the advantage of enabling longitudinal studies over a reasonable timeframe, appears as a good pre-clinical model. However, there is still a lack of quantitative information on the macrovascular structure of the marmoset brain. In this paper, we investigate the potentiality of multi-angle TOF MR angiography using a 3T MRI scanner to perform morphometric analysis of the marmoset brain vasculature. Our image processing pipeline greatly relies on the use of multiscale vesselness enhancement filters to help extract the 3D macrovasculature and perform subsequent morphometric calculations. Although multi-angle acquisition does not improve morphometric analysis significantly as compared to single-angle acquisition, it improves the network extraction by increasing the robustness of image processing algorithms.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"1 1","pages":"1125-1128"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89877684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HEp-2 cell classification based on a Deep Autoencoding-Classification convolutional neural network
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950689 | Pages: 1019-1023
Jingxin Liu, Bolei Xu, L. Shen, J. Garibaldi, G. Qiu
In this paper, we present a novel deep learning model termed the Deep Autoencoding-Classification Network (DACN) for HEp-2 cell classification. The DACN consists of an autoencoder and a standard classification convolutional neural network (CNN), with the two architectures sharing the same encoding pipeline. The DACN model is jointly optimized for the classification error and the image reconstruction error using a multi-task learning procedure. We evaluate the proposed model on the publicly available ICPR2012 benchmark dataset. We show that this architecture is particularly effective when the training dataset is small, which is often the case in medical imaging applications. We present experimental results showing that the proposed approach outperforms all known state-of-the-art HEp-2 cell classification methods.
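A minimal sketch of the DACN idea: a shared encoder feeding both a decoder (reconstruction) and a classifier, trained with a joint multi-task loss. Layer sizes and the loss weight are assumptions, as the abstract does not give the architecture:

```python
import torch
import torch.nn as nn

class DACN(nn.Module):
    """Shared encoder with a reconstruction head and a classification head."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2))
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        z = self.encoder(x)               # shared encoding pipeline
        return self.decoder(z), self.classifier(z)

model = DACN()
x = torch.randn(8, 1, 64, 64)
y = torch.randint(0, 6, (8,))
recon, logits = model(x)
# Joint multi-task loss: classification error + reconstruction error.
loss = nn.functional.cross_entropy(logits, y) \
     + 0.5 * nn.functional.mse_loss(recon, x)   # 0.5 is an assumed weight
loss.backward()
print(float(loss))
```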
{"title":"HEp-2 cell classification based on a Deep Autoencoding-Classification convolutional neural network","authors":"Jingxin Liu, Bolei Xu, L. Shen, J. Garibaldi, G. Qiu","doi":"10.1109/ISBI.2017.7950689","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950689","url":null,"abstract":"In this paper, we present a novel deep learning model termed Deep Autoencoding-Classification Network (DACN) for HEp-2 cell classification. The DACN consists of an autoencoder and a normal classification convolutional neural network (CNN), while the two architectures shares the same encoding pipeline. The DACN model is jointly optimized for the classification error and the image reconstruction error based on a multi-task learning procedure. We evaluate the proposed model using the publicly available ICPR2012 benchmark dataset. We show that this architecture is particularly effective when the training dataset is small which is often the case in medical imaging applications. We present experimental results to show that the proposed approach outperforms all known state of the art HEp-2 cell classification methods.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"529 1","pages":"1019-1023"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77896441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neuron reconstruction from fluorescence microscopy images using sequential Monte Carlo estimation
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950462 | Pages: 36-39
M. Radojević, E. Meijering
Microscopic analysis of neuronal cell morphology is required in many studies in neurobiology. The development of computational methods for this purpose is an ongoing challenge and includes solving some of the fundamental computer vision problems such as detecting and grouping sometimes very noisy line-like image structures. Advancements in the field are impeded by the complexity and immense diversity of neuronal cell shapes across species and brain regions, as well as by the high variability in image quality across labs and experimental setups. Here we present a novel method for fully automatic neuron reconstruction based on sequential Monte Carlo estimation. It uses newly designed models for predicting and updating branch node estimates as well as novel initialization and final tree construction strategies. The proposed method was evaluated on 3D fluorescence microscopy images containing single neurons and neuronal networks for which manual annotations were available as gold-standard references. The results indicate that our method performs favorably compared to state-of-the-art alternative methods.
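To illustrate the estimation principle only (the paper's prediction and update models are considerably more elaborate), here is a toy particle filter that traces a bright curvilinear structure by predicting particle motion, weighting by image intensity, and resampling:

```python
import numpy as np

def smc_trace(img, seed, direction, n_particles=100, n_steps=30, step=2.0):
    """Toy sequential Monte Carlo tracer for a bright 2D ridge."""
    rng = np.random.default_rng(0)
    pos = np.tile(np.asarray(seed, float), (n_particles, 1))
    ang = np.full(n_particles, float(direction))
    trace = [seed]
    for _ in range(n_steps):
        ang = ang + rng.normal(scale=0.2, size=n_particles)   # predict
        pos = pos + step * np.stack([np.cos(ang), np.sin(ang)], axis=1)
        ij = np.clip(pos.round().astype(int), 0, np.array(img.shape) - 1)
        w = img[ij[:, 0], ij[:, 1]] + 1e-9                    # update
        w = w / w.sum()
        trace.append(tuple((w[:, None] * pos).sum(axis=0)))   # weighted mean
        idx = rng.choice(n_particles, n_particles, p=w)       # resample
        pos, ang = pos[idx], ang[idx]
    return np.array(trace)

img = np.zeros((100, 100)); img[:, 50] = 1.0                  # vertical line
path = smc_trace(img, seed=(5.0, 50.0), direction=0.0)
print(path[-1])  # ends further along the line, near column 50
```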
{"title":"Neuron reconstruction from fluorescence microscopy images using sequential Monte Carlo estimation","authors":"M. Radojević, E. Meijering","doi":"10.1109/ISBI.2017.7950462","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950462","url":null,"abstract":"Microscopic analysis of neuronal cell morphology is required in many studies in neurobiology. The development of computational methods for this purpose is an ongoing challenge and includes solving some of the fundamental computer vision problems such as detecting and grouping sometimes very noisy line-like image structures. Advancements in the field are impeded by the complexity and immense diversity of neuronal cell shapes across species and brain regions, as well as by the high variability in image quality across labs and experimental setups. Here we present a novel method for fully automatic neuron reconstruction based on sequential Monte Carlo estimation. It uses newly designed models for predicting and updating branch node estimates as well as novel initialization and final tree construction strategies. The proposed method was evaluated on 3D fluorescence microscopy images containing single neurons and neuronal networks for which manual annotations were available as gold-standard references. The results indicate that our method performs favorably compared to state-of-the-art alternative methods.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"30 1","pages":"36-39"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72954974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature selection and thyroid nodule classification using transfer learning
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950707 | Pages: 1096-1099
Tianjiao Liu, Shuaining Xie, Yukang Zhang, Jing Yu, Lijuan Niu, Weidong Sun
Ultrasonography is a valuable diagnostic method for thyroid nodules. Automatically discriminating benign and malignant nodules in ultrasound images can provide aided-diagnosis suggestions, or increase diagnostic accuracy where experts are lacking. The core problem is how to capture appropriate features for this specific task. Here, we propose a feature extraction method for ultrasound images based on convolutional neural networks (CNNs), aiming to introduce more meaningful and task-specific features into the classification. A CNN model trained on ImageNet data is transferred to the ultrasound image domain to generate semantic deep features under small-sample conditions. We then combine these deep features with conventional features such as the Histogram of Oriented Gradients (HOG) and the Scale-Invariant Feature Transform (SIFT) to form a hybrid feature space. Furthermore, to make the general deep features more pertinent to our problem, a feature subset selection process is employed for hybrid nodule classification, followed by a detailed discussion of the influence of feature number and feature composition method. Experimental results on 1037 images show that the accuracy of our proposed method is 0.929, outperforming other related methods by over 10%.
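A sketch of the hybrid-feature pipeline under several assumptions: ResNet-18 stands in for the unspecified ImageNet CNN, SIFT is omitted for brevity, and ANOVA-based selection with an SVM replaces whatever selector and classifier the paper actually uses:

```python
import numpy as np
import torch
import torchvision.models as models
from skimage.feature import hog
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

# Deep features from an ImageNet-pretrained CNN (ResNet-18 as a stand-in),
# truncated before the final classification layer. Downloads weights on
# first use; requires torchvision >= 0.13 for the weights API.
cnn = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
cnn.fc = torch.nn.Identity()      # expose the 512-d penultimate features
cnn.eval()

def hybrid_features(img):
    """Concatenate transferred deep features with a conventional HOG
    descriptor to form the hybrid feature vector."""
    x = torch.from_numpy(np.repeat(img[None, None], 3, axis=1))
    with torch.no_grad():
        deep = cnn(x).numpy().ravel()
    handcrafted = hog(img, pixels_per_cell=(16, 16))
    return np.concatenate([deep, handcrafted])

# Toy data: 20 random 224x224 "ultrasound" images with binary labels.
rng = np.random.default_rng(0)
X = np.stack([hybrid_features(rng.random((224, 224)).astype(np.float32))
              for _ in range(20)])
y = rng.integers(0, 2, 20)

# Feature subset selection on the hybrid space, then an SVM classifier.
selector = SelectKBest(f_classif, k=100).fit(X, y)
clf = SVC().fit(selector.transform(X), y)
print(selector.transform(X).shape)  # (20, 100)
```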
{"title":"Feature selection and thyroid nodule classification using transfer learning","authors":"Tianjiao Liu, Shuaining Xie, Yukang Zhang, Jing Yu, Lijuan Niu, Weidong Sun","doi":"10.1109/ISBI.2017.7950707","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950707","url":null,"abstract":"Ultrasonography is a valuable diagnosis method for thyroid nodules. Automatically discriminating benign and malignant nodules in the ultrasound images can provide aided diagnosis suggestions, or increase the diagnosis accuracy when lack of experts. The core problem in this issue is how to capture appropriate features for this specific task. Here, we propose a feature extraction method for ultrasound images based on the convolution neural networks (CNNs), try to introduce more meaningful and specific features to the classification. A CNN model trained with ImageNet data is transferred to the ultrasound image domain, to generate semantic deep features under small sample condition. Then, we combine those deep features with conventional features such as Histogram of Oriented Gradient (HOG) and Scale Invariant Feature Transform (SIFT) together to form a hybrid feature space. Furthermore, to make the general deep features more pertinent to our problem, a feature subset selection process is employed for the hybrid nodule classification, followed by a detailed discussion on the influence of feature number and feature composition method. Experimental results on 1037 images show that the accuracy of our proposed method is 0.929, which outperforms other relative methods by over 10%.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"43 1","pages":"1096-1099"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72801861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}