"Fully automated classification of mammograms using deep residual neural networks"
Neeraj Dhungel, G. Carneiro, A. Bradley
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950526 | ISBI 2017, pp. 310-314
In this paper, we propose a multi-view deep residual neural network (mResNet) for the fully automated classification of mammograms as either malignant or normal/benign. Specifically, our mResNet approach consists of an ensemble of deep residual networks (ResNets) over six input images: the unregistered craniocaudal (CC) and mediolateral oblique (MLO) mammogram views, together with automatically produced binary segmentation maps of the masses and micro-calcifications in each view. We then form the mResNet by concatenating the outputs of each ResNet at the second-to-last layer, followed by a final fully connected layer. The resulting mResNet is trained in an end-to-end fashion to produce a case-based mammogram classifier with the potential to be used in breast screening programs. We empirically show on the publicly available INbreast dataset that the proposed mResNet classifies mammograms into malignant or normal/benign with an AUC of 0.8.
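The late-fusion step described above, concatenating each ResNet's second-to-last-layer output and scoring the case with one fully connected layer, can be sketched as follows. This is a minimal illustration only: the feature dimension, the random weights, and the sigmoid output are assumptions, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical sketch of the mResNet fusion stage: each of the six inputs
# (CC/MLO views plus their mass and micro-calcification maps) is assumed
# to have already passed through its own ResNet backbone, yielding a
# feature vector from the second-to-last layer.
rng = np.random.default_rng(0)
n_views, feat_dim = 6, 128  # illustrative sizes, not from the paper

# Stand-ins for the per-view second-to-last-layer activations.
view_features = [rng.standard_normal(feat_dim) for _ in range(n_views)]

# Concatenate into a single case-level descriptor.
fused = np.concatenate(view_features)  # shape: (6 * 128,)

# Final fully connected layer (weights here are random placeholders).
W = rng.standard_normal((1, n_views * feat_dim))
b = np.zeros(1)
logit = W @ fused + b
prob_malignant = 1.0 / (1.0 + np.exp(-logit))  # sigmoid for binary output

print(fused.shape)
```

In the paper the whole stack, backbones plus this fusion layer, is trained end-to-end rather than with fixed per-view features as above.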
"Convolutional neural network pruning to accelerate membrane segmentation in electron microscopy"
J. Roels, Jonas De Vylder, J. Aelterman, Y. Saeys, W. Philips
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950600 | ISBI 2017, pp. 633-637
Biological membranes are among the most basic structures and regions of interest in cell biology. In the study of membranes, segment extraction is a well-known and difficult problem because of impeding noise, directional and thickness variability, etc. Recent advances in electron microscopy membrane segmentation cope with such difficulties by training convolutional neural networks. However, because of the massive number of features that have to be extracted during forward propagation, practical usability diminishes, even with state-of-the-art GPUs. A significant part of these network features typically contains redundancy through correlation and sparsity. In this work, we propose a pruning method for convolutional neural networks that minimizes the increase in training loss. We show that the pruned networks, after retraining, are more efficient in terms of time and memory, without significantly affecting network accuracy. This way, we obtain real-time membrane segmentation performance for our specific electron microscopy setup.
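As a rough illustration of filter pruning, the sketch below scores the filters of one convolutional layer and discards the weakest half. Note the scoring here is an L1-norm magnitude proxy chosen for simplicity; the paper's actual criterion, selecting parameters whose removal minimizes the training-loss increase, would replace that step.

```python
import numpy as np

# Illustrative filter pruning for a single conv layer. Shapes and the
# magnitude-based score are assumptions; the paper ranks candidates by
# estimated training-loss increase instead.
rng = np.random.default_rng(1)
n_filters, c, k = 8, 3, 3
weights = rng.standard_normal((n_filters, c, k, k))

# Score each filter; small-norm filters are assumed to contribute least.
scores = np.abs(weights).reshape(n_filters, -1).sum(axis=1)

keep = 4  # prune half of the filters
kept_idx = np.sort(np.argsort(scores)[-keep:])
pruned_weights = weights[kept_idx]  # smaller layer -> less time and memory

print(pruned_weights.shape)
```

After pruning, the smaller network is retrained, which is what restores accuracy in the paper's experiments.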
"High-order Boltzmann machine-based unsupervised feature learning for multi-atlas segmentation"
Liang Sun, Wei Shao, Daoqiang Zhang
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950571 | ISBI 2017, pp. 507-510
Multi-atlas based label fusion methods have been successfully used for medical image segmentation. In the field of brain region segmentation, multi-atlas based methods propagate labels from multiple atlases to a target image according to the similarity between patches in the target image and the atlases. Most existing multi-atlas based methods use intensity features, which can hardly capture high-order information in brain images. In light of this, in this paper we apply high-order restricted Boltzmann machines to represent brain images and use the learnt features for segmentation of brain regions of interest (ROIs). Specifically, we first capture the covariance and mean information of patches with a high-order Boltzmann machine. Then, we propagate labels according to the similarity of the learnt high-order features. We validate our feature learning method on two well-known label fusion methods, local-weighted voting (LWV) and the non-local mean patch-based method (PBM). Experimental results on the NIREP dataset demonstrate that our method improves the performance of both LWV and PBM by using the high-order features.
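A minimal sketch of the patch-based weighted-voting step that fusion methods like LWV build on: the target patch is compared to each atlas patch, and atlas labels are combined with similarity weights. The Gaussian kernel, patch size, and raw-intensity features below are illustrative assumptions; the paper substitutes the learnt high-order RBM features into exactly this similarity computation.

```python
import numpy as np

# Toy label fusion over flattened 3x3x3 patches (27 values) from 5
# atlases. Data and kernel width are placeholders.
rng = np.random.default_rng(2)
patch_dim, n_atlases = 27, 5

target = rng.standard_normal(patch_dim)
atlas_patches = rng.standard_normal((n_atlases, patch_dim))
atlas_labels = np.array([0, 1, 1, 0, 1])  # binary ROI labels per atlas

# Gaussian similarity weights from squared feature distances.
sigma = 5.0
d2 = ((atlas_patches - target) ** 2).sum(axis=1)
w = np.exp(-d2 / (2 * sigma**2))

# Weighted vote: the fused label is the one with the larger total weight.
fused_label = int((w * atlas_labels).sum() > (w * (1 - atlas_labels)).sum())
print(fused_label)
```

Swapping the intensity vectors for learned features changes only how `d2` is computed, which is why the method plugs into both LWV and PBM.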
"Neurite reconstruction from time-lapse sequences using co-segmentation"
S. Gulyanon, N. Sharifai, Michael D. Kim, A. Chiba, G. Tsechpenakis
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950549 | ISBI 2017, pp. 410-414
We introduce a novel segmentation method for time-lapse image stacks of neurites based on the co-segmentation principle. Our method aggregates information from multiple stacks to improve the segmentation task, using a neurite model and a tree similarity term. The neurite model takes into account branching characteristics, such as local shape smoothness and continuity, while the tree similarity term exploits local branch dynamics across image stacks. Our approach improves accuracy in ambiguous regions, successfully handling out-of-focus effects and branch bifurcations. We validated our method on Drosophila sensory neuron datasets and compared it with existing methods.
"FMRI data classification based on hybrid temporal and spatial sparse representation"
Huan Liu, Mianzhi Zhang, Xintao Hu, Yudan Ren, Shu Zhang, Junwei Han, Lei Guo, Tianming Liu
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950674 | ISBI 2017, pp. 957-960
Task-based functional magnetic resonance imaging (tfMRI) is widely used to localize brain regions or networks in response to various cognitive tasks. However, given two groups of tfMRI data acquired under distinct task paradigms, it is not clear whether there exist intrinsic inter-group differences in signal composition patterns, and if so, whether these differences could be used for data discrimination. The major challenges originate from the high dimensionality and low signal-to-noise ratio of fMRI data. In this paper, we propose a novel framework using hybrid temporal and spatial sparse representation to tackle the above challenges. We applied the proposed framework to Human Connectome Project (HCP) tfMRI data. Our experimental results demonstrate that the task types of fMRI data can be successfully classified, achieving 100% classification accuracy. We also show that both task-related components and resting-state networks (RSNs) can be reliably identified. Our study provides a novel data-driven approach to detecting discriminative inter-group differences in fMRI data based on signal composition patterns, and thus can potentially be used to control fMRI data quality and to infer biomarkers of brain disorders.
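The sparse-representation idea can be illustrated with a toy factorization: the tfMRI signal matrix X (time × voxels) is approximated as D·A with a temporal dictionary D and sparse spatial coefficients A. The sketch below solves only the sparse-coding half with ISTA against a fixed random dictionary; the dimensions, penalty weight, and fixed dictionary are assumptions, and the paper's full hybrid temporal-and-spatial scheme is considerably richer.

```python
import numpy as np

# Toy sparse coding: X ~ D @ A with an l1 penalty on A, via ISTA.
rng = np.random.default_rng(3)
T, V, K = 50, 40, 10  # time points, voxels, dictionary atoms (placeholders)
X = rng.standard_normal((T, V))
D = rng.standard_normal((T, K))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms

lam = 0.5  # illustrative sparsity weight
step = 1.0 / np.linalg.norm(D.T @ D, 2)  # 1/Lipschitz constant
A = np.zeros((K, V))
for _ in range(100):
    G = D.T @ (D @ A - X)  # gradient of 0.5 * ||X - D A||_F^2
    Z = A - step * G
    A = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)  # soft-threshold

sparsity = float(np.mean(A == 0))
print(A.shape)
```

The resulting sparse coefficient maps are the kind of signal-composition descriptors on which a downstream classifier can separate task paradigms.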
"Quad-edge active contours for biomedical image segmentation"
D. G. Obando, Lauriane Rohfritsch, M. Faure, L. Danglot, V. Meas-Yedid, J. Olivo-Marin, A. Dufour
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950715 | ISBI 2017, pp. 1129-1132
We investigate a novel, parallel implementation of active contours for image segmentation that combines a multi-agent system with a quad-edge representation of the contour. The control points of the contour evolve independently from one another in a parallel fashion, handling contour deformation and convergence, while the quad-edge representation simplifies contour manipulation and local re-sampling during its evolution. We illustrate this new approach on biological images and compare results with a conventional active contour implementation, discussing its benefits and limitations. This preliminary work is freely available as a plug-in for our open-source Icy platform, where future extensions will be developed.
"Case-control discrimination through effective brain connectivity"
A. Crimi, Luca Dodero, Vittorio Murino, Diego Sona
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950677 | ISBI 2017, pp. 970-973
Functional and structural connectivity convey different information about the brain. The integration of these approaches is receiving growing attention from the research community, as it can shed new light on brain function. This manuscript proposes a constrained autoregressive model with different lag orders that generates an "effective" connectivity matrix, modeling the structural connectivity while integrating the functional activity. Multiple orders are investigated to observe how different time dependencies influence the effective connectivity. The proposed approach alters an initial structural connectivity representation according to functional data by minimizing the reconstruction error of an autoregressive model constrained by the structural prior. The model is further validated in a case-control experiment aimed at differentiating healthy subjects from young patients affected by autism spectrum disorder.
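A hedged sketch of the core idea at lag order 1: autoregressive coefficients are fit to the functional time series, but only on entries permitted by a binary structural-connectivity mask, so the resulting "effective" matrix respects the structural prior. The mask, dimensions, and random signals below are placeholders, and the paper additionally explores higher lag orders.

```python
import numpy as np

# Structurally constrained AR(1) fit: minimize ||Y_t - A Y_{t-1}||^2
# with A supported only where the structural mask S is nonzero.
rng = np.random.default_rng(4)
n, T = 6, 200  # regions, time points (illustrative sizes)
S = (rng.random((n, n)) < 0.5).astype(float)  # toy structural prior
np.fill_diagonal(S, 1.0)  # allow self-connections

Y = rng.standard_normal((n, T))  # stand-in functional time series

A = np.zeros((n, n))
Ypast, Ynow = Y[:, :-1], Y[:, 1:]
for i in range(n):
    idx = np.nonzero(S[i])[0]  # structurally allowed predecessors of i
    coef, *_ = np.linalg.lstsq(Ypast[idx].T, Ynow[i], rcond=None)
    A[i, idx] = coef  # row-wise constrained least squares

# Effective connectivity inherits the structural support.
print(bool(np.all(A[S == 0] == 0)))
```

Vectorizing A (or its higher-order stack) gives the per-subject descriptor used for case-control discrimination.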
"Globally optimal breast mass segmentation from DCE-MRI using deep semantic segmentation as shape prior"
Gabriel Maicas, G. Carneiro, A. Bradley
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950525 | ISBI 2017, pp. 305-309
We introduce a new fully automated breast mass segmentation method for dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The method is based on globally optimal inference in a continuous space (GOCS) using a shape prior computed from a semantic segmentation produced by a deep learning (DL) model. We propose this approach because the limited amount of annotated training samples does not allow the implementation of a robust DL model that could produce accurate segmentation results on its own. Furthermore, GOCS does not need precise initialisation, unlike locally optimal methods in a continuous space (e.g., Mumford-Shah based level set methods); it also has smaller memory complexity than globally optimal inference in a discrete space (e.g., graph cuts). Experimental results show that the proposed method produces the current state-of-the-art mass segmentation (from DCE-MRI) results, achieving a mean Dice coefficient of 0.77 on the test set.
"Age estimation from brain MRI images using deep learning"
Tzu-Wei Huang, Hwann-Tzong Chen, Ryuichi Fujimoto, Koichi Ito, Kai Wu, Kazunori Sato, Y. Taki, H. Fukuda, T. Aoki
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950650 | ISBI 2017, pp. 849-852
Estimating human age from brain MR images is useful for the early detection of Alzheimer's disease. In this paper we propose a fast and accurate method based on deep learning to predict a subject's age. Compared with previous methods, our algorithm achieves comparable accuracy using fewer input images. With our GPU implementation, a prediction takes 20 ms. We evaluate our method using the mean absolute error (MAE) and predict a subject's age with an MAE of 4.0 years.
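The MAE metric used above is simply the average absolute difference between predicted and true ages; a toy computation with made-up values:

```python
import numpy as np

# MAE on four hypothetical subjects (ages are invented for illustration).
true_age = np.array([25.0, 60.0, 44.0, 71.0])
pred_age = np.array([28.0, 57.0, 46.0, 66.0])
mae = np.mean(np.abs(pred_age - true_age))
print(mae)  # 3.25
```

An MAE of 4.0 years, as reported, means predictions are off by four years on average over the test subjects.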
"Identifying the primary site of origin of MRI brain metastases from lung and breast cancer following a 2D radiomics approach"
Rafael Ortiz-Ramón, A. Larroza, E. Arana, D. Moratal
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950735 | ISBI 2017, pp. 1213-1216
Detection of brain metastases in patients with an undiagnosed primary cancer is unusual but does occur. In these cases, identifying the cancer's site of origin by visual examination of magnetic resonance (MR) images is not feasible. Recently, radiomics has been proposed to analyze visually imperceptible imaging characteristics that differ among classes. In this study we analyzed 46 T1-weighted MR images of brain metastases from 29 patients: 29 lesions of lung and 17 of breast origin. A total of 43 radiomics texture features were extracted from the metastatic lesions. Support vector machine (SVM) and k-nearest neighbors (k-NN) classifiers were implemented to evaluate classification performance. The influence of gray-level quantization on the computation of texture features was also examined. The best classification (AUC = 0.953 ± 0.061), evaluated with nested cross-validation, was obtained using the SVM classifier with two texture features derived from the co-occurrence matrix computed with 16 gray levels.
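The gray-level quantization and co-occurrence-matrix (GLCM) computation behind such texture features can be sketched as follows; the random image, the single horizontal offset, and the Haralick contrast feature are illustrative choices (the paper's exact winning feature pair is not specified here), but the 16-level quantization mirrors its best-performing setting.

```python
import numpy as np

# Quantize an image to 16 gray levels and build a normalized GLCM.
rng = np.random.default_rng(5)
img = rng.random((32, 32))  # stand-in for a lesion ROI in [0, 1)

levels = 16
q = np.minimum((img * levels).astype(int), levels - 1)  # 16-level bins

# Co-occurrence counts for the horizontal neighbor offset (0, 1).
glcm = np.zeros((levels, levels))
np.add.at(glcm, (q[:, :-1], q[:, 1:]), 1)
glcm /= glcm.sum()  # normalize to a joint probability

# Haralick contrast: sum over (i - j)^2 * p(i, j).
i, j = np.indices((levels, levels))
contrast = float(((i - j) ** 2 * glcm).sum())
print(glcm.shape)
```

Feeding a handful of such scalar features per lesion into an SVM with nested cross-validation reproduces the shape of the paper's pipeline.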