Detectability of perfusion defect in gated dynamic cardiac SPECT images
Xiaofeng Niu, Yongyi Yang, M. Wernick
Pub Date: 2009-06-28 | DOI: 10.1109/ISBI.2009.5193204
Recently we developed an image reconstruction procedure aimed at unifying gated imaging and dynamic imaging in nuclear cardiac imaging. It yields a single image sequence that simultaneously shows both cardiac motion and changes in tracer distribution over the course of imaging. In this work, we further develop and investigate the feasibility of our gated dynamic imaging procedure for perfusion-defect detection in cardiac SPECT imaging, where the challenge is even greater without the use of fast camera rotations. We study the saliency of temporal kinetic information derived from the reconstructed dynamic images for differentiating defects from normal cardiac perfusion, and we propose several metrics to characterize this salient kinetic information in gated dynamic images. The proposed development is demonstrated using simulated gated cardiac imaging with the NCAT phantom and Tc-99m teboroxime as the imaging agent.
A general framework for automatic detection of matching lesions in follow-up CT
J. Moltz, M. Schwier, H. Peitgen
Pub Date: 2009-06-28 | DOI: 10.1109/ISBI.2009.5193184
In follow-up CT examinations of cancer patients, therapy success is evaluated by estimating the change in tumor size from diameter or volume comparisons between corresponding lesions. We present an algorithm that automates the detection of matching lesions, given a baseline segmentation mask. It is generally applicable and does not need an organ mask or CAD findings; only a coarse registration of the datasets is required. In the first step, lesion candidates are identified in a local area by gray-value filtering and detection of circular structures using a Hough transform. On all candidate voxels, template matching based on normalized cross-correlation is performed. The method was evaluated on clinical follow-up data comprising 94 lung nodules, 107 liver metastases, and 137 lymph nodes. The ratio of correctly detected lesions was 96%, 84%, and 85%, respectively, at an average computation time of 0.9 s per lesion on a standard PC.
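The candidate-scoring step described above relies on normalized cross-correlation (NCC) between a baseline lesion template and image patches around candidate voxels. A minimal 2D sketch of that scoring is given below; the function names and the simple exhaustive search over candidates are our assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def normalized_cross_correlation(template, patch):
    """NCC between two equally sized arrays, in [-1, 1]."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
    if denom == 0:
        return 0.0  # flat patch or template: no correlation defined
    return float((t * p).sum() / denom)

def best_match(image, template, candidates):
    """Return the candidate (row, col) whose patch best matches the template."""
    th, tw = template.shape
    best, best_score = None, -2.0
    for r, c in candidates:
        patch = image[r:r + th, c:c + tw]
        if patch.shape != template.shape:
            continue  # candidate too close to the image border
        score = normalized_cross_correlation(template, patch)
        if score > best_score:
            best, best_score = (r, c), score
    return best, best_score
```

Because NCC subtracts the mean and normalizes by the energy of both arrays, the score is invariant to additive and multiplicative intensity changes between baseline and follow-up scans.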
Template-based reconstruction of human extraocular muscles from magnetic resonance images
Q. Wei, S. Sueda, Joel Miller, J. Demer, D. Pai
Pub Date: 2009-06-28 | DOI: 10.1109/ISBI.2009.5192994
Understanding the mechanisms of eye movement is difficult without a realistic biomechanical model. We present an efficient and robust computational framework for building subject-specific models of the orbit from magnetic resonance images (MRIs). We reconstruct three-dimensional geometric models of the major structures of the orbit (six extraocular muscles, orbital wall, optic nerve, and globe) by fitting a template to the MRIs of individual subjects. A generic template captures the anatomical properties of these orbital structures and serves as the prior knowledge to improve the completeness and robustness of the model reconstruction. We develop an automatic fitting process, which combines parametric surface fitting with successive image feature selections. Reconstructed orbit models from different subjects are demonstrated. The accuracy of the proposed method is validated through comparison of reconstructed extraocular muscle cross sections with manual segmentation. The Dice coefficient is used as the metric and good agreement is observed.
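The validation metric mentioned in this abstract, the Dice coefficient, measures the overlap of two binary segmentation masks as 2|A∩B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical masks). A minimal implementation (the function name and the both-empty convention are ours, not from the paper):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total
```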
An optimized set of 3D fractal and multifractal features for the epileptogenic focus characterization in SPECT imaging
Renaud Lopes, M. Vermandel, A. Dewalle-Vignion, S. Maouche, N. Betrouni
Pub Date: 2009-06-28 | DOI: 10.1109/ISBI.2009.5193106
Fractal geometry may be an efficient tool for texture analysis in medical imaging. However, its application has been largely restricted to 2D cases and to the use of a single approximation method for the fractal dimension (FD). Recently, multifractal analysis has shown interesting results in this field. This study focuses on the use of an optimized set of 3D fractal and multifractal features for epileptogenic-focus characterization in SPECT imaging. Our results show that this optimized set, compared with various texture features, improves the classification rate obtained with Support Vector Machines (SVMs). Moreover, the results are significantly better than those of the clinical method SISCOM (Subtraction Ictal SPECT Co-registered to MRI).
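The fractal dimension (FD) referenced here is most commonly approximated by box counting: cover the structure with boxes of side s, count the occupied boxes N(s), and estimate the slope of log N(s) versus log(1/s), since N(s) ∝ s^(-D) for a fractal of dimension D. A simplified 2D sketch of that standard estimator follows; the paper works with 3D data and multifractal spectra, so this version is ours, not theirs:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a binary 2D mask
    from the slope of log N(s) against log(1/s)."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h = (mask.shape[0] + s - 1) // s  # number of box rows (ceil)
        w = (mask.shape[1] + s - 1) // s  # number of box columns (ceil)
        n = 0
        for i in range(h):
            for j in range(w):
                # a box is "occupied" if any foreground pixel falls in it
                if mask[i * s:(i + 1) * s, j * s:(j + 1) * s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

For a filled 2D region the estimate approaches 2; for a thin curve it approaches 1, which is what makes the measure useful for characterizing texture complexity.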
Fast Haar-wavelet denoising of multidimensional fluorescence microscopy data
F. Luisier, C. Vonesch, T. Blu, M. Unser
Pub Date: 2009-06-28 | DOI: 10.1109/ISBI.2009.5193046
We propose a novel denoising algorithm to reduce the Poisson noise that is typically dominant in fluorescence microscopy data. To process large datasets at a low computational cost, we use the unnormalized Haar wavelet transform. Thanks to some of its appealing properties, independent unbiased MSE estimates can be derived for each subband. Based on these Poisson unbiased MSE estimates, we then optimize linearly parametrized interscale thresholding. Correlations between adjacent images of the multidimensional data are accounted for through a sliding-window approach. Experiments on simulated and real data show that the proposed solution is qualitatively similar to a state-of-the-art multiscale method, while being orders of magnitude faster.
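The unnormalized Haar transform used here computes pairwise sums (lowpass) and pairwise differences (highpass), so lowpass coefficients of Poisson data remain sums of Poisson variables at every scale, which is what makes unbiased MSE estimation per subband tractable. The paper optimizes a linearly parametrized interscale estimator; the one-level sketch below substitutes a plain soft threshold on the detail band purely to show the transform mechanics, and that thresholding rule is not the authors' method:

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level unnormalized Haar denoising of a 1D signal (sketch)."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:
        x = np.append(x, x[-1])      # pad to even length by replication
    s = x[0::2] + x[1::2]            # unnormalized lowpass: pairwise sums
    d = x[0::2] - x[1::2]            # unnormalized highpass: differences
    # Placeholder shrinkage: soft-threshold the detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    # Inverse of the unnormalized transform: x0 = (s + d)/2, x1 = (s - d)/2
    out = np.empty_like(x)
    out[0::2] = (s + d) / 2.0
    out[1::2] = (s - d) / 2.0
    return out[:len(signal)]
```

With `threshold = 0` the round trip is the identity, confirming the analysis/synthesis pair is consistent; in practice the shrinkage would be driven by the Poisson unbiased MSE estimate rather than a fixed threshold.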
Profile-guided optimization of critical medical imaging algorithms
D. Kaeli, B. Jang, Perhaad Mistry, Dana Schaa
Pub Date: 2009-06-28 | DOI: 10.1109/ISBI.2009.5193300
Given the rapid growth in the computational requirements of medical image analysis, Graphics Processing Units (GPUs) have begun to be used to address these demands. Although GPUs are well suited to the processing underlying medical image reconstruction, extracting the full benefit of moving to GPU platforms requires significant programming effort, which presents a fundamental barrier to broader adoption of GPU acceleration across medical imaging applications. In this paper we describe our experience accelerating a number of challenging medical imaging applications and discuss how we use profile-guided analysis to reap the full benefits available on GPU platforms. Our work considers different GPU architectures, as well as how to fully exploit the benefits of using multiple GPUs.
Differential equation-driven regularization for joint FMT-CT imaging
Damon E. Hyde, E. Miller, D. Brooks, V. Ntziachristos
Pub Date: 2009-06-28 | DOI: 10.1109/ISBI.2009.5193293
A primary motivation for multi-modal imaging is to improve reconstructions for low-resolution functional modalities using high-resolution structural information. Most such approaches assume that the anatomic and functional images share a common physical structure. For fluorescence molecular tomography (FMT), however, this may be only approximately valid. We thus present and analyze a regularization scheme that allows more flexible use of anatomic images. Using parallels between regularization and statistical modeling, we develop a stochastic PDE that shares information across structural boundaries. Simulations indicate that our approach is capable of obtaining more accurate reconstructions than methods treating each tissue independently.
In vivo comparison of real-time tracking algorithms for interventional flexible endoscopy
N. Masson, F. Nageotte, P. Zanne, M. Mathelin
Pub Date: 2009-06-28 | DOI: 10.1109/ISBI.2009.5193315
Flexible endoscopes are used in many diagnostic and interventional procedures. Physiological motions may make the physician's task very difficult to perform. Assistance could be provided by motorized endoscopes and a real-time visual tracking algorithm that automatically follows a selected target. To control the motors, one needs an accurate estimate of the target's motion in the endoscopic view, which requires an efficient tracking algorithm. In this paper, we compare existing tracking algorithms on various in vivo targets in order to assess their behavior under different conditions. The study shows that tracking algorithms must overcome several issues in an in vivo environment, such as illumination changes and forward/backward motions of the target.
A framework for craniofacial surgery simulation based on pre-specified target face configurations
Sheng-Zheng Wang, J. Gee, Jie Yang
Pub Date: 2009-06-28 | DOI: 10.1109/ISBI.2009.5193237
This paper presents a novel method that assists surgeons in automatically computing an optimal surgical plan by directly specifying the desired correction to a facial outline. First, the desired facial appearance is prescribed using a 3D sculpting tool, while the cut regions of the skull are defined based on facial anatomy. Then, the face meshes are deformed using an improved biomechanical model: virtual external forces are driven by the displacements between node coordinates of the original and specified face meshes, and free and fixed nodes are defined from the contact surfaces between the soft tissues and the bones within the cut regions. Finally, the shape of the contact surfaces is updated following the deformation of the soft tissues. After registering the deformable contact surfaces to the cut surfaces, the final positions of the cut bones are estimated. Evaluation of preliminary experimental results quantitatively demonstrates the effectiveness of the proposed approach.
Computer-aided prognosis of ER+ breast cancer histopathology and correlating survival outcome with Oncotype DX assay
A. Basavanhally, Jun Xu, S. Ganesan, A. Madabhushi
Pub Date: 2009-06-28 | DOI: 10.1109/ISBI.2009.5193186
The current gold standard for predicting disease survival and outcome for lymph node-negative, estrogen receptor-positive breast cancer (LN-, ER+ BC) patients is the gene-expression-based assay Oncotype DX. In this paper, we present a novel computer-aided prognosis (CAP) scheme that employs quantitatively derived image information to predict patient outcome analogous to the Oncotype DX Recurrence Score (RS), with a high RS implying poor outcome and vice versa. While digital pathology has made tissue specimens amenable to computer-aided diagnosis (CAD) for disease detection, our CAP scheme is the first of its kind for predicting disease outcome and patient survival. Since cancer grade is known to be correlated with disease outcome, with low grade implying good outcome and vice versa, our CAP scheme captures quantitative image features that are reflective of BC grade. Our scheme first semi-automatically detects BC nuclei via an Expectation-Maximization-driven algorithm. Using the nuclear centroids, two graphs (a Delaunay triangulation and a minimum spanning tree) are constructed, and a total of 12 features are extracted from each image. A non-linear dimensionality reduction scheme, Graph Embedding (GE), projects the image-derived features into a low-dimensional space, and a Support Vector Machine (SVM) classifies the BC images in the reduced-dimensional space. On a cohort of 37 samples, over 100 trials of 3-fold randomized cross-validation, the SVM yielded a mean accuracy of 84.15% in distinguishing samples with low and high RS and 84.12% in distinguishing low- and high-grade BC. The projection of the high-dimensional image-feature data onto a 1D line for all BC samples via GE shows a clear separation between low, intermediate, and high BC grades, which in turn correlates strongly with low, medium, and high RS. These results suggest that our image-based CAP scheme might provide a cheaper alternative to Oncotype DX for predicting BC outcome.
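The graph construction described above, a Delaunay triangulation and a minimum spanning tree over nuclear centroids, yields edge-length statistics of the kind used as grade-sensitive features. A sketch using SciPy follows; the specific feature set here (mean and standard deviation of edge lengths) is our assumption, since the abstract does not enumerate the 12 features the paper extracts:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def graph_features(centroids):
    """Edge-length statistics of the Delaunay triangulation and the
    minimum spanning tree (MST) built over nuclear centroids."""
    pts = np.asarray(centroids, dtype=float)
    # Unique Delaunay edges, collected from the triangle list
    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    d_lengths = np.array([np.linalg.norm(pts[a] - pts[b]) for a, b in edges])
    # MST computed over the complete pairwise-distance graph
    mst = minimum_spanning_tree(squareform(pdist(pts)))
    m_lengths = mst.data  # the n-1 surviving edge weights
    return {
        "delaunay_mean": float(d_lengths.mean()),
        "delaunay_std": float(d_lengths.std()),
        "mst_mean": float(m_lengths.mean()),
        "mst_std": float(m_lengths.std()),
    }
```

The intuition is that in high-grade tumors the nuclear architecture becomes denser and more disordered, which shifts the distribution of edge lengths in both graphs.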