Pub Date: 2009-06-28. DOI: 10.1109/ISBI.2009.5193031
Guoping Chang, T. Pan, John W. Clark, O. Mawlawi
Super-Resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images acquired from different points of view (POV). In this paper, we propose a new implementation of the SR technique (NSR) in which the required low-resolution images are generated by shifting the reconstruction pixel grid during image reconstruction rather than by acquiring data from different POV. To reduce the overall processing time and memory storage, we further propose two optimized SR implementations (NSR-O1 and NSR-O2) that require only a subset of the low-resolution images (the two sides and the diagonal of the image matrix, respectively). The objective of this paper is to test the performance of the NSR, NSR-O1, and NSR-O2 implementations and to compare them with the original SR implementation (OSR) in experimental studies.
Materials and Methods: A point-source study and a NEMA/IEC phantom study were conducted. In each study, an OSR image (256×256) was generated by combining 16 (4×4) low-resolution images (64×64) reconstructed from 16 data sets acquired from different POV. Another set of 16 low-resolution images was reconstructed from the same (first) data set using different reconstruction POV to generate an NSR image (256×256). In addition, two subsets of this second 16-image set (the two sides and the diagonal of the image matrix, respectively) were combined to generate the NSR-O1 and NSR-O2 images. The four SR images (OSR, NSR, NSR-O1, and NSR-O2) were compared with one another with respect to contrast, resolution, noise, and SNR. For reference, a comparison with a native reconstruction (NR) image using a high-resolution pixel grid (256×256) was also performed.
Results: The point-source study showed that the proposed NSR, NSR-O1, and NSR-O2 images exhibited contrast and resolution nearly identical to the OSR image (0.5% and 1.2% average difference, respectively). Comparisons between the SR and NR images for the point-source study showed that the NR image exhibited on average 30% lower contrast and 8% lower resolution. The NEMA/IEC phantom study showed that the three NSR images exhibited noise structures similar to one another but different from the OSR image. The SNR of the three NSR images was similar (2.1% difference) but on average 22% lower than that of the OSR image, largely due to increased background noise, while the NR image had on average 14.5% lower SNR than the three NSR images.
Conclusion: The NSR implementation can potentially replace the OSR approach in current PET scanners while maintaining similar contrast and resolution, albeit at a somewhat lower SNR. The NSR implementation can be further optimized (NSR-O1, NSR-O2) by using only a subset of the low-resolution images, achieving similar contrast, resolution, and SNR while requiring less processing time and memory storage.
Title: Implementation and optimization of a new Super-Resolution technique in PET imaging. Published in: 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro.
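The grid-shifting idea above can be sketched in a few lines: if the k×k low-resolution images are reconstructed on pixel grids offset by 1/k of a low-resolution pixel, combining them amounts to interleaving their samples on the high-resolution grid. A minimal numpy sketch (function name and data layout are illustrative, not from the paper):

```python
import numpy as np

def combine_shifted_lowres(lowres):
    """Combine a k x k grid of shifted low-resolution images into one
    high-resolution image.

    lowres[(i, j)] is the low-resolution image whose reconstruction grid was
    shifted by (i/k, j/k) of a pixel; its samples therefore fill every k-th
    pixel of the high-resolution grid, starting at offset (i, j).
    """
    k = int(round(np.sqrt(len(lowres))))
    n = next(iter(lowres.values())).shape[0]
    hr = np.zeros((n * k, n * k))
    for (i, j), img in lowres.items():
        hr[i::k, j::k] = img          # interleave samples on the fine grid
    return hr
```

A real SR combination would typically weight or deconvolve the interleaved samples rather than place them directly; this only illustrates the grid bookkeeping.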
Pub Date: 2009-06-28. DOI: 10.1109/ISBI.2009.5193267
Po Wang, C. Kelly, M. Brady
The aim of this work is to segment and quantify tumour vasculature in 3D fluorescence microscope images. Such images have poor contrast, and the vascular features vary substantially within a 3D volume. In this paper, we introduce a method to estimate local phase in 3D images based on monogenic signal theory, and we illustrate its performance on our vasculature images.
Title: Application of 3D local phase theory in vessel segmentation.
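The local-phase estimate described above builds on the monogenic signal, i.e. the image paired with its Riesz transform. A 2D numpy sketch of the computation (the paper works in 3D, where the Riesz transform has three components; all names here are illustrative):

```python
import numpy as np

def monogenic_local_phase(img):
    """Local phase of a (bandpassed) 2D image via the Riesz transform.

    The Riesz components are computed in the Fourier domain with the
    transfer functions -i*u/|u| and -i*v/|u|; the local phase is the angle
    between the even (image) part and the odd (Riesz magnitude) part.
    """
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                      # avoid division by zero at DC
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))
    r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))
    return np.arctan2(np.sqrt(r1**2 + r2**2), img)
```

On a pure cosine pattern this returns phase 0 on ridges and π/2 at zero crossings, which is the feature-type interpretation local-phase segmentation relies on.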
Pub Date: 2009-06-28. DOI: 10.1109/ISBI.2009.5193200
V. F. V. Ravesteijn, Lingxiao Zhao, C. Botha, F. Post, F. Vos, L. Vliet
CT colonography is a screening technique for adenomatous colorectal polyps, which are important precursors of colon cancer. Computer-aided detection (CAD) systems are developed to assist radiologists. We present a CAD system that substantially reduces false positives while keeping sensitivity high. To this end, we combine protrusion measures derived from the solution of a non-linear partial differential equation (PDE) applied to both an explicit mesh and an implicit volumetric representation of the colon wall. The shape of the protruding elements is efficiently described via a technique from data visualization based on curvature streamlines. A low-complexity pattern recognition system based on an intuitive feature from the aforementioned representations improves performance to fewer than 1.6 false positives per scan at 92% per-polyp sensitivity.
Title: Combining mesh, volume, and streamline representations for polyp detection in CT colonography.
Pub Date: 2009-06-28. DOI: 10.1109/ISBI.2009.5193271
L. Ramus, G. Malandain
Atlas-based segmentation has been shown to provide promising results for delineating critical structures in radiotherapy planning. However, it requires a reference image together with its reference segmentation. Classical methods used to build an average segmentation can lead to over-segmentation when variability among the manual segmentations is high. In this paper we propose a consensus-based approach to construct a reference segmentation from a database of manually delineated images. We first compute local consensus measures to estimate a variability map, and then derive a consensus segmentation from it. Finally, the proposed method is evaluated on a dataset of 64 manually delineated images of the head and neck region.
Title: Using consensus measures for atlas construction.
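The variability-map idea can be illustrated with a simple voxel-wise voting scheme: the fraction of raters labelling each voxel is a local agreement measure, and thresholding it yields a consensus mask. The paper's consensus measures are more elaborate, so treat this as a toy sketch with illustrative names:

```python
import numpy as np

def consensus_segmentation(masks, threshold=0.5):
    """Variability map and consensus mask from multiple manual delineations.

    masks: list of binary arrays (one per rater, assumed already registered).
    Returns the voxel-wise agreement fraction in [0, 1] and the consensus
    mask of voxels whose agreement reaches the threshold.
    """
    stack = np.stack([np.asarray(m, dtype=float) for m in masks])
    agreement = stack.mean(axis=0)          # local consensus measure
    consensus = agreement >= threshold
    return agreement, consensus
```

With a high threshold this behaves conservatively and counteracts the over-segmentation that plain averaging produces in highly variable regions.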
Pub Date: 2009-06-28. DOI: 10.1109/ISBI.2009.5193298
W. Xu, K. Mueller
Iterative reconstruction algorithms augmented with regularization can produce high-quality reconstructions from few views, even in the presence of significant noise. In this paper we focus on the particularities of accelerating such algorithms on the GPU. First, we introduce the idea of using exhaustive benchmark tests to determine the optimal settings of the various parameters of the iterative algorithm (here, OS-SIRT), which proves decisive for obtaining optimal GPU performance. Then we introduce bilateral filtering as a viable and cost-effective means of regularization, and we show that GPU acceleration reduces its overhead to very moderate levels.
Title: Accelerating regularized iterative CT reconstruction on commodity graphics hardware (GPU).
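Bilateral filtering weights neighbours by both spatial distance and intensity difference, which is what makes it edge-preserving and hence attractive as a regularizer between reconstruction iterations. A brute-force CPU sketch of the filter (parameters illustrative; the paper's version runs as a GPU kernel):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Edge-preserving bilateral filter, brute force over a square window."""
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # spatial kernel
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = pad[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
            # range kernel: penalize intensity difference to the center pixel
            w = spatial * np.exp(-((win - img[r, c])**2) / (2 * sigma_r**2))
            out[r, c] = (w * win).sum() / w.sum()
    return out
```

In an OS-SIRT loop one would apply such a filter to the current estimate after each (or every few) subset updates; the GPU version parallelizes the per-pixel loop trivially.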
Pub Date: 2009-06-28. DOI: 10.1109/ISBI.2009.5193110
Arturo Flores, Steven J. Rysavy, R. Enciso, K. Okada
This paper proposes a novel application of computer-aided diagnosis to a clinically significant dental problem: non-invasive differential diagnosis of periapical lesions using cone-beam computed tomography (CBCT). The proposed semi-automatic solution combines graph-theoretic random-walks segmentation with machine-learning-based LDA and AdaBoost classifiers. Our quantitative experiments demonstrate the effectiveness of the proposed method, with a 94.1% correct classification rate. Furthermore, we compare classification performance against two independent ground-truth sets derived from the biopsy and CBCT diagnoses. ROC analysis reveals that our method improves accuracy in both cases and agrees more closely with the CBCT diagnosis, supporting a hypothesis presented in a recent clinical report.
Title: Non-invasive differential diagnosis of dental periapical lesions in cone-beam CT.
Pub Date: 2009-06-28. DOI: 10.1109/ISBI.2009.5193162
R. Gurjar, Madhavi Seetamraju, N. Kolodziejski, Koo Yong-Eun, Anand T. N. Kumar, R. Kopelman
We present results of diffused phosphorescence lifetime tomography performed on tissue-simulating phantoms for high-contrast imaging of hypoxic breast tumors. An oxygen-sensitive phosphor embedded in a versatile nanoparticle matrix was used as a contrast agent to identify simulated hypoxic tumors in the phantoms. The surface of these nanoparticles was decorated with the F3 peptide, which targets a cell-surface receptor that is often overexpressed in aggressive breast tumors. The surface functionalization did not interfere with the embedded phosphor's characteristics. The phosphorescence intensity and lifetime were measured using single-photon-sensitive multi-pixel photon counting (MPPC) detectors in a box-car geometry. The detection technique has a large dynamic range, high sensitivity, and good resolution in oxygen concentration.
Title: Diffused optical tomography using oxygen-sensitive luminescent contrast agent.
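Lifetime imaging of this kind typically fits a single-exponential decay I(t) = A·exp(−t/τ) to the time-resolved counts; oxygen concentration then follows from the Stern-Volmer relation between τ and pO2. A minimal sketch of the lifetime fit via log-linear regression (illustrative, not the authors' processing chain):

```python
import numpy as np

def fit_lifetime(t, counts):
    """Estimate the lifetime tau of a single-exponential decay.

    Uses linear least squares on the log-transformed model:
    log I(t) = log A - t / tau, so tau = -1 / slope.
    """
    slope, intercept = np.polyfit(t, np.log(counts), 1)
    return -1.0 / slope
```

For noisy photon counts a weighted or maximum-likelihood fit is preferable, since the log transform amplifies noise at low counts.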
Pub Date: 2009-06-28. DOI: 10.1109/ISBI.2009.5193268
Ihor Smal, M. Loog, W. Niessen, E. Meijering
In live-cell fluorescence microscopy imaging, quantitative analysis of biological image data generally involves the detection of many subresolution objects, appearing as diffraction-limited spots. Due to acquisition limitations, the signal-to-noise ratio (SNR) can be extremely low, making automated spot detection a very challenging task. In this paper, we quantitatively evaluate the performance of the most frequently used supervised and unsupervised detection methods for this purpose. Experiments on synthetic images of three different types, for which ground truth was available, as well as on real image data sets acquired for two different biological studies, for which we obtained expert manual annotations for comparison, revealed that for very low SNRs (≈2), the supervised (machine learning) methods perform best overall, closely followed by the detectors based on the so-called h-dome transform from mathematical morphology and the multiscale variance-stabilizing transform, which do not require a learning stage. At high SNRs (≫5), the difference in performance of all considered detectors becomes negligible.
Title: Quantitative comparison of spot detection methods in live-cell fluorescence microscopy imaging.
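The h-dome transform mentioned above is defined as the image minus its morphological reconstruction-by-dilation from the marker image − h; it isolates bright structures of height at most h, such as diffraction-limited spots. A pure-numpy sketch with a 3×3 structuring element (names illustrative):

```python
import numpy as np

def h_dome(img, h):
    """h-dome transform from mathematical morphology.

    Grayscale reconstruction-by-dilation of (img - h) under img is computed
    iteratively: dilate the marker, clip it by the mask, repeat to stability.
    The domes are the residue img - reconstruction.
    """
    def dilate3x3(a):
        p = np.pad(a, 1, mode='edge')
        shifts = [p[i:i + a.shape[0], j:j + a.shape[1]]
                  for i in range(3) for j in range(3)]
        return np.max(shifts, axis=0)

    marker = img - h
    while True:
        nxt = np.minimum(dilate3x3(marker), img)   # geodesic dilation step
        if np.array_equal(nxt, marker):
            return img - marker
        marker = nxt
```

The iterative scheme is simple but slow on large volumes; production code uses queue-based reconstruction algorithms.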
Pub Date: 2009-06-28. DOI: 10.1109/ISBI.2009.5193233
K. Venkataraju, António R. C. Paiva, E. Jurrus, T. Tasdizen
To better understand the central nervous system, neurobiologists need to reconstruct the underlying neural circuitry from electron microscopy images. One of the necessary tasks is to segment the individual neurons. For this purpose, we propose a supervised learning approach to detect the cell membranes. The classifier was trained using AdaBoost on local and context features, selected to highlight the line characteristics of cell membranes. It is shown that using features from context positions allows more information to be utilized in the classification. Together with the nonlinear discrimination ability of the AdaBoost classifier, this results in clearly noticeable improvements over previously used methods.
Title: Automatic markup of neural cell membranes using boosted decision stumps.
Pub Date: 2009-06-28. DOI: 10.1109/ISBI.2009.5193143
Shaoting Zhang, Xiaoxu Wang, Dimitris N. Metaxas, Ting Chen, L. Axel
We propose a novel framework to reconstruct the 3D surface of the left ventricle (LV) from sparse tagged MRI (tMRI). First, we acquire an initial surface mesh from a dense tMRI. Then landmarks are computed both on contours of a specific new tMRI data set and on corresponding slices of the initial mesh. Next, we apply several filters, including global deformation, local deformation, and remeshing, to deform the initial surface mesh to the image data. This step integrates Polar Decomposition, Laplacian Surface Optimization (LSO), and Laplacian Surface Deformation (LSD) algorithms. The resulting mesh represents the reconstructed surface of the image data. Furthermore, this high-quality surface mesh can be adopted by most deformable models, which can then reconstruct LV motion using tagging-line information. The experimental results show that, compared to the Thin Plate Spline (TPS) approach, our algorithm is relatively fast, its shape represents the image data better, and its triangle quality is more suitable for deformable models.
Title: LV surface reconstruction from sparse tMRI using Laplacian Surface Deformation and Optimization.
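Laplacian surface deformation preserves each vertex's differential (Laplacian) coordinate while moving a few anchor vertices to target positions, solved as one linear least-squares problem. A sketch with a uniform-weight Laplacian (illustrative; practical meshes often use cotangent weights, and the paper combines this with further filters):

```python
import numpy as np

def laplacian_deform(verts, neighbors, anchors, anchor_pos, w=10.0):
    """Deform vertices so Laplacian coordinates are preserved in a
    least-squares sense while anchors move to their targets.

    verts: (n, dim) vertex positions; neighbors[i]: list of adjacent indices;
    anchors: vertex indices to constrain; anchor_pos: their target positions.
    """
    verts = np.asarray(verts, dtype=float)
    n = len(verts)
    L = np.eye(n)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            L[i, j] -= 1.0 / len(nbrs)        # uniform Laplacian weights
    delta = L @ verts                          # original differential coords
    anchor_rows = np.zeros((len(anchors), n))
    for k, a in enumerate(anchors):
        anchor_rows[k, a] = w                  # soft positional constraints
    A = np.vstack([L, anchor_rows])
    b = np.vstack([delta, w * np.asarray(anchor_pos, dtype=float)])
    new_verts, *_ = np.linalg.lstsq(A, b, rcond=None)
    return new_verts
```

Translating both anchors of an open chain by the same offset reproduces a rigid translation of the whole chain, since uniform Laplacian coordinates are translation-invariant.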