Strain Rate Tensor estimation in cine cardiac MRI based on elastic image registration
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4562968
Gonzalo Vegas-Sánchez-Ferrero, A. Tristán-Vega, Lucilio Cordero-Grande, P. Casaseca-de-la-Higuera, S. Aja‐Fernández, M. Martín-Fernández, C. Alberola-López
In this paper we propose an alternative method to estimate and visualize the strain rate tensor (ST) in magnetic resonance images (MRI) when phase contrast MRI (PCMRI) and tagged MRI (TMRI) are not available. The alternative is based on image processing techniques: specifically, an elastic image registration algorithm is used to estimate the motion of the myocardium at each point. Our experiments with real data show that the registration algorithm provides a deformation field suitable for estimating the ST fields. A classification of regional contraction patterns as normal or dysfunctional, compared against expert diagnosis, indicates that the parameters extracted from the estimated ST can represent these patterns.
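As a rough illustration of the quantity being estimated (not the authors' implementation): once elastic registration yields a displacement field between consecutive cine frames, the strain rate tensor can be approximated as the symmetric part of the velocity gradient, E = (∇v + ∇vᵀ)/2. The function name and frame interval `dt` below are illustrative assumptions.

```python
import numpy as np

def strain_rate_tensor(ux, uy, dt=1.0):
    """ux, uy: 2D displacement components (pixels); dt: frame interval (assumed)."""
    vx, vy = ux / dt, uy / dt                  # finite-difference velocity
    dvx_dy, dvx_dx = np.gradient(vx)           # np.gradient returns d/drow, d/dcol
    dvy_dy, dvy_dx = np.gradient(vy)
    E = np.empty(ux.shape + (2, 2))            # one 2x2 tensor per pixel
    E[..., 0, 0] = dvx_dx
    E[..., 1, 1] = dvy_dy
    E[..., 0, 1] = E[..., 1, 0] = 0.5 * (dvx_dy + dvy_dx)
    return E
```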
{"title":"Strain Rate Tensor estimation in cine cardiac MRI based on elastic image registration","authors":"Gonzalo Vegas-Sánchez-Ferrero, A. Tristán-Vega, Lucilio Cordero-Grande, P. Casaseca-de-la-Higuera, S. Aja‐Fernández, M. Martín-Fernández, C. Alberola-López","doi":"10.1109/CVPRW.2008.4562968","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4562968","url":null,"abstract":"In this paper we propose an alternative method to estimate and visualize the strain rate tensor (ST) in magnetic resonance images (MRI) when phase contrast MRI (PCMRI) and tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, an elastic image registration algorithm is used to estimate the movement of the myocardium at each point. Our experiments with real data prove that the registration algorithm provides a useful deformation field to estimate the ST fields. A classification between regional normal and dysfunctional contraction patterns, as compared with professional diagnosis, points out that the parameters extracted from the estimated ST can represent these patterns.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127903954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D face reconstruction from a single 2D face image
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563127
Sung W. Park, J. Heo, M. Savvides
3D face reconstruction from a single 2D image is mathematically ill-posed. However, a variety of methods have been proposed to solve ill-posed problems in computer vision; common solutions estimate latent information or apply model-based approaches. In this paper, we propose a novel method to reconstruct a 3D face from a single 2D face image based on pose estimation and a deformable model of 3D face shape. For 3D face reconstruction from a single 2D face image, the first task is to estimate the depth lost through the 2D projection of 3D faces. Applying the EM algorithm to facial landmarks in a 2D image, we propose a pose estimation algorithm that infers the pose parameters of rotation, scaling, and translation. After estimating the pose, much denser points are interpolated between the landmark points using a 3D deformable model and barycentric coordinates. In contrast to previous work, our method can locate facial feature points automatically in a 2D facial image. Moreover, we show that the proposed pose estimation method can be successfully applied to 3D face reconstruction. Experiments demonstrate that our approach produces reliable results for reconstructing photorealistic 3D faces.
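The barycentric densification step can be made concrete with a small sketch (details assumed, not taken from the paper): a point inside a landmark triangle is written as a convex combination of the three vertices, and the same weights transfer it onto the corresponding 3D triangle of the deformable model.

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of 2D point p with respect to triangle (a, b, c)."""
    m = np.column_stack((b - a, c - a))
    w1, w2 = np.linalg.solve(m, p - a)
    return np.array([1.0 - w1 - w2, w1, w2])

def lift_to_3d(p2d, tri2d, tri3d):
    """Interpolate a 3D point for p2d using the weights of its 2D landmark triangle."""
    w = barycentric_weights(p2d, tri2d[0], tri2d[1], tri2d[2])
    return w @ tri3d                            # (3,) @ (3, 3) -> interpolated 3D point
```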
{"title":"3D face econstruction from a single 2D face image","authors":"Sung W. Park, J. Heo, M. Savvides","doi":"10.1109/CVPRW.2008.4563127","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563127","url":null,"abstract":"T3D face reconstruction from a single 2D image is mathematically ill-posed. However, to solve ill-posed problems in the area of computer vision, a variety of methods has been proposed; some of the solutions are to estimate latent information or to apply model based approaches. In this paper, we propose a novel method to reconstruct a 3D face from a single 2D face image based on pose estimation and a deformable model of 3D face shape. For 3D face reconstruction from a single 2D face image, it is the first task to estimate the depth lost by 2D projection of 3D faces. Applying the EM algorithm to facial landmarks in a 2D image, we propose a pose estimation algorithm to infer the pose parameters of rotation, scaling, and translation. After estimating the pose, much denser points are interpolated between the landmark points by a 3D deformable model and barycentric coordinates. As opposed to previous literature, our method can locate facial feature points automatically in a 2D facial image. Moreover, we also show that the proposed method for pose estimation can be successfully applied to 3D face reconstruction. Experiments demonstrate that our approach can produce reliable results for reconstructing photorealistic 3D faces.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128699100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient partial shape matching using Smith-Waterman algorithm
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563078
Longbin Chen, R. Feris, M. Turk
This paper presents an efficient partial shape matching method based on the Smith-Waterman algorithm. For two contours of m and n points respectively, the complexity of our method for finding similar parts is only O(mn). In addition to this improvement in efficiency, we obtain comparably accurate matching with fewer shape descriptors. Moreover, in contrast to the arbitrary distance functions used by previous methods, we use a probabilistic similarity measure, the p-value, to evaluate the similarity of two shapes. Our experiments on several public shape databases indicate that our method outperforms state-of-the-art global and partial shape matching algorithms in various scenarios.
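To make the O(mn) local-alignment idea concrete, here is a minimal Smith-Waterman sketch over descriptor sequences; the similarity function and gap penalty are placeholders, not the paper's actual scoring.

```python
import numpy as np

def smith_waterman(s, t, sim, gap=1.0):
    """Best local alignment score of sequences s (length m) and t (length n) in O(mn)."""
    m, n = len(s), len(t)
    H = np.zeros((m + 1, n + 1))
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            H[i, j] = max(0.0,                                   # restart alignment
                          H[i - 1, j - 1] + sim(s[i - 1], t[j - 1]),
                          H[i - 1, j] - gap,                     # gap in t
                          H[i, j - 1] - gap)                     # gap in s
    return H.max()

# Toy usage: scalar "descriptors", similarity = 1 minus absolute difference.
score = smith_waterman([1, 2, 3, 2], [9, 2, 3, 2, 7], lambda a, b: 1.0 - abs(a - b))
```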
{"title":"Efficient partial shape matching using Smith-Waterman algorithm","authors":"Longbin Chen, R. Feris, M. Turk","doi":"10.1109/CVPRW.2008.4563078","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563078","url":null,"abstract":"This paper presents an efficient partial shape matching method based on the Smith-Waterman algorithm. For two contours of m and n points respectively, the complexity of our method to find similar parts is only O(mn). In addition to this improvement in efficiency, we also obtain comparable accurate matching with fewer shape descriptors. Also, in contrast to arbitrary distance functions that are used by previous methods, we use a probabilistic similarity measurement, p-value, to evaluate the similarity of two shapes. Our experiments on several public shape databases indicate that our method outperforms state-of-the-art global and partial shape matching algorithms in various scenarios.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128425628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open boundary capable edge grouping with feature maps
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4562978
J. Stahl, K. Oliver, Song Wang
Edge grouping methods aim at detecting the complete boundaries of salient structures in noisy images. In this paper, we develop a new edge grouping method that exhibits several useful properties. First, it combines both boundary and region information by defining a unified grouping cost; the region information of the desired structures is encoded as a binary feature map of the same size as the input image. Second, it finds the globally optimal solution of this grouping cost, extending a prior graph-based edge grouping algorithm to achieve this goal. Third, it can detect both closed boundaries, where the structure of interest lies completely within the image perimeter, and open boundaries, where the structure of interest is cropped by the image perimeter. Given this capability for detecting both open and closed boundaries, the proposed method can be extended to segment an image into disjoint regions hierarchically. Experimental results on real images are reported, with a comparison against a prior edge grouping method that can only detect closed boundaries.
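A toy version of such a unified grouping cost might look as follows (the terms and weighting are my assumptions, not the paper's exact formulation): an edge-strength term along a candidate boundary is combined with a region term measuring how much of the binary feature map the boundary encloses.

```python
import numpy as np
from skimage.draw import polygon  # scikit-image rasterization helper

def grouping_cost(contour, edge_map, feature_map, alpha=0.5):
    """contour: (N, 2) integer (row, col) vertices of a closed candidate boundary."""
    rows, cols = contour[:, 0], contour[:, 1]
    boundary_term = 1.0 - edge_map[rows, cols].mean()      # weak edges cost more
    rr, cc = polygon(rows, cols, shape=feature_map.shape)  # pixels enclosed by contour
    region_term = 1.0 - (feature_map[rr, cc].mean() if rr.size else 0.0)
    return alpha * boundary_term + (1.0 - alpha) * region_term
```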
{"title":"Open boundary capable edge grouping with feature maps","authors":"J. Stahl, K. Oliver, Song Wang","doi":"10.1109/CVPRW.2008.4562978","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4562978","url":null,"abstract":"Edge grouping methods aim at detecting the complete boundaries of salient structures in noisy images. In this paper, we develop a new edge grouping method that exhibits several useful properties. First, it combines both boundary and region information by defining a unified grouping cost. The region information of the desirable structures is included as a binary feature map that is of the same size as the input image. Second, it finds the globally optimal solution of this grouping cost. We extend a prior graph-based edge grouping algorithm to achieve this goal. Third, it can detect both closed boundaries, where the structure of interest lies completely within the image perimeter, and open boundaries, where the structure of interest is cropped by the image perimeter. Given this capability for detecting both open and closed boundaries, the proposed method can be extended to segment an image into disjoint regions in a hierarchical way. Experimental results on real images are reported, with a comparison against a prior edge grouping method that can only detect closed boundaries.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129249399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison and combination of iris matchers for reliable personal identification
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563110
Ajay Kumar, Arun Passi
Biometric identification approaches using iris images are receiving increasing attention in the literature. Several methods for automated iris identification have been presented, and those based on phase encoding of texture information are considered the most promising. However, there has been no attempt to combine these phase-preserving approaches to achieve further improvement in performance. This paper presents a comparative study of iris identification performance using log-Gabor, Haar wavelet, DCT, and FFT based features. Our experimental results suggest that phase encoding with the Haar wavelet and the log-Gabor filter is the most promising of the four approaches considered in this work; the combination of these two matchers is therefore the most promising, in terms of both performance and computational complexity. Our experimental results from all 411 users of the CASIA v3 database and 224 users of the IITD v1 database illustrate a significant improvement in performance that is not possible with either approach individually.
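As a hedged sketch of the matching and fusion idea: phase-encoded iris features are commonly compared with a normalized Hamming distance over unmasked bits, and two matchers can be fused at the score level. The weighted-sum rule and weight below are illustrative, not necessarily the combination used in the paper.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Normalized Hamming distance between boolean iris codes over jointly valid bits."""
    valid = mask_a & mask_b
    return np.count_nonzero((code_a ^ code_b) & valid) / max(np.count_nonzero(valid), 1)

def fused_score(d_haar, d_loggabor, w=0.5):
    """Weighted-sum fusion of two matcher distances (lower = better match)."""
    return w * d_haar + (1.0 - w) * d_loggabor
```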
{"title":"Comparison and combination of iris matchers for reliable personal identification","authors":"Ajay Kumar, Arun Passi","doi":"10.1109/CVPRW.2008.4563110","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563110","url":null,"abstract":"The biometric identification approaches using iris images are receiving increasing attention in the literature. Several methods for the automated iris identification have been presented in the literature and those based on the phase encoding of texture information are suggested to be the most promising. However, there has not been any attempt to combine these phase preserving approaches to achieve further improvement in the performance. This paper presents a comparative study of the performance from the iris identification using log-Gabor, Haar wavelet, DCT and FFT based features. Our experimental results suggest that the performance from the Haar wavelet and log Gabor filter based phase encoding is the most promising among all the four approaches considered in this work. Therefore the combination of these two matchers is most promising, both in terms of performance and the computational complexity. Our experimental results from the all 411 users (CASIA v3) and 224 users (IITD v1) database illustrate significant improvement in the performance that is not possible with either of these approaches individually.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116328684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Standardization of intensity-values acquired by Time-of-Flight-cameras
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563166
Michael Stürmer, J. Penne, J. Hornegger
The intensity images captured by time-of-flight (ToF) cameras are biased in several ways. The values differ significantly depending on the integration time set within the camera and on the distance to the scene. Whereas the integration time leads to an almost linear scaling of the whole image, the attenuation due to distance is nonlinear, resulting in higher intensities for objects closer to the camera. Background regions that are farther away contain comparably low values, leading to poor contrast within the image. Another effect is that specular highlights may be observed due to unusual reflection conditions at some points in the scene. These three effects produce intensity images whose values vary significantly with the integration time and the distance to the scene, making the parameterization of processing steps such as edge detection, segmentation, registration, and threshold computation a tedious task. Additionally, outliers with exceptionally high values lead to poor visualization results and problems in processing. In this work we propose scaling techniques that generate images whose intensities are independent of the integration time of the camera and the measured distance. Furthermore, a simple approach for reducing specularity effects is introduced.
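A minimal sketch of the standardization idea, under assumptions of my own (a linear integration-time gain and an inverse-square distance attenuation; the paper's exact correction model may differ):

```python
import numpy as np

def standardize_intensity(intensity, distance, t_int, d_ref=1.0, clip_pct=99.0):
    """intensity, distance: 2D arrays; t_int: integration time (arbitrary units)."""
    img = intensity / t_int                              # undo linear integration-time gain
    img = img * (distance / d_ref) ** 2                  # undo assumed ~1/d^2 attenuation
    img = np.minimum(img, np.percentile(img, clip_pct))  # suppress specular outliers
    return img / img.max()                               # map to [0, 1] for display
```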
{"title":"Standardization of intensity-values acquired by Time-of-Flight-cameras","authors":"Michael Stürmer, J. Penne, J. Hornegger","doi":"10.1109/CVPRW.2008.4563166","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563166","url":null,"abstract":"The intensity-images captured by time-of-flight (ToF)-cameras are biased in several ways. The values differ significantly, depending on the integration time set within the camera and on the distance of the scene. Whereas the integration time leads to an almost linear scaling of the whole image, the attenuation due to the distance is nonlinear, resulting in higher intensities for objects closer to the camera. The background regions that are farther away contain comparably low values, leading to a bad contrast within the image. Another effect is that some kind of specularity may be observed due to uncommon reflecting conditions at some points within the scene. These three effects lead to intensity images which exhibit significantly different values depending on the integration time of the camera and the distance to the scene, thus making parameterization of processing steps like edge-detection, segmentation, registration and threshold computation a tedious task. Additionally, outliers with exceptionally high values lead to insufficient visualization results and problems in processing. In this work we propose scaling techniques which generate images whose intensities are independent of the integration time of the camera and the measured distance. Furthermore, a simple approach for reducing specularity effects is introduced.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116996800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D shape matching by geodesic eccentricity
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563032
Adrian Ion, N. Artner, G. Peyré, S. Mármol, W. Kropatsch, L. Cohen
This paper makes use of the continuous eccentricity transform to perform 3D shape matching. The eccentricity transform has already proven useful in a discrete graph-theoretic setting and has been applied to 2D shape matching; we show how these ideas extend to higher dimensions. The eccentricity transform is used to compute descriptors for 3D shapes. These descriptors are defined as histograms of the eccentricity transform and are naturally invariant to Euclidean motion and articulation. They show promising results for shape discrimination.
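On a discretized shape the idea can be sketched as follows (my assumptions on the discretization, not the authors' code): treating a triangle mesh as an edge-weighted graph, the eccentricity of a vertex is its maximum geodesic distance to any other vertex, and a normalized histogram of these values serves as the descriptor.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def eccentricity_histogram(vertices, edges, bins=32):
    """vertices: (V, 3) array; edges: (E, 2) vertex index pairs of a connected mesh."""
    i, j = edges[:, 0], edges[:, 1]
    w = np.linalg.norm(vertices[i] - vertices[j], axis=1)   # edge lengths
    n = len(vertices)
    graph = csr_matrix((np.concatenate([w, w]),
                        (np.concatenate([i, j]), np.concatenate([j, i]))),
                       shape=(n, n))
    dist = shortest_path(graph, method="D")                 # all-pairs geodesic approx.
    ecc = dist.max(axis=1)                                  # eccentricity per vertex
    hist, _ = np.histogram(ecc / ecc.max(), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()                                # scale-normalized descriptor
```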
{"title":"3D shape matching by geodesic eccentricity","authors":"Adrian Ion, N. Artner, G. Peyré, S. Mármol, W. Kropatsch, L. Cohen","doi":"10.1109/CVPRW.2008.4563032","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563032","url":null,"abstract":"This paper makes use of the continuous eccentricity transform to perform 3D shape matching. The eccentricity transform has already been proved useful in a discrete graph-theoretic setting and has been applied to 2D shape matching. We show how these ideas extend to higher dimensions. The eccentricity transform is used to compute descriptors for 3D shapes. These descriptors are defined as histograms of the eccentricity transform and are naturally invariant to Euclidean motion and articulation. They show promising results for shape discrimination.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116257866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vesicles and amoebae: Globally constrained shape evolutions
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563079
Ishay Goldin, J. Delosme, A. Bruckstein
Modeling the deformation of shapes under constraints on both perimeter and area is a challenging task due to the highly nontrivial interaction between the need for flexible local rules for manipulating the boundary and the global constraints. We propose several methods to address this problem and generate "random walks" in the space of shapes obeying quite general, possibly time-varying constraints on their perimeter and area. The design of perimeter- and area-preserving deformations is an interesting and useful special case of this problem. The resulting deformation models are employed in annealing processes that evolve original shapes toward shapes that are optimal in terms of boundary bending energy or other functionals. Furthermore, such models may find applications in the analysis of sequences of real images of deforming objects obeying global constraints, as building blocks for registration and tracking algorithms.
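A toy accept/reject step conveys the flavor of such a constrained random walk (a simplification of mine, not the paper's update rules): jitter one polygon vertex and keep the move only if perimeter and area stay within tolerance of their targets.

```python
import numpy as np

def perimeter(poly):
    """Total edge length of a closed polygon given as an (N, 2) vertex array."""
    return np.linalg.norm(np.roll(poly, -1, axis=0) - poly, axis=1).sum()

def area(poly):
    """Enclosed area via the shoelace formula."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def constrained_step(poly, p0, a0, sigma=0.01, tol=1e-3, rng=np.random):
    """One random-walk step approximately preserving perimeter p0 and area a0."""
    cand = poly.copy()
    k = rng.randint(len(poly))
    cand[k] += sigma * rng.randn(2)                 # jitter a single vertex
    ok = (abs(perimeter(cand) - p0) < tol * p0 and
          abs(area(cand) - a0) < tol * a0)
    return cand if ok else poly                     # reject moves breaking the constraints
```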
{"title":"Vesicles and amoebae: Globally constrained shape evolutions","authors":"Ishay Goldin, J. Delosme, A. Bruckstein","doi":"10.1109/CVPRW.2008.4563079","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563079","url":null,"abstract":"Modeling the deformation of shapes under constraints on both perimeter and area is a challenging task due to the highly nontrivial interaction between the need for flexible local rules for manipulating the boundary and the global constraints. We propose several methods to address this problem and generate ldquorandom walksrdquo in the space of shapes obeying quite general possibly time varying constraints on their perimeter and area. Design of perimeter and area preserving deformations are an interesting and useful special case of this problem. The resulting deformation models are employed in annealing processes that evolve original shapes toward shapes that are optimal in terms of boundary bending-energy or other functionals. Furthermore, such models may find applications in the analysis of sequences of real images of deforming objects obeying global constraints as building blocks for registration and tracking algorithms.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114373988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Boosting descriptors condensed from video sequences for place recognition
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563141
Tat-Jun Chin, Hanlin Goh, Joo-Hwee Lim
We investigate the task of efficiently training classifiers to build a robust place recognition system. We advocate an approach that densely captures the facades of buildings and landmarks in video recordings to greedily accumulate as much visual information as possible. Our contributions include (1) a preprocessing step that exploits the temporal continuity intrinsic to the video sequences to dramatically increase training efficiency, (2) discriminative training of sparse classifiers on the resulting data using the AdaBoost principle for place recognition, and (3) methods to speed up recognition using scaled kd-trees and to perform geometric validation on the results. Compared to straightforwardly applying scene recognition methods, our method not only allows a much faster training phase; the resulting classifiers are also more accurate. The sparsity of the classifiers also ensures good potential for recognition at high frame rates. We show extensive experimental results to validate our claims.
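The kd-tree speedup in contribution (3) can be sketched as follows (assumed interface, not the authors' code): scaling each descriptor dimension before indexing approximates a weighted metric, and nearest-neighbor queries then return candidate training descriptors for recognition.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_scaled_tree(descriptors, scales):
    """descriptors: (N, D) training descriptors; scales: (D,) per-dimension weights."""
    return cKDTree(descriptors * scales)

def match(tree, queries, scales, k=1):
    """Return distances and indices of the k nearest training descriptors."""
    return tree.query(queries * scales, k=k)
```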
{"title":"Boosting descriptors condensed from video sequences for place recognition","authors":"Tat-Jun Chin, Hanlin Goh, Joo-Hwee Lim","doi":"10.1109/CVPRW.2008.4563141","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563141","url":null,"abstract":"We investigate the task of efficiently training classifiers to build a robust place recognition system. We advocate an approach which involves densely capturing the facades of buildings and landmarks with video recordings to greedily accumulate as much visual information as possible. Our contributions include (1) a preprocessing step to effectively exploit the temporal continuity intrinsic in the video sequences to dramatically increase training efficiency, (2) training sparse classifiers discriminatively with the resulting data using the AdaBoost principle for place recognition, and (3) methods to speed up recognition using scaled kd-trees and to perform geometric validation on the results. Compared to straightforwardly applying scene recognition methods, our method not only allows a much faster training phase, the resulting classifiers are also more accurate. The sparsity of the classifiers also ensures good potential for recognition at high frame rates. We show extensive experimental results to validate our claims.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114886604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regional image similarity criteria based on the Kozachenko-Leonenko entropy estimator
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563022
Juan D. García-Arteaga, J. Kybic
Mutual information is one of the most widespread similarity criteria for multi-modal image registration, but it is limited to low-dimensional feature spaces when calculated using histogram- and kernel-based entropy estimators. In the present article we propose the use of the Kozachenko-Leonenko entropy estimator (KLE) to calculate higher-order regional mutual information using local features. The use of local information overcomes the two most prominent problems of nearest-neighbor-based entropy estimation in image registration: the presence of strong interpolation artifacts and noise. The performance of the proposed criterion is compared to standard MI on data with a known ground truth, using a protocol for the evaluation of image registration similarity measures. Finally, we show how the use of the KLE with local features improves the robustness and accuracy of the registration of color colposcopy images.
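For reference, a minimal Kozachenko-Leonenko (k-NN) entropy estimator in its standard form (not necessarily the exact variant used in the paper): H ≈ ψ(N) − ψ(k) + log c_d + (d/N) Σ_i log ε_i, where ε_i is the distance from sample i to its k-th nearest neighbor and c_d is the volume of the d-dimensional unit ball.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(x, k=1):
    """x: (N, d) samples; returns a differential entropy estimate in nats."""
    n, d = x.shape
    tree = cKDTree(x)
    # k+1 because each point's nearest neighbor in the tree is the point itself
    eps = tree.query(x, k=k + 1)[0][:, -1]
    log_cd = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)  # log unit-ball volume
    return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(eps))
```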
{"title":"Regional image similarity criteria based on the Kozachenko-Leonenko entropy estimator","authors":"Juan D. García-Arteaga, J. Kybic","doi":"10.1109/CVPRW.2008.4563022","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563022","url":null,"abstract":"Mutual information is one of the most widespread similarity criteria for multi-modal image registration but is limited to low dimensional feature spaces when calculated using histogram and kernel based entropy estimators. In the present article we propose the use of the Kozachenko-Leonenko entropy estimator (KLE) to calculate higher order regional mutual information using local features. The use of local information overcomes the two most prominent problems of nearest neighbor based entropy estimation in image registration: the presence of strong interpolation artifacts and noise. The performance of the proposed criterion is compared to standard MI on data with a known ground truth using a protocol for the evaluation of image registration similarity measures. Finally, we show how the use of the KLE with local features improves the robustness and accuracy of the registration of color colposcopy images.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127639570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}