2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops: Latest Publications
Strain Rate Tensor estimation in cine cardiac MRI based on elastic image registration
Gonzalo Vegas-Sánchez-Ferrero, A. Tristán-Vega, Lucilio Cordero-Grande, P. Casaseca-de-la-Higuera, S. Aja‐Fernández, M. Martín-Fernández, C. Alberola-López
In this paper we propose an alternative method to estimate and visualize the strain rate tensor (ST) in magnetic resonance images (MRI) when phase contrast MRI (PCMRI) and tagged MRI (TMRI) are not available. This alternative is based on image processing techniques: specifically, an elastic image registration algorithm is used to estimate the motion of the myocardium at each point. Our experiments with real data show that the registration algorithm provides a deformation field useful for estimating the ST fields. A classification of regional contraction patterns into normal and dysfunctional, compared against expert diagnosis, indicates that the parameters extracted from the estimated ST can represent these patterns.
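Given a dense velocity (or displacement-per-frame) field from registration, the strain rate tensor is the symmetrized velocity gradient, E = ½(∇v + ∇vᵀ). A minimal 2D sketch of that computation, assuming the field is sampled on a regular pixel grid (this illustrates the general formula, not the authors' implementation):

```python
import numpy as np

def strain_rate_tensor(vx, vy):
    """Symmetric strain rate tensor E = 0.5*(grad(v) + grad(v)^T)
    from a 2D velocity field sampled on a pixel grid."""
    # np.gradient returns derivatives along axis 0 (rows, y) then axis 1 (cols, x)
    dvx_dy, dvx_dx = np.gradient(vx)
    dvy_dy, dvy_dx = np.gradient(vy)
    exx = dvx_dx
    eyy = dvy_dy
    exy = 0.5 * (dvx_dy + dvy_dx)   # shear component
    return exx, exy, eyy

# Pure shear example: vx = y, vy = x  ->  exx = eyy = 0, exy = 1
y, x = np.mgrid[0:8, 0:8].astype(float)
exx, exy, eyy = strain_rate_tensor(vx=y, vy=x)
```

The eigenvalues/eigenvectors of E at each pixel then give the local rates and directions of contraction and expansion.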
Citations: 5
3D face reconstruction from a single 2D face image
Sung W. Park, J. Heo, M. Savvides
3D face reconstruction from a single 2D image is mathematically ill-posed. However, to solve ill-posed problems in the area of computer vision, a variety of methods have been proposed; some of the solutions are to estimate latent information or to apply model-based approaches. In this paper, we propose a novel method to reconstruct a 3D face from a single 2D face image based on pose estimation and a deformable model of 3D face shape. For 3D face reconstruction from a single 2D face image, the first task is to estimate the depth lost by 2D projection of 3D faces. Applying the EM algorithm to facial landmarks in a 2D image, we propose a pose estimation algorithm to infer the pose parameters of rotation, scaling, and translation. After estimating the pose, much denser points are interpolated between the landmark points by a 3D deformable model and barycentric coordinates. In contrast to previous literature, our method can locate facial feature points automatically in a 2D facial image. Moreover, we also show that the proposed method for pose estimation can be successfully applied to 3D face reconstruction. Experiments demonstrate that our approach can produce reliable results for reconstructing photorealistic 3D faces.
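The densification step can be illustrated with plain barycentric interpolation: a 2D point inside a triangle of landmarks gets weights that transfer it onto the corresponding 3D model triangle. A hedged sketch of that idea (function names and the toy triangle are illustrative, not the paper's code):

```python
import numpy as np

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]], float)
    w1, w2 = np.linalg.solve(T, np.asarray(p, float) - np.asarray(a, float))
    return np.array([1.0 - w1 - w2, w1, w2])

def interpolate_3d(p2d, tri2d, tri3d):
    """Transfer a 2D point into 3D using the same barycentric weights."""
    w = barycentric_coords(p2d, *tri2d)
    return w @ np.asarray(tri3d, float)

# Centroid of the 2D triangle maps to the mean of the 3D vertices
tri2d = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
tri3d = [[0.0, 0.0, 0.0], [3.0, 0.0, 1.0], [0.0, 3.0, 2.0]]
p3d = interpolate_3d((1/3, 1/3), tri2d, tri3d)
```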
Citations: 25
Efficient partial shape matching using Smith-Waterman algorithm
Longbin Chen, R. Feris, M. Turk
This paper presents an efficient partial shape matching method based on the Smith-Waterman algorithm. For two contours of m and n points respectively, the complexity of our method to find similar parts is only O(mn). In addition to this improvement in efficiency, we also obtain comparably accurate matching with fewer shape descriptors. Also, in contrast to the arbitrary distance functions used by previous methods, we use a probabilistic similarity measure, the p-value, to evaluate the similarity of two shapes. Our experiments on several public shape databases indicate that our method outperforms state-of-the-art global and partial shape matching algorithms in various scenarios.
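The O(mn) local-alignment core is standard Smith-Waterman dynamic programming. The sketch below runs it on character sequences with a toy scoring function; the paper applies the same recurrence to sequences of shape descriptors with a p-value similarity, which is not reproduced here:

```python
import numpy as np

def smith_waterman(s1, s2, sim, gap=1.0):
    """Smith-Waterman local alignment in O(m*n): best local alignment
    score between two sequences s1 and s2 under similarity sim()."""
    m, n = len(s1), len(s2)
    H = np.zeros((m + 1, n + 1))
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            H[i, j] = max(0.0,                                     # restart (local alignment)
                          H[i - 1, j - 1] + sim(s1[i - 1], s2[j - 1]),  # match/mismatch
                          H[i - 1, j] - gap,                       # gap in s2
                          H[i, j - 1] - gap)                       # gap in s1
    return H.max()

# Classic textbook example: match +2, mismatch -1, gap -1 -> score 12
score = smith_waterman("ACACACTA", "AGCACACA",
                       lambda a, b: 2.0 if a == b else -1.0)
```

Tracing back from the cell holding `H.max()` (not shown) recovers the matched subcontours themselves.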
Citations: 76
Open boundary capable edge grouping with feature maps
J. Stahl, K. Oliver, Song Wang
Edge grouping methods aim at detecting the complete boundaries of salient structures in noisy images. In this paper, we develop a new edge grouping method that exhibits several useful properties. First, it combines both boundary and region information by defining a unified grouping cost. The region information of the desirable structures is included as a binary feature map that is of the same size as the input image. Second, it finds the globally optimal solution of this grouping cost. We extend a prior graph-based edge grouping algorithm to achieve this goal. Third, it can detect both closed boundaries, where the structure of interest lies completely within the image perimeter, and open boundaries, where the structure of interest is cropped by the image perimeter. Given this capability for detecting both open and closed boundaries, the proposed method can be extended to segment an image into disjoint regions in a hierarchical way. Experimental results on real images are reported, with a comparison against a prior edge grouping method that can only detect closed boundaries.
Citations: 9
Comparison and combination of iris matchers for reliable personal identification
Ajay Kumar, Arun Passi
Biometric identification approaches using iris images are receiving increasing attention in the literature. Several methods for automated iris identification have been presented, and those based on the phase encoding of texture information are suggested to be the most promising. However, there has not been any attempt to combine these phase-preserving approaches to achieve further improvement in performance. This paper presents a comparative study of iris identification performance using log-Gabor, Haar wavelet, DCT and FFT based features. Our experimental results suggest that the performance of the Haar wavelet and log-Gabor filter based phase encoding is the most promising among the four approaches considered in this work. Therefore the combination of these two matchers is most promising, both in terms of performance and computational complexity. Our experimental results from the 411-user (CASIA v3) and 224-user (IITD v1) databases illustrate a significant improvement in performance that is not possible with either of these approaches individually.
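For intuition, phase-encoding iris matchers of this family typically compare binary codes by masked, normalized Hamming distance, and two matchers can be fused with a simple weighted sum rule. A generic sketch under those assumptions (not the paper's exact matchers or fusion rule):

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Normalized Hamming distance between binary iris codes,
    counting only bits marked valid in both occlusion masks."""
    code_a, code_b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    valid = np.ones_like(code_a)          # all-True when no masks are given
    if mask_a is not None:
        valid &= np.asarray(mask_a, bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, bool)
    return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)

def fuse_scores(d1, d2, w=0.5):
    """Weighted sum-rule fusion of two matcher distances."""
    return w * d1 + (1 - w) * d2

rng = np.random.default_rng(1)
code = rng.integers(0, 2, 512).astype(bool)
d_same = hamming_distance(code, code)     # identical codes -> 0.0
d_opp = hamming_distance(code, ~code)     # complementary codes -> 1.0
fused = fuse_scores(d_same, d_opp)
```

In practice the probe code is also circularly shifted over a small range of rotations and the minimum distance is kept.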
Citations: 42
Standardization of intensity-values acquired by Time-of-Flight-cameras
Michael Stürmer, J. Penne, J. Hornegger
The intensity-images captured by time-of-flight (ToF)-cameras are biased in several ways. The values differ significantly, depending on the integration time set within the camera and on the distance of the scene. Whereas the integration time leads to an almost linear scaling of the whole image, the attenuation due to the distance is nonlinear, resulting in higher intensities for objects closer to the camera. The background regions that are farther away contain comparably low values, leading to a bad contrast within the image. Another effect is that some kind of specularity may be observed due to uncommon reflecting conditions at some points within the scene. These three effects lead to intensity images which exhibit significantly different values depending on the integration time of the camera and the distance to the scene, thus making parameterization of processing steps like edge-detection, segmentation, registration and threshold computation a tedious task. Additionally, outliers with exceptionally high values lead to insufficient visualization results and problems in processing. In this work we propose scaling techniques which generate images whose intensities are independent of the integration time of the camera and the measured distance. Furthermore, a simple approach for reducing specularity effects is introduced.
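The two dominant biases named in the abstract (near-linear gain with integration time, nonlinear attenuation with distance) suggest a first-order correction of the following form. This is a plausible physical model with inverse-square falloff, offered as a sketch rather than the exact scaling the paper derives:

```python
import numpy as np

def normalize_tof_intensity(intensity, distance, t_int, t_ref=1.0, d_ref=1.0):
    """Rescale a ToF amplitude image to a reference integration time and
    distance: divide out the linear integration-time gain and multiply
    back the inverse-square distance attenuation."""
    return intensity * (t_ref / t_int) * (distance / d_ref) ** 2

# A surface at twice the distance returns 1/4 of the intensity;
# after normalization both pixels agree.
n_near = normalize_tof_intensity(1.00, distance=1.0, t_int=1.0)
n_far = normalize_tof_intensity(0.25, distance=2.0, t_int=1.0)
# Doubling the integration time doubles the raw value; normalization undoes it.
n_long = normalize_tof_intensity(2.0, distance=1.0, t_int=2.0)
```

In a real pipeline `distance` would be the camera's own per-pixel range image, and outliers (the specular pixels mentioned above) would be clamped before scaling.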
Citations: 20
3D shape matching by geodesic eccentricity
Adrian Ion, N. Artner, G. Peyré, S. Mármol, W. Kropatsch, L. Cohen
This paper makes use of the continuous eccentricity transform to perform 3D shape matching. The eccentricity transform has already been proved useful in a discrete graph-theoretic setting and has been applied to 2D shape matching. We show how these ideas extend to higher dimensions. The eccentricity transform is used to compute descriptors for 3D shapes. These descriptors are defined as histograms of the eccentricity transform and are naturally invariant to Euclidean motion and articulation. They show promising results for shape discrimination.
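In the discrete setting, the eccentricity of a point is its largest geodesic distance to any other point of the shape, and the descriptor is a histogram of those values. A small sketch on a 4-connected binary mask, using brute-force BFS from every pixel (O(n²), fine for illustration; the paper works on 3D surfaces):

```python
import numpy as np
from collections import deque

def eccentricity_transform(mask):
    """Geodesic eccentricity of every foreground pixel of a binary mask:
    ecc(p) = max over foreground q of the 4-connected geodesic distance d(p, q)."""
    pts = list(zip(*np.nonzero(mask)))
    fg = set(pts)
    ecc = np.zeros(len(pts))
    for i, s in enumerate(pts):
        dist = {s: 0}
        q = deque([s])
        while q:                                  # BFS from source s
            y, x = q.popleft()
            for nb in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                if nb in fg and nb not in dist:
                    dist[nb] = dist[(y, x)] + 1
                    q.append(nb)
        ecc[i] = max(dist.values())
    return pts, ecc

def ecc_histogram(ecc, bins=8):
    """Shape descriptor: normalized histogram of eccentricity values,
    invariant to rigid motion of the shape."""
    h, _ = np.histogram(ecc, bins=bins)
    return h / h.sum()

# A 1x5 bar: eccentricities along it are 4, 3, 2, 3, 4
mask = np.zeros((3, 7), int)
mask[1, 1:6] = 1
pts, ecc = eccentricity_transform(mask)
hist = ecc_histogram(ecc, bins=4)
```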
Citations: 33
Vesicles and amoebae: Globally constrained shape evolutions
Ishay Goldin, J. Delosme, A. Bruckstein
Modeling the deformation of shapes under constraints on both perimeter and area is a challenging task due to the highly nontrivial interaction between the need for flexible local rules for manipulating the boundary and the global constraints. We propose several methods to address this problem and generate "random walks" in the space of shapes obeying quite general, possibly time-varying constraints on their perimeter and area. The design of perimeter- and area-preserving deformations is an interesting and useful special case of this problem. The resulting deformation models are employed in annealing processes that evolve original shapes toward shapes that are optimal in terms of boundary bending energy or other functionals. Furthermore, such models may find applications in the analysis of sequences of real images of deforming objects obeying global constraints, as building blocks for registration and tracking algorithms.
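Enforcing the global constraints requires evaluating perimeter and area of the evolving boundary after every local move. For a polygonal boundary this is the shoelace formula plus edge lengths; a minimal sketch of that check (the acceptance/repair rules of the paper are not shown):

```python
import numpy as np

def polygon_area_perimeter(pts):
    """Area (shoelace formula, absolute value) and perimeter of a closed
    polygon given as an (n, 2) array of vertices in order."""
    x, y = pts[:, 0], pts[:, 1]
    xs, ys = np.roll(x, -1), np.roll(y, -1)      # next vertex, wrapping around
    area = 0.5 * np.sum(x * ys - xs * y)
    perimeter = np.sum(np.hypot(xs - x, ys - y))
    return abs(area), perimeter

unit_square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
a, p = polygon_area_perimeter(unit_square)
```

A candidate local boundary move would be accepted only if the recomputed (a, p) still satisfy the (possibly time-varying) constraints.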
Citations: 1
Boosting descriptors condensed from video sequences for place recognition
Tat-Jun Chin, Hanlin Goh, Joo-Hwee Lim
We investigate the task of efficiently training classifiers to build a robust place recognition system. We advocate an approach which involves densely capturing the facades of buildings and landmarks with video recordings to greedily accumulate as much visual information as possible. Our contributions include (1) a preprocessing step to effectively exploit the temporal continuity intrinsic in the video sequences to dramatically increase training efficiency, (2) training sparse classifiers discriminatively with the resulting data using the AdaBoost principle for place recognition, and (3) methods to speed up recognition using scaled kd-trees and to perform geometric validation on the results. Compared to straightforwardly applying scene recognition methods, our method not only allows a much faster training phase, the resulting classifiers are also more accurate. The sparsity of the classifiers also ensures good potential for recognition at high frame rates. We show extensive experimental results to validate our claims.
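The discriminative-training step follows the AdaBoost principle: weak classifiers over descriptor dimensions are greedily added while sample weights concentrate on hard examples, which is also what yields the sparsity mentioned above. A generic decision-stump AdaBoost sketch (not the paper's feature set or weak learners):

```python
import numpy as np

def adaboost_stumps(X, y, rounds=5):
    """Discrete AdaBoost with axis-aligned decision stumps.
    X: (n, d) descriptors; y: labels in {-1, +1}. Returns the ensemble."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                       # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None                               # (error, feature, threshold, sign)
        for f in range(d):
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, f] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = sign * np.where(X[:, f] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)            # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, f, thr, sign))
    return ensemble

def predict(ensemble, X):
    s = sum(a * sg * np.where(X[:, f] >= t, 1, -1) for a, f, t, sg in ensemble)
    return np.sign(s)

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([-1, -1, -1, 1, 1, 1])
ens = adaboost_stumps(X, y)
preds = predict(ens, X)
```

The ensemble touches only the few descriptor dimensions the stumps selected, which is what makes high-frame-rate evaluation cheap.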
Citations: 6
Regional image similarity criteria based on the Kozachenko-Leonenko entropy estimator
Juan D. García-Arteaga, J. Kybic
Mutual information is one of the most widespread similarity criteria for multi-modal image registration but is limited to low dimensional feature spaces when calculated using histogram and kernel based entropy estimators. In the present article we propose the use of the Kozachenko-Leonenko entropy estimator (KLE) to calculate higher order regional mutual information using local features. The use of local information overcomes the two most prominent problems of nearest neighbor based entropy estimation in image registration: the presence of strong interpolation artifacts and noise. The performance of the proposed criterion is compared to standard MI on data with a known ground truth using a protocol for the evaluation of image registration similarity measures. Finally, we show how the use of the KLE with local features improves the robustness and accuracy of the registration of color colposcopy images.
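The Kozachenko-Leonenko estimator replaces histogram binning with nearest-neighbor distances: Ĥ = ψ(N) − ψ(k) + log c_d + (d/N) Σᵢ log εᵢ, where εᵢ is the distance from sample i to its k-th neighbor and c_d is the volume of the d-dimensional unit ball. A self-contained, brute-force O(n²) sketch of that formula (the paper's regional MI built on top of it is not shown):

```python
import numpy as np
from math import lgamma, log, pi

EULER_GAMMA = 0.5772156649015329

def digamma_int(n):
    """psi(n) for a positive integer n: -gamma + H_{n-1}."""
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, n))

def kl_entropy(x, k=1):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats)
    for x of shape (n, d). Brute-force distances, for small n only."""
    x = np.atleast_2d(np.asarray(x, float))
    if x.shape[0] == 1:                       # a flat vector means n samples in 1D
        x = x.T
    n, d = x.shape
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)              # exclude each point from its own neighbors
    eps = np.sqrt(np.sort(d2, axis=1)[:, k - 1])  # distance to k-th nearest neighbor
    log_cd = (d / 2) * log(pi) - lgamma(d / 2 + 1)  # log volume of the unit d-ball
    return digamma_int(n) - digamma_int(k) + log_cd + d * np.mean(np.log(eps))

rng = np.random.default_rng(0)
h = kl_entropy(rng.uniform(0.0, 1.0, size=(2000, 1)), k=3)
# True differential entropy of U(0, 1) is 0; h should be close to it.
```

Unlike a histogram estimator, no bin width has to be chosen, which is what makes the approach viable for the higher-dimensional local features the abstract describes (a k-d tree replaces the brute-force distance matrix at scale).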
Citations: 8