
Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149): Latest Publications

Toward recovering shape and motion of 3D curves from multi-view image sequences
R. Carceroni, Kiriakos N. Kutulakos
We introduce a framework for recovering the 3D shape and motion of unknown, arbitrarily-moving curves from two or more image sequences acquired simultaneously from distinct points in space. We use this framework to (1) identify ambiguities in the multi-view recovery of (rigid or nonrigid) 3D motion for arbitrary curves, and (2) identify a novel spatio-temporal constraint that couples the problems of 3D shape and 3D motion recovery in the multi-view case. We show that this constraint leads to a simple hypothesize-and-test algorithm for estimating 3D curve shape and motion simultaneously. Experiments performed with synthetic data suggest that, in addition to recovering 3D curve motion, our approach yields shape estimates of higher accuracy than those obtained when stereo analysis alone is applied to a multi-view sequence.
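As an illustrative sketch (not the authors' algorithm), a generic hypothesize-and-test loop for coupled shape and motion might hypothesize a 3D position and 3D velocity for a curve sample, project both the point and its displaced position into every view, and score the hypothesis by image evidence. All names here (`score_hypothesis`, the edge-strength functions) are hypothetical stand-ins:

```python
import numpy as np

def project(P, X):
    """Pinhole projection of a 3D point X with a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def score_hypothesis(X, V, cameras, edge_maps, dt=1.0):
    """Score a hypothesized 3D position X and velocity V by how well the
    point, projected at times t and t+dt, lands on image edges in all views.
    edge_maps[t][i] is a function mapping a 2D point to an edge strength."""
    s = 0.0
    for i, P in enumerate(cameras):
        s += edge_maps[0][i](project(P, X))
        s += edge_maps[1][i](project(P, X + dt * V))
    return s

def hypothesize_and_test(candidates, cameras, edge_maps):
    """Exhaustively test (position, velocity) hypotheses; keep the best."""
    return max(candidates,
               key=lambda h: score_hypothesis(h[0], h[1], cameras, edge_maps))
```

The spatio-temporal coupling is visible in the scoring: a hypothesis must be consistent with all views at both time instants simultaneously.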
doi:10.1109/CVPR.1999.786938 | pp. 192-197, Vol. 1 | 1999-06-23
Cited by: 9
Reconstruction of linearly parameterized models from single images with a camera of unknown focal length
David Jelinek, C. J. Taylor
This paper deals with the problem of recovering the dimensions of an object and its pose from a single image acquired with a camera of unknown focal length. It is assumed that the object in question can be modeled as a polyhedron where the coordinates of the vertices can be expressed as a linear function of a dimension vector, λ. The reconstruction program takes as input a set of correspondences between features in the model and features in the image. From this information the program determines an appropriate projection model for the camera (scaled orthographic or perspective), the dimensions of the object, its pose relative to the camera and, in the case of perspective projection, the focal length of the camera. We demonstrate that this reconstruction task can be framed as an unconstrained optimization problem involving a small number of variables, no more than four, regardless of the number of parameters in the dimension vector.
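The linear vertex parameterization can be illustrated with a toy example (a box whose dimension vector is λ = (w, h, d)); each vertex is a fixed linear map applied to λ, and under scaled orthography the projection stays linear in λ as well. This is a minimal sketch, not the paper's reconstruction program:

```python
import numpy as np

# Each vertex of a box is a linear function of the dimension vector
# lam = (w, h, d): vertex_k = B_k @ lam, with B_k a diagonal sign matrix.
signs = np.array([[sx, sy, sz] for sx in (-0.5, 0.5)
                               for sy in (-0.5, 0.5)
                               for sz in (-0.5, 0.5)])

def vertices(lam):
    """8 box corners, each linear in the dimensions lam = (w, h, d)."""
    return signs * lam          # (8, 3): row k is B_k @ lam

def scaled_orthographic(Xs, R, t2, s):
    """Scaled orthographic projection: x = s * (R X)_xy + t."""
    return s * (Xs @ R.T)[:, :2] + t2
```

Because the model is linear in λ, for a fixed pose the image residuals are linear least-squares in λ, which is why the remaining search involves only the few pose/projection variables.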
doi:10.1109/CVPR.1999.784657 | pp. 346-352, Vol. 2 | 1999-06-23
Cited by: 54
An efficient recursive factorization method for determining structure from motion
Yanhua Li, M. Brooks
A recursive method is presented for recovering 3D object shape and camera motion under orthography from an extended sequence of video images. This may be viewed as a natural extension of both the original and the sequential factorization methods. A critical aspect of these factorization approaches is the estimation of the so-called shape space, and they may in part be characterized by the manner in which this subspace is computed. If P points are tracked through F frames, the recursive least-squares method proposed in this paper updates the shape space with complexity O(P) per frame. In contrast, the sequential factorization method updates the shape space with complexity O(P²) per frame. The original factorization method is intended to be used in batch mode using points tracked across all available frames. It effectively computes the shape space with complexity O(FP²) after F frames. Unlike other methods, the recursive approach does not require the estimation or updating of a large measurement or covariance matrix. Experiments with real and synthetic image sequences confirm the recursive method's low computational complexity and good performance, and indicate that it is well suited to real-time applications.
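For context, the batch (Tomasi-Kanade-style) factorization that the sequential and recursive variants extend can be sketched as a rank-3 SVD of the measurement matrix; its O(FP²) cost after F frames is what motivates the per-frame O(P) recursive update. A minimal sketch, up to the usual affine ambiguity and without the metric-upgrade step:

```python
import numpy as np

def batch_factorize(W):
    """Batch factorization under orthography (sketch).
    W: 2F x P matrix of centred image coordinates (per-row mean subtracted).
    Returns motion M (2F x 3) and shape S (3 x P) with W ~ M S,
    determined only up to an invertible 3x3 (affine) ambiguity."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])          # rank-3 motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]   # rank-3 shape factor
    return M, S
```

The shape space in the abstract's sense is the row space of S (equivalently the top three right singular vectors); the recursive method maintains an estimate of this subspace incrementally instead of recomputing the SVD.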
doi:10.1109/CVPR.1999.786930 | pp. 138-143, Vol. 1 | 1999-06-23
Cited by: 3
Advances in daylight statistical colour modelling
D. Alexander
In this paper, parametric statistical modelling of distributions of colour camera data is discussed. A review is provided with some analysis of the properties of some common models, which are generally based on an assumption of independence of the chromaticity and intensity components of colour data. Results of an empirical comparison of the performance of various models are also reviewed. These results indicate that such models are not appropriate for situations other than highly controlled environments. In particular, they perform poorly for daylight imagery. Here, a modification to existing statistical colour models is proposed and the resultant new models are assessed using the same methodology as for the previous results. This simple modification, which is based on the inclusion of an ambient term in the underlying physical model, is shown to have a major impact on the performance of the models in less constrained daylight environments.
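A small numeric illustration of why an additive ambient term matters (toy numbers, not the paper's model): under a purely multiplicative model, chromaticity r = R/(R+G+B) is invariant to intensity scaling, which is the independence assumption the reviewed models rely on; adding an ambient offset breaks that invariance.

```python
import numpy as np

def chromaticity(rgb):
    """Normalised chromaticity of an RGB triple."""
    rgb = np.asarray(rgb, float)
    return rgb / rgb.sum()

body = np.array([0.6, 0.3, 0.1])        # hypothetical surface colour term
ambient = np.array([0.05, 0.05, 0.05])  # hypothetical ambient contribution

# Multiplicative-only model: chromaticity is independent of intensity s.
print(chromaticity(1.0 * body), chromaticity(4.0 * body))

# With an additive ambient term, chromaticity drifts with intensity, so a
# model assuming chromaticity/intensity independence no longer fits.
print(chromaticity(1.0 * body + ambient), chromaticity(4.0 * body + ambient))
```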
doi:10.1109/CVPR.1999.786957 | pp. 313-318, Vol. 1 | 1999-06-23
Cited by: 5
Surface reconstruction from multiple aerial images in dense urban areas
M. Fradkin, M. Roux, H. Maître, U. Leloglu
Accurate 3D surface models of dense urban areas are essential for a variety of applications, such as cartography, urban planning and monitoring, and mobile communications. Since manual surface reconstruction is very costly and time-consuming, the development of automated algorithms is of great importance. While most existing algorithms focus on surface reconstruction in rural or suburban areas, we present an approach dealing with dense urban scenes. The approach utilizes different image-derived cues, such as multi-view stereo and color information, as well as general scene knowledge, formulated as data-driven reasoning and geometric constraints. Another important feature of our approach is the simultaneous processing of 2D and 3D data. Our approach begins with two independent tasks: stereo reconstruction using multiple views and region-based image segmentation, producing disparity and segmentation maps, respectively. Then, the information derived from both maps is used to generate a dense elevation map, through robust verification of planar surface approximations for the detected regions and the imposition of geometric constraints. The approach has been successfully tested on complex residential and industrial scenes.
doi:10.1109/CVPR.1999.784639 | pp. 262-267, Vol. 2 | 1999-06-23
Cited by: 30
A multiple hypothesis approach to figure tracking
Tat-Jen Cham, James M. Rehg
This paper describes a probabilistic multiple-hypothesis framework for tracking highly articulated objects. In this framework, the probability density of the tracker state is represented as a set of modes with piecewise Gaussians characterizing the neighborhood around these modes. The temporal evolution of the probability density is achieved through sampling from the prior distribution, followed by local optimization of the sample positions to obtain updated modes. This method of generating hypotheses from state-space search does not require the use of discrete features unlike classical multiple-hypothesis tracking. The parametric form of the model is suited for high dimensional state-spaces which cannot be efficiently modeled using non-parametric approaches. Results are shown for tracking Fred Astaire in a movie dance sequence.
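The sample-then-refine step can be sketched as follows: draw samples from each Gaussian mode of the prior, then locally optimize every sample against the likelihood to find updated modes. This is a minimal sketch with a hypothetical log-likelihood and plain numerical gradient ascent standing in for the paper's optimizer; in the full method the refined samples would also be clustered back into modes:

```python
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    """Central-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def propagate_modes(modes, covs, log_lik, n_samples=20, steps=50, lr=0.05,
                    rng=None):
    """One time step of a multiple-hypothesis tracker (sketch).
    modes: list of state vectors; covs: per-mode prior covariances.
    Samples from each Gaussian mode of the prior, then refines every sample
    by gradient ascent on log_lik; returns the refined sample set."""
    rng = rng or np.random.default_rng(0)
    refined = []
    for m, C in zip(modes, covs):
        for x in rng.multivariate_normal(m, C, size=n_samples):
            for _ in range(steps):
                x = x + lr * numerical_grad(log_lik, x)
            refined.append(x)
    return np.array(refined)
```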
doi:10.1109/CVPR.1999.784636 | pp. 239-244, Vol. 2 | 1999-06-23
Cited by: 400
Data-driven shape-from-shading using curvature consistency
P. L. Worthington, E. Hancock
This paper makes two contributions to the problem of needle-map recovery using shape-from-shading. Firstly, we provide a geometric update procedure which allows the image irradiance equation to be satisfied as a hard constraint. This improves the data-closeness of the recovered needle-map. Secondly, we consider how topographic constraints can be used to impose local consistency on the recovered needle-map. We present several alternative curvature consistency models, and provide an experimental assessment of the new shape-from-shading framework on both real-world images and synthetic images with known ground-truth surface normals. The main conclusion drawn from our analysis is that the new framework allows rapid development of more appropriate constraints on the SFS problem.
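For a Lambertian image irradiance equation I = n·s, the hard constraint confines each surface normal to a cone of half-angle arccos(I) about the light direction; a geometric update then moves the current normal onto that cone. The sketch below (a hypothetical helper, not the paper's exact procedure) rotates the normal in the plane it spans with the light direction:

```python
import numpy as np

def project_to_cone(n, s, I):
    """Move unit normal n to the closest direction satisfying n . s = I
    (Lambertian image irradiance equation enforced as a hard constraint).
    s: unit light direction; I: measured irradiance in [0, 1]."""
    n = n / np.linalg.norm(n)
    s = s / np.linalg.norm(s)
    perp = n - (n @ s) * s               # component of n orthogonal to s
    norm = np.linalg.norm(perp)
    if norm < 1e-12:                     # n parallel to s: pick any azimuth
        perp = np.array([1.0, 0.0, 0.0]) - s[0] * s
        norm = np.linalg.norm(perp)
    theta = np.arccos(np.clip(I, -1.0, 1.0))
    return np.cos(theta) * s + np.sin(theta) * (perp / norm)
```

After this update the irradiance equation holds exactly, leaving only the azimuthal angle on the cone to be fixed by the smoothness or curvature-consistency constraints.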
doi:10.1109/CVPR.1999.786953 | pp. 287-293, Vol. 1 | 1999-06-23
Cited by: 7
Deformable shape detection and description via model-based region grouping
S. Sclaroff, Lifeng Liu
A method for deformable shape detection and recognition is described. Deformable shape templates are used to partition the image into a globally consistent interpretation, determined in part by the minimum description length principle. Statistical shape models enforce the prior probabilities on global, parametric deformations for each object class. Once trained, the system autonomously segments deformed shapes from the background, while not merging them with adjacent objects or shadows. The formulation can be used to group image regions based on any image homogeneity predicate; e.g., texture, color or motion. The recovered shape models can be used directly in object recognition. Experiments with color imagery are reported.
doi:10.1109/CVPR.1999.784603 | pp. 21-27, Vol. 2 | 1999-06-23
Cited by: 151
Spatial filter selection for illumination-invariant color texture discrimination
Bea Thai, Glenn Healey
Color texture contains a large amount of spectral and spatial structure that can be exploited for recognition. Recent work has demonstrated that spatial filters offer a convenient means of extracting illumination-invariant spatial information from a color image. In this paper, we address the problem of deriving optimal filters for illumination-invariant color texture discrimination. Color textures are represented by a set of illumination-invariant features that characterize the color distribution of a filtered image region. Given a pair of color textures, we derive a spatial filter that maximizes the distance between these textures in feature space. We provide a method for using the pair-wise result to obtain a filter that maximizes discriminability among multiple classes. A set of experiments on a database of deterministic and random color textures obtained under different illumination conditions demonstrates the improved discriminatory power achieved by using an optimized filter.
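The pairwise objective of maximizing class separation in feature space is in the spirit of Fisher's linear discriminant; as a loose illustration only (the paper derives a spatial filter, not this projection), a two-class Fisher direction can be computed as:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Unit direction w maximizing between-class separation relative to
    within-class scatter for two feature clouds (rows = samples)."""
    m0, m1 = X0.mean(0), X1.mean(0)
    # Pooled within-class scatter matrix.
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    # Classic closed form: w proportional to Sw^-1 (m1 - m0); the small
    # ridge term guards against a singular scatter matrix.
    w = np.linalg.solve(Sw + 1e-9 * np.eye(len(m0)), m1 - m0)
    return w / np.linalg.norm(w)
```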
doi:10.1109/CVPR.1999.784623 | pp. 154-159, Vol. 2 | 1999-06-23
Cited by: 0
On plane-based camera calibration: A general algorithm, singularities, applications
P. Sturm, S. Maybank
We present a general algorithm for plane-based calibration that can deal with arbitrary numbers of views and calibration planes. The algorithm can simultaneously calibrate different views from a camera with variable intrinsic parameters, and it is easy to incorporate known values of intrinsic parameters. For some minimal cases, we describe all singularities, naming the parameters that cannot be estimated. Experimental results of our method exhibit these singularities while showing good performance in non-singular conditions. Several applications of plane-based 3D geometry inference are discussed as well.
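The core constraints of Zhang-style plane-based calibration can be sketched as follows: each plane homography H = [h1 h2 h3] yields h1ᵀωh2 = 0 and h1ᵀωh1 = h2ᵀωh2 on the image of the absolute conic ω = K⁻ᵀK⁻¹, giving a linear system whose null vector recovers ω and hence K via a Cholesky factorization. This is a minimal constant-intrinsics sketch; the paper's general algorithm also handles variable intrinsics and characterizes the singular configurations, which this sketch does not:

```python
import numpy as np

def v_ij(H, i, j):
    """Row of the linear system in the 6 entries of the symmetric IAC
    omega = K^-T K^-1, from columns i, j of a plane homography H."""
    h, g = H[:, i], H[:, j]
    return np.array([h[0] * g[0],
                     h[0] * g[1] + h[1] * g[0],
                     h[1] * g[1],
                     h[0] * g[2] + h[2] * g[0],
                     h[1] * g[2] + h[2] * g[1],
                     h[2] * g[2]])

def calibrate_from_homographies(Hs):
    """Stack v12 . b = 0 and (v11 - v22) . b = 0 over all views, solve for
    the IAC b by SVD, then recover K by Cholesky factorization."""
    rows = []
    for H in Hs:
        rows.append(v_ij(H, 0, 1))
        rows.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))
    _, _, Vt = np.linalg.svd(np.array(rows))
    b = Vt[-1]
    omega = np.array([[b[0], b[1], b[3]],
                      [b[1], b[2], b[4]],
                      [b[3], b[4], b[5]]])
    if omega[2, 2] < 0:          # fix overall sign so omega is pos. definite
        omega = -omega
    L = np.linalg.cholesky(omega)
    K = np.linalg.inv(L.T)
    return K / K[2, 2]
```

At least three views of the plane in general position are needed to determine all five intrinsic parameters; frontal-parallel views are among the singular configurations the paper analyses.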
doi:10.1109/CVPR.1999.786974 | pp. 432-437, Vol. 1 | 1999-06-23
Cited by: 665