
Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149): Latest Publications

On plane-based camera calibration: A general algorithm, singularities, applications
P. Sturm, S. Maybank
We present a general algorithm for plane-based calibration that can deal with arbitrary numbers of views and calibration planes. The algorithm can simultaneously calibrate different views from a camera with variable intrinsic parameters, and it is easy to incorporate known values of intrinsic parameters. For some minimal cases, we describe all singularities, naming the parameters that cannot be estimated. Experimental results of our method are shown that exhibit the singularities while revealing good performance in non-singular conditions. Several applications of plane-based 3D geometry inference are discussed as well.
DOI: 10.1109/CVPR.1999.786974 · pp. 432-437, Vol. 1 · Published 1999-06-23
Citations: 665
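The linear core of plane-based calibration can be sketched in a few lines of numpy: each plane-to-image homography H ~ K[r1 r2 t] contributes two linear constraints on the image of the absolute conic B = K^-T K^-1, and K is recovered by factoring B. This is an illustrative sketch in the spirit of the paper's general linear algorithm; the function names and the Cholesky-based recovery are my own choices, not the authors' code.

```python
import numpy as np

def v_row(H, i, j):
    # Row encoding h_i^T B h_j as a linear function of
    # b = (B11, B12, B22, B13, B23, B33), where B = K^-T K^-1.
    h = H
    return np.array([
        h[0, i] * h[0, j],
        h[0, i] * h[1, j] + h[1, i] * h[0, j],
        h[1, i] * h[1, j],
        h[0, i] * h[2, j] + h[2, i] * h[0, j],
        h[1, i] * h[2, j] + h[2, i] * h[1, j],
        h[2, i] * h[2, j],
    ])

def calibrate_from_homographies(Hs):
    # Each homography H ~ K [r1 r2 t] yields two constraints, since
    # r1, r2 are orthonormal: h1^T B h2 = 0 and h1^T B h1 = h2^T B h2.
    V = []
    for H in Hs:
        V.append(v_row(H, 0, 1))
        V.append(v_row(H, 0, 0) - v_row(H, 1, 1))
    _, _, vt = np.linalg.svd(np.array(V))
    b = vt[-1]
    B = np.array([[b[0], b[1], b[3]],
                  [b[1], b[2], b[4]],
                  [b[3], b[4], b[5]]])
    if B[0, 0] < 0:                      # b is defined only up to sign
        B = -B
    # B ~ K^-T K^-1, so B^-1 ~ K K^T; a flipped Cholesky makes the
    # triangular factor come out upper triangular, like K itself.
    P = np.eye(3)[::-1]
    L = np.linalg.cholesky(P @ np.linalg.inv(B) @ P)
    K = P @ L @ P
    return K / K[2, 2]
```

Three or more plane views in general position determine all five intrinsics from noise-free homographies; the singular configurations catalogued in the paper (e.g. a calibration plane parallel to the image plane) show up here as rank deficiency of the stacked constraint matrix.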
Edge detector evaluation using empirical ROC curves
K. Bowyer, C. Kranenburg, Sean Dougherty
A method is demonstrated to evaluate edge detector performance using receiver operating characteristic curves. It involves matching edges to manually specified ground truth to count true positive and false positive detections. Edge detector parameter settings are trained and tested on different images, and aggregate test ROC curves are presented for two sets of 10 images. The performance of eight different edge detectors is compared. The Canny and Heitger detectors provide the best performance.
DOI: 10.1109/CVPR.1999.786963 · pp. 354-359, Vol. 1 · Published 1999-06-23
Citations: 412
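The counting scheme in the abstract, detections matched to hand-specified ground truth within a pixel tolerance and swept over a detector threshold, can be sketched in a few lines of numpy. The tolerance matching and the function names here are illustrative assumptions, not the authors' exact evaluation protocol.

```python
import numpy as np

def dilate(mask, r):
    # Chebyshev dilation: a pixel is on if any 'on' pixel of mask
    # lies within r pixels of it.
    h, w = mask.shape
    p = np.pad(mask, r)
    out = np.zeros((h, w), dtype=bool)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= p[dy:dy + h, dx:dx + w].astype(bool)
    return out

def roc_points(response, gt, thresholds, tol=1):
    # For each threshold: TP rate is the fraction of ground-truth edge
    # pixels with a detection within tol; FP counts detections with no
    # ground truth within tol.
    gt = gt.astype(bool)
    near_gt = dilate(gt, tol)
    n_gt = max(np.count_nonzero(gt), 1)
    pts = []
    for t in thresholds:
        det = response >= t
        fp = int(np.count_nonzero(det & ~near_gt))
        tp_rate = np.count_nonzero(gt & dilate(det, tol)) / n_gt
        pts.append((fp, tp_rate))
    return pts
```

Sweeping the threshold traces one empirical ROC curve per detector; aggregating the counts over a test set before computing the rates gives the aggregate curves the abstract describes.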
A new structure-from-motion ambiguity
J. Oliensis
This paper demonstrates the existence of a generic approximate ambiguity in Euclidean structure from motion (SFM) which applies to scenes with large depth variation. In projective SFM the ambiguity is absent, but the maximum-likelihood reconstruction is more likely to have occasional very large errors. The analysis gives a semi-quantitative characterization of the least-squares error surface over a domain complementary to that analyzed by Jepson/Heeger/Maybank.
DOI: 10.1109/CVPR.1999.786937 · pp. 185-191, Vol. 1 · Published 1999-06-23
Citations: 43
Deformable shape detection and description via model-based region grouping
S. Sclaroff, Lifeng Liu
A method for deformable shape detection and recognition is described. Deformable shape templates are used to partition the image into a globally consistent interpretation, determined in part by the minimum description length principle. Statistical shape models enforce the prior probabilities on global, parametric deformations for each object class. Once trained, the system autonomously segments deformed shapes from the background, while not merging them with adjacent objects or shadows. The formulation can be used to group image regions based on any image homogeneity predicate; e.g., texture, color or motion. The recovered shape models can be used directly in object recognition. Experiments with color imagery are reported.
DOI: 10.1109/CVPR.1999.784603 · pp. 21-27, Vol. 2 · Published 1999-06-23
Citations: 151
Unifying boundary and region-based information for geodesic active tracking
N. Paragios, R. Deriche
This paper addresses the problem of tracking several non-rigid objects over a sequence of frames acquired from a static observer, using boundary and region-based information under a coupled geodesic active contour framework. Given the current frame, a statistical analysis is performed on the observed difference frame, which provides a measurement that distinguishes between the static and mobile regions in terms of conditional probabilities. An objective function is defined that integrates the boundary-based and region-based modules by seeking curves that attract the object boundaries and maximize the a posteriori segmentation probability on the interior curve regions with respect to intensity and motion properties. This function is minimized using a gradient descent method. The associated Euler-Lagrange PDE is implemented using a Level-Set approach, where a very fast front propagation algorithm evolves the initial curve towards the final tracking result. Very promising experimental results are provided using real video sequences.
DOI: 10.1109/CVPR.1999.784648 · pp. 300-305, Vol. 2 · Published 1999-06-23
Citations: 110
Progressive probabilistic Hough transform for line detection
C. Galambos, J. Kittler, Jiri Matas
We present a novel Hough Transform algorithm referred to as Progressive Probabilistic Hough Transform (PPHT). Unlike the probabilistic HT, where the standard HT is performed on a pre-selected fraction of input points, PPHT minimises the amount of computation needed to detect lines by exploiting the difference in the fraction of votes needed to reliably detect lines with different numbers of supporting points. The fraction of points used for voting need not be specified ad hoc or using a priori knowledge, as in the probabilistic HT; it is a function of the inherent complexity of the input data. The algorithm is ideally suited for real-time applications with a fixed amount of available processing time, since voting and line detection is interleaved. The most salient features are likely to be detected first. Experiments show that in many circumstances PPHT has advantages over the standard HT.
DOI: 10.1109/CVPR.1999.786993 · pp. 554-560, Vol. 1 · Published 1999-06-23
Citations: 193
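OpenCV's `cv2.HoughLinesP` is based on this algorithm. The progressive idea, voting with one randomly sampled point at a time and retiring a line's supporting points as soon as the line is confirmed, can be sketched in miniature. This toy version uses a fixed confirmation threshold instead of the paper's adaptive significance test and does not retract the retired points' other votes; both simplifications are mine.

```python
import random

import numpy as np

def ppht(points, n_theta=36, rho_res=1.0, threshold=15, seed=0):
    """Progressive probabilistic Hough sketch: points vote one at a time
    in random order; once an accumulator bin reaches `threshold`, that
    line is emitted and its supporting points are retired from the pool
    so they cast no further votes."""
    rng = random.Random(seed)
    thetas = np.deg2rad(np.arange(n_theta) * (180.0 / n_theta))
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    pool = [tuple(p) for p in points]
    rng.shuffle(pool)
    acc, lines = {}, []
    while pool:
        x, y = pool.pop()
        for k in range(n_theta):
            key = (int(round((x * cos_t[k] + y * sin_t[k]) / rho_res)), k)
            acc[key] = acc.get(key, 0) + 1
            if acc[key] >= threshold:
                rho = key[0] * rho_res
                lines.append((rho, k))          # (rho, theta index)
                acc[key] = 0                    # votes consumed by the line
                pool = [(px, py) for px, py in pool
                        if abs(px * cos_t[k] + py * sin_t[k] - rho) > rho_res]
                break
    return lines
```

Because confirmed lines stop collecting votes, the strongest (most supported) lines tend to be emitted first, which is the property the abstract highlights for real-time use.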
Spatial filter selection for illumination-invariant color texture discrimination
Bea Thai, Glenn Healey
Color texture contains a large amount of spectral and spatial structure that can be exploited for recognition. Recent work has demonstrated that spatial filters offer a convenient means of extracting illumination invariant spatial information from a color image. In this paper, we address the problem of deriving optimal filters for illumination-invariant color texture discrimination. Color textures are represented by a set of illumination-invariant features that characterize the color distribution of a filtered image region. Given a pair of color textures, we derive a spatial filter that maximizes the distance between these textures in feature space. We provide a method for using the pair-wise result to obtain a filter that maximizes discriminability among multiple classes. A set of experiments on a database of deterministic and random color textures obtained under different illumination conditions demonstrates the improved discriminatory power achieved by using an optimized filter.
DOI: 10.1109/CVPR.1999.784623 · pp. 154-159, Vol. 2 · Published 1999-06-23
Citations: 0
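The selection criterion, pick the filter whose illumination-normalized response feature best separates a pair of texture classes, can be illustrated with a toy grayscale sketch. The filter bank, the mean-absolute-response feature, and the Fisher-style separation score below are illustrative stand-ins for the paper's derived optimal filters, not its actual method.

```python
import numpy as np

def feature(patch, f):
    # Crude illumination invariance: normalize out gain and offset,
    # then take the mean |filter response| over the patch.
    p = patch - patch.mean()
    p = p / (p.std() + 1e-9)
    kh, kw = f.shape
    h, w = patch.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * f)
    return np.abs(out).mean()

def best_filter(bank, class_a, class_b):
    # Choose the filter maximizing the distance between class feature
    # means, normalized by the pooled spread.
    scores = []
    for f in bank:
        fa = np.array([feature(p, f) for p in class_a])
        fb = np.array([feature(p, f) for p in class_b])
        scores.append(abs(fa.mean() - fb.mean())
                      / np.sqrt(fa.var() + fb.var() + 1e-9))
    return int(np.argmax(scores)), scores
```

Because the feature divides out a global gain and offset, brightening one texture class does not change which filter wins, which is the invariance the title refers to.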
Separating reflections and lighting using independent components analysis
H. Farid, E. Adelson
The image of an object can vary dramatically depending on lighting, specularities/reflections and shadows. It is often advantageous to separate these incidental variations from the intrinsic aspects of an image. This paper describes how the statistical tool of independent components analysis can be used to separate some of these incidental components. We describe the details of this method and show its efficacy with examples of separating reflections off glass, and separating the relative contributions of individual light sources.
DOI: 10.1109/CVPR.1999.786949 · pp. 262-267, Vol. 1 · Published 1999-06-23
Citations: 153
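The separation rests on standard ICA machinery: given two linear mixtures of two independent signals, ICA recovers the sources up to permutation and scale. A minimal symmetric FastICA for the two-mixture case (my own condensed implementation, not the authors' code) looks like this, with the two mixtures standing in for, e.g., two differently weighted images of a scene plus its reflection off glass.

```python
import numpy as np

def fastica2(X, iters=200, seed=0):
    # X: (2, n) mixtures. Returns (2, n) estimated sources,
    # up to permutation and sign, as always with ICA.
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten, so the remaining mixing to undo is a rotation.
    d, E = np.linalg.eigh(np.cov(X))
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X
    W = np.random.default_rng(seed).standard_normal((2, 2))
    for _ in range(iters):
        WX = W @ Xw
        g = np.tanh(WX)                        # fixed-point update with
        g_prime = 1.0 - g ** 2                 # the tanh nonlinearity
        W = (g @ Xw.T) / Xw.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt                             # symmetric decorrelation
    return W @ Xw
```

In the paper's imaging setting the "mixtures" are images (each pixel a sample), and the recovered components correspond to the transmitted scene and the reflection, or to the contributions of individual light sources.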
Shape from video
T. Brodský, C. Fermüller, Y. Aloimonos
This paper presents a novel technique for recovering the shape of a static scene from a video sequence acquired by a rigidly moving camera. The solution procedure consists of two stages. In the first stage, the rigid motion of the camera at each instant in time is recovered. This provides the transformation between successive viewing positions. The solution is achieved through new constraints which relate 3D motion and shape directly to the image derivatives. These constraints make it possible to combine the processes of 3D motion estimation and segmentation by exploiting the geometry and statistics inherent in the data. In the second stage the scene surfaces are reconstructed through an optimization procedure which utilizes data from all the frames of the video sequence. A number of experimental results demonstrate the potential of the approach.
DOI: 10.1109/CVPR.1999.784622 · pp. 146-151, Vol. 2 · Published 1999-06-23
Citations: 16
A probabilistic framework for embedded face and facial expression recognition
A. Colmenarez, B. Frey, Thomas S. Huang
We present a Bayesian recognition framework in which a model of the whole face is enhanced by models of facial feature position and appearances. Face recognition and facial expression recognition are carried out using maximum likelihood decisions. The algorithm finds the model and facial expression that maximizes the likelihood of a test image. In this framework, facial appearance matching is improved by facial expression matching. Also, changes in facial features due to expressions are used together with facial deformation patterns to jointly perform expression recognition. In our current implementation, the face is divided into 9 facial features grouped in 4 regions which are detected and tracked automatically in video segments. The feature images are modeled using Gaussian distributions on a principal component sub-space. The training procedure is supervised; we use video segments of people in which the facial expressions have been segmented and labeled by hand. We report results on face and facial expression recognition using a video database of 18 people and 6 expressions.
DOI: 10.1109/CVPR.1999.786999 · pp. 592-597, Vol. 1 · Published 1999-06-23
Citations: 51
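The decision stage, Gaussian class models fitted on a principal-component subspace with classification by maximum likelihood, is easy to sketch. The data below are synthetic stand-ins for vectorized face-feature crops, and all helper names are mine, not the paper's.

```python
import numpy as np

def pca_basis(X, k):
    # Rows of X are vectorized feature images; return the mean and the
    # top-k principal directions.
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def fit_gaussian(Z):
    # Per-class Gaussian in the subspace, regularized for small classes.
    return Z.mean(axis=0), np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])

def log_lik(z, mean, cov):
    d = z - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet)

def classify(x, mu, basis, models):
    # Maximum-likelihood decision over the class-conditional Gaussians.
    z = basis @ (x - mu)
    return max(range(len(models)), key=lambda c: log_lik(z, *models[c]))
```

In the paper each of the 9 tracked facial features gets such a subspace model, and the per-feature likelihoods combine with the whole-face model for the joint face/expression decision; the sketch shows only one feature channel.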