Latest publications: Proceedings Ninth IEEE International Conference on Computer Vision
Learning a classification model for segmentation
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238308
Xiaofeng Ren, Jitendra Malik
We propose a two-class classification model for grouping. Human segmented natural images are used as positive examples. Negative examples of grouping are constructed by randomly matching human segmentations and images. In a preprocessing stage an image is over-segmented into super-pixels. We define a variety of features derived from the classical Gestalt cues, including contour, texture, brightness and good continuation. Information-theoretic analysis is applied to evaluate the power of these grouping cues. We train a linear classifier to combine these features. To demonstrate the power of the classification model, a simple algorithm is used to randomly search for good segmentations. Results are shown on a wide range of images.
Citations: 1819
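A minimal sketch of the grouping-as-classification setup above, with synthetic stand-ins for the Gestalt cue features and logistic regression as the linear classifier (the feature values and the training procedure are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for Gestalt cue features per candidate grouping:
# [contour energy, texture similarity, brightness similarity, continuation]
n = 500
pos = rng.normal(loc=[1.0, 1.0, 1.0, 1.0], scale=0.5, size=(n, 4))  # human-consistent groupings
neg = rng.normal(loc=[0.0, 0.0, 0.0, 0.0], scale=0.5, size=(n, 4))  # random matchings

X = np.vstack([pos, neg])
y = np.hstack([np.ones(n), np.zeros(n)])

# Linear classifier combining the cues, trained by batch gradient descent.
w = np.zeros(4)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted "good grouping" probability
    w -= lr * (X.T @ (p - y) / len(y))
    b -= lr * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

The learned weight vector plays the role of the cue combination; in the paper the features come from an over-segmentation into superpixels rather than from synthetic draws.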
Information theoretic focal length selection for real-time active 3D object tracking
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238372
Joachim Denzler, M. Zobel, H. Niemann
Active object tracking, for example in surveillance tasks, is becoming increasingly important. Besides the tracking algorithms themselves, methodologies have to be developed for reasonable active control of the degrees of freedom of all involved cameras. We present an information-theoretic approach that allows the optimal selection of the focal lengths of two cameras during active 3D object tracking. The selection is based on the uncertainty of the 3D estimation. This allows us to resolve the trade-off between small and large focal lengths: in the former case, the chance of keeping the object in the field of view of the cameras is increased; in the latter, 3D estimation becomes more reliable, and more detail is available, for example for recognizing the objects. Beyond a rigorous mathematical framework, we present real-time experiments demonstrating an improvement in 3D trajectory estimation of up to 42% compared with tracking at a fixed focal length.
Citations: 67
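The selection criterion can be illustrated with a toy linear-Gaussian model: choose the focal length that minimizes the posterior entropy (log-determinant of the covariance) of the 3D estimate, among focal lengths that keep the object in the field of view. The measurement model and the field-of-view bound below are hypothetical stand-ins, not the paper's:

```python
import numpy as np

def posterior_cov(prior_cov, f, meas_noise=1.0):
    # Hypothetical stand-in: a larger focal length f acts like a higher-gain
    # observation of the 3D position (standard linear-Gaussian update).
    H = f * np.eye(3)
    R = meas_noise * np.eye(3)
    S = H @ prior_cov @ H.T + R
    K = prior_cov @ H.T @ np.linalg.inv(S)
    return (np.eye(3) - K @ H) @ prior_cov

def entropy(cov):
    # Gaussian differential entropy up to additive constants: 0.5 * log det(cov)
    return 0.5 * np.log(np.linalg.det(cov))

prior = np.diag([4.0, 4.0, 9.0])      # depth is the most uncertain direction
object_offset = 0.3                    # object's angular offset from the axis
fov_limit = 1.0 / object_offset        # beyond this f the object leaves the view

candidates = [0.5, 1.0, 2.0, 3.0, 5.0]
feasible = [f for f in candidates if f <= fov_limit]
best = min(feasible, key=lambda f: entropy(posterior_cov(prior, f)))
print("selected focal length:", best)
```

In this toy model entropy decreases monotonically with f, so the criterion picks the largest focal length that still keeps the object in view, which is exactly the trade-off described in the abstract.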
Voxel carving for specular surfaces
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238401
T. Bonfort, P. Sturm
We present a novel algorithm that reconstructs voxels of a general 3D specular surface from multiple images of a calibrated camera. A calibrated scene (i.e. points whose 3D coordinates are known) is reflected by the unknown specular surface onto the image plane of the camera. For every viewpoint, surface normals are associated with the voxels traversed by each projection ray formed by the reflection of a scene point. A decision process then discards voxels whose associated surface normals are not consistent with one another. The output of the algorithm is a collection of voxels and surface normals in 3D space, whose quality and size depend on user-set thresholds. The method has been tested on synthetic and real images. Visual and quantified experimental results are presented.
Citations: 115
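The decision process can be sketched as a pairwise consistency test on the normals hypothesized for one voxel from different viewpoints; the angle threshold below plays the role of the user-set thresholds mentioned in the abstract:

```python
import numpy as np

def keep_voxel(normals, max_angle_deg=10.0):
    """A voxel survives carving only if the surface normals hypothesized for
    it from different viewpoints agree pairwise within an angle threshold."""
    normals = np.asarray(normals, dtype=float)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    cos_thresh = np.cos(np.radians(max_angle_deg))
    m = len(normals)
    for i in range(m):
        for j in range(i + 1, m):
            if normals[i] @ normals[j] < cos_thresh:
                return False
    return True

consistent = [[0, 0, 1], [0.05, 0, 1], [0, 0.05, 1]]      # nearly parallel normals
inconsistent = [[0, 0, 1], [1, 0, 0]]                     # 90 degrees apart
print(keep_voxel(consistent), keep_voxel(inconsistent))   # True False
```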
Camera calibration with known rotation
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238656
Jan-Michael Frahm, R. Koch
We address the problem of using external rotation information with uncalibrated video sequences. The main question addressed is: what is the benefit of the orientation information for camera calibration? It is shown that for a rotating camera the calibration problem is linear even when all intrinsic parameters vary. For arbitrarily moving cameras the calibration problem is also linear, but underdetermined in the general case of all intrinsic parameters varying. However, if certain constraints are applied to the intrinsic parameters, the camera calibration can be computed linearly. We analyze which constraints are needed to calibrate freely moving cameras. Furthermore, we address the problem of temporally aligning the camera data with the rotation sensor data, and give an approach to align these data in the case of a rotating camera.
Citations: 49
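The linearity claim for the rotating-camera case can be checked numerically: with constant intrinsics K, the inter-image homography of a purely rotating camera satisfies H = K R K^{-1}, so H K = K R is linear in the entries of K when R is known from the rotation sensor. A toy sketch on synthetic data (two known rotations to make the solution unique up to scale; this is an illustration of the linear structure, not the paper's estimation pipeline):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

K_true = np.array([[800.0,   2.0, 320.0],
                   [  0.0, 780.0, 240.0],
                   [  0.0,   0.0,   1.0]])

rotations = [rot_x(0.2) @ rot_y(0.1), rot_y(0.3) @ rot_x(-0.15)]
homographies = [K_true @ R @ np.linalg.inv(K_true) for R in rotations]

# Unknowns: the six entries of the upper-triangular K; H K - K R = 0 is
# linear in them, one scalar equation per matrix entry (i, j).
idx = {(0, 0): 0, (0, 1): 1, (0, 2): 2, (1, 1): 3, (1, 2): 4, (2, 2): 5}
rows = []
for H, R in zip(homographies, rotations):
    for i in range(3):
        for j in range(3):
            row = np.zeros(6)
            for m in range(3):
                if (m, j) in idx:
                    row[idx[(m, j)]] += H[i, m]   # from (H K)[i, j]
                if (i, m) in idx:
                    row[idx[(i, m)]] -= R[m, j]   # from (K R)[i, j]
            rows.append(row)

_, _, Vt = np.linalg.svd(np.array(rows))
k = Vt[-1] / Vt[-1][5]                 # null vector, scaled so K[2, 2] = 1
K_est = np.array([[k[0], k[1], k[2]],
                  [0.0,  k[3], k[4]],
                  [0.0,  0.0,  k[5]]])
print(np.round(K_est, 1))
```

A single rotation leaves extra ambiguity (any matrix commuting with R); stacking two rotations about different axes pins the solution down to scale, which the K[2,2] = 1 normalization removes.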
Variational space-time motion segmentation
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238442
D. Cremers, Stefano Soatto
We propose a variational method for segmenting image sequences into spatiotemporal domains of homogeneous motion. To this end, we formulate the problem of motion estimation in the framework of Bayesian inference, using a prior which favors domain boundaries of minimal surface area. We derive a cost functional which depends on a surface in space-time separating a set of motion regions, as well as a set of vectors modeling the motion in each region. We propose a multiphase level set formulation of this functional, in which the surface and the motion regions are represented implicitly by a vector-valued level set function. Joint minimization of the proposed functional results in an eigenvalue problem for the motion model of each region and in a gradient descent evolution for the separating interface. Numerical results on real-world sequences demonstrate that minimization of a single cost functional generates a segmentation of space-time into multiple motion regions.
Citations: 78
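A deliberately stripped-down caricature of the alternating minimization on a flow field: fit one constant motion vector per region, then reassign pixels to the best-fitting model. The level-set representation, the surface-area prior, and the space-time coupling of the actual method are all omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic flow field: left half moves right, right half moves down.
h, w = 20, 40
flow = np.zeros((h, w, 2))
flow[:, :20] = [1.0, 0.0]
flow[:, 20:] = [0.0, 1.0]
flow += rng.normal(scale=0.1, size=flow.shape)

labels = rng.integers(0, 2, size=(h, w))
for _ in range(10):
    # Motion model update: per-region mean flow (constant-motion model).
    models = [flow[labels == r].mean(axis=0) for r in range(2)]
    # Region update: assign each pixel to the model with smallest residual.
    residuals = np.stack([((flow - m) ** 2).sum(axis=2) for m in models])
    labels = residuals.argmin(axis=0)

print(np.unique(labels[:, :20]), np.unique(labels[:, 20:]))
```

In the paper, the region update is instead a gradient descent on a level set function, which is what adds the boundary-length regularization this sketch lacks.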
Facial expression decomposition
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238452
Hongcheng Wang, N. Ahuja
In this paper, we propose a novel approach for facial expression decomposition: higher-order singular value decomposition (HOSVD), a natural generalization of matrix SVD. We learn the expression subspace and person subspace from a corpus of images showing seven basic facial expressions, rather than resort to expert-coded facial expression parameters. We propose a simultaneous face and facial expression recognition algorithm, which can classify the given image into one of the seven basic facial expression categories, and then other facial expressions of the new person can be synthesized using the learned expression subspace model. The contributions of this work lie mainly in two aspects. First, we propose a new HOSVD based approach to model the mapping between persons and expressions, used for facial expression synthesis for a new person. Second, we realize simultaneous face and facial expression recognition as a result of facial expression decomposition. Experimental results are presented that illustrate the capability of the person subspace and expression subspace in both synthesis and recognition tasks. As a quantitative measure of the quality of synthesis, we propose using gradient minimum square error (GMSE), which measures the gradient difference between the original and synthesized images.
Citations: 233
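A compact numpy HOSVD on a toy people × expressions × pixels tensor: each mode unfolding gives an orthonormal factor matrix via ordinary SVD, and the core tensor is obtained by multiplying the factors' transposes back in. (The tensor contents here are random stand-ins for the image corpus.)

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding: mode-n fibers become the columns of a matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_mul(t, m, mode):
    """Mode-n product: multiply matrix m into axis `mode` of tensor t."""
    return np.moveaxis(np.tensordot(m, np.moveaxis(t, mode, 0), axes=1), 0, mode)

def hosvd(t):
    """Higher-order SVD: one orthonormal factor per mode plus a core tensor."""
    factors = [np.linalg.svd(unfold(t, n), full_matrices=False)[0]
               for n in range(t.ndim)]
    core = t
    for n, u in enumerate(factors):
        core = mode_mul(core, u.T, n)
    return core, factors

rng = np.random.default_rng(0)
data = rng.normal(size=(5, 7, 30))     # people x expressions x pixels (toy sizes)
core, factors = hosvd(data)

# Multiplying the core back by every factor reconstructs the tensor exactly.
recon = core
for n, u in enumerate(factors):
    recon = mode_mul(recon, u, n)
print(np.allclose(recon, data))  # True
```

In the paper's use, slices of the factor matrices give the person subspace and expression subspace used for recognition and for synthesizing a new person's expressions.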
Fast pose estimation with parameter-sensitive hashing
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238424
Gregory Shakhnarovich, Paul A. Viola, Trevor Darrell
Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends locality-sensitive hashing, a recently developed method to find approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call parameter-sensitive hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images.
Citations: 911
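The underlying machinery can be sketched with plain random-hyperplane locality-sensitive hashing; the paper's contribution, learning hash functions that are sensitive to the pose parameters rather than choosing them at random, is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

dim, n = 16, 5000
database = rng.normal(size=(n, dim))   # stand-ins for example image features

n_bits = 12
planes = rng.normal(size=(n_bits, dim))

def hash_key(x):
    # One bit per random hyperplane: which side of the plane x falls on.
    return tuple((planes @ x) > 0)

buckets = {}
for i, x in enumerate(database):
    buckets.setdefault(hash_key(x), []).append(i)

# Query with one of the stored items (a real query would be a new image's
# features): its bucket is a small candidate set that is then scanned exactly.
query = database[42]
candidates = buckets.get(hash_key(query), [])
best = min(candidates, key=lambda i: np.linalg.norm(database[i] - query))
print(best, len(candidates), "of", n)
```

The point of the scheme is the candidate set: only `len(candidates)` exact distance computations are needed instead of `n`, which is what makes lookup sublinear in the number of examples.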
Integrated edge and junction detection with the boundary tensor
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238377
U. Kothe
The boundaries of image regions necessarily consist of edges (in particular, step and roof edges), corners, and junctions. Currently, different algorithms are used to detect each boundary type separately, but the integration of the results into a single boundary representation is difficult. Therefore, a method for the simultaneous detection of all boundary types is needed. We propose to combine responses of suitable polar separable filters into what we will call the boundary tensor. The trace of this tensor is a measure of boundary strength, while the small eigenvalue and its difference to the large one represent corner/junction and edge strengths respectively. We prove that the edge strength measure behaves like a rotationally invariant quadrature filter. A number of examples demonstrate the properties of the new method and illustrate its application to image segmentation.
Citations: 37
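The eigenvalue split described in the abstract, in isolation: the trace of the tensor measures total boundary strength, the small eigenvalue measures corner/junction strength, and the eigenvalue gap measures edge strength. Constructing the tensor itself from polar separable quadrature filters is omitted; we start from given 2x2 symmetric tensors:

```python
import numpy as np

def decompose(T):
    """Split a symmetric 2x2 boundary tensor into boundary / edge / junction
    strengths via its eigenvalues."""
    evals = np.linalg.eigvalsh(T)       # ascending: [small, large]
    boundary = np.trace(T)              # total boundary strength
    junction = evals[0]                 # isotropic part -> corner/junction
    edge = evals[1] - evals[0]          # anisotropic part -> edge
    return boundary, edge, junction

edge_T = np.array([[4.0, 0.0], [0.0, 0.2]])   # one dominant orientation
junc_T = np.array([[2.0, 0.0], [0.0, 2.0]])   # energy in all orientations

print(decompose(edge_T))   # high edge strength, low junction strength
print(decompose(junc_T))   # low edge strength, high junction strength
```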
Segmenting foreground objects from a dynamic textured background via a robust Kalman filter
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238312
Jing Zhong, S. Sclaroff
The algorithm presented aims to segment the foreground objects in video (e.g., people) given time-varying, textured backgrounds. Examples of time-varying backgrounds include waves on water, clouds moving, trees waving in the wind, automobile traffic, moving crowds, escalators, etc. We have developed a novel foreground-background segmentation algorithm that explicitly accounts for the nonstationary nature and clutter-like appearance of many dynamic textures. The dynamic texture is modeled by an autoregressive moving average model (ARMA). A robust Kalman filter algorithm iteratively estimates the intrinsic appearance of the dynamic texture, as well as the regions of the foreground objects. Preliminary experiments with this method have demonstrated promising results.
Citations: 377
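A per-pixel sketch of the robust filtering idea, with a simple random-walk background model standing in for the paper's ARMA dynamic-texture model: measurements whose innovation exceeds a gate are labeled foreground and excluded from the state update, so the foreground does not corrupt the background estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200
background = 100 + np.cumsum(rng.normal(scale=0.2, size=T))  # slow drift
observed = background + rng.normal(scale=1.0, size=T)
observed[120:140] += 25.0                                    # foreground object

x, P = observed[0], 1.0        # state estimate and its variance
q, r = 0.2 ** 2, 1.0           # process and measurement noise variances
gate = 3.0                     # innovation gate, in standard deviations
foreground = np.zeros(T, dtype=bool)

for t in range(1, T):
    P = P + q                          # predict (random-walk dynamics)
    innovation = observed[t] - x
    s = P + r                          # innovation variance
    if abs(innovation) > gate * np.sqrt(s):
        foreground[t] = True           # robust step: reject the outlier and
        continue                       # leave the background state unchanged
    k = P / s                          # Kalman gain
    x = x + k * innovation             # update
    P = (1 - k) * P

print(foreground[125], foreground[50])
```

The paper's filter does the analogous gating on the full ARMA state of the dynamic texture, which is what lets it tolerate waves, foliage, and similar non-stationary backgrounds.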
How to deal with point correspondences and tangential velocities in the level set framework
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238443
Jean-Philippe Pons, G. Hermosillo, R. Keriven, O. Faugeras
In this paper, we overcome a major drawback of the level set framework: the lack of point correspondences. We maintain explicit backward correspondences from the evolving interface to the initial one by advecting the initial point coordinates with the same speed as the level set function. Our method leads to a system of coupled Eulerian partial differential equations. We show in a variety of numerical experiments that it can handle both normal and tangential velocities, large deformations, shocks, rarefactions and topological changes. Applications are many in computer vision and elsewhere since our method can upgrade virtually any level set evolution. We complement our work with the design of non zero tangential velocities that preserve the relative area of interface patches; this feature may be crucial in such applications as computational geometry, grid generation or unfolding of the organs' surfaces, e.g. brain, in medical imaging.
Citations: 32
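A 1D illustration of the advection idea: transport the initial point coordinates q with the same speed as the level set function phi, so the zero crossing of phi always "remembers" which initial point it came from. First-order upwind transport with a constant speed is a minimal sketch of the coupled Eulerian PDEs, not the paper's full scheme:

```python
import numpy as np

n, dx, dt, v = 400, 0.01, 0.005, 1.0     # grid, step sizes, constant speed
x = np.arange(n) * dx
phi = x - 1.0                            # interface (zero crossing) at x = 1
q = x.copy()                             # backward correspondence field

for _ in range(200):                     # advance to t = 1.0 (CFL = 0.5)
    # First-order upwind discretization of u_t + v u_x = 0 (v > 0),
    # applied identically to phi and to the coordinate field q.
    phi[1:] = phi[1:] - dt * v * (phi[1:] - phi[:-1]) / dx
    q[1:] = q[1:] - dt * v * (q[1:] - q[:-1]) / dx

i = np.argmin(np.abs(phi))               # current interface location
print(x[i], q[i])                        # interface near 2.0, q points back to 1.0
```

The interface has moved from x = 1 to x = 2, yet q at the interface still reads 1.0: the explicit backward correspondence that the plain level set method loses.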