
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271): Latest Publications

Hyperbolic "Smoothing" of shapes
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710721
Kaleem Siddiqi, A. Tannenbaum, S. Zucker
We have been developing a theory of generic 2-D shape based on a reaction-diffusion model from mathematical physics. The description of a shape is derived from the singularities of a curve evolution process driven by the reaction (hyperbolic) term. The diffusion (parabolic) term is related to smoothing and shape simplification. However, the unification of the two is problematic, because the slightest amount of diffusion dominates and prevents the formation of generic first-order shocks. The technical issue is whether it is possible to smooth a shape, in any sense, without destroying the shocks. We now report a constructive solution to this problem, by embedding the smoothing term in a global metric against which a purely hyperbolic evolution is performed from the initial curve. This is a new flow for shape that extends the advantages of the original one. Specific metrics are developed, which lead to a natural hierarchy of shape features, analogous to the simplification one might perceive when viewing an object from increasing distances. We illustrate our new flow with a variety of examples.
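The abstract gives no equations, but the core idea, a purely hyperbolic curve evolution whose speed is modulated by a smooth global metric rather than by adding a parabolic diffusion term, can be sketched numerically. The discretization, the example metric, and the step sizes below are illustrative assumptions of this sketch, not the authors' formulation.

```python
import numpy as np

def hyperbolic_evolution(points, metric, dt=0.01, steps=100):
    """Evolve a closed 2-D curve with a purely hyperbolic (constant-sign speed)
    flow whose local speed is modulated by a global metric phi(x, y).
    points: (N, 2) vertices of a closed polygon, counter-clockwise (illustrative only)."""
    pts = points.copy()
    for _ in range(steps):
        # tangents via central differences on the closed curve
        tang = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
        tang /= np.linalg.norm(tang, axis=1, keepdims=True) + 1e-12
        # rotate the tangent to get the outward normal (counter-clockwise curve)
        normals = np.stack([tang[:, 1], -tang[:, 0]], axis=1)
        # hyperbolic step: speed given by the metric alone, no curvature/diffusion term
        speed = metric(pts[:, 0], pts[:, 1])[:, None]
        pts = pts - dt * speed * normals   # inward motion: the shape shrinks under this flow
    return pts

# Example metric: a slowly varying weight, standing in for a smoothing-carrying metric
metric = lambda x, y: 1.0 + 0.1 * np.cos(x) * np.cos(y)

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
shrunk = hyperbolic_evolution(circle, metric)
```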
Citations: 20
A cooperative framework for segmentation using 2D active contours and 3D hybrid models as applied to branching cylindrical structures
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710758
Thomas O'Donnell, M. Jolly, Alok Gupta
Hybrid models are powerful tools for recovery in that they simultaneously provide a gross parametric as well as a detailed description of an object. However, it is difficult to directly employ hybrid models in the segmentation process since they are not guaranteed to locate the optimal boundaries in cross-sectional slices. Propagating 2D active contours from slice to slice, on the other hand, to delineate an object's boundaries is often effective, but may run into problems when the object's topology changes, such as at bifurcations or even in areas of high curvature. Here, we present a cooperative framework to exploit the positive aspects of both 3D hybrid model and 2D active contour approaches for segmentation and recovery. In this framework the user-defined parametric component of a 3D hybrid model provides constraints for a set of 2D segmentations performed by active contours. The same hybrid model is then fit both parametrically and locally to this segmentation. For the hybrid model fit we employ several new variations on the physically-motivated paradigm which seek to speed recovery while guaranteeing stability. A by-product of these variations is an increased generality of the method via the elimination of some of its ad hoc parameters. We apply our cooperative framework to the recovery of branching cylindrical structures from 3D image volumes. The hybrid model we employ has a novel parametric component which is a fusion of individual cylinders. These cylinders have spines that are arbitrary space curves and cross-sections which may be any star shaped planar curve.
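As a rough illustration of the cooperation loop described above, the sketch below lets a parametric cross-section (here simply a circle, standing in for the hybrid model's parametric component) initialize a 2D contour on each slice, refines it greedily against the image gradient, and feeds the result back into the per-slice model estimate. The function names and the greedy refinement are assumptions of this sketch, not the paper's snake or hybrid-model fit.

```python
import numpy as np

def model_cross_section(center, radius, n=64):
    """Initial 2-D contour for one slice predicted by the parametric component
    (a circle here, a stand-in for the hybrid model's cross-section)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.stack([center[0] + radius * np.cos(t),
                     center[1] + radius * np.sin(t)], axis=1)

def refine_contour(contour, grad_mag, search=3):
    """Greedy active-contour step: move each vertex to the strongest gradient
    pixel in a small window around it (a crude stand-in for a full snake)."""
    refined = []
    h, w = grad_mag.shape
    for x, y in contour:
        xi, yi = int(round(x)), int(round(y))
        x0, x1 = max(xi - search, 0), min(xi + search + 1, w)
        y0, y1 = max(yi - search, 0), min(yi + search + 1, h)
        win = grad_mag[y0:y1, x0:x1]
        dy, dx = np.unravel_index(np.argmax(win), win.shape)
        refined.append([x0 + dx, y0 + dy])
    return np.array(refined, dtype=float)

def cooperative_segmentation(volume, centers, radius0):
    """Slice-by-slice cooperation: the model constrains each 2-D segmentation,
    and the segmentations in turn re-estimate the model's per-slice radius."""
    radius, results = radius0, []
    for z, center in enumerate(centers):
        sl = volume[z].astype(float)
        gy, gx = np.gradient(sl)
        grad_mag = np.hypot(gx, gy)
        contour = refine_contour(model_cross_section(center, radius), grad_mag)
        radius = np.mean(np.linalg.norm(contour - np.asarray(center), axis=1))
        results.append(contour)
    return results
```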
Citations: 25
A probabilistic contour discriminant for object localisation
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710748
J. MacCormick, A. Blake
A method of localising objects in images is proposed. Possible configurations are evaluated using the contour discriminant, a likelihood ratio which is derived from a probabilistic model of the feature detection process. We treat each step in this process probabilistically, including the occurrence of clutter features, and derive the observation densities for both correct "target" configurations and incorrect "clutter" configurations. The contour discriminant distinguishes target objects from the background even in heavy clutter, making only the most general assumptions about the form that clutter might take. The method generates samples stochastically to avoid the cost of processing an entire image, and promises to be particularly suited to the task of initialising contour trackers based on sampling methods.
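A minimal version of such a likelihood ratio can be written for a single measurement line under a common generative model: clutter features occur with Poisson spatial density, and the true boundary is detected with some probability and Gaussian positional error. The densities and parameter values below are assumptions of this sketch; the paper's observation model may differ in detail.

```python
import numpy as np

def line_likelihood_ratio(features, mu, sigma, q=0.9, lam=0.1):
    """Ratio p(features | target boundary at mu) / p(features | clutter only)
    for one measurement line crossing a hypothesised contour.
    features : positions of detected edge features along the line
    mu, sigma: predicted boundary position and Gaussian measurement noise
    q        : probability that the true boundary is detected at all
    lam      : spatial density of clutter features (Poisson model)
    All modelling choices here are assumptions of this sketch."""
    z = np.asarray(features, dtype=float)
    gauss = np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return (1.0 - q) + (q / lam) * gauss.sum()

def contour_discriminant(lines):
    """Combine independent measurement lines into one log-discriminant score."""
    return sum(np.log(line_likelihood_ratio(**kw)) for kw in lines)

# Example: one line with a feature near the prediction, one with clutter only
score = contour_discriminant([
    dict(features=[0.2, 3.1], mu=0.0, sigma=0.5),
    dict(features=[2.7], mu=0.0, sigma=0.5),
])
```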
Citations: 61
3D reconstruction with projective octrees and epipolar geometry
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710849
B. Garcia, P. Brunet
In this paper, the problem of generating a 3D octree-like structure with the help of epipolar geometry within a projective framework is addressed. After a brief introduction on the basics of octrees and epipolar geometry, the new concept called "projective octree" is introduced together with an algorithm for building this projective structure. Finally, some results of the implementations are presented in the last section together with the conclusions and future work.
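For reference, the sketch below shows plain silhouette-based octree carving with calibrated projection matrices, i.e. the basic idea of classifying octree cells against multiple views; the paper's projective octree, which works in projective space and exploits epipolar geometry directly, is not reproduced here. Testing only projected cell corners is a deliberate simplification.

```python
import numpy as np

def project(P, X):
    """Project homogeneous 3-D points X (N, 4) with a 3x4 camera matrix P."""
    x = X @ P.T
    return x[:, :2] / x[:, 2:3]

def carve(cell, depth, cameras, silhouettes, max_depth=6):
    """Recursively classify an axis-aligned cell against all silhouettes:
    discard it if some view sees every corner outside the object, keep it as a
    leaf at the maximum depth, otherwise split into 8 children.  This is a
    plain carving sketch, not the paper's projective formulation."""
    (x0, y0, z0), (x1, y1, z1) = cell
    corners = np.array([[x, y, z, 1.0] for x in (x0, x1)
                        for y in (y0, y1)
                        for z in (z0, z1)])
    for P, sil in zip(cameras, silhouettes):
        uv = np.round(project(P, corners)).astype(int)
        h, w = sil.shape
        inside = [0 <= u < w and 0 <= v < h and sil[v, u] for u, v in uv]
        if not any(inside):          # cell projects fully outside this silhouette
            return []
    if depth == max_depth:
        return [cell]
    xm, ym, zm = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    children = [((xa, ya, za), (xb, yb, zb))
                for (xa, xb) in ((x0, xm), (xm, x1))
                for (ya, yb) in ((y0, ym), (ym, y1))
                for (za, zb) in ((z0, zm), (zm, z1))]
    return [leaf for child in children
            for leaf in carve(child, depth + 1, cameras, silhouettes, max_depth)]

# usage: leaves = carve(((-1, -1, -1), (1, 1, 1)), 0, cameras, silhouettes)
```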
Citations: 15
Object tracking using deformable templates
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710756
Yu Zhong, Anil K. Jain, M. Jolly
We propose a novel method for object tracking using prototype-based deformable template models. To track an object in an image sequence, we use a criterion which combines two terms: the deviation of the object shape from its shape in the previous frame, and the fidelity of the detected shape to the input image. Shape and gradient information are used to track the object. We have also used the consistency between corresponding object regions throughout the sequence to help in tracking the object of interest. Inter-frame motion is also used to track the boundary of moving objects. We have applied the algorithm to a number of image sequences from different sources. The inherent structure in the deformable template, together with region, motion, and image gradient cues, makes the algorithm relatively insensitive to the adverse effects of weak image features and moderate partial occlusion.
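The two-term criterion can be illustrated with a toy cost: a shape-deviation term against the previous frame plus an image-fidelity term sampled from the gradient magnitude, minimized here over a small set of candidate translations. The deformation model, the weights, and the search strategy are placeholders for this sketch, not the paper's deformable-template machinery.

```python
import numpy as np

def tracking_energy(shape, prev_shape, grad_mag, alpha=1.0, beta=1.0):
    """Two-term criterion: (i) deviation of the candidate shape from the shape
    in the previous frame and (ii) fidelity to the image, measured here as
    gradient magnitude sampled along the contour.  Illustrative only."""
    deviation = np.sum((shape - prev_shape) ** 2)
    xs = np.clip(np.round(shape[:, 0]).astype(int), 0, grad_mag.shape[1] - 1)
    ys = np.clip(np.round(shape[:, 1]).astype(int), 0, grad_mag.shape[0] - 1)
    fidelity = grad_mag[ys, xs].sum()
    return alpha * deviation - beta * fidelity

def track_by_translation(prev_shape, grad_mag, search=5):
    """Search a small window of translations and keep the lowest-energy candidate."""
    best, best_e = prev_shape, np.inf
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            cand = prev_shape + np.array([dx, dy], dtype=float)
            e = tracking_energy(cand, prev_shape, grad_mag)
            if e < best_e:
                best, best_e = cand, e
    return best
```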
Citations: 197
Recognizing novel 3-D objects under new illumination and viewing position using a small number of example views or even a single view
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710713
E. Sali, S. Ullman
A method is presented for class-based recognition using a small number of example views taken under several different viewing conditions. The main emphasis is on using a small number of examples. Previous work assumed that the set of examples is sufficient to span the entire space of possible objects, and that in generalizing to new viewing conditions a sufficient number of previous examples under the new conditions will be available to the recognition system. Here we have considerably relaxed these assumptions and consequently obtained good class-based generalization from a small number of examples, even a single example view, for both viewing position and illumination changes. In addition, previous class-based approaches only focused on viewing position changes and did not deal with illumination changes. Here we used a class-based approach that can generalize for both illumination and viewing position changes. The method was applied to face and car model images. New views under viewing position and illumination changes were synthesized from a small number of examples.
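The paper's class-based scheme is not spelled out in the abstract; as a related and much simpler point of reference, the sketch below fits a novel view as a linear combination of example views of the same object (the classic linear-combination-of-views idea) and uses the residual as a match score. It does not model the cross-object or illumination generalization that the paper addresses.

```python
import numpy as np

def fit_view_combination(example_views, novel_view):
    """Least-squares coefficients expressing the novel view's feature coordinates
    as a linear combination of the example views' coordinates, plus a constant.
    example_views: list of (N, 2) arrays of corresponding feature points
    novel_view   : (N, 2) array of the same features in the image to explain
    Returns the coefficients and the reconstruction error (match score)."""
    A = np.hstack([v for v in example_views] + [np.ones((len(novel_view), 1))])
    coeffs, _, _, _ = np.linalg.lstsq(A, novel_view, rcond=None)
    reconstruction = A @ coeffs
    error = np.linalg.norm(reconstruction - novel_view)
    return coeffs, error
```

A small error means the novel view is well explained by the stored example views; a large error argues against the hypothesised object.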
Citations: 22
State space construction for behavior acquisition in multi agent environments with vision and action
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710819
E. Uchibe, M. Asada, K. Hosoda
This paper proposes a method which estimates the relationships between learner's behaviors and other agents' ones in the environment through interactions (observation and action) using the method of system identification. In order to identify the model of each agent, Akaike's Information Criterion is applied to the results of Canonical Variate Analysis for the relationship between the observed data in terms of action and future observation. Next, reinforcement learning based on the estimated state vectors is performed to obtain the optimal behavior. The proposed method is applied to a soccer playing situation, where a rolling ball and other moving agents are well modeled and the learner's behaviors are successfully acquired by the method. Computer simulations and real experiments are shown and a discussion is given.
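One ingredient named in the abstract, model-order selection with Akaike's Information Criterion, can be shown in isolation. The sketch below applies AIC to a plain autoregressive predictor of a scalar observation sequence; the paper applies it to the result of Canonical Variate Analysis over observation and action histories, which is not reproduced here.

```python
import numpy as np

def aic_order_selection(y, max_order=8):
    """Choose the order of a linear (AR) predictor for an observed scalar
    sequence by Akaike's Information Criterion: the order minimising
    n*log(residual variance) + 2*p wins.  Simplified illustration only."""
    y = np.asarray(y, dtype=float)
    best_order, best_aic = None, np.inf
    for p in range(1, max_order + 1):
        # regression y_t ~ y_{t-1}, ..., y_{t-p}
        X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
        t = y[p:]
        coef, _, _, _ = np.linalg.lstsq(X, t, rcond=None)
        resid = t - X @ coef
        n = len(t)
        sigma2 = np.mean(resid ** 2)
        aic = n * np.log(sigma2 + 1e-12) + 2 * p
        if aic < best_aic:
            best_order, best_aic = p, aic
    return best_order, best_aic
```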
Citations: 25
A chromaticity space for specularity, illumination color- and illumination pose-invariant 3-D object recognition
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710714
Daniel Berwick, S. W. Lee
Most of the recent color recognition/indexing approaches concentrate on establishing invariance to illumination color to improve the utility of color recognition. However, other effects caused by illumination pose and specularity on three-dimensional object surfaces have not received notable attention. We present a chromaticity recognition method that discounts the effects of illumination pose, illumination color and specularity. It utilizes a chromaticity space based on log-ratio of sensor responses for illumination pose and color invariance. A model-based specularity detection/rejection algorithm can be used to improve the chromaticity recognition and illumination estimation for objects including specular reflections.
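A generic log-ratio chromaticity can be written in a few lines: taking logarithms of band ratios turns a diagonal (per-channel) illumination change into a constant offset, which differencing removes. The specific space and invariances constructed in the paper may be defined differently; this is only the generic construction.

```python
import numpy as np

def log_ratio_chromaticity(rgb, eps=1e-6):
    """Map RGB sensor responses to a 2-D chromaticity (log R/G, log B/G).
    Under a diagonal illumination-change model and for diffuse surfaces, an
    illumination colour change adds the same offset to every pixel in this
    space.  Generic construction, not necessarily the paper's exact space."""
    rgb = np.asarray(rgb, dtype=float) + eps
    return np.stack([np.log(rgb[..., 0] / rgb[..., 1]),
                     np.log(rgb[..., 2] / rgb[..., 1])], axis=-1)

# Example: the same surface under two illuminants related by diagonal scaling
surface = np.array([0.4, 0.3, 0.2])
illum_a, illum_b = np.array([1.0, 1.0, 1.0]), np.array([1.4, 1.0, 0.7])
ca = log_ratio_chromaticity(surface * illum_a)
cb = log_ratio_chromaticity(surface * illum_b)
offset = cb - ca   # a constant shift, independent of the surface
```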
Citations: 48
Euclidean structure from uncalibrated images using fuzzy domain knowledge: application to facial images synthesis
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710807
Zhengyou Zhang, K. Isono, S. Akamatsu
Use of uncalibrated images has found many applications such as image synthesis. However, it is not easy to specify the desired position of the new image in projective or affine space. This paper proposes to recover Euclidean structure from uncalibrated images using domain knowledge such as distances and angles. The knowledge we have is usually about an object category, but not very precise for the particular object being considered. The variation (fuzziness) is modeled as a Gaussian variable. Six types of common knowledge are formulated. Once we have an Euclidean description, the task to specify the desired position in Euclidean space becomes trivial. The proposed technique is then applied to synthesis of new facial images. A number of difficulties existing in image synthesis are identified and solved. For example, we propose to use edge points to deal with occlusion.
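The "fuzzy" knowledge can be illustrated as soft constraints: each distance or angle carries a mean and a standard deviation (the Gaussian fuzziness), and residuals are normalized by that deviation in a least-squares cost. The sketch below shows only this knowledge term over point positions; the paper combines such constraints with the projective reconstruction to recover the Euclidean upgrade, which is not modeled here. The example constraints and the initialization are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def knowledge_cost(points_flat, distance_constraints, angle_constraints):
    """Weighted least-squares cost for 'fuzzy' Euclidean knowledge.
    distance_constraints: (i, j, mean, std) for point pairs
    angle_constraints   : (i, j, k, mean_radians, std) for the angle at j."""
    P = points_flat.reshape(-1, 3)
    cost = 0.0
    for i, j, mean, std in distance_constraints:
        cost += ((np.linalg.norm(P[i] - P[j]) - mean) / std) ** 2
    for i, j, k, mean, std in angle_constraints:
        u, v = P[i] - P[j], P[k] - P[j]
        cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        ang = np.arccos(np.clip(cosang, -1.0, 1.0))
        cost += ((ang - mean) / std) ** 2
    return cost

# Hypothetical usage: refine rough point estimates so they respect the knowledge
init = np.random.default_rng(0).normal(size=(4, 3))
dists = [(0, 1, 1.0, 0.05), (1, 2, 1.0, 0.05)]
angles = [(0, 1, 2, np.pi / 2, 0.1)]
res = minimize(knowledge_cost, init.ravel(), args=(dists, angles))
points = res.x.reshape(-1, 3)
```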
Citations: 17
Agent orientated annotation in model based visual surveillance
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710817
Paolo Remagnino, T. Tan, K. Baker
The paper presents an agent based surveillance system for use in monitoring scenes involving both pedestrians and vehicles. The system supplies textual descriptions for the dynamic activity occurring in the 3D world. These are derived by means of dynamic and probabilistic inference based on geometric information provided by a vision system that tracks vehicles and pedestrians. The symbolic scene annotation is given at two major levels of description: the object level and the inter-object level. At object level, each tracked pedestrian or vehicle is assigned a behaviour agent which uses a Bayesian network to infer the fundamental features of the objects' trajectory, and continuously updates its textual description. The inter-object interaction level is interpreted by a situation agent which is created dynamically when two objects are in close proximity. In the work included here the situation agent can describe a two-object interaction in terms of basic textual annotations, to summarise the dynamics of the local action.
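The per-object behaviour agent can be caricatured as a small discrete Bayesian update: evidence extracted from the tracked trajectory (speed, turning) updates a posterior over behaviour labels, and the most probable label is emitted as a textual annotation. The variables, labels, and probability tables below are purely illustrative, not taken from the paper's network.

```python
def infer_behaviour(evidence, priors, likelihoods):
    """Tiny discrete Bayesian inference for a behaviour agent: multiply the
    prior by per-variable likelihoods, normalise, and return a textual
    annotation for the most probable behaviour label."""
    post = dict(priors)
    for var, value in evidence.items():
        for label in post:
            post[label] *= likelihoods[var][label].get(value, 1e-6)
    total = sum(post.values())
    post = {k: v / total for k, v in post.items()}
    best = max(post, key=post.get)
    return f"object is probably {best} (p={post[best]:.2f})", post

priors = {"walking": 0.5, "waiting": 0.3, "running": 0.2}
likelihoods = {
    "speed": {"walking": {"slow": 0.7, "fast": 0.3},
              "waiting": {"slow": 0.95, "fast": 0.05},
              "running": {"slow": 0.1, "fast": 0.9}},
    "turn":  {"walking": {"none": 0.6, "sharp": 0.4},
              "waiting": {"none": 0.9, "sharp": 0.1},
              "running": {"none": 0.8, "sharp": 0.2}},
}
annotation, posterior = infer_behaviour({"speed": "fast", "turn": "none"}, priors, likelihoods)
```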
Citations: 101