
Latest publications: Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)

Quadric reconstruction from dual-space geometry
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710697
G. Cross, Andrew Zisserman
We describe the recovery of a quadric surface from its image in two or more perspective views. The recovered quadric is used in 3D modeling and image registration applications. There are three novel contributions. First, it is shown that a one parameter family of quadrics is recovered from outlines in two views. The ambiguity is reduced to twofold given a point correspondence. There is no ambiguity from outlines in three or more views. Second, it is shown that degenerate quadrics reduce the ambiguity of reconstruction. Third, it is shown that surfaces can be piecewise quadric approximated from piecewise conic approximations of their outlines. All these cases are illustrated by examples with real images. Implementation details are given and the quality of the results is assessed.
Citations: 98
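As background to the dual-space formulation, the projection of a quadric outline is usually written in terms of dual (adjoint) matrices; this standard relation is stated here as context, not reproduced from the paper itself:

```latex
% Outline of a quadric under camera P_i, in dual space:
% Q^* is the 4x4 symmetric dual quadric, C_i^* the dual of the outline conic.
C_i^{*} \simeq P_i \, Q^{*} \, P_i^{\mathsf{T}}
% Q^* has 9 degrees of freedom up to scale; each view's outline conic
% constrains it, and (per the abstract) two views leave a one-parameter
% family of quadrics, resolved by a third view or a point correspondence.
```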
A general framework for object detection
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710772
C. Papageorgiou, Michael Oren, T. Poggio
This paper presents a general trainable framework for object detection in static images of cluttered scenes. The detection technique we develop is based on a wavelet representation of an object class derived from a statistical analysis of the class instances. By learning an object class in terms of a subset of an overcomplete dictionary of wavelet basis functions, we derive a compact representation of an object class which is used as input to a support vector machine classifier. This representation overcomes the problem of in-class variability and provides a low false-detection rate in unconstrained environments. We demonstrate the capabilities of the technique in two domains whose inherent information content differs significantly. The first is face detection; the second is the domain of people, who, in contrast to faces, vary greatly in color, texture, and patterns. Unlike previous approaches, this system learns from examples and does not rely on any a priori (hand-crafted) models or motion-based segmentation. The paper also presents a motion-based extension to enhance the performance of the detection algorithm over video sequences. The results presented here suggest that this architecture may well be quite general.
Citations: 1712
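A toy sketch of the kind of wavelet-like block-difference features such a detector might feed to a classifier (hypothetical helper in pure Python; the paper's actual overcomplete dictionary, training procedure, and SVM are not reproduced here):

```python
def haar_features(patch):
    """Three Haar-wavelet-like responses (vertical, horizontal, diagonal)
    over a grayscale patch, a toy stand-in for one entry of an
    overcomplete wavelet dictionary. patch is a list of equal-length
    rows; both dimensions are assumed even."""
    h, w = len(patch), len(patch[0])

    def block_sum(r0, r1, c0, c1):
        return sum(patch[r][c] for r in range(r0, r1) for c in range(c0, c1))

    left, right = block_sum(0, h, 0, w // 2), block_sum(0, h, w // 2, w)
    top, bot = block_sum(0, h // 2, 0, w), block_sum(h // 2, h, 0, w)
    tl = block_sum(0, h // 2, 0, w // 2)
    br = block_sum(h // 2, h, w // 2, w)
    tr = block_sum(0, h // 2, w // 2, w)
    bl = block_sum(h // 2, h, 0, w // 2)
    # (vertical-edge, horizontal-edge, diagonal) responses:
    return (left - right, top - bot, (tl + br) - (tr + bl))

# A vertical edge: bright left half, dark right half.
edge = [[10, 10, 0, 0]] * 4
responses = haar_features(edge)
```

In a detector along these lines, vectors of such responses over many positions and scales would be the input to the support vector machine.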
Achieving a Fitts law relationship for visual guided reaching
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710824
N. Ferrier
In order to take advantage of the top speed of manipulators, vision cannot be tightly integrated into the motion control loop. Past visual servo control systems have performed satisfactorily under this constraint; however, it can be shown that task execution time can be reduced if the vision system is decoupled from the low-level motor control system. For reaching, there is a trade-off between the accuracy of a motion and the time required to execute it. In studies of human motor control this trade-off is quantified by Fitts' law, a relationship between movement time, target distance, and target width. These studies suggest that vision is not used tightly within the control loop, i.e. as a sensor that is servoed on, but rather to determine where the reaching target is and whether the target has been reached successfully. Through a simple robotic example we demonstrate that a similar trade-off exists between motion accuracy and motion execution time for visually guided robot reaching motions.
Citations: 9
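Fitts' law itself is compact enough to state in code; a minimal sketch, with the constants a and b chosen arbitrarily here (real values must be fitted empirically per motor system):

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time under Fitts' law: MT = a + b * log2(2D / W),
    where D is the distance to the target and W the target width.
    a and b are hypothetical fitted constants."""
    return a + b * math.log2(2.0 * distance / width)

# A farther or narrower target raises the index of difficulty
# log2(2D/W), and with it the predicted movement time.
near_wide = fitts_mt(distance=10.0, width=5.0)
far_narrow = fitts_mt(distance=40.0, width=1.25)
```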
Computing Ritz approximations of primary images
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710711
H. Schweitzer
Ritz vectors approximate the eigenvectors that are a common choice for primary images in content-based indexing. They can be computed efficiently even when the images are accessed through slow communication such as the Internet. We develop an algorithm that computes Ritz vectors in one pass through the images. When iterated, the algorithm can recover the exact eigenvectors. In applications to image indexing and learning it may be necessary to compute primary images for indexing many sub-categories of the image set. The proposed algorithm can compute these from the same image data. Similar computation by other algorithms is much more costly even when access to the images is inexpensive.
Citations: 7
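The Rayleigh–Ritz step that gives Ritz values their name can be illustrated on a toy example; this is the generic projection step (eigenvalues of Qᵀ A Q for an orthonormal basis Q of a subspace), not the paper's one-pass streaming algorithm:

```python
import math

def ritz_values_2d(A, q1, q2):
    """Ritz values of a symmetric matrix A on the subspace span{q1, q2}
    (q1, q2 assumed orthonormal): the eigenvalues of the 2x2 projected
    matrix Q^T A Q, computed here in closed form."""
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    Aq1, Aq2 = matvec(A, q1), matvec(A, q2)
    a, b, d = dot(q1, Aq1), dot(q1, Aq2), dot(q2, Aq2)
    # Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]].
    mid = (a + d) / 2.0
    rad = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return (mid - rad, mid + rad)

# For a diagonal A and axis-aligned basis vectors, the Ritz values
# coincide with exact eigenvalues of A restricted to that subspace.
A = [[3.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 5.0]]
vals = ritz_values_2d(A, [1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```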
Ego-motion and omnidirectional cameras
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710838
J. Gluckman, S. Nayar
Recent research in image sensors has produced cameras with very large fields of view. An area of computer vision research that will benefit from this technology is the computation of camera motion (ego-motion) from a sequence of images. Traditional cameras suffer from the problem that the direction of translation may lie outside the field of view, making the computation of camera motion sensitive to noise. In this paper, we present a method for the recovery of ego-motion using omnidirectional cameras. Noting the relationship between spherical projection and wide-angle imaging devices, we propose mapping the image velocity vectors to a sphere, using the Jacobian of the transformation between the camera's projection model and spherical projection. Once the velocity vectors are mapped to a sphere, we show how existing ego-motion algorithms can be applied and present some experimental results. These results demonstrate the ability to compute ego-motion with omnidirectional cameras.
Citations: 259
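The mapping onto a viewing sphere can be sketched for the simple case of a perspective image point (the paper additionally maps velocity vectors through the Jacobian of this transformation, which is not shown here):

```python
import math

def to_sphere(x, y, f):
    """Map an image point (x, y) from a perspective camera with focal
    length f to a unit ray on the viewing sphere, by normalizing the
    back-projected ray (x, y, f)."""
    n = math.sqrt(x * x + y * y + f * f)
    return (x / n, y / n, f / n)

# 3-4-12-13 Pythagorean quadruple: the ray (3, 4, 12) has norm 13.
ray = to_sphere(3.0, 4.0, 12.0)
```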
Finding periodicity in space and time
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710746
Fang Liu, Rosalind W. Picard
An algorithm for simultaneous detection, segmentation, and characterization of spatiotemporal periodicity is presented. The use of periodicity templates is proposed to localize and characterize temporal activities. The templates not only indicate the presence and location of a periodic event, but also give an accurate quantitative periodicity measure. Hence, they can be used as a new means of periodicity representation. The proposed algorithm can also be considered as a "periodicity filter", a low-level model of periodicity perception. The algorithm is computationally simple, and shown to be more robust than optical flow based techniques in the presence of noise. A variety of real-world examples are used to demonstrate the performance of the algorithm.
Citations: 146
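A minimal illustration of detecting temporal periodicity in a 1-D signal, using plain autocorrelation rather than the paper's periodicity templates:

```python
def dominant_period(signal, min_lag=2):
    """Estimate the dominant period of a 1-D signal as the lag (beyond
    min_lag) that maximizes the autocorrelation of the mean-removed
    signal. A toy sketch, not the paper's template method."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]
    best_lag, best_score = min_lag, float("-inf")
    for lag in range(min_lag, n // 2):
        score = sum(x[i] * x[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A signal that repeats every 5 samples:
sig = [0, 1, 2, 1, 0] * 8
```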
Face surveillance
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710786
S. Gutta, Jeffrey R. Huang, Vishal Kakkad, H. Wechsler
Most of the research on face recognition addresses the MATCH problem and assumes a closed universe where there is no need for a REJECT ('false positive') option. The SURVEILLANCE problem is addressed indirectly, if at all, through the MATCH problem, where the size of the gallery rather than that of the probe set is very large. This paper addresses the proper surveillance problem, where the size of the probe ('unknown image') set vs. the gallery ('known image') set is 450 vs. 50 frontal images. We developed robust face ID verification ('classification') and retrieval schemes based on hybrid classifiers and showed their feasibility using the FERET face database. The hybrid classifier architecture consists of an ensemble of connectionist networks (Radial Basis Functions, RBF) and inductive decision trees (DT). Experimental results prove the feasibility of our approach and yield 97% accuracy using the probe and gallery sets specified above.
Citations: 5
Constructing virtual worlds using dense stereo
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710694
P J Narayanan, P. Rander, T. Kanade
We present Virtualized Reality, a technique to create virtual worlds out of dynamic events using densely distributed stereo views. The intensity image and depth map for each camera view at each time instant are combined to form a Visible Surface Model. Immersive interaction with the virtualized event is possible using a dense collection of such models. Additionally, a Complete Surface Model of each instant can be built by merging the depth maps from different cameras into a common volumetric space. The corresponding model is compatible with traditional virtual models and can be interacted with immersively using standard tools. Because both VSMs and CSMs are fully three-dimensional, virtualized models can also be combined and modified to build larger, more complex environments, an important capability for many non-trivial applications. We present results from 3D Dome, our facility to create virtualized models.
Citations: 310
Iterative multi-step explicit camera calibration
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710795
Jorge Batista, Helder Sabino de Araújo, A. T. Almeida
Perspective camera calibration has been a research subject for a large group of researchers over the last decades, and as a result several camera calibration methodologies can be found in the literature. However, only a small number of those methods base their approaches on the use of monoplane calibration points. This paper describes one such methodology, which uses monoplane calibration points to realize an explicit 3D camera calibration. To avoid the singularity of the calibration equations when monoplane calibration points are used, this method computes the calibration parameters in a multi-step procedure and requires a first-guess solution for the intrinsic parameters. These parameters are updated and their accuracy increased through an iterative procedure. A stability analysis as a function of the pose of the camera is presented. Camera pose view strategies for accurate camera orientation computation can be extracted from the pose view stability analysis.
Citations: 65
Local symmetries of shapes in arbitrary dimension
Pub Date : 1998-01-04 DOI: 10.1109/ICCV.1998.710857
S. Tari, J. Shah
Motivated by a need to define an object-centered reference system determined by the most salient characteristics of the shape, many methods have been proposed, all of which directly or indirectly involve an axis about which the shape is locally symmetric. Recently, a function $v$, called "the edge strength function", has been successfully used to determine efficiently the axes of local symmetries of 2-d shapes. The level curves of $v$ are interpreted as successively smoother versions of the initial shape boundary. The local minima of the absolute gradient $\|\nabla v\|$ along the level curves of $v$ are shown to be a robust criterion for determining the shape skeleton. More generally, at an extremal point of $\|\nabla v\|$ along a level curve, the level curve is locally symmetric with respect to the gradient vector $\nabla v$. That is, at such a point, the level curve is approximately a conic section one of whose principal axes coincides with the gradient vector. Thus, the locus of the extremal points of $\|\nabla v\|$ along the level curves determines the axes of local symmetries of the shape. In this paper, we extend this method to shapes of arbitrary dimension.
Citations: 43