
Latest publications from the 2003 Conference on Computer Vision and Pattern Recognition Workshop

Accurately Estimating Sherd 3D Surface Geometry with Application to Pot Reconstruction
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10014
A. Willis, Xavier Orriols, D. Cooper
This paper deals with the problem of precise automatic estimation of the surface geometry of pot sherds uncovered at archaeological excavation sites using dense 3D laser-scan data. Critical to ceramic fragment analysis is the ability to geometrically classify excavated sherds, and, if possible, reconstruct the original pots using the sherd fragments. To do this, archaeologists must estimate the pot geometry in terms of an axis and associated profile curve from the discovered fragments. In this paper, we discuss an automatic method for accurately estimating an axis/profile curve pair for each archaeological sherd (even when they are small) based on axially symmetric implicit polynomial surface models. Our method estimates the axis/profile curve for a sherd by finding the axially symmetric algebraic surface which best fits the measured set of dense 3D points and associated normals. We note that this method will work on 3D point data alone and does not require any local surface computations such as differentiation. Axis/profile curve estimates are accompanied by a detailed statistical error analysis. Estimation and error analysis are illustrated with application to a number of sherds. These fragments, excavated from Petra, Jordan, are chosen as exemplars of the families of geometrically diverse sherds commonly found on an archaeological excavation site. We then briefly discuss how the estimation results may be integrated into a larger pot reconstruction program.
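The core fitting step, choosing the algebraic surface whose implicit polynomial best explains the measured 3D points, can be illustrated with a minimal sketch (not the authors' implementation): a least-squares algebraic fit of a general quadric f(x,y,z)=0, recovering the coefficient vector as the smallest right singular vector of the monomial design matrix. The sphere test data and all names are illustrative.

```python
import numpy as np

def fit_quadric(points):
    """Least-squares algebraic fit of a quadric surface f(x,y,z) = 0.

    Minimizes sum f(p)^2 over the coefficient vector c with ||c|| = 1;
    the minimizer is the right singular vector of the design matrix
    with the smallest singular value.
    """
    x, y, z = points.T
    M = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z,
                         x, y, z, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[-1]          # coefficients of the best-fit quadric

# Points sampled on a unit sphere, the simplest axially symmetric quadric.
rng = np.random.default_rng(0)
d = rng.normal(size=(500, 3))
pts = d / np.linalg.norm(d, axis=1, keepdims=True)

c = fit_quadric(pts)
x, y, z = pts.T
M = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones_like(x)])
residual = np.abs(M @ c).max()   # algebraic residual, ~0 for exact data
```

For exact samples the design matrix has a true null vector, so the residual is at machine precision; for noisy scans the same smallest singular vector gives the best algebraic fit.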
Citations: 73
An Efficient Dynamic Multi-Angular Feature Points Matcher for Catadioptric Views
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10081
S. Ieng, R. Benosman, J. Devars
A new efficient matching algorithm dedicated to catadioptric sensors is proposed in this paper. The presented approach is designed to overcome the varying resolution of the mirror. The aim of this work is to provide a matcher that gives results as reliable as those obtained by classical operators on planar projection images. The matching is based on dynamically sized window extraction, computed from the viewing angular aperture of the neighborhood around the points of interest. Angular scaling of this aperture yields several different neighborhood resolutions around the same point. A combinatory cost method is introduced to determine the best match between the different angular neighborhood patches of two interest points. Results are presented on sparse matched corner points, which can be used to estimate the epipolar geometry of the scene and so provide a dense 3D map of the observed environment.
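The comparison of a point's neighborhood at several resolutions can be sketched minimally (this is not the paper's matcher): patches extracted at different angular scales are resampled to a common size and scored with zero-mean normalized cross-correlation, and the combinatory cost is taken as the best score over all scale pairs. The function names and the nearest-neighbour resampling are assumptions.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def resample(patch, size):
    """Nearest-neighbour resampling of a square patch to size x size."""
    idx = (np.arange(size) * patch.shape[0] / size).astype(int)
    return patch[np.ix_(idx, idx)]

def best_angular_match(patches_a, patches_b, size=16):
    """Score over all pairs of angular-scale patches: the match cost is
    the best ZNCC over every (scale_i, scale_j) combination."""
    return max(zncc(resample(pa, size), resample(pb, size))
               for pa in patches_a for pb in patches_b)
```

Comparing every scale of one point against every scale of the other makes the score robust to the mirror's radially varying resolution: two views of the same surface patch need not be sampled at the same image scale.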
Citations: 19
P-Net: A Representation for Partially-Sequenced, Multi-stream Activity
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10037
Yifan Shi, A. Bobick
In this paper, we devise a Propagation Net (P-Net) as a new mechanism for the representation and recognition of multi-stream activity. Most daily activities can be represented by temporally partially ordered intervals, where each interval carries not only temporal constraints (before/after/duration) but also logical relationships, such as that a and b must both happen. P-Net associates with each interval a node whose triggering is a probabilistic function of the state of its parent nodes. Each node is also associated with an observation distribution function that ties it to perceptual evidence. This evidence, generated by lower-level vision modules, is a positive indicator of the elemental action. Using this architecture, we devise an iterative temporal sequencing algorithm that interprets a multi-dimensional observation sequence of visual evidence as a multi-stream propagation through the P-Net. Simple vision and motion-capture data experiments demonstrate the capabilities of our algorithm.
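One plausible reading of the probabilistic triggering function (an assumption for illustration, not the paper's exact form) is a noisy-AND over the parent intervals: the node can fire only when all logically required parents have completed, each succeeding independently, with a small leak probability for spontaneous firing.

```python
def noisy_and_trigger(parent_probs, leak=0.01):
    """P(node fires) under a noisy-AND of its parents: every required
    parent interval must have completed (a and b must both happen),
    each contributing its completion probability independently."""
    p = 1.0
    for q in parent_probs:
        p *= q
    # A small leak lets the node fire even when evidence for a parent
    # was missed by the lower-level vision modules.
    return leak + (1.0 - leak) * p
```

With both parents certain the node fires with probability 1; with one parent absent the trigger probability collapses to the leak term, encoding the hard logical AND softly.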
Citations: 8
GWINDOWS: Towards Robust Perception-Based UI
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10048
Andrew D. Wilson, Nuria Oliver
Perceptual user interfaces promise modes of fluid computer-human interaction that complement the mouse and keyboard, and have been especially motivated in non-desktop scenarios, such as kiosks or smart rooms. Such interfaces, however, have been slow to see use for a variety of reasons, including the computational burden they impose, a lack of robustness outside the laboratory, unreasonable calibration demands, and a shortage of sufficiently compelling applications. We have tackled some of these difficulties by using a fast stereo vision algorithm for recognizing hand positions and gestures. Our system uses two inexpensive video cameras to extract depth information. This depth information enhances automatic object detection and tracking robustness, and may also be used in applications. We demonstrate the algorithm in combination with speech recognition to perform several basic window management tasks, report on a user study probing the ease of using the system, and discuss the implications of such a system for future user interfaces.
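The depth extraction that a two-camera system of this kind relies on reduces, for a rectified stereo pair, to the standard triangulation relation Z = fB/d. A minimal sketch with illustrative numbers (not the system's actual parameters):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d,
    with focal length f in pixels, baseline B in meters, disparity d
    in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# e.g. a 500 px focal length and 10 cm baseline: 25 px disparity -> 2 m.
z = depth_from_disparity(500.0, 0.10, 25.0)
```

Because depth falls off as 1/d, nearby hands produce large, easily measured disparities, which is what makes inexpensive stereo practical for hand tracking at interaction distances.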
Citations: 15
Design and Use of an In-Museum System for Artifact Capture
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10005
H. Rushmeier, José Gomes, F. Giordano, H. El-Shishiny, Karen A. Magerlein, F. Bernardini
We describe the design and use of a 3D scanning system currently installed in Cairo's Egyptian Museum. The primary purpose of the system is to capture objects for display on a web site communicating Egyptian culture. The system is designed to capture both the geometry and photometry of the museum artifacts. We describe special features of the system and the calibration procedures designed for it. We also present the resulting scans and examples of how they will be used on the web site.
Citations: 18
Learning and Perceptual Interfaces
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10053
T. Poggio
The ill-posed problem of learning is one of the main gateways to making intelligent machines and to understanding how the brain works. In this talk I will give an up-to-date outline of some of our recent efforts in developing machines that learn, especially in the context of visual interfaces. Our work on statistical learning theory is being applied to classification (and regression) in various domains -- and in particular to applications in computer vision and computer graphics. In this talk, I will summarize our work on trainable, hierarchical classifiers for problems in object recognition and especially for face and person detection. I will also describe how we used the same learning techniques to synthesize a photorealistic animation of a talking human face. Finally, I will speculate briefly on the implication of our research on how visual cortex learns to recognize and perceive objects and on related work on brain-machines interfaces.
Citations: 0
Calibration of A Structured Light-Based Stereo Catadioptric Sensor
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10084
R. Orghidan, J. Salvi, E. Mouaddib
Catadioptric sensors combine mirrors and lenses to obtain a wide field of view. In this paper we propose a new sensor that has omnidirectional viewing ability and also provides depth information about the nearby surroundings. The sensor is based on a conventional camera coupled with a laser emitter and two hyperbolic mirrors. Mathematical formulation and precise specifications of the intrinsic and extrinsic parameters of the sensor are discussed. Our approach overcomes limitations of the existing omnidirectional sensors and eventually leads to reduced production costs.
Citations: 26
Automatic reference linking in distributed digital libraries
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10026
K. Dennis, G. Michler, G. Schneider, M. Suzuki
In this article we describe new methods for the automatic recognition of the bibliographical data of cited articles in retrodigitized mathematical journal articles, and their use for the automatic production of links to the corresponding reviews in MathSciNet and Zentralblatt MATH. Thus whenever one of these two review journals has a permanent link from the review to the digital full text of the cited article, the full text can automatically be retrieved, searched and printed. The new links from a digital document to the mathematics databases help enlarge the existing distributed digital mathematics library. Examples of retrodigitized articles with automatic links in PDF and also in DjVu format are presented.
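Recognizing bibliographical data amounts to parsing fields such as author, title, journal, volume, and year out of a citation string. A minimal sketch under one assumed citation layout (the formats actually handled by the described system are not specified here, and the layout below is a hypothetical simplification):

```python
import re

# Hypothetical citation layout: "Author(s), Title, Journal vol (year), pages."
CITATION_RE = re.compile(
    r"^(?P<authors>[^,]+),\s*"        # author field up to the first comma
    r"(?P<title>.+?),\s*"             # title (lazy, up to the next comma)
    r"(?P<journal>.+?)\s+"            # journal name
    r"(?P<volume>\d+)\s*"             # volume number
    r"\((?P<year>\d{4})\),\s*"        # four-digit year in parentheses
    r"(?P<pages>[\d\-]+)\.?$"         # page range
)

def parse_citation(s):
    """Return the bibliographical fields of a citation, or None."""
    m = CITATION_RE.match(s.strip())
    return m.groupdict() if m else None

ref = "E. Artin, Galois Theory, Notre Dame Math. Lectures 2 (1944), 1-82."
fields = parse_citation(ref)
```

Once the fields are isolated, a journal/volume/year/page query against a review database can locate the matching review; real citation strings are far messier, which is why robust recognition is the hard part of the system.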
Citations: 10
Learning Visual Feature Detectors for Obstacle Avoidance using Genetic Programming
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10066
Andrew J. Marek, W. Smart, Martin C. Martin
In this paper, we describe the use of Genetic Programming (GP) techniques to learn a visual feature detector for a mobile robot navigation task. We provide experimental results across a number of different environments, each with different characteristics, and draw conclusions about the performance of the learned feature detector. We also explore the utility of seeding the initial population with a previously evolved individual, and discuss the performance of the resulting individuals.
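Seeding the initial population with a previously evolved individual can be sketched as follows; the expression-tree representation, the primitive set, and all names are illustrative assumptions, not the authors' GP setup:

```python
import random

random.seed(0)

OPS = ("add", "sub", "mul")

def random_tree(depth):
    """Grow a random expression tree over two pixel-statistic inputs x0, x1."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(("x0", "x1", round(random.uniform(-1, 1), 3)))
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x0, x1):
    """Evaluate an expression tree on concrete input values."""
    if tree == "x0":
        return x0
    if tree == "x1":
        return x1
    if isinstance(tree, float):
        return tree
    op, a, b = tree
    a, b = evaluate(a, x0, x1), evaluate(b, x0, x1)
    return a + b if op == "add" else a - b if op == "sub" else a * b

def initial_population(size, seed_individual=None):
    """Random initial population, optionally seeded with a previously
    evolved individual so evolution resumes from a known-good detector."""
    pop = [random_tree(3) for _ in range(size)]
    if seed_individual is not None:
        pop[0] = seed_individual
    return pop

evolved = ("mul", "x0", ("sub", "x1", 0.5))   # a stand-in for a prior run's best
pop = initial_population(50, seed_individual=evolved)
```

Seeding biases the search toward a region that already scored well, trading some diversity for a head start in a new but related environment.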
Citations: 9
ARGMode - Activity Recognition using Graphical Models
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10039
Raffay Hamid, Yan Huang, Irfan Essa
This paper presents a new framework for tracking and recognizing complex multi-agent activities using probabilistic tracking coupled with graphical models for recognition. We employ a statistical-feature-based particle filter to robustly track multiple objects in cluttered environments. Both color and shape characteristics are used to differentiate and track different objects so that low-level visual information can be reliably extracted for recognition of complex activities. These extracted spatio-temporal features are then used to build temporal graphical models for characterization of these activities. We demonstrate through examples in different scenarios the generalizability and robustness of our framework.
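The tracking core is the standard bootstrap particle filter cycle of predict, weight, and resample. A minimal one-dimensional sketch, with a Gaussian observation likelihood standing in for the paper's color/shape likelihoods (all parameters and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, observe, motion_std=1.0):
    """One predict/weight/resample cycle of a bootstrap particle filter."""
    # Predict: diffuse particles under a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: score each particle by the observation likelihood.
    weights = weights * observe(particles)
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a stationary object at x = 5 with a Gaussian observation likelihood.
target = 5.0
likelihood = lambda xs: np.exp(-0.5 * ((xs - target) / 0.5) ** 2)

particles = rng.uniform(-20.0, 20.0, size=2000)
weights = np.full(2000, 1.0 / 2000)
for _ in range(30):
    particles, weights = particle_filter_step(particles, weights, likelihood)
# The particle cloud concentrates near the target position.
```

In the multi-object image setting the state would be a position/scale vector per object and the likelihood a color-histogram and shape score, but the cycle is the same.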
Citations: 61