
Latest Publications: The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)

Evolving a Vision-Based Line-Following Robot Controller
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.32
J. Dupuis, M. Parizeau
This paper presents an original framework for evolving a vision-based mobile robot controller using genetic programming. This framework is built on the Open BEAGLE framework for the evolutionary computations, and on OpenGL for simulating the visual environment of a physical mobile robot. The feasibility of this framework is demonstrated through a simple, yet non-trivial, line following problem.
Citations: 39
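The evolutionary loop behind such a framework can be sketched without Open BEAGLE or OpenGL. The toy below evolves arithmetic expression trees mapping a line-offset sensor reading to a steering command; the target policy (steer = -2 * offset), the operator set, and all parameters are illustrative assumptions, not taken from the paper.

```python
import random

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def rand_tree(depth=3):
    """Random expression tree over {+,-,*}, the sensor input 'x', and constants."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.uniform(-2, 2)
    op = random.choice(sorted(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Mean squared error against the illustrative target steer = -2*offset."""
    xs = [i / 10.0 for i in range(-10, 11)]
    return sum((evaluate(tree, x) + 2.0 * x) ** 2 for x in xs) / len(xs)

def mutate(tree):
    """Replace a random subtree with a fresh random one."""
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return rand_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(generations=30, pop_size=40):
    random.seed(1)
    pop = [rand_tree() for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=fitness)
        history.append(fitness(pop[0]))
        elite = pop[:pop_size // 4]          # elitism: keep the best quarter
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=fitness), history
```

Because the elite survive each generation unchanged, the best fitness is non-increasing across generations; a real GP system such as the paper's would add crossover and evaluate fitness in the simulated visual environment instead of against a fixed target.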
User Authentication based on Face Recognition with Support Vector Machines
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.83
Paolo Abeni, M. Baltatu, Rosalia D'Alessandro
The present paper proposes an authentication scheme which relies on face biometrics and one-class Support Vector Machines. The proposed recognition procedures are based both on a global approach and on a combination of global and component-based approaches. Two different feature extraction methods and three light compensation algorithms are tested. The combined system outperforms the global system and yields a significant performance enhancement with respect to the prior results obtained with the one-class Support Vector Machines approach for face recognition.
Citations: 7
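The authentication decision itself is a one-class problem: accept a probe if it resembles the enrolled user's features, reject otherwise. The paper uses one-class Support Vector Machines; the self-contained sketch below substitutes a much simpler distance-to-enrolled-set rule just to illustrate that accept/reject structure (the feature vectors, nearest-neighbour radius rule, and the 2.0 factor are all invented for illustration).

```python
def make_authenticator(templates):
    """One-class decision from enrolled feature vectors: accept a probe if
    it falls within an acceptance radius of the enrolled set."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    # radius: twice the largest nearest-neighbour gap inside the enrolled set
    radii = [min(dist(t, o) for j, o in enumerate(templates) if j != i)
             for i, t in enumerate(templates)]
    threshold = 2.0 * max(radii)
    def authenticate(probe):
        return min(dist(probe, t) for t in templates) <= threshold
    return authenticate
```

A one-class SVM replaces this hard radius with a learned boundary around the enrolled class, which is what gives the paper's method robustness to the feature distribution's shape.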
A GPU-based Algorithm for Estimating 3D Geometry and Motion in Near Real-time
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.4
Minglun Gong
Real-time 3D geometry and motion estimation has many important applications in areas such as robot navigation and dynamic image-based rendering. A novel algorithm is proposed in this paper for estimating 3D geometry and motion of dynamic scenes based on captured stereo sequences. All computations are conducted in the 2D image space of the center view and the results are represented in the form of disparity maps and disparity flow maps. A dynamic programming based technique is used for searching globally optimal disparity maps and disparity flow maps under an energy minimization framework. To achieve high processing speed, most operations are implemented on the Graphics Processing Unit (GPU) of programmable graphics hardware. As a result, the derived algorithm is capable of producing both 3D geometry and motion information for dynamic scenes in near real-time. Experiments on two trinocular stereo sequences demonstrate that the proposed algorithm can handle scenes that contain non-rigid motion as well as those captured by moving cameras.
Citations: 6
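The per-scanline optimization the abstract describes, a globally optimal disparity assignment under a data term plus a smoothness term, can be sketched with plain dynamic programming on the CPU. The GPU mapping is the paper's contribution and is omitted here; the absolute-difference cost and the smoothness weight `lam` are illustrative choices.

```python
def scanline_disparity(left, right, max_disp, lam=2.0):
    """Minimize sum_x |left[x] - right[x - d(x)]| + lam * |d(x) - d(x-1)|
    over one scanline by dynamic programming, then backtrack the best path."""
    n = len(left)
    BIG = 1e9
    def match(x, d):
        return abs(left[x] - right[x - d]) if x - d >= 0 else BIG
    cost = [match(0, d) for d in range(max_disp + 1)]  # best cost ending at d
    back = []
    for x in range(1, n):
        prev, cost, choices = cost, [], []
        for d in range(max_disp + 1):
            best_pd = min(range(max_disp + 1),
                          key=lambda pd: prev[pd] + lam * abs(d - pd))
            choices.append(best_pd)
            cost.append(prev[best_pd] + lam * abs(d - best_pd) + match(x, d))
        back.append(choices)
    # backtrack from the cheapest final disparity
    d = min(range(max_disp + 1), key=lambda dd: cost[dd])
    disp = [d]
    for choices in reversed(back):
        d = choices[d]
        disp.append(d)
    return disp[::-1]
```

On real images this runs independently per scanline over precomputed matching costs; that independence is exactly what makes the optimization amenable to the GPU parallelization the paper targets.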
Single landmark based self-localization of mobile robots
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.67
Abdul Bais, Robert Sablatnig, J. Gu
In this paper we discuss landmark-based absolute localization of tiny autonomous mobile robots in a known environment. Landmark features are naturally occurring, since the environment may not be modified with special navigational aids. These features are sparse in our application domain and are frequently occluded by other robots, which makes simultaneous acquisition of two or more landmarks difficult. Therefore, we propose a system that requires a single landmark feature. The algorithm is based on range measurements of a single landmark from two arbitrary points whose displacement can be measured using dead-reckoning sensors. Range estimation is done with a stereo vision system. Simulation results show that the robot can localize itself if it can estimate the range to the same landmark from two different positions and the displacement between the two positions is known.
Citations: 26
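The localization geometry reduces to intersecting two circles: the first position lies at range r1 from the landmark, and, since the second position is the first plus a known displacement, it also lies at range r2 from the landmark shifted back by that displacement. A sketch assuming the heading is known and the ranges are given (the paper obtains them from stereo vision):

```python
import math

def circle_intersections(c0, r0, c1, r1):
    """Intersection points (0, 1 or 2) of two circles."""
    (x0, y0), (x1, y1) = c0, c1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []
    a = (r0 ** 2 - r1 ** 2 + d ** 2) / (2 * d)  # distance c0 -> chord midpoint
    h = math.sqrt(max(r0 ** 2 - a ** 2, 0.0))   # half chord length
    xm = x0 + a * (x1 - x0) / d
    ym = y0 + a * (y1 - y0) / d
    p1 = (xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d)
    p2 = (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d)
    return [p1] if h == 0 else [p1, p2]

def localize(landmark, r1, r2, displacement):
    """Candidate first positions P1 with |P1 - L| = r1 and |P1 + disp - L| = r2,
    i.e. P1 also lies on a circle of radius r2 around L - disp."""
    lx, ly = landmark
    dx, dy = displacement
    return circle_intersections((lx, ly), r1, (lx - dx, ly - dy), r2)
```

In general two mirror candidates remain; disambiguating between them (not shown) requires extra information, such as which side of the motion direction the landmark was observed on.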
Belief Propagation on the GPU for Stereo Vision
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.19
A. Brunton, Chang Shu, G. Roth
The power of Markov random field formulations of low-level vision problems, such as stereo, has been known for some time. However, recent advances, both algorithmic and in processing power, have made their application practical. This paper presents a novel implementation of Bayesian belief propagation for the graphics processing units found in most modern desktop and notebook computers, and applies it to the stereo problem, which serves as the basis for comparison with other BP algorithms.
Citations: 70
Avatar: a virtual reality based tool for collaborative production of theater shows
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.18
Christian Dompierre, D. Laurendeau
One of the more important limitations of current tools for performing arts production and design is that collaboration between designers is hard to achieve: designers must be co-located to collaborate on the design of a show, something that is not always possible. While teleconference tools could partially solve this problem, they offer no direct interactivity and no synchronization between designers, and problems such as perspective effects and the single viewpoint imposed by the camera are inherent to this solution. Specialized software for performing arts design (e.g. "Life Forms") does not generally provide real-time collaboration and is not really convenient for collaborative work. Such systems are also often expensive and complex to operate. A more suitable solution combining concepts from virtual reality, network technology, and computer vision has therefore been developed specifically for collaborative work by performing arts designers. This paper presents a virtual reality application, resulting from our research, for supporting distributed collaborative production of theater shows. Among other constraints, this application has to ensure that the virtual scene shared between multiple designers always stays in sync (by use of computer vision) with its real counterpart, and that this synchronization is achieved in real-time. Also, system cost must be kept as low as possible, platform independence must be achieved whenever possible and, since the tool is to be used by people who are not computer experts, the application has to be user-friendly.
Citations: 6
Collaborative Multi-Camera Surveillance with Automated Person Detection
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.21
T. Ahmedali, James J. Clark
This paper presents the groundwork for a distributed network of collaborating, intelligent surveillance cameras, implemented with low-cost embedded microprocessor camera modules. Each camera trains a person detection classifier using the Winnow algorithm for unsupervised, online learning. Training examples are automatically extracted and labelled, and the classifier is then used to locate person instances. To improve detection performance, multiple cameras with overlapping fields of view collaborate to confirm results. We present a novel, unsupervised calibration technique that allows each camera module to represent its spatial relationship with the rest. During runtime, cameras apply the learned spatial correlations to confirm each other’s detections. This technique implicitly handles non-overlapping regions that cannot be confirmed. Its computational efficiency is well-suited to real-time processing on our hardware.
Citations: 21
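The Winnow algorithm named in the abstract is an online linear learner with multiplicative updates: weights on active binary features are doubled after a false negative and halved after a false positive. A minimal supervised sketch (the paper's system extracts and labels its own training examples; here the labels are supplied directly):

```python
def winnow_train(samples, labels, n_features, passes=50):
    """Online Winnow: multiplicative weight updates on binary feature vectors."""
    w = [1.0] * n_features
    theta = float(n_features)  # the standard threshold
    for _ in range(passes):
        for x, y in zip(samples, labels):
            pred = sum(wi * xi for wi, xi in zip(w, x)) >= theta
            if pred and not y:    # false positive: demote active weights
                w = [wi / 2.0 if xi else wi for wi, xi in zip(w, x)]
            elif not pred and y:  # false negative: promote active weights
                w = [wi * 2.0 if xi else wi for wi, xi in zip(w, x)]
    return w, theta

def winnow_predict(w, theta, x):
    return sum(wi * xi for wi, xi in zip(w, x)) >= theta
```

Winnow's mistake bound grows only logarithmically with the number of features, which makes it attractive for high-dimensional image features on the low-cost embedded hardware the paper targets.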
Using 3D Spline Differentiation to Compute Quantitative Optical Flow
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.84
J. Barron, M. Daniel, J. Mari
We show that differentiation via fitting B-splines to the spatio-temporal intensity data comprising an image sequence yields 2D Lucas-Kanade optical flow that is at least as good as, and usually better than, flow computed via Simoncelli's balanced/matched filters.
Citations: 1
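The Lucas-Kanade step being compared solves, in least squares, the brightness-constancy system Ix*vx + Iy*vy = -It over a window. The sketch below uses plain central differences where the paper advocates B-spline (or Simoncelli) derivative filters, and estimates a single global flow vector for the whole patch:

```python
def lucas_kanade_flow(I1, I2):
    """One global (vx, vy) for the patch: least-squares solution of
    Ix*vx + Iy*vy = -It accumulated over all interior pixels."""
    h, w = len(I1), len(I1[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (I1[y][x + 1] - I1[y][x - 1]) / 2.0  # central differences
            iy = (I1[y + 1][x] - I1[y - 1][x]) / 2.0
            it = I2[y][x] - I1[y][x]
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    # normal equations: [sxx sxy; sxy syy] [vx vy]' = [-sxt -syt]'
    det = sxx * syy - sxy * sxy
    vx = (-sxt * syy + syt * sxy) / det
    vy = (-syt * sxx + sxt * sxy) / det
    return vx, vy
```

The paper's point is precisely that the quality of the derivative estimates Ix, Iy, It (here crude central differences) dominates the accuracy of the recovered flow.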
Generic Detection of Multi-Part Objects by High-Level Analysis
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.36
J. Bernier, R. Bergevin
A method is proposed to detect multi-part man-made or natural objects in complex images. It first extracts simple curves and straight lines from the edge map. Then, a search tree is expanded by selecting and ordering the segmented primitives on the basis of generic local and global grouping criteria. The partial contours provided by the parallel search are combined into more complex forms. Global scores produce a sorted list of potential object silhouettes.
Citations: 0
Multiple-Sensor Indoor Surveillance System
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.50
V. Petrushin, Gang Wei, Omer Shakil, D. Roqueiro, A. Gershman
This paper describes a surveillance system that uses a network of sensors of different kinds for localizing and tracking people in an office environment. The sensor network consists of video cameras, infrared tag readers, a fingerprint reader and a PTZ camera. The system implements a Bayesian framework that uses noisy but redundant data from multiple sensor streams and combines it with contextual and domain knowledge. The paper describes approaches to camera specification, dynamic background modeling, object modeling and probabilistic inference. Preliminary experimental results are presented and discussed.
Citations: 39
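The Bayesian fusion at the core of such a system is compact: multiply a prior over discrete locations by each independent sensor's likelihood for its reading and renormalize. The room grid and the sensor models below are invented for illustration:

```python
def fuse(prior, likelihoods):
    """Posterior over discrete locations given independent sensor readings:
    P(loc | z1..zk) is proportional to P(loc) * prod_k P(zk | loc)."""
    post = list(prior)
    for lik in likelihoods:
        post = [p * l for p, l in zip(post, lik)]
    total = sum(post)
    return [p / total for p in post]
```

With a motion model added between time steps this becomes a discrete Bayes filter; the sketch shows only the measurement-update half.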