
2007 IEEE Conference on Advanced Video and Signal Based Surveillance: Latest Publications

Data fusion with a multisensor system for damage control and situational awareness
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425329
C. Minor, Kevin J. Johnson, S. Rose-Pehrsson, J. Owrutsky, S. Wales, D. Steinhurst, D. Gottuk
The U.S. Naval Research Laboratory has developed an affordable, multisensory, real-time detection system for damage control and situational awareness, called "volume sensor." The system provides standoff identification of events within a space (e.g. flaming and smoldering fires, pipe ruptures, and gas releases) for U.S. Navy vessels. A data fusion approach was used to integrate spectral sensors, acoustic sensors, and video image detection algorithms. Bayesian-based decision algorithms improved event detection rates while reducing false positives. Full-scale testing demonstrated that the prototype Volume Sensor performed as well as or better than commercial video image detection and point-detection systems in critical quality metrics for fire detection, while also providing additional situational awareness. The design framework developed for volume sensor can serve as a template for the integration of heterogeneous sensors into networks for a variety of real-time sensing and situational awareness applications.
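The Bayesian fusion step can be illustrated with a minimal naive-Bayes sketch. The prior and per-sensor likelihoods below are invented for illustration; they are not the NRL system's actual decision models.

```python
import math

def fuse_posterior(prior, likelihood_pairs):
    """Combine independent sensor evidence into a posterior P(event | readings).

    likelihood_pairs holds hypothetical (P(reading | event), P(reading | no event))
    values for each sensor; fusion is done in log-odds space for stability.
    """
    log_odds = math.log(prior / (1.0 - prior))
    for p_given_event, p_given_none in likelihood_pairs:
        log_odds += math.log(p_given_event / p_given_none)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Three sensors (say spectral, acoustic, video) each weakly indicate a fire:
posterior = fuse_posterior(0.01, [(0.8, 0.1), (0.6, 0.2), (0.7, 0.15)])
```

Individually weak evidence multiplies into a posterior well above the 1% prior, which is the mechanism behind the improved detection rate with fewer false positives.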
Citations: 5
Accurate self-calibration of two cameras by observations of a moving person on a ground plane
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425298
Tsuhan Chen, A. Bimbo, F. Pernici, G. Serra
A calibration algorithm for two cameras using observations of a moving person is presented. Similar methods have been proposed for self-calibration with a single camera, but their internal parameter estimation is limited to the focal length. It has recently been demonstrated that assuming the principal point lies at the image center introduces inaccuracy into all estimated parameters. Our method exploits two cameras, using image points of the head and foot locations of a moving person, to determine the focal length and principal point of both cameras. Moreover, as the number of cameras increases, procedures are needed to determine their relative placement. In this paper we also describe a method to find the relative position and orientation of two cameras: the rotation matrix and the translation vector that describe the rigid motion between the coordinate frames fixed in the two cameras. Results on synthetic and real scenes are presented to evaluate the performance of the proposed method.
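One geometric ingredient of head-and-foot self-calibration is that the image lines through corresponding head and foot points are projections of vertical world lines, so they all meet at the vertical vanishing point. A minimal least-squares sketch with synthetic coordinates (invented for illustration, not taken from the paper's experiments):

```python
import numpy as np

def vertical_vanishing_point(heads, feet):
    """Least-squares intersection of the image lines through head/foot pairs."""
    lines = []
    for (hx, hy), (fx, fy) in zip(heads, feet):
        l = np.cross([hx, hy, 1.0], [fx, fy, 1.0])   # homogeneous line through both points
        lines.append(l / np.linalg.norm(l))
    # The vanishing point v minimizes sum_i (l_i . v)^2: it is the right singular
    # vector of the line matrix with the smallest singular value.
    _, _, vt = np.linalg.svd(np.array(lines))
    v = vt[-1]
    return v[:2] / v[2]

# Synthetic camera in which all vertical lines converge toward (320, -2000):
vp = np.array([320.0, -2000.0])
feet = [np.array([100.0, 400.0]), np.array([300.0, 420.0]), np.array([500.0, 380.0])]
heads = [f + 0.1 * (vp - f) for f in feet]           # each head lies on the line to vp
est_vp = vertical_vanishing_point(heads, feet)
```

Together with the horizon line from the foot points on the ground plane, such vanishing geometry constrains the focal length and principal point the paper estimates.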
Citations: 20
Detection of abandoned objects in crowded environments
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425322
Medha Bhargava, Chia-Chih Chen, M. Ryoo, J. Aggarwal
With concerns about terrorism and global security on the rise, it has become vital to have in place efficient threat detection systems that can detect and recognize potentially dangerous situations, and alert the authorities to take appropriate action. Of particular significance is the case of unattended objects in mass transit areas. This paper describes a general framework that recognizes the event of someone leaving a piece of baggage unattended in forbidden areas. Our approach involves the recognition of four sub-events that characterize the activity of interest. When an unaccompanied bag is detected, the system analyzes its history to determine its most likely owner(s), where the owner is defined as the person who brought the bag into the scene before leaving it unattended. Through subsequent frames, the system keeps a lookout for the owner, whose presence in or disappearance from the scene defines the status of the bag, and decides the appropriate course of action. The system was successfully tested on the i-LIDS dataset.
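The owner-monitoring logic can be sketched as a small state machine: a bag becomes suspicious once its owner has been absent for a number of consecutive frames. The frame-count threshold and the boolean per-frame observation are simplifications of the four sub-events the paper recognizes.

```python
class AbandonedBagMonitor:
    """Tracks whether a detected bag's owner is still present in the scene."""

    def __init__(self, patience=3):
        self.patience = patience          # frames of absence before alarming (illustrative)
        self.frames_owner_absent = 0

    def update(self, owner_visible):
        """Feed one frame's observation; returns True when an alarm should fire."""
        if owner_visible:
            self.frames_owner_absent = 0  # owner returned: reset the counter
        else:
            self.frames_owner_absent += 1
        return self.frames_owner_absent >= self.patience

monitor = AbandonedBagMonitor(patience=3)
alarms = [monitor.update(v) for v in [True, False, False, False, True, False]]
```

In the sequence above the alarm fires only on the third consecutive frame without the owner, and resets when the owner reappears.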
Citations: 63
Compression for 3D face recognition applications
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425282
L. Granai, M. Hamouz, J. Tena, T. Vlachos
This paper proposes a novel lossy compression algorithm tailored to 3D faces. We analyse the effect of compression on the face verification rate and measure recognition performance on the Face Recognition Grand Challenge database. Whilst preserving the spatial resolution needed to reconstruct surface details, the proposed scheme achieves compression substantial enough that personal 3D biometric data could fit on a 2D barcode.
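The abstract does not detail the codec, but the rate/fidelity trade-off behind fitting 3D data on a 2D barcode can be illustrated with plain uniform quantization of vertex coordinates. This is a generic sketch with synthetic data, not the paper's algorithm.

```python
import numpy as np

def quantize(vertices, bits=8):
    """Quantize each coordinate to 2**bits levels inside its per-axis bounding box."""
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    levels = (1 << bits) - 1
    codes = np.round((vertices - lo) / (hi - lo) * levels).astype(np.uint16)
    decoded = lo + codes / levels * (hi - lo)         # lossy reconstruction
    return codes, decoded

rng = np.random.default_rng(0)
verts = rng.uniform(-100.0, 100.0, size=(5000, 3))    # synthetic stand-in for a face mesh
codes, decoded = quantize(verts, bits=8)
max_err = np.abs(decoded - verts).max()               # bounded by half a quantization step
```

At 8 bits per coordinate the payload drops to 3 bytes per vertex (plus the bounding box), with a worst-case error of half a quantization step per axis.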
Citations: 4
Face localization by neural networks trained with Zernike moments and Eigenfaces feature vectors. A comparison
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425340
Mohammed Saaidia, A. Chaari, S. Lelandais, V. Vigneron, M. Bedda
Face localization using neural networks is presented in this communication. The neural network was trained with two different kinds of feature vectors: Zernike moments and eigenfaces. In each case, coordinate vectors of pixels surrounding the faces in images were used as target vectors in the supervised training procedure. The trained neural network thus provides on its output layer a coordinate vector (rho, theta) representing the pixels surrounding the face contained in the processed image. This approach yields accurate face contours that are well adapted to face shapes. The performance obtained with the two kinds of training features was recorded using a quantitative measurement criterion in experiments carried out on the XM2VTS database.
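The eigenfaces features mentioned above are PCA projections of face images; a compact sketch with synthetic data (the neural-network stage the paper trains on these features is omitted):

```python
import numpy as np

def eigenfaces(images, k):
    """Return the mean face and the top-k principal components (the eigenfaces)."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data: rows of vt are orthonormal principal directions.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, components):
    """Map one image to its k-dimensional eigenface feature vector."""
    return components @ (image.ravel() - mean)

rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 16, 16))     # 20 synthetic 16x16 "faces" for illustration
mean, comps = eigenfaces(faces, k=5)
feat = project(faces[0], mean, comps)     # 5-dimensional feature vector
```

Real face crops (e.g. from XM2VTS) would replace the random arrays; the feature dimension k trades descriptive power against network input size.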
Citations: 12
Real-time tracking and identification on an intelligent IR-based surveillance system
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425323
Juan C. Alonso-Bayal, R. Santiago-Mozos, J. Leiva-Murillo, M. Lázaro, Antonio Artés-Rodríguez
We implement a fixed-point real-time identification system and provide tools for the optimal design of exponential lookup tables. This intelligent surveillance system is based on infrared image processing, which makes it possible to detect and track people and to trigger different actions depending on the region of the monitored area in which they appear. The system automatically segments the body to locate the face and includes a face classifier based on the support vector machine.
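An exponential lookup table for fixed-point arithmetic, of the kind the abstract refers to, can be sketched as follows. The Q-format (Q12) and 256-entry table are illustrative design choices, not the paper's.

```python
import math

Q = 12                      # Q12 fixed point: real value = integer code / 2**Q
TABLE_BITS = 8              # table covers x in [0, 1) with 2**8 = 256 entries

# Precomputed table: entry i holds exp(i / 256) encoded in Q12.
LUT = [round(math.exp(i / (1 << TABLE_BITS)) * (1 << Q)) for i in range(1 << TABLE_BITS)]

def fixed_exp(x_q):
    """Approximate exp(x) for a Q12 input x in [0, 1) by nearest-entry lookup."""
    index = (x_q * (1 << TABLE_BITS)) >> Q   # top TABLE_BITS bits of the fraction
    return LUT[index]                        # result is again a Q12 code

approx = fixed_exp(round(0.5 * (1 << Q))) / (1 << Q)   # decode back to a float, ~ exp(0.5)
```

The design trade-off such tools optimize is table size versus worst-case error: here the error is bounded by one table step times the slope of exp on [0, 1), and interpolation between entries would shrink it further.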
Citations: 4
Improving the robustness of particle filter-based visual trackers using online parameter adaptation
Pub Date : 2007-09-01 DOI: 10.1109/AVSS.2007.4425313
Andrew D. Bagdanov, A. Bimbo, F. Dini, W. Nunziati
In particle filter-based visual trackers, dynamic velocity components are typically incorporated into the state update equations. In these cases, there is a risk that the uncertainty in the model update stage can become amplified in unexpected and undesirable ways, leading to erroneous behavior of the tracker. Moreover, the use of a weak appearance model can make the estimates provided by the particle filter inaccurate. To deal with this problem, we propose a continuously adaptive approach to estimating uncertainty in the particle filter, one that balances the uncertainty in its static and dynamic elements. We provide quantitative performance evaluation of the resulting particle filter tracker on a set of ten video sequences. Results are reported in terms of a metric that can be used to objectively evaluate the performance of visual trackers. This metric is used to compare our modified particle filter tracker and the continuously adaptive mean shift tracker. Results show that the performance of the particle filter is significantly improved through adaptive parameter estimation, particularly in cases of occlusion and erratic, nonlinear target motion.
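The idea of adapting the filter's uncertainty online can be illustrated with a 1-D bootstrap particle filter whose process-noise scale grows when the estimate diverges from the measurement and shrinks otherwise. The adaptation rule and all constants below are simplifications for illustration, not the paper's balancing scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def track(measurements, n=500, meas_std=1.0):
    """1-D bootstrap particle filter with an adaptive process-noise scale."""
    particles = rng.normal(measurements[0], 1.0, n)
    noise = 0.5                                           # process-noise std, adapted online
    estimates = []
    for z in measurements:
        particles = particles + rng.normal(0.0, noise, n)      # random-walk predict
        w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)   # measurement likelihood
        w /= w.sum()
        est = float(np.sum(w * particles))
        estimates.append(est)
        # Grow the noise when the estimate lags the measurement, shrink it otherwise.
        noise = min(5.0, max(0.1, 0.9 * noise + 0.5 * abs(z - est)))
        particles = particles[rng.choice(n, n, p=w)]           # multinomial resampling
    return estimates

truth = [0.2 * t for t in range(30)]                  # target drifting at constant speed
zs = [x + rng.normal(0.0, 0.3) for x in truth]        # noisy position measurements
est = track(zs)
```

With a fixed small noise the random-walk model would lag the drifting target; letting the innovation inflate the noise keeps the particle cloud wide enough to follow it, which is the behavior the paper exploits under erratic motion.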
Citations: 12
Camera auto-calibration from articulated motion
Pub Date : 2007-09-01 DOI: 10.1109/AVSS.2007.4425299
P. Kuo, Jean-Christophe Nebel, D. Makris
This paper presents a novel auto-calibration method from unconstrained human body motion. It relies on the underlying biomechanical constraints associated with human bipedal locomotion. By analysing positions of key points during a sequence, our technique is able to detect frames where the human body adopts a particular posture which ensures the coplanarity of those key points and therefore allows a successful camera calibration. Our technique includes a 3D model adaptation phase which removes the requirement for a precise geometrical 3D description of those points. Our method is validated using a variety of human bipedal motions and camera configurations.
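The posture detection above hinges on testing whether the key body points are coplanar in a given frame; a standard check fits a plane by SVD and inspects the residual. The point sets below are invented for illustration.

```python
import numpy as np

def coplanarity_residual(points):
    """RMS distance of 3-D points from their best-fit plane (SVD on centered data)."""
    X = np.asarray(points, dtype=float)
    X = X - X.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    # The smallest singular value is the square root of the total squared
    # out-of-plane deviation, so dividing by sqrt(n) gives an RMS residual.
    return s[-1] / np.sqrt(len(points))

flat = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (2, 3, 0)]   # exactly coplanar
bent = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 2), (2, 3, 1)]   # clearly not
```

Frames whose residual falls below a threshold (scaled to the subject's size) would be the calibration candidates; the threshold choice is outside this sketch.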
Citations: 10
Fusion of background estimation approaches for motion detection in non-static backgrounds
Pub Date : 2007-09-01 DOI: 10.1109/AVSS.2007.4425335
Eduardo Monari, Charlotte Pasqual
Detection of moving objects is a fundamental task in video-based surveillance and security applications. Many detection systems use background estimation methods to model the observed environment. In outdoor surveillance, moving backgrounds (waving trees, clutter) and illumination changes (weather changes, reflections, etc.) are the major challenges for background modelling, and developing a single model that fulfils all these requirements is usually not possible. In this paper we present a background estimation technique for motion detection in non-static backgrounds that overcomes this problem. We introduce an enhanced background estimation architecture with a long-term model and a short-term model. Our system showed that fusing the detections of these two complementary approaches improves the quality and reliability of the detection results.
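The long-term/short-term combination can be sketched with two running-average background models fused by intersecting their foreground masks. The learning rates and threshold are illustrative; the paper's actual models are not reproduced here.

```python
import numpy as np

class DualBackground:
    """Two exponential running-average background models with different rates."""

    def __init__(self, frame, slow=0.01, fast=0.2, thresh=25.0):
        self.long_term = frame.astype(float)   # adapts slowly: stable scene structure
        self.short_term = frame.astype(float)  # adapts fast: absorbs waving trees etc.
        self.slow, self.fast, self.thresh = slow, fast, thresh

    def apply(self, frame):
        f = frame.astype(float)
        mask_long = np.abs(f - self.long_term) > self.thresh
        mask_short = np.abs(f - self.short_term) > self.thresh
        self.long_term += self.slow * (f - self.long_term)
        self.short_term += self.fast * (f - self.short_term)
        return mask_long & mask_short          # fused detection: both models must agree

bg = DualBackground(np.zeros((4, 4)))
frame = np.zeros((4, 4))
frame[1, 1] = 100.0                            # one bright "moving object" pixel
mask = bg.apply(frame)
```

A pixel that only the fast model flags (e.g. periodic foliage motion it has already absorbed) is suppressed by the AND, which is one way complementary models can cut false detections.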
Citations: 31