
Latest publications — 2007 IEEE Conference on Advanced Video and Signal Based Surveillance

Towards robust face recognition for Intelligent-CCTV based surveillance using one gallery image
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425356
T. Shan, Shaokang Chen, Conrad Sanderson, B. Lovell
In recent years, the use of Intelligent Closed-Circuit Television (ICCTV) for crime prevention and detection has attracted significant attention. Existing face recognition systems require passport-quality photos to achieve good performance. However, the use of CCTV images is much more problematic due to large variations in illumination, facial expression and pose angle. In this paper we propose a pose variability compensation technique which synthesizes realistic frontal face images from non-frontal views. It is based on modelling the face via Active Appearance Models and detecting the pose through a correlation model. The proposed technique is coupled with adaptive principal component analysis (APCA), which was previously shown to perform well in the presence of both lighting and expression variations. Experiments on the FERET dataset show up to 6-fold performance improvements. Finally, in addition to implementation and scalability challenges, we discuss issues related to ongoing real-life trials in public spaces using existing surveillance hardware.
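The abstract does not detail APCA itself; as a rough illustration of the kind of subspace matching it builds on, here is a plain PCA nearest-neighbour baseline (not the authors' APCA) with one gallery image per person. The function names and the synthetic gallery are hypothetical:

```python
import numpy as np

def pca_projection(gallery, k=2):
    # gallery: (n_people, d) array, one vectorized face image per person
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # principal axes via SVD of the centered gallery
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def identify(probe, gallery, mean, basis):
    # project gallery and probe into the subspace, return nearest gallery index
    g = (gallery - mean) @ basis.T
    p = (probe - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(g - p, axis=1)))
```

With only one gallery image per identity, such a baseline degrades quickly under pose change, which is the gap the paper's pose-compensation step targets.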
Citations: 16
Dense disparity estimation from omnidirectional images
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425344
Zafer Arican, P. Frossard
This paper addresses the problem of dense estimation of disparities between omnidirectional images in a spherical framework. Omnidirectional imaging offers important advantages for representing and processing the plenoptic function in 3D scenes, for applications such as localization or depth estimation. In this context, we propose to perform disparity estimation directly in a spherical framework, in order to avoid discrepancies due to inexact projections of omnidirectional images onto planes. We first rectify the omnidirectional images in the spherical domain. We then develop a global energy minimization algorithm based on the graph-cut algorithm in order to perform disparity estimation on the sphere. Experimental results show that the proposed algorithm outperforms typical methods, such as those based on block matching, on both a simple synthetic scene and complex natural scenes. The proposed method shows promising performance for dense disparity estimation and can be extended efficiently to networks of several camera sensors.
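For reference, the block-matching baseline the paper compares against (not the proposed spherical graph-cut method) can be sketched as a planar sum-of-absolute-differences matcher; the function name and parameters below are illustrative:

```python
import numpy as np

def block_match_disparity(left, right, block=3, max_disp=4):
    # left, right: 2D grayscale arrays; returns a per-pixel integer disparity,
    # where left[y, x] is assumed to match right[y, x - d]
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(patch - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The winner-take-all choice per pixel is exactly what the paper's global energy minimization avoids: graph cuts add a smoothness term over neighbouring disparities.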
Citations: 33
Camera tamper detection using wavelet analysis for video surveillance
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425371
A. Aksay, A. Temizel, A. Cetin
It is generally accepted that video surveillance system operators lose concentration after a short period of time and may miss important events taking place. In addition, many surveillance systems are frequently left unattended. For these reasons, automated analysis of the live video feed and automatic detection of suspicious activity have recently gained importance. To prevent capture of their images, criminals resort to several techniques, such as deliberately obscuring the camera view, covering the lens with a foreign object, or spraying or de-focusing the camera lens. In this paper, we propose computationally efficient wavelet-domain methods for rapid camera tamper detection, identify some real-life problems, and propose solutions to them.
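The abstract does not give the exact wavelet features used; the sketch below shows the general idea under a simplifying assumption: a covered or defocused lens suppresses high-frequency content, so the energy of the one-level Haar detail bands collapses relative to a learned reference. All names and thresholds are hypothetical:

```python
import numpy as np

def haar_detail_energy(img):
    # one-level 2D Haar transform; return the energy in the detail
    # (high-frequency) bands, discarding the low-pass approximation
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a + b - c - d) / 2.0   # horizontal detail
    hl = (a - b + c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return (lh**2).sum() + (hl**2).sum() + (hh**2).sum()

def tampered(frame, reference_energy, ratio=0.25):
    # flag tampering when detail energy collapses relative to the reference
    return haar_detail_energy(frame) < ratio * reference_energy
```

In practice the reference would be maintained from a background model rather than a single frame, so that normal scene motion does not trigger false alarms.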
Citations: 36
Towards fast 3D ear recognition for real-life biometric applications
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425283
G. Passalis, I. Kakadiaris, T. Theoharis, G. Toderici, Theodoros Papaioannou
Three-dimensional data are increasingly being used for biometric purposes, as they offer resilience to problems common in two-dimensional data. They have been successfully applied to face recognition and, more recently, to ear recognition. However, real-life biometric applications require algorithms that are both robust and efficient, so that they scale well with the size of the databases. A novel ear recognition method is presented that uses a generic annotated ear model to register and fit each ear dataset. A compact biometric signature that retains 3D information is then extracted. The proposed method is evaluated on the largest publicly available 3D ear database augmented with our own database, resulting in a database containing data from multiple 3D sensor types. Using this database, we show that the proposed method is not only robust, accurate and sensor-invariant but also extremely efficient, making it suitable for real-life biometric applications.
Citations: 39
Resolution limits of closely spaced random signals given the desired success rate
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425359
A. Amar, A. Weiss
Fundamental limitations on estimation accuracy are well known and include a variety of lower bounds, including the celebrated Cramér-Rao lower bound. However, similar theoretical limitations on resolution have not yet been presented. We exploit results from detection theory to derive fundamental limitations on resolution. In this paper we discuss the resolution of two zero-mean complex random Gaussian signals with a general, predefined covariance matrix, observed in additive white Gaussian noise. The results are not based on any specific resolution technique and thus hold for any method and any resolution success rate. The theoretical limit is a simple expression of the observation interval, the user's pre-specified resolution success rate and the second derivative of the covariance matrix. We apply the results to the bearing resolution of two emitters with closely spaced directions of arrival impinging on an array of sensors. The derived limits are verified experimentally with model order selection methods such as the Akaike Information Criterion and the Minimum Description Length.
Citations: 0
MCMC based multi-body tracking using full 3D model of both target and environment
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425314
Tatsuya Osawa, Xiaojun Wu, K. Sudo, K. Wakabayashi, Hiroyuki Arai, T. Yasuno
In this paper, we present a new approach for the stable tracking of a variable number of interacting targets under severe occlusion in 3D space. We formulate the state of multiple targets as a union state space of the individual targets, and recursively estimate the multi-body configuration and the position of each target in 3D space using the framework of trans-dimensional Markov Chain Monte Carlo (MCMC). A 3D environmental model, which replicates the real-world 3D structure, is used to handle occlusions created by fixed objects in the environment and to reliably estimate the number of targets in the monitoring area. Experiments show that our system can stably track multiple humans that interact with each other and enter and leave the monitored area.
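The paper samples a trans-dimensional, multi-target state; the sketch below shows only the core fixed-dimension Metropolis accept/reject step, here for a single 2D position under a hypothetical Gaussian observation model (all names and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(state, obs, sigma=1.0):
    # Gaussian observation model around the hypothesized target position
    return -np.sum((obs - state) ** 2) / (2.0 * sigma**2)

def metropolis(obs, n_iter=4000, step=0.7):
    state = np.zeros(2)                 # initial guess for the 2D position
    ll = log_likelihood(state, obs)
    samples = []
    for _ in range(n_iter):
        prop = state + rng.normal(0.0, step, size=2)   # random-walk proposal
        ll_prop = log_likelihood(prop, obs)
        if np.log(rng.random()) < ll_prop - ll:        # Metropolis accept test
            state, ll = prop, ll_prop
        samples.append(state)
    # posterior mean after discarding burn-in
    return np.mean(samples[n_iter // 2:], axis=0)
```

The trans-dimensional extension additionally proposes "birth" and "death" moves that add or remove targets, which is how the tracker estimates the number of people in the scene.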
Citations: 6
Vehicular traffic density estimation via statistical methods with automated state learning
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425304
Evan Tan, Jing Chen
This paper proposes a novel approach that combines an unsupervised clustering scheme called AutoClass with Hidden Markov Models (HMMs) to determine the traffic density state in a Region Of Interest (ROI) of a road in a traffic video. First, low-level features are extracted from the ROI of each frame. Second, the unsupervised clustering algorithm AutoClass is applied to the low-level features to obtain a set of clusters for each pre-defined traffic density state. Finally, an HMM is constructed for each of the four traffic states, with each cluster corresponding to a state in that HMM, and the structure of the HMM is determined from the cluster information. This approach improves on previous approaches that used Gaussian Mixture HMMs (GMHMMs) by circumventing the need to make an arbitrary choice of HMM structure and of the number of mixtures used for each traffic density state. The results show that this approach classifies the traffic density in the ROI of a traffic video accurately while handling varying illumination elegantly.
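As a hedged illustration of the HMM machinery involved, the toy model below runs the standard forward recursion over discrete cluster observations. It is a simplified variant in which the density states themselves are the hidden states; the two states, three clusters and all probabilities are invented for the example:

```python
import numpy as np

def forward(obs, pi, A, B):
    # alpha recursion: P(state at t | observations up to t), normalized per step
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

# two density states (free-flow, congested), three observation clusters
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],          # traffic state is sticky over frames
              [0.1, 0.9]])
B = np.array([[0.7, 0.2, 0.1],     # free-flow mostly emits cluster 0
              [0.1, 0.2, 0.7]])    # congested mostly emits cluster 2
alpha = forward([2, 2, 2, 1, 2], pi, A, B)   # mostly-congested observations
```

In the paper's setup, by contrast, one such HMM is trained per traffic state and classification picks the model with the highest likelihood for the observed sequence.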
Citations: 38
Stationary objects in multiple object tracking
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425318
S. Guler, Jason A. Silverstein, Ian A. Pushee
This paper presents an approach to detecting stationary foreground objects in naturally busy surveillance video scenes with several moving objects. Our approach is inspired by human visual cognition processes and builds upon a multi-tier video tracking paradigm whose main layers are the spatially based "peripheral tracking", loosely corresponding to peripheral vision, and the object-based "vision tunnels" for focused attention and analysis of tracked objects. Humans allocate their attention to different aspects of objects and scenes based on a defined task. In our model, a specific processing layer corresponding to the allocation of attention is used to detect objects that become stationary. The static object layer, a natural extension of this framework, detects and maintains the stationary foreground objects using the moving-object and scene information from the Peripheral Tracker and Scene Description layers. Simple event detection modules then use the enduring stationary objects to determine events such as Parked Vehicles or Abandoned Bags.
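A minimal version of the "object becomes stationary" test — not the authors' multi-tier system — could track per-frame centroids and flag an object once it stays within a small radius for enough frames; the function name, radius and frame count are hypothetical:

```python
import math

def stationary_after(track, radius=2.0, min_frames=5):
    # track: list of (x, y) centroids, one per frame; return the first frame
    # index at which the object has stayed within `radius` of a fixed point
    # for `min_frames` consecutive frames, or None if it never settles
    for start in range(len(track) - min_frames + 1):
        ax, ay = track[start]   # anchor point for this candidate window
        if all(math.hypot(x - ax, y - ay) <= radius
               for x, y in track[start:start + min_frames]):
            return start + min_frames - 1
    return None
```

An event module for, say, Abandoned Bags would then combine such a flag with object class and with the departure of the person who carried the object.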
Citations: 59
Infrared image processing and its application to forest fire surveillance
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425324
I. Bosch, S. Gomez, L. Vergara, J. Moragues
This paper describes a scheme for automatic forest surveillance. A complete system for forest fire detection is first presented, although we focus on infrared image processing. The proposed scheme, based on infrared image processing, performs early detection of any fire threat. To determine the presence or absence of fire, the proposed algorithm fuses different detectors that exploit different expected characteristics of a real fire, such as persistence and increase. Theoretical results and practical simulations are presented to corroborate the control of the system's probability of false alarm (PFA). The dependence of the probability of detection (PD) on the signal-to-noise ratio (SNR) is also evaluated.
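The abstract names two fire cues, persistence and increase; a toy fusion of two corresponding detectors (the function name, threshold and window length are all hypothetical, and the paper's actual fusion rule is not specified here) might look like:

```python
import numpy as np

def fire_alarm(intensity, thresh, persist=3):
    # intensity: per-frame mean IR level of a candidate hot region
    above = intensity > thresh
    # detector 1: level persistently above threshold over the last frames
    persistent = bool(above[-persist:].all())
    # detector 2: level trending upwards (fires grow; transient glints do not)
    increasing = bool(intensity[-1] > intensity[-persist])
    return persistent and increasing   # fuse by requiring both cues
```

Requiring both cues is what keeps the false-alarm probability low: a hot but static reflection fails the increase test, and a brief flash fails the persistence test.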
Citations: 31
View adaptive detection and distributed site wide tracking
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425286
P. Tu, N. Krahnstoever, J. Rittscher
Using a detect and track paradigm, we present a surveillance framework where each camera uses local resources to perform real-time person detection. These detections are then processed by a distributed site-wide tracking system. The detectors themselves are based on boosted user-defined exemplars, which capture both appearance and shape information. The detectors take integral images of both intensity and Sobel responses as input. This data representation enables efficient processing without relying on background subtraction or other motion cues. View-specific person detectors are constructed by iteratively presenting the boosting algorithm with training data associated with each individual camera. These detections are then transmitted from a distributed set of tracking clients to a server, which maintains a set of site-wide target tracks. Automatic calibration methods allow for tracking to be performed in a ground plane representation, which enables effective camera hand-off. Factors such as network latencies and scalability will be discussed.
使用检测和跟踪范例,我们提出了一个监视框架,其中每个摄像机使用本地资源执行实时人员检测。然后,这些检测结果由一个分布式的全站点跟踪系统进行处理。检测器本身是基于用户定义的增强样本,可以捕获外观和形状信息。探测器将强度和索贝尔响应的积分图像作为输入。这种数据表示方式可以在不依赖背景减法或其他运动线索的情况下进行高效处理。通过迭代呈现与每个单独摄像机相关的训练数据的增强算法来构建特定视点的人检测器。然后,这些检测结果从一组分布式跟踪客户端传输到服务器,服务器维护一组站点范围的目标跟踪。自动校准方法允许在地平面表示中执行跟踪,从而实现有效的相机切换。将讨论网络延迟和可伸缩性等因素。
Citations: 8