
2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance: Latest Publications

Comparison of target detection algorithms using adaptive background models
Daniela Hall, J. Nascimento, P. Ribeiro, E. Andrade, Plinio Moreno, S. Pesnel, T. List, R. Emonet, R. Fisher, J. S. Victor, J. Crowley
This article compares the performance of target detectors based on adaptive background differencing on public benchmark data. Five state-of-the-art methods are described. The performance is evaluated using state-of-the-art measures with respect to ground truth. The original contributions are the comparison to hand-labelled ground truth and the evaluation on a large database. The simpler methods LOTS and SGM are more appropriate to the particular task than MGM, which uses a more complex background model.
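As a rough illustration of what adaptive background differencing involves, the sketch below keeps a running-average background and flags pixels that deviate from it; the learning rate and threshold are illustrative values, and the code is not the LOTS, SGM or MGM implementation evaluated in the paper.

```python
import numpy as np

def detect_foreground(frames, alpha=0.05, tau=30.0):
    """Yield a boolean foreground mask per frame from a running-average background."""
    background = frames[0].astype(np.float64)
    for frame in frames:
        frame = frame.astype(np.float64)
        mask = np.abs(frame - background) > tau                 # pixels far from the model
        background = (1 - alpha) * background + alpha * frame   # adapt the background
        yield mask

# Toy usage on random greyscale frames standing in for benchmark video.
frames = [np.random.randint(0, 256, (120, 160), dtype=np.uint8) for _ in range(10)]
for i, mask in enumerate(detect_foreground(frames)):
    print(f"frame {i}: {mask.mean():.1%} of pixels flagged as foreground")
```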
{"title":"Comparison of target detection algorithms using adaptive background models","authors":"Daniela Hall, J. Nascimento, P. Ribeiro, E. Andrade, Plinio Moreno, S. Pesnel, T. List, R. Emonet, R. Fisher, J. S. Victor, J. Crowley","doi":"10.1109/VSPETS.2005.1570905","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570905","url":null,"abstract":"This article compares the performance of target detectors based on adaptive background differencing on public benchmark data. Five state of the art methods are described. The performance is evaluated using state of the art measures with respect to ground truth. The original points are the comparison to hand labelled ground truth and the evaluation on a large database. The simpler methods LOTS and SGM are more appropriate to the particular task as MGM using a more complex background model.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125101262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 124
Using SVM for Efficient Detection of Human Motion
J. Grahn, H. Kjellström
This paper presents a method for detection of humans in video. Detection is here formulated as the problem of classifying the image patterns in a range of windows of different size in a video frame as "human" or "non-human". Computational efficiency is of core importance, which leads us to utilize fast methods for image preprocessing and classification. Linear spatio-temporal difference filters are used to represent motion information in the image. Patterns of spatio-temporal pixel differences are classified using an SVM, a classification method proven efficient for problems with high dimensionality and highly non-linear feature spaces. Furthermore, a cascade architecture is employed to make use of the fact that most windows are easy to classify, while a few are difficult. The detection method shows promising results when tested on images from street scenes with humans of varying sizes and clothing.
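A hedged sketch of the classification step described above: flatten temporal pixel differences from a candidate window and feed them to an SVM. The synthetic windows, labels and kernel choice are our own stand-ins, not the paper's features, training data or cascade.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def window_features(prev_patch, curr_patch):
    """Linear temporal difference inside a candidate window, flattened to a vector."""
    return (curr_patch - prev_patch).ravel()

def make_sample(is_human):
    # Hypothetical 16x16 windows: "human" windows carry consistent motion energy.
    prev = rng.integers(0, 256, (16, 16)).astype(np.float64)
    motion = rng.normal(25, 5, (16, 16)) if is_human else rng.normal(0, 2, (16, 16))
    return window_features(prev, prev + motion)

X = np.array([make_sample(i % 2 == 0) for i in range(200)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(200)])

clf = SVC(kernel="rbf", gamma="scale").fit(X[:150], y[:150])   # "human" vs "non-human"
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```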
{"title":"Using SVM for Efficient Detection of Human Motion","authors":"J. Grahn, H. Kjellstromg","doi":"10.1109/VSPETS.2005.1570920","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570920","url":null,"abstract":"This paper presents a method for detection of humans in video. Detection is here formulated as the problem of classifying the image patterns in a range of windows of different size in a video frame as \"human\" or \"non-human\". Computational efficiency is of core importance, which leads us to utilize fast methods for image preprocessing and classification. Linear spatio-temporal difference filters are used to represent motion information in the image. Patterns of spatio-temporal pixel difference is classified using SVM, a classification method proven efficient for problems with high dimensionality and highly non-linear feature spaces. Furthermore, a cascade architecture is employed, to make use of the fact that most windows are easy to classify, while a few are difficult. The detection method shows promising results when tested on images from street scenes with humans of varying sizes and clothing.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116702929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 7
PETS Metrics: On-Line Performance Evaluation Service
D. P. Young, J. Ferryman
This paper presents the PETS Metrics On-line Evaluation Service for computational visual surveillance algorithms. The service allows researchers to submit their algorithm results for evaluation against a set of applicable metrics. The results of the evaluation processes are publicly displayed allowing researchers to instantly view how their algorithm performs against previously submitted algorithms. The approach has been validated using seven motion segmentation algorithms.
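For a sense of the kind of metric such a service can apply, the sketch below scores a submitted motion-segmentation mask against ground truth with pixel-level precision, recall and F1; the actual PETS Metrics suite is richer, and this particular choice of measures is ours.

```python
import numpy as np

def segmentation_scores(predicted, truth):
    """Pixel-level precision, recall and F1 for boolean masks of equal shape."""
    tp = np.logical_and(predicted, truth).sum()
    fp = np.logical_and(predicted, ~truth).sum()
    fn = np.logical_and(~predicted, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical ground-truth and submitted masks for one frame.
truth = np.zeros((120, 160), dtype=bool); truth[40:80, 60:100] = True
pred = np.zeros_like(truth);              pred[45:85, 55:95] = True
print("precision=%.2f recall=%.2f f1=%.2f" % segmentation_scores(pred, truth))
```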
{"title":"PETS Metrics: On-Line Performance Evaluation Service","authors":"D. P. Young, J. Ferryman","doi":"10.1109/VSPETS.2005.1570931","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570931","url":null,"abstract":"This paper presents the PETS Metrics On-line Evaluation Service for computational visual surveillance algorithms. The service allows researchers to submit their algorithm results for evaluation against a set of applicable metrics. The results of the evaluation processes are publicly displayed allowing researchers to instantly view how their algorithm performs against previously submitted algorithms. The approach has been validated using seven motion segmentation algorithms.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130301175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 128
Performance evaluating the evaluator
T. List, J. Bins, J. Vazquez, R. Fisher
When evaluating the performance of a computer-based visual tracking system, one often wishes to compare results with a standard human observer. It is a natural assumption that humans fully understand the relatively simple scenes we subject our computers to, and that because of this two human observers would draw the same conclusions about object positions, tracks, sizes and even simple behaviour patterns. But is that actually the case? This paper provides a baseline for how computer-based tracking results can be compared to a standard human observer.
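One simple way to put a number on whether two observers "draw the same conclusions" about an object is bounding-box intersection over union, sketched below; this measure is our own illustration, not the baseline protocol the paper defines.

```python
def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2); returns intersection over union in [0, 1]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

observer_1 = (100, 50, 180, 200)   # hypothetical hand-labelled box
observer_2 = (105, 55, 175, 210)   # a second observer's box for the same person
print(f"observer agreement (IoU): {iou(observer_1, observer_2):.2f}")
```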
{"title":"Performance evaluating the evaluator","authors":"T. List, J. Bins, J. Vazquez, R. Fisher","doi":"10.1109/VSPETS.2005.1570907","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570907","url":null,"abstract":"When evaluating the performance of a computer-based visual tracking system one often wishes to compare results with a standard human observer. It is a natural assumption that humans fully understand the relatively simple scenes we subject our computers to and because of this, two human observers would draw the same conclusions about object positions, tracks, size and even simple behaviour patterns. But is that actually the case? This paper provides a baseline for how computer-based tracking results can be compared to a standard human observer.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130060361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 52
A Quantitative Evaluation of Video-based 3D Person Tracking
A. O. Balan, L. Sigal, Michael J. Black
The Bayesian estimation of 3D human motion from video sequences is quantitatively evaluated using synchronized, multi-camera, calibrated video and 3D ground truth poses acquired with a commercial motion capture system. While many methods for human pose estimation and tracking have been proposed, to date there has been no quantitative comparison. Our goal is to evaluate how different design choices influence tracking performance. Toward that end, we independently implemented two fairly standard Bayesian person trackers using two variants of particle filtering and propose an evaluation measure appropriate for assessing the quality of probabilistic tracking methods. In the Bayesian framework we compare various image likelihood functions and prior models of human motion that have been proposed in the literature. Our results suggest that in constrained laboratory environments, current methods perform quite well. Multiple cameras and background subtraction, however, are required to achieve reliable tracking, suggesting that many current methods may be inappropriate in more natural settings. We discuss the implications of the study and the directions for future research that it entails.
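The sketch below shows the generic particle-filtering loop that such Bayesian trackers build on: propagate hypotheses with a motion prior, reweight by a likelihood, and resample when the weights degenerate. The one-dimensional state and Gaussian likelihood are toy stand-ins for the paper's 3D body models and image likelihoods.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles = 500
particles = rng.normal(0.0, 1.0, n_particles)         # initial state hypotheses
weights = np.full(n_particles, 1.0 / n_particles)

def likelihood(states, observation, sigma=0.5):
    """Toy stand-in for an image likelihood of each hypothesised state."""
    return np.exp(-0.5 * ((states - observation) / sigma) ** 2)

true_state = 0.0
for t in range(20):
    true_state += 0.1                                  # hidden motion
    observation = true_state + rng.normal(0, 0.3)      # noisy measurement
    particles += rng.normal(0.1, 0.2, n_particles)     # prior motion model
    weights *= likelihood(particles, observation)
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:   # resample on low effective sample size
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
    print(f"t={t:2d} estimate={np.average(particles, weights=weights):+.2f} truth={true_state:+.2f}")
```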
{"title":"A Quantitative Evaluation of Video-based 3D Person Tracking","authors":"A. O. Balan, L. Sigal, Michael J. Black","doi":"10.1109/VSPETS.2005.1570935","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570935","url":null,"abstract":"The Bayesian estimation of 3D human motion from video sequences is quantitatively evaluated using synchronized, multi-camera, calibrated video and 3D ground truth poses acquired with a commercial motion capture system. While many methods for human pose estimation and tracking have been proposed, to date there has been no quantitative comparison. Our goal is to evaluate how different design choices influence tracking performance. Toward that end, we independently implemented two fairly standard Bayesian person trackers using two variants of particle filtering and propose an evaluation measure appropriate for assessing the quality of probabilistic tracking methods. In the Bayesian framework we compare various image likelihood functions and prior models of human motion that have been proposed in the literature. Our results suggest that in constrained laboratory environments, current methods perform quite well. Multiple cameras and background subtraction, however, are required to achieve reliable tracking suggesting that many current methods may be inappropriate in more natural settings. We discuss the implications of the study and the directions for future research that it entails","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132294163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 191
Efficient Hidden Semi-Markov Model Inference for Structured Video Sequences
David Tweed, Robert B. Fisher, J. Bins, T. List
The semantic interpretation of video sequences by computer is often formulated as probabilistically relating lower-level features to higher-level states, constrained by a transition graph. With hidden Markov models, inference is efficient but time-in-state data cannot be included, whereas with hidden semi-Markov models we can model duration but inference is inefficient. We present a new efficient O(T) algorithm for inference in certain HSMMs and show experimental results on video sequence interpretation in television footage to demonstrate that explicitly modelling time-in-state improves interpretation performance.
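For context, the sketch below is the standard scaled forward pass for a plain HMM, the efficient but duration-blind baseline the paper starts from; the transition graph and emission probabilities are toy values, and the paper's O(T) HSMM recursion with explicit time-in-state distributions is not reproduced here.

```python
import numpy as np

A = np.array([[0.9, 0.1],    # toy two-state transition graph
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],    # P(observation | state)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])

def forward_loglik(obs):
    """Log-likelihood of an observation sequence under the HMM, with per-step scaling."""
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha /= scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # one forward update per time step
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha /= scale
    return loglik

print("log-likelihood:", forward_loglik([0, 0, 1, 1, 1, 0, 1, 1]))
```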
{"title":"Efficient Hidden Semi-Markov Model Inference for Structured Video Sequences","authors":"David Tweed, Robert B. Fisher, J. Bins, T. List","doi":"10.1109/VSPETS.2005.1570922","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570922","url":null,"abstract":"The semantic interpretation of video sequences by computer is often formulated as probabilistically relating lower-level features to higher-level states, constrained by a transition graph. Using hidden Markov models inference is efficient but time-in-state data cannot be included, whereas using hidden semi-Markov models we can model duration but have inefficient inference. We present a new efficient O(T) algorithm for inference in certain HSMMs and show experimental results on video sequence interpretation in television footage to demonstrate that explicitly modelling time-in-state improves interpretation performance","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116267560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 28
Improving Performance via Post Track Analysis
L. Brown, M. Lu, Chiao-Fe Shu, Ying-li Tian, A. Hampapur
In this paper, we improve the effective performance of a surveillance system via post track analysis. Our system performs object detection via background subtraction followed by appearance-based tracking. The primary outputs of the system, however, are customized alarms that depend on the user's domain and needs. The ultimate performance therefore depends most critically on the Receiver Operating Characteristic curve of these alarms. We show that by strategically designing post tracking and alarm conditions, the effective performance of the system can be improved dramatically. This addresses the most significant error sources, namely errors due to shadows, ghosting, and temporally or spatially missing fragments, and many of the false positives due to extreme lighting variations, specular reflections or irrelevant motion.
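As a toy illustration of post-track filtering, the sketch below suppresses alarms from tracks that look like noise, such as short-lived fragments or near-static blobs; the two rules and their thresholds are our own illustrative choices, not the system's alarm conditions.

```python
def keep_track(track, min_length=10, min_displacement=20.0):
    """`track` is a list of (x, y) centroids, one per frame; return True to raise an alarm."""
    if len(track) < min_length:                       # drop short-lived fragments
        return False
    (x0, y0), (x1, y1) = track[0], track[-1]
    net = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return net >= min_displacement                    # drop near-static blobs (e.g. ghosts)

tracks = {                                            # hypothetical tracker output
    "person": [(10 + 3 * t, 50 + 2 * t) for t in range(40)],
    "shadow": [(200, 120), (201, 121), (200, 120)],
    "ghost":  [(80 + 0.1 * t, 90) for t in range(60)],
}
for name, track in tracks.items():
    print(name, "-> alarm" if keep_track(track) else "-> suppressed")
```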
{"title":"Improving Performance via Post Track Analysis","authors":"L. Brown, M. Lu, Chiao-Fe Shu, Ying-li Tian, A. Hampapur","doi":"10.1109/VSPETS.2005.1570934","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570934","url":null,"abstract":"In this paper, we improve the effective performance of a surveillance system via post track analysis. Our system performs object detection via background subtraction followed by appearance based tracking. The primary outputs of the system however, are customized alarms which depend on the user's domain and needs. The ultimate performance therefore depends most critically on the Receiver Operating Characteristic curve of these alarms. We show that by strategically designing post tracking and alarm conditions, the effective performance of the system can be improved dramatically. This addresses the most significant error sources, namely, errors due to shadows, ghosting, temporally or spatially missing fragments and many of the false positives due to extreme lighting variations, specular reflections or irrelevant motion.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123637975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
Discriminant Locality Preserving Projections: A New Method to Face Representation and Recognition
Wei-wei Yu, Xiao-long Teng, Chong-qing Liu
Locality Preserving Projections (LPP) is a linear projective map that arises by solving a variational problem that optimally preserves the neighborhood structure of the data set. Though LPP has been applied in many domains, it has limitations in solving recognition problems. Thus, Discriminant Locality Preserving Projections (DLPP) is presented in this paper. The improvement of the DLPP algorithm over the LPP method stems mainly from two aspects. One aspect is that DLPP tries to find the subspace that best discriminates different face classes by maximizing the between-class distance while minimizing the within-class distance. The other aspect is that DLPP reduces the energy of noise and transformation difference as much as possible without sacrificing much of the intrinsic difference. In the experiments, DLPP achieves better face recognition performance than LPP.
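The sketch below illustrates the discriminant criterion at the heart of this family of methods: project onto directions that maximise between-class scatter relative to within-class scatter via a generalised eigenproblem. The locality-preserving neighbourhood weighting that distinguishes DLPP from plain Fisher analysis is omitted, so this is a simplified stand-in rather than the paper's algorithm.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 5)) + 3 * c for c in range(3)])  # toy face features
y = np.repeat(np.arange(3), 50)

mean_all = X.mean(axis=0)
S_w = np.zeros((5, 5))   # within-class scatter
S_b = np.zeros((5, 5))   # between-class scatter
for c in range(3):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    S_w += (Xc - mc).T @ (Xc - mc)
    d = (mc - mean_all)[:, None]
    S_b += len(Xc) * (d @ d.T)

# Generalised eigenproblem S_b w = lambda S_w w; keep the leading directions.
eigvals, eigvecs = eigh(S_b, S_w + 1e-6 * np.eye(5))
W = eigvecs[:, ::-1][:, :2]
print("projected data shape:", (X @ W).shape)
```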
{"title":"Discriminant Locality Preserving Projections: A New Method to Face Representation and Recognition","authors":"Wei-wei Yu, Xiao-long Teng, Chong-qing Liu","doi":"10.1109/VSPETS.2005.1570916","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570916","url":null,"abstract":"Locality Preserving Projections (LPP) is a linear projective map that arises by solving a variational problem that optimally preserves the neighborhood structure of the data set. Though LPP has been applied in many domains, it has limits to solve recognition problem. Thus, Discriminant Locality Preserving Projections (DLPP) is presented in this paper. The improvement of DLPP algorithm over LPP method benefits mostly from two aspects. One aspect is that DLPP tries to find the subspace that best discriminates different face classes by maximizing the between-class distance, while minimizing the within-class distance. The other aspect is that DLPP reduces the energy of noise and transformation difference as much as possible without sacrificing much of intrinsic difference. In the experiments, DLPP achieves the better face recognition performance than LPP.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130574397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 10
Illumination and motion-based video enhancement for night surveillance
Jing Li, S.Z. Li, Q. Pan, Tao Yang
This work presents a context enhancement method for low-illumination video in night surveillance. A unique characteristic of the algorithm is its ability to extract and maintain meaningful information, such as highlighted areas or low-contrast moving objects, in the enhanced image, while recovering the surrounding scene information by fusing in the daytime background image. A main challenge comes from the extraction of meaningful areas in the night video sequence. To address this problem, a novel bidirectional extraction approach is presented. In evaluation experiments with real data, the salient information in the night video is extracted successfully and the background scene is fused smoothly with the night images, giving observers an enhanced surveillance video.
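A hedged sketch of the fusion idea: keep bright or moving regions from the night frame and fill dark, static background from a daytime image. The weighting scheme and thresholds are toy stand-ins rather than the paper's bidirectional extraction.

```python
import numpy as np

def enhance_night_frame(night, prev_night, day_background, motion_tau=15.0):
    """Blend a dark night frame with a daytime background, preserving motion and highlights."""
    night_f = night.astype(np.float64)
    day_f = day_background.astype(np.float64)
    moving = np.abs(night_f - prev_night.astype(np.float64)) > motion_tau
    brightness = night_f / 255.0                      # 0 = dark, 1 = bright
    w = np.clip(brightness + moving, 0.0, 1.0)        # keep highlights and moving objects
    return (w * night_f + (1.0 - w) * day_f).astype(np.uint8)

night = np.random.randint(0, 60, (120, 160), dtype=np.uint8)     # dark scene
prev_night = night.copy()
night[40:60, 70:90] += 80                                        # a lit, moving object
day = np.random.randint(100, 200, (120, 160), dtype=np.uint8)    # daytime background
print("fused frame mean intensity:", enhance_night_frame(night, prev_night, day).mean())
```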
{"title":"Illumination and motion-based video enhancement for night surveillance","authors":"Jing Li, S.Z. Li, Q. Pan, Tao Yang","doi":"10.1109/VSPETS.2005.1570912","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570912","url":null,"abstract":"This work presents a context enhancement method of low illumination video for night surveillance. A unique characteristic of the algorithm is its ability to extract and maintenance the meaningful information like highlight area or moving objects with low contrast in the enhanced image, meanwhile recover the surrounding scene information by fusing the daytime background image. A main challenge comes from the extraction of meaningful area in the night video sequence. To address this problem, a novel bidirectional extraction approach is presented. In evaluation experiments with real data, the notable information of the night video is extracted successfully and the background scene is fused smoothly with the night images to show enhanced surveillance video for observers.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132559148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 35
Distant targets identification as an on-line dynamic vehicle routing problem using an active-zooming camera
A. del Bimbo, F. Pernici
This paper considers the problem of modeling an active observer to plan a sequence of decisions regarding what target to look at, through a foveal-sensing action. The images gathered by the active observer provide meaningful identification imagery of distant targets that are not recognizable in a wide-angle view. We propose a framework in which a pan/tilt/zoom (PTZ) camera schedules saccades in order to acquire high-resolution images of as many moving targets as possible before they leave the scene. We cast the whole problem as a particular kind of dynamic discrete optimization, specifically as a novel on-line dynamic vehicle routing problem (DVRP) with deadlines. We show that with an optimal choice of the sensing order of targets, the total time the active camera spends visiting the targets can be significantly reduced. To show the effectiveness of our approach we apply congestion analysis to a dual-camera system in a master-slave configuration. We report that our framework gives good results in monitoring wide areas with little extra cost compared to approaches using a large number of cameras.
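To make the scheduling idea concrete, the sketch below greedily orders saccades by earliest deadline among the targets still reachable, paying a slew cost proportional to angular distance; this earliest-deadline-first heuristic and its parameters are our own simplification, not the paper's on-line DVRP solution.

```python
def schedule_saccades(targets, slew_speed=50.0, dwell=0.5):
    """targets: name -> (pan_angle_deg, deadline_s). Returns the order of foveal visits."""
    time, angle, order = 0.0, 0.0, []
    remaining = dict(targets)
    while remaining:
        # Among targets still reachable before they leave the scene, take the tightest deadline.
        reachable = {n: (a, d) for n, (a, d) in remaining.items()
                     if time + abs(a - angle) / slew_speed + dwell <= d}
        if not reachable:
            break
        name, (a, _) = min(reachable.items(), key=lambda kv: kv[1][1])
        time += abs(a - angle) / slew_speed + dwell
        angle = a
        order.append(name)
        del remaining[name]
    return order

targets = {"t1": (10.0, 2.0), "t2": (40.0, 3.5), "t3": (-30.0, 1.5), "t4": (80.0, 6.0)}
print("visit order:", schedule_saccades(targets))
```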
{"title":"Distant targets identification as an on-line dynamic vehicle routing problem using an active-zooming camera","authors":"A. del Bimbo, F. Pernici","doi":"10.1109/VSPETS.2005.1570903","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570903","url":null,"abstract":"This paper considers the problem of modeling an active observer to plan a sequence of decisions regarding what target to look at, through a foveal-sensing action. The gathered images by the active observer provides meaningful identification imagery of distant targets which are not recognizable in a wide angle view. We propose a framework in which a pan/tilt/zoom (PTZ) camera schedules saccades in order to acquire high resolution images of as many moving targets as possible before they leave the scene. We cast the whole problem as a particular kind of dynamic discrete optimization, specially as a novel on-line dynamic vehicle routing problem (DVRP) with deadlines. We show that using an optimal choice for the sensing order of targets the total time spent in visiting the targets by the active camera can be significantly reduced. To show the effectiveness of our approach we apply congestion analysis to a dual camera system in a master-slave configuration. We report that our framework gives good results in monitoring wide areas with little extra costs with respect to approaches using a large number of cameras.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130270455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 17