
Latest Publications: 2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance

The Terrascope Dataset: Scripted Multi-Camera Indoor Video Surveillance with Ground-truth
C. Jaynes, A. Kale, N. Sanders, E. Grossmann
This paper introduces a new video surveillance dataset that was captured by a network of synchronized cameras placed throughout an indoor setting and augmented with ground-truth data. The dataset includes ten minutes of footage of individuals moving throughout the sensor network. In addition, three scripted scenarios that contain behaviors exhibited over a wide area, such as "gathering for a meeting" or "stealing an object", are included to assist researchers who are interested in wide-area surveillance and behavior recognition. In addition to the video data, a face and gait database for all twelve individuals observed by the network of cameras is supplied. Hand-segmented ground-truth foreground regions are provided for every 500th frame in all cameras and for many sequential frames in two overlapping views. The entrance and exit time of each individual in each camera for one of the scenarios is provided in an XML database. We believe that the dataset will help provide a common development and verification framework for the increasing number of research efforts related to video surveillance in multiple, potentially non-overlapping, camera networks.
DOI: 10.1109/VSPETS.2005.1570930
Citations: 17
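The per-camera entrance/exit ground truth is distributed as an XML database. The layout below is hypothetical (the abstract does not publish the schema); a sketch of loading such annotations with Python's `ElementTree`:

```python
import xml.etree.ElementTree as ET

# Hypothetical layout for per-camera entrance/exit annotations; the
# actual Terrascope schema is not reproduced in the abstract.
SAMPLE = """
<groundtruth scenario="meeting">
  <camera id="3">
    <person id="7" enter_frame="120" exit_frame="940"/>
    <person id="2" enter_frame="300" exit_frame="610"/>
  </camera>
</groundtruth>
"""

def load_intervals(xml_text):
    """Return {(camera_id, person_id): (enter, exit)} from XML like SAMPLE."""
    root = ET.fromstring(xml_text)
    intervals = {}
    for cam in root.iter("camera"):
        for person in cam.iter("person"):
            key = (int(cam.get("id")), int(person.get("id")))
            intervals[key] = (int(person.get("enter_frame")),
                              int(person.get("exit_frame")))
    return intervals
```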
Evaluation of MPEG7 color descriptors for visual surveillance retrieval
J. Annesley, J. Orwell, John-Paul Renno
This paper presents the results of evaluating the effectiveness of MPEG-7 color descriptors for visual surveillance retrieval problems. A set of image sequences of pedestrians entering and leaving a room, viewed by two cameras, is used to create a test set. The problem posed is the correct identification of other sequences showing the same person as the one contained in an example image. Color descriptors from the MPEG-7 standard are used, including dominant color, color layout, color structure, and scalable color. Experiments are presented that compare the performance of these descriptors, and that compare automatic and manual techniques to examine the sensitivity of the retrieval rate to segmentation accuracy. In addition, results are presented on innovative methods to combine the output from different descriptors, and also from different components of the observed people. The evaluation measure used is the ANMRR, a standard in content-based retrieval experiments.
DOI: 10.1109/VSPETS.2005.1570904
Citations: 28
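ANMRR (Average Normalized Modified Retrieval Rank) scores a ranked result list against a ground-truth set, with 0 meaning perfect retrieval and 1 meaning complete failure. A minimal sketch of the standard computation, with the search window K fixed to 2·NG for simplicity (the MPEG-7 definition caps K differently across query sets):

```python
import numpy as np

def anmrr(retrieved, ground_truth):
    """Average Normalized Modified Retrieval Rank.

    retrieved: per-query lists of item ids in ranked order.
    ground_truth: per-query sets of relevant item ids.
    Returns ANMRR in [0, 1]; 0 is perfect retrieval.
    """
    nmrrs = []
    for ranked, relevant in zip(retrieved, ground_truth):
        ng = len(relevant)
        k = 2 * ng  # simplified window size
        ranks = []
        for item in relevant:
            if item in ranked[:k]:
                ranks.append(ranked.index(item) + 1)  # 1-based rank
            else:
                ranks.append(1.25 * k)  # penalty for items outside the window
        avr = np.mean(ranks)
        mrr = avr - 0.5 - ng / 2
        nmrr = mrr / (1.25 * k - 0.5 - ng / 2)
        nmrrs.append(nmrr)
    return float(np.mean(nmrrs))
```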
Evaluation of Motion Segmentation Quality for Aircraft Activity Surveillance
J. Aguilera, H. Wildenauer, M. Kampel, M. Borg, D. Thirde, J. Ferryman
Recent interest has been shown in performance evaluation of visual surveillance systems. The main purpose of performance evaluation of computer vision systems is statistical testing and tuning in order to improve an algorithm's reliability and robustness. In this paper we investigate the use of empirical discrepancy metrics for quantitative analysis of motion segmentation algorithms. We are concerned with the case of visual surveillance on an airport's apron, that is, the area where aircraft are parked and serviced by specialized ground-support vehicles. Robust detection of individuals and vehicles is of major concern for the purpose of tracking objects and understanding the scene. In this paper, different discrepancy metrics for motion segmentation evaluation are illustrated and used to assess the performance of three motion segmentors on video sequences of an airport's apron.
DOI: 10.1109/VSPETS.2005.1570928
Citations: 42
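The paper's specific metrics are not reproduced here, but typical empirical discrepancy measures compare a hand-labelled foreground mask with the segmentation output pixel by pixel; a sketch:

```python
import numpy as np

def discrepancy_metrics(gt, seg):
    """Empirical discrepancy between a ground-truth foreground mask and a
    motion-segmentation result (both boolean arrays of the same shape)."""
    tp = np.logical_and(gt, seg).sum()    # foreground correctly detected
    fp = np.logical_and(~gt, seg).sum()   # background marked as foreground
    fn = np.logical_and(gt, ~seg).sum()   # foreground missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```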
Performance evaluation of a real time video surveillance system
S. Muller-Schneiders, T. Jager, H. Loos, W. Niem
This paper presents a thorough introduction to the real-time video surveillance system developed at Bosch Corporate Research, with robustness as the major design goal. A robust surveillance system should aim in particular for a low number of false positives, since surveillance guards may be distracted by too many alarms caused by, e.g., moving trees, rain, small camera motion, or varying illumination conditions. Since a missed security-related event could pose a serious threat to an installation site, the aforementioned criterion is obviously not sufficient for designing a robust system, and a low number of false negatives should therefore be achieved simultaneously. Because the false-negative rate should ideally be zero, the surveillance system must be able to cope with varying illumination conditions, low contrast, and occlusion. Besides presenting the building blocks of our video surveillance system, the measures taken to achieve robustness are illustrated in this paper. Since our system is based on algorithms for video motion detection, described e.g. in M. Mayer et al. (1996), the previous set of algorithms had to be extended into a complete video content analysis system. This transition from simple motion detection to video content analysis is also discussed in the following. In order to measure the performance of our system, quality measures calculated for various PETS sequences are presented.
DOI: 10.1109/VSPETS.2005.1570908
Citations: 59
Evaluation of object tracking for aircraft activity surveillance
D. Thirde, M. Borg, J. Aguilera, J. Ferryman, K. Baker, M. Kampel
This paper presents the evaluation of an object tracking system that has been developed in the context of aircraft activity monitoring. The overall tracking system comprises three main modules - motion detection, object tracking and data fusion. In this paper we primarily focus on performance evaluation of the object tracking module, with emphasis given to the general 2D tracking performance and the 3D object localisation.
DOI: 10.1109/VSPETS.2005.1570909
Citations: 17
Rao-Blackwellised particle filter for tracking with application in visual surveillance
Xinyu Xu, Baoxin Li
Particle filters have become popular tools for visual tracking since they do not require the system model to be linear and Gaussian. However, when applied to a high-dimensional state space, particle filters can be inefficient because a prohibitively large number of samples may be required in order to approximate the underlying density functions with the desired accuracy. In this paper, by proposing a tracking algorithm based on the Rao-Blackwellised particle filter (RBPF), we show how to exploit the analytical relationship between state variables to improve the efficiency and accuracy of a regular particle filter. Essentially, we estimate some of the state variables as in a regular particle filter, and the distributions of the remaining variables are updated analytically using an exact filter (a Kalman filter in this paper). We discuss how the proposed method can be applied to facilitate the visual tracking task in typical surveillance applications. Experiments using both simulated data and real video sequences show that the proposed method results in more accurate and more efficient tracking than a regular particle filter.
DOI: 10.1109/VSPETS.2005.1570893
Citations: 28
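The core RBPF idea, sampling the intractable part of the state while running an exact Kalman update on the conditionally linear part, can be sketched on a toy scalar model (all dynamics and noise values below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbpf_step(particles, means, vars_, z, q=0.1, r=0.5):
    """One step of a toy Rao-Blackwellised particle filter.

    Each particle carries a sampled nonlinear state s and a Kalman
    estimate (mean, var) of a conditionally linear state v, with
    observation z = s + v + noise of variance r.
    """
    n = len(particles)
    # 1. Propagate the sampled (nonlinear) states with random-walk noise.
    particles = particles + rng.normal(0.0, q, n)
    # 2. Kalman prediction for the analytically tracked states.
    vars_ = vars_ + q
    # 3. Weight particles by the marginal likelihood p(z | s),
    #    with v integrated out analytically.
    pred = particles + means
    s_var = vars_ + r
    w = np.exp(-0.5 * (z - pred) ** 2 / s_var) / np.sqrt(2 * np.pi * s_var)
    w /= w.sum()
    # 4. Kalman update of the linear states given z.
    gain = vars_ / s_var
    means = means + gain * (z - pred)
    vars_ = (1 - gain) * vars_
    # 5. Resample according to the weights.
    idx = rng.choice(n, n, p=w)
    return particles[idx], means[idx], vars_[idx]
```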
Reconstruction of 3D Face from a Single 2D Image for Face Recognition
Yuankui Hu, Ying Zheng, Zengfu Wang
In this paper, a synthetic-exemplar-based framework for face recognition under varying pose and illumination is proposed. Our aim is to construct a face recognition system using only a single frontal face image of each person. The framework consists of three main parts. First, a deformation-based 3D face modeling technique is introduced to create an individual 3D face model from a single frontal face image using a generic 3D face model. Then, virtual faces are synthesized for recognition under various lighting conditions and views. Finally, an Eigenfaces-based classifier is constructed, with the synthesized virtual faces used as training exemplars. The experimental results show that the proposed 3D face modeling technique is efficient and that the synthetic face exemplars can significantly improve the accuracy of face recognition under varying pose and illumination.
DOI: 10.1109/VSPETS.2005.1570918
Citations: 17
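The Eigenfaces classifier that the framework builds on is ordinary PCA on vectorised face images; a minimal sketch:

```python
import numpy as np

def eigenfaces(images, k):
    """Fit an Eigenfaces basis: PCA on vectorised face images.

    images: (n, d) array, one flattened face per row.
    Returns (mean, basis) where basis is (k, d) with orthonormal rows.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data; right singular vectors are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    """Coefficients of one face in the Eigenfaces basis."""
    return basis @ (face - mean)
```

Recognition then amounts to nearest-neighbour matching of the projected coefficients against those of the training exemplars.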
A Comparison of Active-Contour Models Based on Blurring and on Marginalization
A. Pece
Many different active-contour models have been proposed over the last 20 years, but very few comparisons between them have been carried out. Further, most of these comparisons have been either exclusively theoretical or exclusively experimental. This paper presents a combined theoretical and experimental comparison between two contour models. The models are put into a common theoretical framework and performance comparisons are carried out on a vehicle tracking task in the PETS test sequences. Using a Condensation tracker helps to find the few frames where either model fails to provide a good fit to the image. The results show that (a) neither model has a definitive advantage over the other, and (b) Kalman filtering might actually be more effective than particle filtering for both models.
DOI: 10.1109/VSPETS.2005.1570933
Citations: 2
Face recognition through mismatch driven representations of the face
S. Lucey, Tsuhan Chen
Performance of face verification systems can be adversely affected by a number of different mismatches (e.g. illumination, expression, alignment, etc.) between gallery and probe images. In this paper, we demonstrate that representations of the face used during the verification process should be driven by their sensitivity to these mismatches. Two representation categories of the face are proposed, parts and reflectance, each motivated by their own properties of invariance and sensitivity to different types of mismatches (i.e. spatial and spectral). We additionally demonstrate that the employment of the sum rule gives approximately equivalent performance to more exotic combination strategies based on support vector machine (SVM) classifiers, without the need for training on a tuning set. Improved performance is demonstrated, with a reduction in false reject rate of over 30% when compared to the single representation algorithm. Experiments were conducted on a subset of the challenging face recognition grand challenge (FRGC) v1.0 dataset.
DOI: 10.1109/VSPETS.2005.1570915
Citations: 9
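The sum rule the authors favour simply adds the match scores produced by each face representation before deciding, with no trained combiner; a minimal sketch (assuming each classifier's scores are already normalised to sum to one):

```python
import numpy as np

def sum_rule(score_lists):
    """Combine per-representation match scores by the sum rule.

    score_lists: (m, c) array of m classifiers' posterior-like scores
    over c gallery identities, each row assumed normalised.
    Returns the index of the accepted identity.
    """
    fused = np.asarray(score_lists).sum(axis=0)
    return int(np.argmax(fused))
```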
Data fusion for robust head tracking by particles
Yonggang Jin, F. Mokhtarian
The paper presents a data fusion particle filter for robust head tracking in video surveillance applications. With head detection based on moving-region contour analysis, we propose a data fusion particle filter that fuses head detection results with colour and edge cues for robust head tracking. Connections of the proposed particle filter with previous work are also discussed, where the proposal distributions of M. Isard and A. Blake (1998) and P. Perez et al. (2004) are shown to be an approximation with a fixed ratio of importance and prior samples. Experimental results demonstrate the robustness of head tracking using the proposed data fusion particle filter.
DOI: 10.1109/VSPETS.2005.1570895
Citations: 15
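One common way to fuse colour and edge cues inside a particle filter is to combine the per-particle likelihoods of the two cues into a single weight; a sketch (the geometric mixing used here is illustrative, not necessarily the paper's fusion rule):

```python
import numpy as np

def fuse_cues(color_lik, edge_lik, alpha=0.5):
    """Combine per-particle colour and edge likelihoods into weights.

    alpha controls the relative influence of the colour cue;
    the returned weights are normalised to sum to one.
    """
    w = np.asarray(color_lik) ** alpha * np.asarray(edge_lik) ** (1 - alpha)
    return w / w.sum()
```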