The Terrascope Dataset: Scripted Multi-Camera Indoor Video Surveillance with Ground-truth
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570930
C. Jaynes, A. Kale, N. Sanders, E. Grossmann
This paper introduces a new video surveillance dataset captured by a network of synchronized cameras placed throughout an indoor setting and augmented with ground-truth data. The dataset includes ten minutes of footage of individuals moving throughout the sensor network. Three scripted scenarios containing behaviors exhibited over a wide area, such as "gathering for a meeting" or "stealing an object", are included to assist researchers interested in wide-area surveillance and behavior recognition. In addition to the video data, a face and gait database for all twelve individuals observed by the network of cameras is supplied. Hand-segmented ground-truth foreground regions are provided for every 500th frame in all cameras and for many sequential frames in two overlapping views. The entrance and exit time of each individual in each camera for one of the scenarios is provided in an XML database. We believe that the dataset will provide a common development and verification framework for the increasing number of research efforts related to video surveillance in multiple, potentially non-overlapping, camera networks.
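The entry/exit annotations are distributed as an XML database, though the abstract does not specify its schema. Below is a minimal sketch of how such per-camera entry/exit records might be loaded; the element and attribute names (observation, person, camera, enter, exit) are hypothetical placeholders, not the dataset's actual format.

```python
# Hypothetical loader for per-camera entry/exit ground truth.
# The element/attribute names below are assumptions, not the
# actual Terrascope schema, which the abstract does not specify.
import xml.etree.ElementTree as ET

def load_entry_exit(path):
    """Return a list of (person_id, camera_id, enter_frame, exit_frame)."""
    records = []
    root = ET.parse(path).getroot()
    for obs in root.iter("observation"):
        records.append((
            obs.get("person"),
            obs.get("camera"),
            int(obs.get("enter")),
            int(obs.get("exit")),
        ))
    return records
```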
{"title":"The Terrascope Dataset: Scripted Multi-Camera Indoor Video Surveillance with Ground-truth","authors":"C. Jaynes, A. Kale, N. Sanders, E. Grossmann","doi":"10.1109/VSPETS.2005.1570930","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570930","url":null,"abstract":"This paper introduces a new video surveillance dataset that was captured by a network of synchronized cameras placed throughout an indoor setting and augmented with groundtruth data. The dataset includes ten minutes of individuals who are moving throughout the sensor network. In addition, three scripted scenarios that contain behaviors exhibtied over a wide-area, such as \"gathering for a meeting\" or \"stealing an object\" are included to assist researchers who are interested in wide-area surveillance and behavior recognition. In addition to the video data, a face and gait database for all twelve individuals observed by the network of cameras is supplied. Hand-segmented ground-truth foreground regions are provided for every 500th frame in all cameras and for many sequential frames in two overlapping views. The entrance and exit time of each individual in each camera for one of the scenarios is provided in an XML database. We believe that the dataset will help provide a common development and verifcation framework for the increasing number of research efforts related to video surveillance in multiple, potentially non-overlapping, camera networks.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"189 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133730832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of MPEG7 color descriptors for visual surveillance retrieval
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570904
J. Annesley, J. Orwell, John-Paul Renno
This paper presents results evaluating the effectiveness of MPEG-7 color descriptors on visual surveillance retrieval problems. A set of image sequences of pedestrians entering and leaving a room, viewed by two cameras, is used to create a test set. The problem posed is the correct identification of other sequences showing the same person as contained in an example image. Color descriptors from the MPEG-7 standard are used, including dominant color, color layout, color structure, and scalable color. Experiments are presented that compare the performance of these descriptors, and that compare automatic and manual segmentation techniques to examine the sensitivity of the retrieval rate to segmentation accuracy. In addition, results are presented on innovative methods to combine the output from different descriptors and from different components of the observed people. The evaluation measure used is the ANMRR (Average Normalized Modified Retrieval Rank), a standard in content-based retrieval experiments.
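The ANMRR follows a fixed formulation in the MPEG-7 standard; the sketch below computes it for a set of queries. Variable names are ours, but the formula (penalized ranks, AVR, MRR, NMRR normalization) is the standard one.

```python
# Sketch of the MPEG-7 ANMRR evaluation measure.
def nmrr(retrieved, ground_truth, gtm):
    """NMRR for one query: 0 is perfect retrieval, 1 is worst.

    retrieved    -- ranked list of item ids returned for the query
    ground_truth -- set of relevant item ids for the query
    gtm          -- max ground-truth set size over all queries
    """
    ng = len(ground_truth)
    k = min(4 * ng, 2 * gtm)
    rank_of = {item: r for r, item in enumerate(retrieved, start=1)}
    # Ranks beyond the cutoff k are penalized with the constant 1.25*k.
    ranks = [
        rank_of[g] if rank_of.get(g, k + 1) <= k else 1.25 * k
        for g in ground_truth
    ]
    avr = sum(ranks) / ng                      # average rank
    mrr = avr - 0.5 - ng / 2                   # modified retrieval rank
    return mrr / (1.25 * k - 0.5 * (1 + ng))   # normalized to [0, 1]

def anmrr(queries):
    """queries: list of (retrieved, ground_truth) pairs."""
    gtm = max(len(gt) for _, gt in queries)
    return sum(nmrr(r, gt, gtm) for r, gt in queries) / len(queries)
```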
{"title":"Evaluation of MPEG7 color descriptors for visual surveillance retrieval","authors":"J. Annesley, J. Orwell, John-Paul Renno","doi":"10.1109/VSPETS.2005.1570904","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570904","url":null,"abstract":"This paper presents the results to evaluate the effectiveness of MPEG 7 color descriptors in visual surveillance retrieval problems. A set of image sequences of pedestrians entering and leaving a room, viewed by two cameras, is used to create a test set. The problem posed is the correct identification of other sequences showing the same person as contained in an example image. Color descriptors from the MPEG7 standard are used, including dominant color, color layout, color structure and scalable color experiments are presented that compare the performance of these, and also compare automatic and manual techniques to examine the sensitivity of the retrieval rate on segmentation accuracy. In addition, results are presented on innovative methods to combine the output from different descriptors, and also different components of the observed people. The evaluation measure used is the ANMRR, a standard in content-based retrieval experiments.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131339769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of Motion Segmentation Quality for Aircraft Activity Surveillance
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570928
J. Aguilera, H. Wildenauer, M. Kampel, M. Borg, D. Thirde, J. Ferryman
Recent interest has been shown in the performance evaluation of visual surveillance systems. The main purpose of performance evaluation of computer vision systems is statistical testing and tuning in order to improve an algorithm's reliability and robustness. In this paper we investigate the use of empirical discrepancy metrics for the quantitative analysis of motion segmentation algorithms. We are concerned with the case of visual surveillance on an airport's apron, that is, the area where aircraft are parked and serviced by specialized ground support vehicles. Robust detection of individuals and vehicles is of major concern for the purpose of tracking objects and understanding the scene. In this paper, different discrepancy metrics for motion segmentation evaluation are illustrated and used to assess the performance of three motion segmentation algorithms on video sequences of an airport's apron.
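The paper's specific metrics are not reproduced in the abstract; as a reference point, the following sketch computes representative pixel-wise discrepancy measures (false positive/negative rates, precision, recall) between a ground-truth mask and a detected foreground mask.

```python
# Minimal sketch of typical empirical discrepancy metrics for
# motion segmentation, computed pixel-wise between a ground-truth
# mask and a detected foreground mask. These are representative
# measures, not necessarily the exact metrics used in the paper.
import numpy as np

def discrepancy_metrics(gt_mask, det_mask):
    """Both inputs are boolean arrays of the same shape."""
    tp = np.sum(gt_mask & det_mask)    # foreground pixels found
    fp = np.sum(~gt_mask & det_mask)   # background marked foreground
    fn = np.sum(gt_mask & ~det_mask)   # foreground missed
    return {
        "false_positive_rate": fp / max(np.sum(~gt_mask), 1),
        "false_negative_rate": fn / max(np.sum(gt_mask), 1),
        "precision": tp / max(tp + fp, 1),
        "recall": tp / max(tp + fn, 1),
    }
```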
{"title":"Evaluation of Motion Segmentation Quality for Aircraft Activity Surveillance","authors":"J. Aguilera, H. Wildenauer, M. Kampel, M. Borg, D. Thirde, J. Ferryman","doi":"10.1109/VSPETS.2005.1570928","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570928","url":null,"abstract":"Recent interest has been shown in performance evaluation of visual surveillance systems. The main purpose of performance evaluation on computer vision systems is the statistical testing and tuning in order to improve algorithm's reliability and robustness. In this paper we investigate the use of empirical discrepancy metrics for quantitative analysis of motion segmentation algorithms. We are concerned with the case of visual surveillance on an airport's apron, that is the area where aircrafts are parked and serviced by specialized ground support vehicles. Robust detection of individuals and vehicles is of major concern for the purpose of tracking objects and understanding the scene. In this paper, different discrepancy metrics for motion segmentation evaluation are illustrated and used to assess the performance of three motion segmentors on video sequences of an airport's apron.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133108897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance evaluation of a real time video surveillance system
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570908
S. Muller-Schneiders, T. Jager, H. Loos, W. Niem
This paper presents a thorough introduction to the real-time video surveillance system developed at Bosch Corporate Research with robustness as the major design goal. A robust surveillance system should especially aim for a low number of false positives, since surveillance guards may be distracted by too many alarms caused by, e.g., moving trees, rain, small camera motion, or varying illumination conditions. Since a missed security-related event could pose a serious threat to an installation site, the aforementioned criterion is obviously not sufficient for designing a robust system, and thus a low number of false negatives should be achieved simultaneously. Because the false negative rate should ideally be zero, the surveillance system must be able to cope with varying illumination conditions, low contrast, and occlusion situations. Besides presenting the building blocks of our video surveillance system, the measures taken to achieve robustness are illustrated in this paper. Since our system is based on algorithms for video motion detection, as described, e.g., in M. Mayer et al. (1996), the previous set of algorithms had to be extended into a complete video content analysis system. This transition from simple motion detection to video content analysis is also discussed. In order to measure the performance of our system, quality measures calculated for various PETS sequences are presented.
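At the event level, the false positive/false negative trade-off described above can be scored by matching alarm intervals against ground-truth event intervals. The sketch below is one plausible scoring routine; the interval-overlap matching rule is our assumption, not a procedure taken from the paper.

```python
# Sketch of event-level scoring: count missed events (false
# negatives) and spurious alarms (false positives) by interval
# overlap. The matching rule is an assumption for illustration.
def score_alarms(gt_events, alarms):
    """Intervals are (start, end) tuples in frames or seconds."""
    def overlaps(a, b):
        return a[0] <= b[1] and b[0] <= a[1]

    misses = sum(1 for e in gt_events
                 if not any(overlaps(e, a) for a in alarms))
    false_alarms = sum(1 for a in alarms
                       if not any(overlaps(a, e) for e in gt_events))
    return {"false_negatives": misses, "false_positives": false_alarms}
```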
{"title":"Performance evaluation of a real time video surveillance system","authors":"S. Muller-Schneiders, T. Jager, H. Loos, W. Niem","doi":"10.1109/VSPETS.2005.1570908","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570908","url":null,"abstract":"This paper presents a thorough introduction to the real time video surveillance system which has been developed at Bosch Corporate Research considering robustness as the major design goal. A robust surveillance system should especially aim for a low number of false positives since surveillance guards might get distracted by too many alarms caused by, e.g., moving trees, rain, small camera motion, or varying illumination conditions. Since a missed security related event could cause a serious threat for an installation site, the before mentioned criterion is obviously not sufficient for designing a robust system and thus a low number of false negatives should simultaneously be achieved. Due to the fact that the false negative rate should ideally be equal to zero, the surveillance system should be able to cope with varying illumination conditions, low contrast and occlusion situations. Besides presenting the building blocks of our video surveillance system, the measures taken to achieve robustness is illustrated in this paper. Since our system is based on algorithms for video motion detection, which has been described e.g. in M. Mayer et al., (1996), the previous set of algorithms had to be extended to feature a complete video content analysis system. This transition from simple motion detection to video content analysis is also discussed in the following. In order to measure the performance of our system, quality measures calculated for various PETS sequences is presented.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125933594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of object tracking for aircraft activity surveillance
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570909
D. Thirde, M. Borg, J. Aguilera, J. Ferryman, K. Baker, M. Kampel
This paper presents the evaluation of an object tracking system developed in the context of aircraft activity monitoring. The overall tracking system comprises three main modules: motion detection, object tracking, and data fusion. In this paper we focus primarily on performance evaluation of the object tracking module, with emphasis on general 2D tracking performance and 3D object localisation.
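As an illustration of the two kinds of measures emphasized here, the sketch below computes a per-frame 2D bounding-box overlap (IoU) and a 3D localisation error against ground truth; the paper's exact metrics may differ.

```python
# Representative per-frame measures for 2D tracking performance
# (bounding-box intersection over union) and 3D localisation
# (Euclidean distance to the ground-truth position).
import math

def bbox_iou(a, b):
    """Boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def localisation_error(p, q):
    """3D points as (x, y, z), e.g. in metres on the ground plane."""
    return math.dist(p, q)
```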
{"title":"Evaluation of object tracking for aircraft activity surveillance","authors":"D. Thirde, M. Borg, J. Aguilera, J. Ferryman, K. Baker, M. Kampel","doi":"10.1109/VSPETS.2005.1570909","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570909","url":null,"abstract":"This paper presents the evaluation of an object tracking system that has been developed in the context of aircraft activity monitoring. The overall tracking system comprises three main modules - motion detection, object tracking and data fusion. In this paper we primarily focus on performance evaluation of the object tracking module, with emphasis given to the general 2D tracking performance and the 3D object localisation.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122778120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rao-Blackwellised particle filter for tracking with application in visual surveillance
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570893
Xinyu Xu, Baoxin Li
Particle filters have become popular tools for visual tracking since they do not require the modeled system to be linear and Gaussian. However, when applied to a high-dimensional state space, particle filters can be inefficient because a prohibitively large number of samples may be required to approximate the underlying density functions with the desired accuracy. In this paper, by proposing a tracking algorithm based on the Rao-Blackwellised particle filter (RBPF), we show how to exploit the analytical relationship between state variables to improve the efficiency and accuracy of a regular particle filter. Essentially, we estimate some of the state variables as in a regular particle filter, while the distributions of the remaining variables are updated analytically using an exact filter (a Kalman filter in this paper). We discuss how the proposed method can be applied to facilitate visual tracking in typical surveillance applications. Experiments using both simulated data and real video sequences show that the proposed method results in more accurate and more efficient tracking than a regular particle filter.
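To make the idea concrete, here is a minimal RBPF sketch for a jump Markov linear system: a discrete maneuver mode stands in for the sampled (nonlinear) state variables, and conditioned on each particle's mode the remaining state is linear-Gaussian and updated exactly by a Kalman filter, whose innovation likelihood supplies the particle weight. The model (constant-velocity dynamics, two process-noise modes) is our illustrative assumption, not the paper's specific state partition.

```python
# Minimal Rao-Blackwellised particle filter sketch: particles carry
# a sampled discrete mode; the continuous state is marginalized
# analytically with a per-particle Kalman filter.
import numpy as np

rng = np.random.default_rng(0)

# Conditionally linear-Gaussian model, x = [pos, vel], z = pos + noise.
F = np.array([[1.0, 1.0], [0.0, 1.0]])          # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                      # position measurement
R = np.array([[1.0]])                           # measurement noise
Qs = [np.diag([0.01, 0.01]), np.diag([0.5, 0.5])]  # per-mode process noise
TRANS = np.array([[0.95, 0.05], [0.05, 0.95]])  # mode transition matrix

def rbpf_step(particles, z):
    """particles: list of dicts with keys 'mode', 'mean', 'cov', 'w'.
    Initialize as [{'mode': 0, 'mean': np.zeros(2),
                    'cov': np.eye(2), 'w': 1.0} for _ in range(N)]."""
    for p in particles:
        # 1) Sample the nonlinear (here: discrete) state variable.
        p["mode"] = rng.choice(2, p=TRANS[p["mode"]])
        # 2) Kalman prediction conditioned on the sampled mode.
        m, P = F @ p["mean"], F @ p["cov"] @ F.T + Qs[p["mode"]]
        # 3) Exact Kalman update; the innovation likelihood is the weight.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        innov = z - H @ m
        p["mean"] = m + (K @ innov).ravel()
        p["cov"] = (np.eye(2) - K @ H) @ P
        lik = np.exp(-0.5 * innov.T @ np.linalg.inv(S) @ innov)
        p["w"] *= float(lik / np.sqrt(2 * np.pi * np.linalg.det(S)))
    # 4) Normalize and resample (multinomial, for simplicity).
    w = np.array([p["w"] for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [{**particles[i], "w": 1.0} for i in idx]
```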
{"title":"Rao-Blackwellised particle filter for tracking with application in visual surveillance","authors":"Xinyu Xu, Baoxin Li","doi":"10.1109/VSPETS.2005.1570893","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570893","url":null,"abstract":"Particle filters have become popular tools for visual tracking since they do not require the modeling system to be Gaussian and linear. However, when applied to a high dimensional state-space, particle filters can be inefficient because a prohibitively large number of samples may be required in order to approximate the underlying density functions with desired accuracy. In this paper, by proposing a tracking algorithm based on Rao-Blackwellised particle filter (RBPF), we show how to exploit the analytical relationship between state variables to improve the efficiency and accuracy of a regular particle filter. Essentially, we estimate some of the state variables as in a regular particle filter, and the distributions of the remaining variables are updated analytically using an exact filter (Kalman filter in this paper). We discuss how the proposed method can be applied to facilitate the visual tracking task in typical surveillance applications. Experiments using both simulated data and real video sequences show that the proposed method results in more accurate and more efficient tracking than a regular particle filter.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131656790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reconstruction of 3D Face from a Single 2D Image for Face Recognition
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570918
Yuankui Hu, Ying Zheng, Zengfu Wang
In this paper, a synthetic-exemplar-based framework for face recognition under varying pose and illumination is proposed. Our purpose is to construct a face recognition system from only a single frontal face image of each person. The framework consists of three main parts. First, a deformation-based 3D face modeling technique is introduced to create an individual 3D face model from a single frontal face image with the aid of a generic 3D face model. Then, virtual faces under various lighting conditions and viewpoints are synthesized. Finally, an Eigenfaces-based classifier is constructed, using the synthesized virtual faces as training exemplars. The experimental results show that the proposed 3D face modeling technique is efficient and that the synthetic face exemplars significantly improve the accuracy of face recognition under varying pose and illumination.
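The final stage is a standard Eigenfaces classifier trained on the synthesized virtual faces. A minimal sketch, assuming flattened grayscale images and nearest-neighbour matching in the eigenspace (the paper's exact distance measure is not stated in the abstract):

```python
# Eigenfaces sketch: PCA on the training images, then
# nearest-neighbour matching in the projected space.
import numpy as np

def train_eigenfaces(images, labels, n_components):
    """images: (N, H*W) array of flattened training faces."""
    mean = images.mean(axis=0)
    X = images - mean
    # SVD of the centered data; rows of Vt are the eigenfaces.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    eigenfaces = Vt[:n_components]
    projections = X @ eigenfaces.T
    return mean, eigenfaces, projections, np.asarray(labels)

def classify(face, model):
    """Return the label of the nearest training face in eigenspace."""
    mean, eigenfaces, projections, labels = model
    proj = (face - mean) @ eigenfaces.T
    dists = np.linalg.norm(projections - proj, axis=1)
    return labels[np.argmin(dists)]
```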
{"title":"Reconstruction of 3D Face from a Single 2D Image for Face Recognition","authors":"Yuankui Hu, Ying Zheng, Zengfu Wang","doi":"10.1109/VSPETS.2005.1570918","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570918","url":null,"abstract":"In this paper, a synthetic exemplar based framework for face recognition with variant pose and illumination is proposed. Our purpose is to construct a face recognition system only according to one single frontal face image of each person for recognition. The framework consists of three main parts. First, a deformation based 3D face modeling technique is introduced to create an individual 3D face model from a single frontal face image of a person with a generic 3D face model. Then, the virtual faces for recognition at various lightings and views are synthesized. Finally, an Eigenfaces based classifier is constructed where the virtual faces synthesized are used as training exemplars. The experimental results show that the proposed 3D face modeling technique is efficient and the synthetic face exemplars can significantly improve the accuracy of face recognition with variant pose and illumination.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127464234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Comparison of Active-Contour Models Based on Blurring and on Marginalization
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570933
A. Pece
Many different active-contour models have been proposed over the last 20 years, but very few comparisons between them have been carried out. Further, most of these comparisons have been either exclusively theoretical or exclusively experimental. This paper presents a combined theoretical and experimental comparison between two contour models. The models are put into a common theoretical framework and performance comparisons are carried out on a vehicle tracking task in the PETS test sequences. Using a Condensation tracker helps to find the few frames where either model fails to provide a good fit to the image. The results show that (a) neither model has a definitive advantage over the other, and (b) Kalman filtering might actually be more effective than particle filtering for both models.
{"title":"A Comparison of Active-Contour Models Based on Blurring and on Marginalization","authors":"A. Pece","doi":"10.1109/VSPETS.2005.1570933","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570933","url":null,"abstract":"Many different active-contour models have been proposed over the last 20 years, but very few comparisons between them have been carried out. Further, most of these comparisons have been either exclusively theoretical or exclusively experimental. This paper presents a combined theoretical and experimental comparison between two contour models. The models are put into a common theoretical framework and performance comparisons are carried out on a vehicle tracking task in the PETS test sequences. Using a Condensation tracker helps to find the few frames where either model fails to provide a good fit to the image. The results show that (a) neither model has a definitive advantage over the other, and (b) Kalman filtering might actually be more effective than particle filtering for both models.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114797946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face recognition through mismatch driven representations of the face
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570915
S. Lucey, Tsuhan Chen
The performance of face verification systems can be adversely affected by a number of different mismatches (e.g., illumination, expression, alignment) between gallery and probe images. In this paper, we demonstrate that the representations of the face used during the verification process should be driven by their sensitivity to these mismatches. Two representation categories of the face are proposed, parts and reflectance, each motivated by its own properties of invariance and sensitivity to different types of mismatches (i.e., spatial and spectral). We additionally demonstrate that employing the sum rule gives approximately equivalent performance to more exotic combination strategies based on support vector machine (SVM) classifiers, without the need for training on a tuning set. Improved performance is demonstrated, with a reduction in false reject rate of over 30% compared to the single-representation algorithm. Experiments were conducted on a subset of the challenging Face Recognition Grand Challenge (FRGC) v1.0 dataset.
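The sum rule itself is simple: normalize each matcher's scores to a common range and add them. A minimal sketch, where the min-max normalization is our assumption rather than the paper's stated procedure:

```python
# Sum-rule score fusion: normalize each representation's match
# scores, then sum them across representations.
import numpy as np

def minmax(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def sum_rule(score_lists):
    """score_lists: one array of match scores per representation,
    aligned by candidate; returns the fused scores."""
    return np.sum([minmax(s) for s in score_lists], axis=0)

# Example: fuse scores from a parts-based and a reflectance-based
# matcher; a verification decision then thresholds the fused score.
fused = sum_rule([[0.2, 0.9, 0.4], [0.1, 0.8, 0.5]])
```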
{"title":"Face recognition through mismatch driven representations of the face","authors":"S. Lucey, Tsuhan Chen","doi":"10.1109/VSPETS.2005.1570915","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570915","url":null,"abstract":"Performance of face verification systems can be adversely affected by a number of different mismatches (e.g. illumination, expression, alignment, etc.) between gallery and probe images. In this paper, we demonstrate that representations of the face used during the verification process should be driven by their sensitivity to these mismatches. Two representation categories of the face are proposed, parts and reflectance, each motivated by their own properties of invariance and sensitivity to different types of mismatches (i.e. spatial and spectral). We additionally demonstrate that the employment of the sum rule gives approximately equivalent performance to more exotic combination strategies based on support vector machine (SVM) classifiers, without the need for training on a tuning set. Improved performance is demonstrated, with a reduction in false reject rate of over 30% when compared to the single representation algorithm. Experiments were conducted on a subset of the challenging face recognition grand challenge (FRGC) v1.0 dataset.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123451688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data fusion for robust head tracking by particles
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570895
Yonggang Jin, F. Mokhtarian
The paper presents a data fusion particle filter for robust head tracking in video surveillance applications. With head detection based on moving-region contour analysis, we propose a data fusion particle filter that fuses head detection results with colour and edge cues for robust head tracking. Connections of the proposed particle filter to previous work are also discussed: the proposal distributions of M. Isard and A. Blake (1998) and P. Pérez et al. (2004) are shown to be approximations with a fixed ratio of importance and prior samples. Experimental results demonstrate the robustness of head tracking using the proposed data fusion particle filter.
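The fusion step of such a filter can be sketched as a combined particle weighting, where colour and edge likelihoods are multiplied under an assumption of conditional independence between cues; the likelihood models themselves are placeholders here, not the paper's.

```python
# Sketch of multi-cue particle weighting: each particle's weight
# combines independent colour and edge likelihoods multiplicatively.
import numpy as np

def fuse_weights(particles, color_lik, edge_lik):
    """color_lik, edge_lik: callables mapping a particle state
    to a scalar likelihood for that cue."""
    w = np.array([color_lik(p) * edge_lik(p) for p in particles])
    return w / w.sum()  # normalized fused weights
```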
{"title":"Data fusion for robust head tracking by particles","authors":"Yonggang Jin, F. Mokhtarian","doi":"10.1109/VSPETS.2005.1570895","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570895","url":null,"abstract":"The paper presents a data fusion panicle filter for robust head tracking in video surveillance applications. With head detection based on moving region contour analysis, we propose a data fusion particle filter to fuse head detection results with colour and edge cues for robust head tracking. Connections of the proposed particle filter with previous work are also discussed where proposal distributions of M. Isard and A. Blake (1998) and P.Perez et al., (2004) are shown to be an approximation with fixed ratio of importance and prior samples. Experimental results demonstrate the robustness of head tracking using proposed data fusion particle filter.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125393652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}