Real-time tracking of single people and groups simultaneously by contextual graph-based reasoning dealing complex occlusions
P. Foggia, G. Percannella, A. Saggese, M. Vento
Pub Date: 2013-06-06, DOI: 10.1109/PETS.2013.6523792
In this paper we present a real-time tracking algorithm able to follow single objects and groups of objects simultaneously. The proposed method improves on the approach we recently proposed in [1], which exploits the history of moving objects by means of a Finite State Automaton. The main novelty of the proposed method lies in the strategy used to associate the evidence in the current frame with the objects tracked in the previous one. This strategy considers only the feasible combinations, by means of an efficient and robust graph-based approach that exploits the spatio-temporal continuity of moving objects. The method has been compared, on a standard dataset, with the participants in the international PETS 2010 contest, confirming good efficiency and generality.
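The abstract does not detail the authors' graph-based association step; as a rough illustration of the underlying idea (prune track/detection pairs that violate spatial continuity, then resolve the survivors), here is a minimal hypothetical sketch using a distance gate and greedy matching, not the paper's actual algorithm:

```python
import math

def feasible_pairs(tracks, detections, gate=50.0):
    """Keep only track/detection pairs whose centers lie within a spatial
    gate -- a crude stand-in for a spatio-temporal feasibility graph."""
    pairs = []
    for ti, (tx, ty) in enumerate(tracks):
        for di, (dx, dy) in enumerate(detections):
            dist = math.hypot(tx - dx, ty - dy)
            if dist <= gate:
                pairs.append((dist, ti, di))
    return sorted(pairs)  # closest pairs first

def greedy_associate(tracks, detections, gate=50.0):
    """Greedily assign each track to its nearest gated detection."""
    used_t, used_d, assignment = set(), set(), {}
    for dist, ti, di in feasible_pairs(tracks, detections, gate):
        if ti not in used_t and di not in used_d:
            assignment[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return assignment

tracks = [(10.0, 10.0), (100.0, 100.0)]
detections = [(98.0, 103.0), (12.0, 9.0), (400.0, 400.0)]
print(greedy_associate(tracks, detections))  # {0: 1, 1: 0}
```

An optimal (rather than greedy) resolution of the gated pairs would use e.g. the Hungarian algorithm; the gating step is what keeps the combination set small either way.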
Improved mean shift for multi-target tracking
G. Phadke, R. Velmurugan
Pub Date: 2013-06-06, DOI: 10.1109/PETS.2013.6523793
Object tracking is critical to visual surveillance and activity analysis. Color-based mean shift has been shown to be an effective and fast tracking algorithm, but it fails for objects with low color intensity, background clutter, and total occlusion lasting several frames. We present a new scheme for visual tracking based on the integration of multiple features. The proposed method integrates the color, texture and edge features of the target to construct the target model, and uses a fragmented mean shift to handle occlusion. For further improvement, the target center is updated with a Kalman filter and the target model is updated as well. The overall framework is computationally simple. The proposed approach has been compared with other trackers on challenging videos and found to perform better.
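The abstract mentions updating the target center with a Kalman filter. A minimal constant-velocity sketch of that refinement step is shown below; the noise parameters and the measurement source (the mean-shift center estimate) are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Constant-velocity Kalman filter for a 2-D target center; all noise
# parameters here are illustrative, not taken from the paper.
dt = 1.0
F = np.array([[1, 0, dt, 0],   # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only the (x, y) center is observed
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)           # process noise (assumed)
R = 1.0 * np.eye(2)            # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; z would be the mean-shift center estimate."""
    x = F @ x                       # predict state
    P = F @ P @ F.T + Q             # predict covariance
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)         # correct with the measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0, 1.0, 0.0])  # start at origin, moving right
P = np.eye(4)
x, P = kalman_step(x, P, np.array([1.1, 0.0]))
print(x[:2])  # filtered center, pulled toward the measurement near (1, 0)
```

During total occlusion, a tracker of this kind can coast on the predict step alone until the target reappears, which is one common motivation for adding the filter on top of mean shift.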
Parameter estimation and contextual adaptation for a multi-object tracking CRF model
A. Heili, J. Odobez
Pub Date: 2013-06-06, DOI: 10.1109/PETS.2013.6523790
We present a detection-based approach to multi-object tracking, formulated as a statistical labeling task and solved using a Conditional Random Field (CRF) model. The CRF model relies on factors involving detection pairs and their corresponding hidden labels. These factors model pairwise position or color similarities as well as dissimilarities, and one critical issue is learning their parameters in an accurate and unsupervised way. We argue in this paper that tracklets and local context can help to obtain relevant parameters. In this context, the contributions are as follows: i) global estimation of the factor-term parameters based on intermediate tracking results; ii) a detection-dependent parameter adaptation scheme that takes the local detection context into account during online tracking. Experiments on the PETS 2009 and CAVIAR datasets show the validity of our approach, with similar or better performance than recent state-of-the-art algorithms.
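As a hedged illustration of the kind of pairwise position and color cues such CRF association factors can consume, the sketch below computes a motion cue (displacement normalized by the frame gap) and an appearance cue (Bhattacharyya distance between color histograms) for a detection pair. The feature names and exact parameterization are hypothetical; the paper's factors may differ:

```python
import numpy as np

def pairwise_features(det_a, det_b):
    """Illustrative pairwise cues between two detections, of the kind a
    CRF association factor could score. Hypothetical sketch, not the
    paper's exact feature set."""
    dt = abs(det_b["frame"] - det_a["frame"])
    pos_a, pos_b = np.asarray(det_a["center"]), np.asarray(det_b["center"])
    speed = np.linalg.norm(pos_b - pos_a) / max(dt, 1)   # motion cue
    ha = det_a["hist"] / det_a["hist"].sum()             # normalize histograms
    hb = det_b["hist"] / det_b["hist"].sum()
    bc = np.sum(np.sqrt(ha * hb))                        # Bhattacharyya coefficient
    color_dist = np.sqrt(max(0.0, 1.0 - bc))             # appearance cue
    return speed, color_dist

a = {"frame": 1, "center": (10, 20), "hist": np.array([4.0, 4.0, 2.0])}
b = {"frame": 3, "center": (14, 23), "hist": np.array([4.0, 4.0, 2.0])}
speed, color_dist = pairwise_features(a, b)
print(speed, color_dist)  # 2.5 pixels/frame, 0.0 (identical histograms)
```

In a CRF of this type, such cues would be fed through learned factor parameters to score whether the two detections share the same hidden label; the paper's contribution is estimating those parameters from tracklets and adapting them to the local detection context.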
Performance evaluation of an improved relational feature model for pedestrian detection
A. Zweng, M. Kampel
Pub Date: 2013-06-06, DOI: 10.1109/PETS.2013.6523795
In this paper, we evaluate a new algorithm for pedestrian detection using a relational feature model (RFM) in combination with histogram similarity functions. For histogram comparison, we use the Bhattacharyya distance, histogram intersection, histogram correlation and the chi-square (χ2) similarity function. Relational features computed on the HOG descriptor measure the similarity between pairs of HOG histograms; the features are computed for all combinations of histograms extracted by a feature detection algorithm. Our experiments show that the information from spatial histogram similarities reduces the number of false positives while preserving true positive detections. Detection is performed using a multi-scale overlapping sliding-window approach. In our experiments, we report results for different HOG cell sizes, motivated by the large size of the resulting relational feature vector, as well as results for the aforementioned histogram similarity functions. Additionally, the results show how the number of positive and negative example images used during training influences the classification performance of our approach.
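The four histogram comparison measures named in the abstract have standard textbook forms; a minimal sketch (assuming L1-normalized input histograms, and conventions that may differ in detail from the paper's) is:

```python
import numpy as np

# Common forms of the four histogram similarity/distance measures named
# in the abstract; the paper's exact normalization conventions may differ.
def bhattacharyya_distance(p, q):
    return -np.log(np.sum(np.sqrt(p * q)))      # 0 for identical histograms

def intersection(p, q):
    return np.sum(np.minimum(p, q))             # 1 for identical histograms

def correlation(p, q):
    pc, qc = p - p.mean(), q - q.mean()
    return np.sum(pc * qc) / np.sqrt(np.sum(pc**2) * np.sum(qc**2))

def chi_square(p, q):
    # Small epsilon guards empty bins; 0 for identical histograms.
    return 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.2, 0.3, 0.5])
print(abs(bhattacharyya_distance(p, q)), intersection(p, q), chi_square(p, q))
```

A relational feature vector in the spirit of the paper would then stack the chosen measure over all pairs of HOG cell histograms in the detection window, which is why its length grows quadratically with the number of cells.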
Histograms of optical flow orientation for abnormal events detection
Tian Wang, H. Snoussi
Pub Date: 2013-06-06, DOI: 10.1109/PETS.2013.6523794
In this paper, we propose an algorithm to detect abnormal events in video streams. The algorithm is based on a histogram-of-optical-flow-orientation descriptor and a one-class SVM classifier. We introduce grids of Histograms of the Orientation of Optical Flow (HOF) as descriptors of the motion information of the whole video frame. After a learning period characterizing normal behaviors, the one-class SVM detects abnormality, which is treated as the event to be recognized in the current frame. Extensive testing corroborates the effectiveness of the proposed detection method.
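A HOF descriptor for one cell can be sketched as quantizing each flow vector's orientation into a fixed number of bins, weighted by magnitude. The bin count, magnitude weighting and grid layout below are assumptions for illustration; the abstract does not specify them:

```python
import numpy as np

def hof(flow_u, flow_v, bins=8, min_mag=1e-3):
    """Histogram of Optical Flow orientations for one cell: quantize each
    flow vector's angle into `bins` orientation bins, weighted by its
    magnitude. Simplified sketch; the paper's exact parameters are assumed."""
    mag = np.hypot(flow_u, flow_v)
    ang = np.arctan2(flow_v, flow_u) % (2 * np.pi)       # angles in [0, 2*pi)
    keep = mag > min_mag                                 # ignore static pixels
    idx = (ang[keep] / (2 * np.pi) * bins).astype(int) % bins
    hist = np.bincount(idx, weights=mag[keep], minlength=bins)
    s = hist.sum()
    return hist / s if s > 0 else hist                   # L1-normalized

# Toy flow field in which every vector points along +x:
u = np.ones((4, 4))
v = np.zeros((4, 4))
h = hof(u, v)
print(h)  # all mass falls in orientation bin 0
```

Concatenating such per-cell histograms over a grid yields a frame descriptor; in a pipeline like the paper's, descriptors from normal frames would then train a one-class classifier (e.g. `sklearn.svm.OneClassSVM`) so that outlying frames are flagged as abnormal.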
A motion-enhanced hybrid Probability Hypothesis Density filter for real-time multi-human tracking in video surveillance scenarios
Volker Eiselein, T. Senst, I. Keller, T. Sikora
Pub Date: 2013-06-06, DOI: 10.1109/PETS.2013.6523789
The Probability Hypothesis Density (PHD) filter is a multi-object Bayes filter that has recently become popular in the tracking community, especially for its linear complexity and its ability to filter out a high amount of clutter. However, its application to computer vision scenarios can be difficult, as it requires high detection probabilities, and many human detectors suffer from a significant miss rate, which causes problems for the PHD filter. This article presents an implementation of a Gaussian Mixture PHD (GM-PHD) filter enhanced with optical flow information in order to account for missed detections. We give a detailed mathematical discussion of the parameters of the proposed system and justify our results with extensive tests showing the performance in several contexts and on different datasets.
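To make the role of the detection probability concrete, here is a minimal one-dimensional GM-PHD measurement update in the standard (Vo and Ma) form, with an identity observation model. All parameter values are illustrative, and the paper's optical-flow enhancement for compensating missed detections is not reproduced here:

```python
import numpy as np

def norm_pdf(x, m, var):
    return np.exp(-0.5 * (x - m) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gmphd_update(ws, ms, Ps, zs, p_d=0.9, clutter=1e-3, r=1.0):
    """Measurement update of a 1-D Gaussian Mixture PHD filter with identity
    observation model. Parameter values are illustrative, not the paper's."""
    # Missed-detection terms: every component survives with weight (1 - p_d),
    # which is exactly where a low detection probability hurts the filter.
    out_w = [(1 - p_d) * w for w in ws]
    out_m = list(ms)
    out_P = list(Ps)
    for z in zs:
        qs = [norm_pdf(z, m, P + r) for m, P in zip(ms, Ps)]
        denom = clutter + p_d * sum(w * q for w, q in zip(ws, qs))
        for w, m, P, q in zip(ws, ms, Ps, qs):
            k = P / (P + r)                    # Kalman gain (H = 1)
            out_w.append(p_d * w * q / denom)  # detection-term weight
            out_m.append(m + k * (z - m))      # updated mean
            out_P.append((1 - k) * P)          # updated covariance
    return out_w, out_m, out_P

# Two predicted targets and two measurements close to them:
ws, ms, Ps = [0.5, 0.5], [0.0, 10.0], [1.0, 1.0]
w2, m2, P2 = gmphd_update(ws, ms, Ps, zs=[0.2, 9.8])
print(sum(w2))  # sum of weights = expected number of targets
```

Since the sum of the weights estimates the target count, a missed detection directly deflates the estimate by roughly p_d per target, which is the failure mode the paper's optical-flow term is designed to counteract.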
Unified hierarchical multi-object tracking using global data association
M. Hofmann, M. Haag, G. Rigoll
Pub Date: n/a, DOI: 10.1109/PETS.2013.6523791
This paper presents a unified hierarchical multi-object tracking scheme. The problem of simultaneously tracking multiple objects is cast as a global MAP problem that aims at maximizing the probability of the trajectories given the observations in each frame. Directly solving this problem is infeasible, due to computational considerations and the difficulty of reliably estimating the necessary transition probabilities. Without breaking the MAP formulation, we propose a three-stage hierarchical tracking framework that makes solving the MAP problem feasible. In addition, the hierarchical framework allows inter-object occlusions to be modeled; occlusion handling thus integrates smoothly and implicitly into the proposed framework without any explicit occlusion reasoning. Finally, we evaluate the proposed method on the publicly available PETS 2009 tracking data and show improvements over the current state of the art for most sequences.
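The global MAP objective can be illustrated on a toy two-frame case by exhaustively scoring every assignment of detections to trajectories under a Gaussian motion likelihood. This brute-force enumeration is only a didactic stand-in: its factorial cost is precisely why the paper introduces a three-stage hierarchy instead.

```python
import itertools
import math

def map_assignment(tracks, detections, sigma=2.0):
    """Pick the detection-to-track assignment maximizing the product of
    Gaussian motion likelihoods by exhaustive enumeration. Illustrative
    toy version of a global MAP association; it does not scale."""
    best, best_logp = None, -math.inf
    for perm in itertools.permutations(range(len(detections)), len(tracks)):
        logp = 0.0
        for t, d in zip(range(len(tracks)), perm):
            dist2 = sum((a - b) ** 2 for a, b in zip(tracks[t], detections[d]))
            logp += -dist2 / (2 * sigma**2)  # log of an unnormalized Gaussian
        if logp > best_logp:
            best, best_logp = perm, logp
    return dict(enumerate(best))

tracks = [(0.0, 0.0), (5.0, 5.0)]
detections = [(5.2, 4.9), (0.1, -0.2)]
print(map_assignment(tracks, detections))  # {0: 1, 1: 0}
```

Hierarchical schemes keep the same objective but solve it over short tracklets first, so the combinatorial search happens over far fewer, more reliable units.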
Learning crowd behavior for event recognition
E. Cermeño, Silvana Mallor, Juan Alberto Sigüenza
Pub Date: n/a, DOI: 10.1109/PETS.2013.6523788
This paper presents a new method for event recognition based on machine learning techniques. One classifier is trained per kind of event, using color, texture and shape features. Testing is performed on the PETS 2009 dataset. We evaluate the accuracy of our automatic system on six different kinds of events and then compare the results with human classification.