{"title":"Supervised particle filter for tracking 2D human pose in monocular video","authors":"S. Sedai, D. Huynh, Bennamoun","doi":"10.1109/WACV.2011.5711527","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a hybrid method that combines supervised learning and particle filtering to track the 2D pose of a human subject in monocular video sequences. Our approach, which we call a supervised particle filter method, consists of two steps: the training step and the tracking step. In the training step, we use a supervised learning method to train the regressors that take the silhouette descriptors as input and produce the 2D poses as output. In the tracking step, the output pose estimated from the regressors is combined with the particle filter to track the 2D pose in each video frame. Unlike the particle filter, our method does not require any manual initialization. We have tested our approach using the HumanEva video datasets and compared it with the standard particle filter and 2D pose estimation on individual frames. Our experimental results show that our approach can successfully track the pose over long video sequences and that it gives more accurate 2D human pose tracking than the particle filter and 2D pose estimation.","PeriodicalId":424724,"journal":{"name":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WACV.2011.5711527","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
In this paper, we propose a hybrid method that combines supervised learning and particle filtering to track the 2D pose of a human subject in monocular video sequences. Our approach, which we call a supervised particle filter method, consists of two steps: a training step and a tracking step. In the training step, we use a supervised learning method to train regressors that take silhouette descriptors as input and produce 2D poses as output. In the tracking step, the pose estimated by the regressors is combined with the particle filter to track the 2D pose in each video frame. Unlike the standard particle filter, our method does not require any manual initialization. We have tested our approach on the HumanEva video datasets and compared it with the standard particle filter and with 2D pose estimation on individual frames. Our experimental results show that our approach can successfully track the pose over long video sequences and that it yields more accurate 2D human pose tracking than either the standard particle filter or per-frame 2D pose estimation.
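The abstract describes a two-step pipeline: regressors learned from silhouette descriptors predict a 2D pose, and that prediction is fused with a particle filter at tracking time. The sketch below is not the authors' implementation; it is a minimal illustration of the idea under simplifying assumptions (a ridge regressor stands in for the paper's regressors, the pose is a flat joint-coordinate vector, and the dynamic and observation models are toy Gaussians). All dimensions, noise levels, and function names are illustrative.

```python
# Minimal sketch of a "supervised particle filter" (illustrative only):
# a regressor maps silhouette descriptors to a 2D pose, and its per-frame
# prediction both initializes the filter and drives the particle weighting.
import numpy as np
from sklearn.linear_model import Ridge

POSE_DIM = 20        # e.g. 10 joints x (x, y); an assumption, not from the paper
N_PARTICLES = 200

def train_regressor(descriptors, poses):
    """Training step: learn the silhouette-descriptor -> 2D-pose mapping."""
    reg = Ridge(alpha=1.0)
    reg.fit(descriptors, poses)      # descriptors: (N, D), poses: (N, POSE_DIM)
    return reg

def weights(particles, regressed_pose, sigma=5.0):
    """Toy observation model: weight particles by closeness to the regressed pose."""
    d2 = np.sum((particles - regressed_pose) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    return w / (w.sum() + 1e-12)

def track(reg, descriptor_seq, proc_noise=2.0):
    """Tracking step: fuse the regressor output with a particle filter per frame."""
    rng = np.random.default_rng(0)
    # No manual initialization: the first regressed pose seeds the particle set.
    first_pose = reg.predict(descriptor_seq[0][None, :])[0]
    particles = first_pose + rng.normal(0, proc_noise, (N_PARTICLES, POSE_DIM))
    estimates = []
    for desc in descriptor_seq:
        regressed = reg.predict(desc[None, :])[0]
        # Propagate particles with a simple random-walk dynamic model.
        particles = particles + rng.normal(0, proc_noise, particles.shape)
        # Weight against the regressed pose, then resample.
        w = weights(particles, regressed)
        idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=w)
        particles = particles[idx]
        estimates.append(particles.mean(axis=0))
    return np.asarray(estimates)
```

In this sketch the regressed pose plays the role of the observation, which is what removes the need for manual initialization and keeps the filter from drifting over long sequences; the paper's actual descriptor, regressor, and observation model differ in detail.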