A new approach to speed up in action recognition based on key-frame extraction
Neda Azouji, Z. Azimifar
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779982
Human action recognition is the process of labeling videos that contain human motion with action classes. Run-time complexity is one of the most important challenges in action recognition. In this paper, we address this problem using video abstraction techniques, namely key-frame extraction and video skimming. We first extract key-frames and then skim the video clip by concatenating excerpts around the selected key-frames. This shorter sequence is used as the input to the classifier. Our proposed approach reduces not only the space complexity but also the run time of both the training and testing steps. Experimental results on the KTH action dataset show that the proposed method achieves good performance without a considerable loss of classification accuracy.
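The abstract does not fix a specific key-frame criterion, so the following is a minimal Python/OpenCV sketch of the general idea: pick key-frames where inter-frame change is largest and concatenate short excerpts around them into a skimmed clip for the classifier. The change measure, the number of key-frames, and the excerpt radius are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical key-frame extraction + video skimming sketch (assumed details,
# not the paper's exact method).
import cv2
import numpy as np

def skim_video(path, num_keyframes=5, excerpt_radius=5):
    cap = cv2.VideoCapture(path)
    frames, diffs = [], []
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(frame)
        if prev_gray is not None:
            # Mean absolute frame difference as a simple measure of change.
            diffs.append(float(np.mean(cv2.absdiff(gray, prev_gray))))
        else:
            diffs.append(0.0)
        prev_gray = gray
    cap.release()

    # Pick the frames with the largest change as key-frames.
    key_idx = np.argsort(diffs)[-num_keyframes:]
    keep = set()
    for k in key_idx:
        # Keep a short excerpt around each key-frame.
        keep.update(range(max(0, k - excerpt_radius),
                          min(len(frames), k + excerpt_radius + 1)))
    # Concatenate the excerpts in temporal order to form the skimmed clip.
    return [frames[i] for i in sorted(keep)]
```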
{"title":"A new approach to speed up in action recognition based on key-frame extraction","authors":"Neda Azouji, Z. Azimifar","doi":"10.1109/IRANIANMVIP.2013.6779982","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779982","url":null,"abstract":"Human action recognition is the process of labeling videos contain human motion with action classes. The run time complexity is one of the most important challenges in action recognition. In this paper, we address this problem using video abstraction techniques including key-frame extraction and video skimming. At first we extract key-frames and then skim the video clip by concatenating excerpts around the selected key-frames. This shorter sequence is used as input for classifier. Our proposed approach not only reduces the space complexity but also reduces the run time in both train and test steps. The experimental results provided on KTH action datasets show that the proposed method achieves good performance without losing considerable classification accuracy.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128078873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using optical flow and spectral clustering for behavior recognition and detection of anomalous behaviors
A. Feizi, A. Aghagolzadeh, Hadi Seyedarabi
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779980
In this paper we propose an efficient method for behavior recognition and for identifying anomalous behavior in video surveillance data. The approach consists of a training phase and a testing phase. In the training phase, we first use a background subtraction method to extract the moving pixels, and then compute optical flow vectors for those pixels. We define the behavior feature of each pixel as the average of all optical flow vectors at that pixel over several frames of the video. Next, we use spectral clustering to group behaviors, so that pixels with similar behavior features are clustered together, and we obtain a behavior model for each cluster by fitting a normal distribution to its samples. Once the behavior models are learned, the testing phase uses them to detect anomalous behavior in a test video of the same scene. Experimental results on video surveillance sequences show the effectiveness and speed of the proposed method.
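A minimal sketch of the described training pipeline, assuming Farneback dense optical flow, scikit-learn's SpectralClustering, and a Mahalanobis-distance test against each cluster's Gaussian; the specific flow method, affinity, cluster count, and anomaly threshold are illustrative choices, not the paper's.

```python
# Sketch: per-pixel flow averaging, spectral clustering of behaviors, and
# Gaussian models per cluster for anomaly detection (assumed parameters).
import cv2
import numpy as np
from sklearn.cluster import SpectralClustering

def pixel_flow_features(gray_frames):
    # Average the (dx, dy) optical flow of each pixel over the sequence.
    acc = None
    for f0, f1 in zip(gray_frames, gray_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(f0, f1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        acc = flow if acc is None else acc + flow
    return acc / (len(gray_frames) - 1)           # H x W x 2

def fit_behavior_models(features, moving_mask, n_clusters=4):
    feats = features[moving_mask]                 # only moving pixels, N x 2
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="nearest_neighbors").fit_predict(feats)
    models = []
    for c in range(n_clusters):
        x = feats[labels == c]
        # Normal-distribution model of each behavior cluster.
        models.append((x.mean(axis=0), np.cov(x.T) + 1e-6 * np.eye(2)))
    return models

def is_anomalous(feature, models, threshold=9.0):
    # A test pixel is anomalous if it is far (Mahalanobis) from every model.
    for mu, cov in models:
        d = feature - mu
        if d @ np.linalg.inv(cov) @ d < threshold:
            return False
    return True
```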
{"title":"Using optical flow and spectral clustering for behavior recognition and detection of anomalous behaviors","authors":"A. Feizi, A. Aghagolzadeh, Hadi Seyedarabi","doi":"10.1109/IRANIANMVIP.2013.6779980","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779980","url":null,"abstract":"In this paper we propose an efficient method for behavior recognition and identification of anomalous behavior in video surveillance data. This approach consists of two phases of training and testing. In the training phase, first, we use background subtraction method to extract the moving pixels. Then optical flow vectors are extracted for moving pixels. We propose behavior features of each pixel as the average all optical flow vectors in the pixel over several frames in video data. Next, we use spectral clustering to classify behaviors wherein pixels that have similar behavior features are clustered together. Then we obtain a behavior model for each cluster using the normal distribution of the samples. Once the behavior models are obtained, in the testing phase, we use these models to detect anomalous behavior in a test video of the same scene. Experimental results on video surveillance sequences show the effectiveness and speed of proposed method.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125277096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real time occlusion handling using Kalman Filter and mean-shift
R. Panahi, I. Gholampour, M. Jamzad
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780003
Tracking objects with the Mean Shift algorithm fails when the target is fully or partially occluded, or when the background color is close to that of the desired object. In this paper we propose a method that combines a Kalman Filter with Mean Shift to handle these situations. Using the similarity measure of the Mean Shift algorithm, we are able to detect an occlusion; the Kalman Filter then takes over for occlusion handling in a buffer-mode process. We implemented the algorithm on both a PC and a Texas Instruments C64x+ DSP, and both sets of results are tabulated. The results show that our method can relocate the object soon after the occlusion ends.
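A hedged sketch of combining Mean Shift with a constant-velocity Kalman Filter: while the histogram similarity of the Mean Shift window stays high the tracker trusts Mean Shift, and when it drops (occlusion suspected) the Kalman prediction carries the window. The similarity metric, threshold, and window-update details are assumptions standing in for the paper's buffer-mode process.

```python
# Illustrative Mean Shift + Kalman Filter tracker with a simple
# similarity-based occlusion test (assumed details, not the paper's exact design).
import cv2
import numpy as np

def make_kalman():
    # Constant-velocity model on (x, y): state = [x, y, vx, vy].
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    return kf

def track(frames, init_window, target_hist, sim_threshold=0.5):
    kf, window = make_kalman(), init_window
    x0, y0, w0, h0 = init_window
    kf.statePost = np.array([[x0 + w0 / 2], [y0 + h0 / 2], [0], [0]], np.float32)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    for frame in frames:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], target_hist, [0, 180], 1)
        _, ms_window = cv2.meanShift(back_proj, window, term)
        x, y, w, h = ms_window
        roi_hist = cv2.calcHist([hsv[y:y+h, x:x+w]], [0], None, [180], [0, 180])
        cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        # Similarity between the candidate window and the target model.
        sim = cv2.compareHist(target_hist, roi_hist, cv2.HISTCMP_CORREL)
        pred = kf.predict()
        if sim > sim_threshold:
            # Object visible: correct the filter with the Mean Shift position.
            kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
            window = ms_window
        else:
            # Occlusion suspected: keep the window on the Kalman prediction.
            cx, cy = float(pred[0]), float(pred[1])
            window = (int(cx - w / 2), int(cy - h / 2), w, h)
        yield window
```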
{"title":"Real time occlusion handling using Kalman Filter and mean-shift","authors":"R. Panahi, I. Gholampour, M. Jamzad","doi":"10.1109/IRANIANMVIP.2013.6780003","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6780003","url":null,"abstract":"Tracking objects using Mean Shift algorithm fails when there is a full/partial occlusion or when the background color and the desired object are close. In this paper we proposed a method using Kalman Filter and Mean Shift for handling these situations. Using similarity measure of Mean Shift algorithm we are able to detect an occlusion. Kalman Filter comes into the play for occlusion handling in a Buffer-Mode Process. We implemented this algorithm both on PC and DSP 64x+ Texas Instrument and the results are both tabulated. The results reveal the ability of our method to locate the object soon after occlusion disappearance.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"233 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114992294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}