{"title":"SAMHIS: A Robust Motion Space for Human Activity Recognition","authors":"S. Raghuraman, B. Prabhakaran","doi":"10.1109/ISM.2012.75","DOIUrl":null,"url":null,"abstract":"In recent years, many local descriptor based approaches have been proposed for human activity recognition, which perform well on challenging datasets. However, most of these approaches are computationally intensive, extract irrelevant background features and fail to capture global temporal information. We propose to overcome these issues by introducing a compact and robust motion space that can be used to extract both spatial and temporal aspects of activities using local descriptors. We present Speed Adapted Motion History Image Space (SAMHIS) that employs a variant of Motion History Image for representing motion. This space alleviates both self-occlusion as well as the speed-related issues associated with different kinds of motion. We go on to show using a standard bag of visual words model that extracting appearance based local descriptors from this space is very effective for recognizing activity. Our approach yields promising results on the KTH and Weizmann dataset.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE International Symposium on Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISM.2012.75","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
In recent years, many local-descriptor-based approaches have been proposed for human activity recognition, and they perform well on challenging datasets. However, most of these approaches are computationally intensive, extract irrelevant background features, and fail to capture global temporal information. We propose to overcome these issues by introducing a compact and robust motion space that can be used to extract both spatial and temporal aspects of activities using local descriptors. We present the Speed Adapted Motion History Image Space (SAMHIS), which employs a variant of the Motion History Image for representing motion. This space alleviates both self-occlusion and the speed-related issues associated with different kinds of motion. We go on to show, using a standard bag-of-visual-words model, that extracting appearance-based local descriptors from this space is very effective for recognizing activity. Our approach yields promising results on the KTH and Weizmann datasets.
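The abstract does not spell out the speed-adapted variant, but SAMHIS builds on the classical Motion History Image (MHI) update of Bobick and Davis, in which recently moving pixels are set to a maximum duration and older motion fades linearly. A minimal sketch of that standard update (function name and toy values are illustrative, not from the paper):

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=30):
    """One classical Motion History Image update step.

    Pixels where motion is detected in the current frame are set to the
    full duration tau; all other pixels decay by 1, clipped at 0. Recency
    of motion is thus encoded as pixel intensity.
    """
    return np.where(motion_mask, tau, np.maximum(mhi - 1, 0))

# Toy example: a 4x4 scene with a one-pixel blob moving right.
mhi = np.zeros((4, 4), dtype=np.int32)
mask1 = np.zeros((4, 4), dtype=bool); mask1[1, 1] = True
mask2 = np.zeros((4, 4), dtype=bool); mask2[1, 2] = True
mhi = update_mhi(mhi, mask1, tau=30)  # blob at (1,1) -> 30
mhi = update_mhi(mhi, mask2, tau=30)  # (1,1) decays to 29, (1,2) -> 30
```

A fixed decay rate makes fast and slow motions leave very different traces, which is the speed-related issue the paper's speed-adapted variant is designed to alleviate.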