{"title":"Human Activity Recognition Based on 3D Mesh MoSIFT Feature Descriptor","authors":"Yue Ming","doi":"10.1109/SocialCom.2013.151","DOIUrl":null,"url":null,"abstract":"The times of Big Data promotes increasingly higher demands for information processing. The rapid development of 3D digital capturing devices prompts the traditional behavior analysis towards fine motion recognition, such as hands and gesture. In this paper, a complete framework of 3D human activity recognition is presented for the behavior analysis of hands and gesture. First, the improved graph cuts method is introduced to hand segmentation and tracking. Then, combined with 3D geometric characteristics and human behavior prior information, 3D Mesh MoSIFT feature descriptor is proposed to represent the discriminant property of human activity. Simulation orthogonal matching pursuit (SOMP) is used to encode the visual code words. Experiments, based on a RGB-D video dataset and ChaLearn gesture dataset, show the improved accuracy of human activity recognition.","PeriodicalId":129308,"journal":{"name":"2013 International Conference on Social Computing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 International Conference on Social Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SocialCom.2013.151","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5
Abstract
The era of Big Data places increasingly high demands on information processing. The rapid development of 3D digital capture devices is pushing traditional behavior analysis toward fine-grained motion recognition, such as hand and gesture movements. In this paper, a complete framework for 3D human activity recognition is presented for the behavior analysis of hands and gestures. First, an improved graph-cuts method is introduced for hand segmentation and tracking. Then, combining 3D geometric characteristics with prior information about human behavior, a 3D Mesh MoSIFT feature descriptor is proposed to represent the discriminative properties of human activity. Simulation orthogonal matching pursuit (SOMP) is used to encode the visual code words. Experiments on an RGB-D video dataset and the ChaLearn gesture dataset show improved accuracy of human activity recognition.
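The abstract does not spell out the SOMP encoding step, but the core idea of matching-pursuit coding is to represent each feature descriptor as a sparse combination of a few codebook atoms. A minimal sketch of greedy orthogonal matching pursuit in NumPy is below; the function name `omp_encode` and the toy dictionary are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def omp_encode(D, x, k):
    """Greedy orthogonal matching pursuit (illustrative sketch, not the
    paper's SOMP): approximate descriptor x as a sparse combination of
    at most k atoms (columns) of the visual codebook D."""
    residual = x.copy()
    support = []                      # indices of selected atoms
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on all selected atoms (the "orthogonal" step)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef
```

With codebook columns normalized to unit length, the resulting sparse coefficient vectors can be pooled over a video segment to form the final activity representation.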