{"title":"基于多视图图像序列的人体动作识别","authors":"Mohiudding Ahmad, Seong-Whan Lee","doi":"10.1109/FGR.2006.65","DOIUrl":null,"url":null,"abstract":"Recognizing human action from image sequences is an active area of research in computer vision. In this paper, we present a novel method for human action recognition from image sequences in different viewing angles that uses the Cartesian component of optical flow velocity and human body shape feature vector information. We use principal component analysis to reduce the higher dimensional shape feature space into low dimensional shape feature space. We represent each action using a set of multidimensional discrete hidden Markov model and model each action for any viewing direction. We performed experiments of the proposed method by using KU gesture database. Experimental results based on this database of different actions show that our method is robust","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"54","resultStr":"{\"title\":\"Human action recognition using multi-view image sequences\",\"authors\":\"Mohiudding Ahmad, Seong-Whan Lee\",\"doi\":\"10.1109/FGR.2006.65\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recognizing human action from image sequences is an active area of research in computer vision. In this paper, we present a novel method for human action recognition from image sequences in different viewing angles that uses the Cartesian component of optical flow velocity and human body shape feature vector information. We use principal component analysis to reduce the higher dimensional shape feature space into low dimensional shape feature space. We represent each action using a set of multidimensional discrete hidden Markov model and model each action for any viewing direction. We performed experiments of the proposed method by using KU gesture database. Experimental results based on this database of different actions show that our method is robust\",\"PeriodicalId\":109260,\"journal\":{\"name\":\"7th International Conference on Automatic Face and Gesture Recognition (FGR06)\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-04-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"54\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"7th International Conference on Automatic Face and Gesture Recognition (FGR06)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/FGR.2006.65\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FGR.2006.65","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Human action recognition using multi-view image sequences
Recognizing human actions from image sequences is an active area of research in computer vision. In this paper, we present a novel method for human action recognition from image sequences captured at different viewing angles, which uses the Cartesian components of optical flow velocity together with human body shape feature vectors. We apply principal component analysis to reduce the high-dimensional shape feature space to a low-dimensional one. We represent each action with a set of multidimensional discrete hidden Markov models, modeling each action for any viewing direction. We evaluated the proposed method on the KU gesture database. Experimental results on the different actions in this database show that our method is robust across viewing directions.
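The pipeline outlined in the abstract (shape features reduced by PCA, quantized into discrete symbols, and scored by one discrete HMM per action) can be illustrated with a minimal sketch. The code below is an illustrative approximation, not the authors' implementation: it assumes NumPy only, the function names (fit_pca, quantize, hmm_log_likelihood, classify) are hypothetical, and a single symbol stream per sequence stands in for the paper's multidimensional discrete HMMs over combined optical-flow and shape observations.

```python
import numpy as np

# --- PCA: reduce high-dimensional shape feature vectors to a low-dimensional space ---
def fit_pca(X, n_components):
    """X: (n_frames, n_features) shape feature vectors. Returns the mean and a projection basis."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal axes come from the SVD of the centered data matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, basis):
    """Project feature vectors onto the low-dimensional PCA basis."""
    return (X - mean) @ basis.T

# --- Vector quantization: map continuous feature vectors to discrete HMM symbols ---
def quantize(X, codebook):
    """Assign each frame to the index of its nearest codebook vector."""
    d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# --- Discrete HMM log-likelihood via the forward algorithm, computed in log space ---
def hmm_log_likelihood(obs, log_pi, log_A, log_B):
    """obs: symbol sequence; log_pi: (N,) initial log-probs;
    log_A: (N, N) transition log-probs; log_B: (N, M) emission log-probs."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # alpha_t(j) = logsum_i [alpha_{t-1}(i) + log a_ij] + log b_j(o_t)
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

def classify(obs, action_hmms):
    """Pick the action whose HMM assigns the highest log-likelihood to the sequence."""
    scores = {name: hmm_log_likelihood(obs, *params) for name, params in action_hmms.items()}
    return max(scores, key=scores.get)
```

In this sketch, one HMM would be trained per action (the paper additionally models each action for any viewing direction), and recognition selects the model with the highest forward log-likelihood for the observed symbol sequence.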