Tracking a person with 3-D motion by integrating optical flow and depth

R. Okada, Y. Shirai, J. Miura

Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580), 2000-03-26. DOI: 10.1109/AFGR.2000.840656

Abstract: This paper describes a method of tracking a person undergoing 3-D translation and rotation by integrating optical flow and depth. The target region is first extracted based on the probability that each pixel belongs to the target person. The target state (3-D position, posture, and motion) is then estimated from the shape and position of the target region in addition to optical flow and depth. Multiple target states are maintained when the image measurements give rise to ambiguities about the target state. Experimental results on real image sequences show the effectiveness of the method.
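The abstract's multiple-hypothesis idea — keeping several candidate target states alive while the measurements are ambiguous, then pruning the implausible ones — can be sketched as follows. This is not the paper's implementation; the state fields, the `measure` callback, and the pruning thresholds (`keep`, `min_likelihood`) are all illustrative assumptions.

```python
# Hypothetical sketch: maintain multiple candidate target states and
# re-score each against the current optical-flow/depth measurements,
# dropping candidates that explain the observations poorly.
from dataclasses import dataclass


@dataclass
class TargetState:
    position: tuple    # assumed (x, y, z) 3-D position
    yaw: float         # assumed posture angle, radians
    velocity: tuple    # assumed 3-D translational motion
    likelihood: float  # how well this state explains the measurements


def update_hypotheses(hypotheses, measure, keep=5, min_likelihood=0.05):
    """Re-score candidate states and prune implausible ones.

    `measure` is an assumed callback mapping a TargetState to a
    likelihood in [0, 1] given the current flow and depth observations.
    """
    rescored = [
        TargetState(h.position, h.yaw, h.velocity, measure(h))
        for h in hypotheses
    ]
    # Keep only candidates that remain plausible, best-first.
    survivors = [h for h in rescored if h.likelihood >= min_likelihood]
    survivors.sort(key=lambda h: h.likelihood, reverse=True)
    return survivors[:keep]
```

In this toy setup, an ambiguous frame simply leaves more than one survivor in the list; a later, less ambiguous measurement collapses it back toward a single state.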