Amit Bleiweiss, E. Eilat, Gershom Kutliroff
ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia
Published: 2009-12-16 · DOI: 10.1145/1667146.1667172
Citations: 13
Abstract
Markerless motion capture using a single depth sensor
We present a robust framework for tracking skeleton joints in real-time by using a single time-of-flight depth sensor. The framework is able to remove the background noise inherent in time-of-flight cameras, detect multiple people, and track up to 30 joints of free motion for each person. The approach has several advantages over traditional motion capture, as it is a cheap alternative to magnetic and optical systems, and requires no markers whatsoever. Unlike markerless systems based on RGB cameras [Deutscher et al. 2000; Kehl and Van Gool 2006], our framework yields dependable results at an interactive rate using a single camera.
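The abstract does not describe the authors' background-removal step in detail, but a minimal sketch of the general idea is easy to illustrate: a time-of-flight frame can be cleaned by discarding pixels outside the sensor's reliable working range and pixels close to a static background model, leaving only candidate foreground (people). The function name, depth values, ranges, and frame layout below are all hypothetical, not taken from the paper.

```python
# Illustrative sketch only (not the authors' algorithm): suppress
# time-of-flight background by range clipping plus background subtraction.
# Depths are in metres; frames are simple nested lists (rows of pixels).

def remove_background(depth_frame, background, near=0.4, far=3.5, tol=0.05):
    """Zero out pixels outside [near, far] metres or within `tol`
    metres of the static background model `background`."""
    cleaned = []
    for row, bg_row in zip(depth_frame, background):
        out = []
        for d, bg in zip(row, bg_row):
            if d < near or d > far or abs(d - bg) < tol:
                out.append(0.0)   # out of range or matches background: noise
            else:
                out.append(d)     # candidate foreground (a person)
        cleaned.append(out)
    return cleaned

frame      = [[3.6, 1.2, 1.3],
              [3.6, 1.2, 0.2]]
background = [[3.6, 3.6, 3.6],
              [3.6, 3.6, 3.6]]
print(remove_background(frame, background))
# → [[0.0, 1.2, 1.3], [0.0, 1.2, 0.0]]
```

The surviving non-zero pixels would then feed the later stages the abstract mentions (detecting multiple people and fitting up to 30 skeleton joints per person), whose details the sketch does not attempt to reproduce.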