{"title":"Towards Robust 3D Skeleton Tracking Using Data Fusion from Multiple Depth Sensors","authors":"Yuanjie Wu, Lei Gao, S. Hoermann, R. Lindeman","doi":"10.1109/VS-Games.2018.8493443","DOIUrl":null,"url":null,"abstract":"Real-time full-body tracking in VR is important for providing realistic experiences, especially for applications such as training, education, and social VR. The Microsoft Kinect v2 sensor can provide skeleton data for a user in real-time, however, due to occlusion issues and front/back ambiguity errors, one Kinect is not always reliable enough for the correct capture of 360-degree movements. In this paper, we present work to provide robust, real-time tracking using multiple Kinect v2 cameras. An adaptive data fusion method is described that constructs a high-quality 3D skeleton which can be used to drive a VR avatar regardless of the user's orientation. We compare three different approaches to fusing the data from the three Kinects, and compare against ground truth using an OptiTrack system. A static pose and a dynamic movement were captured to compare errors of each joint using the three fusion algorithms. 
Our results show that an adaptive weighting adjustment fusion method for combining skeleton data from the three Kinects according to the current facing direction performed best in terms of joint error.","PeriodicalId":264923,"journal":{"name":"2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)","volume":"9 10","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VS-Games.2018.8493443","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Real-time full-body tracking in VR is important for providing realistic experiences, especially for applications such as training, education, and social VR. The Microsoft Kinect v2 sensor can provide skeleton data for a user in real time; however, due to occlusion issues and front/back ambiguity errors, a single Kinect is not always reliable enough to correctly capture 360-degree movements. In this paper, we present work to provide robust, real-time tracking using multiple Kinect v2 cameras. An adaptive data fusion method is described that constructs a high-quality 3D skeleton, which can be used to drive a VR avatar regardless of the user's orientation. We compare three different approaches to fusing the data from the three Kinects, and compare them against ground truth from an OptiTrack system. A static pose and a dynamic movement were captured to compare the per-joint errors of the three fusion algorithms. Our results show that an adaptive weighting adjustment fusion method, which combines skeleton data from the three Kinects according to the user's current facing direction, performed best in terms of joint error.
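The core idea of facing-direction-based weighted fusion can be sketched as follows. This is a hypothetical illustration only, not the paper's actual algorithm: the sensor placement angles, the cosine falloff weighting, and the function names (`facing_weights`, `fuse_joint`) are all assumptions made for the example.

```python
import math

# Assumed placement of the three Kinects around the user, in degrees.
# (Illustrative; the paper's actual sensor layout may differ.)
SENSOR_ANGLES = [0.0, 120.0, 240.0]

def facing_weights(user_yaw_deg):
    """Weight each sensor by how directly the user faces it.

    Uses a cosine falloff clamped at zero, so sensors behind the
    user contribute nothing; weights are normalized to sum to 1.
    """
    weights = []
    for angle in SENSOR_ANGLES:
        diff = math.radians(user_yaw_deg - angle)
        weights.append(max(math.cos(diff), 0.0))
    total = sum(weights) or 1.0
    return [w / total for w in weights]

def fuse_joint(positions, weights):
    """Weighted average of one joint's 3D position across sensors."""
    return tuple(
        sum(w * p[i] for w, p in zip(weights, positions))
        for i in range(3)
    )

# Usage: the user faces sensor 0 directly, so its reading dominates
# and the sensors at 120 and 240 degrees are weighted to zero.
w = facing_weights(0.0)
joint = fuse_joint(
    [(0.0, 1.0, 0.0), (0.1, 1.1, 0.0), (-0.1, 0.9, 0.1)], w
)
```

A weighting like this addresses the front/back ambiguity noted in the abstract: a Kinect viewing the user's back tends to mirror the skeleton, so down-weighting sensors outside the user's facing direction keeps those unreliable readings out of the fused result.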