Preliminary Evaluation of a Framework for Overhead Skeleton Tracking in Factory Environments using Kinect
M. M. Marinho, Yuki Yatsushima, T. Maekawa, Y. Namioka
Proceedings of the 4th International Workshop on Sensor-based Activity Recognition and Interaction, 2017-09-21
DOI: 10.1145/3134230.3134232
Citations: 2
Abstract
This paper presents a preliminary evaluation of a framework that allows an overhead RGBD camera to segment and track workers' skeletons in an unstructured factory environment. The default Kinect skeleton tracking algorithm was developed using front-view artificial depth images generated from a 3D model of a person in an empty room. The proposed framework is inspired by this concept, and works by capturing motion data of a worker performing a real factory task. That motion data is matched to a 3D model of the worker. In a novel approach, the largest elements in the workspace (e.g. desks, racks) are modeled with simple shapes, and the artificial depth images are generated in a "simplified workspace" rather than an "empty workspace". Preliminary experiments show that adding the simplified models during training can increase, ceteris paribus, segmentation accuracy more than threefold and recall by about one and a half times when the workspace is highly cluttered. Evaluation is performed using real depth images obtained in a factory environment, with manually segmented images used as ground truth.
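The abstract reports segmentation accuracy and recall measured against manually segmented ground-truth images, but does not spell out the metric definitions. As an illustration only, a minimal sketch of the standard per-pixel formulations one might use in such an evaluation (the function name and list-based representation are assumptions, not the paper's implementation):

```python
def segmentation_metrics(pred, gt):
    """Per-pixel accuracy and recall for a binary worker-segmentation mask.

    pred and gt are flat lists of booleans (True = pixel labeled as worker).
    These are the standard per-pixel formulations; the paper's exact metric
    definitions are not given in the abstract.
    """
    assert len(pred) == len(gt) and len(gt) > 0
    tp = sum(p and g for p, g in zip(pred, gt))        # worker pixels correctly found
    fn = sum((not p) and g for p, g in zip(pred, gt))  # worker pixels missed
    correct = sum(p == g for p, g in zip(pred, gt))    # all correctly labeled pixels
    accuracy = correct / len(gt)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, recall
```

Under these definitions, a cluttered background that is wrongly labeled as worker pixels lowers accuracy directly, which is consistent with the reported gains from modeling large workspace elements during training.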