{"title":"Depth Assisted Palm Region Extraction Using the Kinect v2 Sensor","authors":"S. Samoil, S. Yanushkevich","doi":"10.1109/EST.2015.11","DOIUrl":null,"url":null,"abstract":"This paper evaluates the feasibility of using the fusion of multispectral data from a Kinect v2 sensor as a way to extract the palm region of hand in an unconstrained environment. The depth data was used to both track the hand and extract palm regions. This extracted palm region was then used to extract the palm region in the RGB and Near Infrared data. One of the underlying goals was to maintain real time performance and as such relatively simple techniques such as using a sliding window were used. The lower boundary of the usable frames extracted was 50%, while in the case when the user is positioned directly in front of the sensor with hands extended outward from the body, the percentage of usable frames reached 75%.","PeriodicalId":402244,"journal":{"name":"2015 Sixth International Conference on Emerging Security Technologies (EST)","volume":"230 6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 Sixth International Conference on Emerging Security Technologies (EST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EST.2015.11","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
This paper evaluates the feasibility of fusing multispectral data from a Kinect v2 sensor to extract the palm region of the hand in an unconstrained environment. The depth data was used both to track the hand and to extract the palm region, and the extracted depth-space palm region was then used to locate the corresponding palm region in the RGB and near-infrared data. One of the underlying goals was to maintain real-time performance, so relatively simple techniques, such as a sliding window, were used. At least 50% of the extracted frames were usable; when the user was positioned directly in front of the sensor with hands extended outward from the body, the proportion of usable frames reached 75%.
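To make the pipeline described above concrete, the sketch below shows one plausible reading of it: segment the hand by keeping depth pixels inside an assumed range band, slide a fixed-size window over the hand mask to find the densest region as a palm candidate, and project that depth-space window into the RGB frame. The depth band, window size, stride, and the uniform-scaling projection are all illustrative assumptions, not the paper's actual parameters; a real Kinect v2 setup would use the SDK's coordinate mapper rather than plain scaling.

```python
import numpy as np

# Kinect v2 native resolutions: depth/NIR 512x424, RGB 1920x1080.
DEPTH_SHAPE = (424, 512)

def segment_hand(depth_mm, near=500, far=900):
    """Keep pixels inside an assumed depth band (in mm) where an
    extended hand is expected to sit; everything else is masked out.
    The band limits are illustrative, not the paper's values."""
    return (depth_mm > near) & (depth_mm < far)

def palm_window(mask, win=64, stride=16):
    """Slide a fixed-size window over the hand mask and return the
    (top, left) of the window with the highest hand-pixel coverage --
    a simple stand-in for the paper's sliding-window palm search."""
    best_score, best_tl = -1, (0, 0)
    h, w = mask.shape
    for top in range(0, h - win + 1, stride):
        for left in range(0, w - win + 1, stride):
            score = int(mask[top:top + win, left:left + win].sum())
            if score > best_score:
                best_score, best_tl = score, (top, left)
    return best_tl

def crop_rgb(rgb, tl, win=64, depth_shape=DEPTH_SHAPE):
    """Project the depth-space window into the RGB frame by uniform
    scaling. This is a crude placeholder: the actual sensor needs a
    calibrated depth-to-color coordinate mapping."""
    sy = rgb.shape[0] / depth_shape[0]
    sx = rgb.shape[1] / depth_shape[1]
    top, left = tl
    return rgb[int(top * sy):int((top + win) * sy),
               int(left * sx):int((left + win) * sx)]

# Example on synthetic frames (a real pipeline would read sensor data):
depth = np.full(DEPTH_SHAPE, 2000, dtype=np.uint16)
depth[150:250, 200:300] = 700          # fake hand in the depth band
rgb = np.zeros((1080, 1920, 3), dtype=np.uint8)
tl = palm_window(segment_hand(depth))
palm_rgb = crop_rgb(rgb, tl)
```

Because the NIR image comes from the same sensor as the depth map on the Kinect v2, the depth-space window can be applied to the NIR frame directly; only the RGB frame, captured by a separate camera, requires the coordinate mapping step.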