Detecting hand-palm orientation and hand shapes for sign language gesture recognition using 3D images
L. K. Phadtare, R. Kushalnagar, N. Cahill
2012 Western New York Image Processing Workshop, November 2012
DOI: 10.1109/WNYIPW.2012.6466652
Citations: 9
Abstract
Automatic gesture recognition, specifically for the purpose of understanding sign language, can be an important aid in communicating with people who are deaf or hard of hearing. Recognizing a sign language requires understanding several of its linguistic components, such as palm orientation, hand shape, hand location, and facial expression. We propose a method and system to estimate the palm orientation and hand shape of a signer. Our system uses a Microsoft Kinect to capture color and depth images of the signer. It analyzes the depth data in the hand region, fits a plane to these points, and defines the normal to this plane as the palm orientation. It then uses 3-D shape context to determine the hand shape by comparing it against example shapes in a database. Palm orientation was estimated correctly across varying poses. The shape context method classified 20 test hand shapes correctly, while the remaining 10 were matched to different but very similar shapes.
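The plane-fitting step described above can be sketched as follows: a least-squares plane z = a·x + b·y + c is fit to the 3-D points of the segmented hand region, and the (unnormalized) plane normal (a, b, -1) is taken as the palm-orientation estimate. This is a minimal illustration under assumed conventions; the function names and the explicit least-equations solve are not from the paper, whose pipeline operates on Kinect depth maps.

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Gaussian elimination
    with partial pivoting (avoids any external dependency)."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def fit_palm_normal(points):
    """Least-squares fit of z = a*x + b*y + c to the hand's 3-D points;
    returns the unit normal of the fitted plane, i.e. (a, b, -1)
    normalized, as a palm-orientation estimate.  Hypothetical sketch,
    not the authors' exact implementation."""
    # Accumulate the 3x3 normal equations for the unknowns [a, b, c].
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1.0
        sxz += x * z; syz += y * z; sz += z
    a, b, c = solve3([[sxx, sxy, sx],
                      [sxy, syy, sy],
                      [sx,  sy,  n]],
                     [sxz, syz, sz])
    nx, ny, nz = a, b, -1.0
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / norm, ny / norm, nz / norm)
```

In practice the input would be the hand-region pixels of the Kinect depth image back-projected to 3-D; points that lie exactly on a plane recover its normal exactly, while noisy depth samples yield the least-squares best fit.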