{"title":"Arabic sign language fingerspelling recognition from depth and intensity images","authors":"S. Aly, Basma Osman, Walaa Aly, Mahmoud Saber","doi":"10.1109/ICENCO.2016.7856452","DOIUrl":null,"url":null,"abstract":"Automatic Arabic sign language (ArSL) recognition and fingerspelling are considered to be the preferred communication methods among deaf people. In this paper, we propose a system for alphabetic Arabic sign language recognition using depth and intensity images acquired from a SOFTKINECT™ sensor. The proposed method does not require any extra gloves or visual markers. Local features from depth and intensity images are learned using an unsupervised deep learning method called PCANet. The extracted features are then classified using a linear support vector machine (SVM). The performance of the proposed method is evaluated on a dataset of real images captured from multiple users. Experiments are performed using depth and intensity images both separately and in combination. The obtained results show that the performance of the proposed system improves when depth and intensity information are combined, giving an average accuracy of 99.5%.","PeriodicalId":332360,"journal":{"name":"2016 12th International Computer Engineering Conference (ICENCO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"28","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 12th International Computer Engineering Conference (ICENCO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICENCO.2016.7856452","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 28
Abstract
Automatic Arabic sign language (ArSL) recognition and fingerspelling are considered to be the preferred communication methods among deaf people. In this paper, we propose a system for alphabetic Arabic sign language recognition using depth and intensity images acquired from a SOFTKINECT™ sensor. The proposed method does not require any extra gloves or visual markers. Local features from depth and intensity images are learned using an unsupervised deep learning method called PCANet. The extracted features are then classified using a linear support vector machine (SVM). The performance of the proposed method is evaluated on a dataset of real images captured from multiple users. Experiments are performed using depth and intensity images both separately and in combination. The obtained results show that the performance of the proposed system improves when depth and intensity information are combined, giving an average accuracy of 99.5%.
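The pipeline the abstract describes — unsupervised PCANet feature learning followed by a linear SVM — can be illustrated with a minimal single-stage sketch. This is not the authors' implementation: the patch size, number of filters, image sizes, and the synthetic data below are illustrative assumptions, and the full PCANet uses two filter stages with block-wise histograms rather than the single global histogram used here.

```python
import numpy as np
from sklearn.svm import LinearSVC


def pcanet_filters(images, patch=5, n_filters=4):
    """Learn convolution filters as the top principal components of image patches."""
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(H - patch + 1):
            for j in range(W - patch + 1):
                p = img[i:i + patch, j:j + patch].ravel()
                patches.append(p - p.mean())  # remove patch mean, as in PCANet
    X = np.array(patches)
    # Eigenvectors of the patch covariance; eigh returns ascending eigenvalues.
    _, vecs = np.linalg.eigh(X.T @ X)
    top = vecs[:, ::-1][:, :n_filters]  # leading n_filters components
    return top.T.reshape(n_filters, patch, patch)


def convolve_valid(img, filt):
    """Plain 'valid' 2-D correlation (loop-based for clarity, not speed)."""
    ph, pw = filt.shape
    H, W = img.shape
    out = np.empty((H - ph + 1, W - pw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + ph, j:j + pw] * filt)
    return out


def pcanet_features(img, filters):
    """Single-stage PCANet descriptor: convolve, binarize, hash, histogram."""
    maps = [convolve_valid(img, f) for f in filters]
    binary = [(m > 0).astype(int) for m in maps]
    # Combine the binary maps into one integer code per pixel (binary hashing).
    hashed = sum(b << k for k, b in enumerate(binary))
    n_codes = 2 ** len(filters)
    hist, _ = np.histogram(hashed, bins=n_codes, range=(0, n_codes))
    return hist.astype(float)


if __name__ == "__main__":
    # Toy stand-in for the depth/intensity dataset (random images, 2 fake classes).
    rng = np.random.default_rng(0)
    train = [rng.standard_normal((16, 16)) for _ in range(6)]
    labels = np.array([0, 0, 0, 1, 1, 1])
    filters = pcanet_filters(train)
    feats = np.array([pcanet_features(im, filters) for im in train])
    clf = LinearSVC().fit(feats, labels)  # linear SVM on PCANet features
    print(feats.shape, clf.predict(feats[:1]).shape)
```

For the combined depth-plus-intensity experiments, the natural extension of this sketch is to compute one descriptor per modality and concatenate them before the SVM.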