{"title":"Semi-automatic annotation tool for sign languages","authors":"K. Aitpayev, Shynggys Islam, A. Imashev","doi":"10.1109/ICAICT.2016.7991803","DOIUrl":null,"url":null,"abstract":"The goal of this work is to automatically annotate manual and some non-manual features of sign language in video. To achieve this we examine two techniques one using depth camera Microsoft Kinect 2.0 and second using simple RGB mono camera. In this work, we describe strength and weaknesses of both approaches. Finally, we propose the semi-automatic web-based annotation tool based on second technique, which uses hand and face movement detection algorithms. Furthermore, proposed algorithm could be used not only for annotating clean training data, but also for automatic sign language recognition, as it is works in real time and quite robust to variability in intensity and background. Results are presented in our corpus1 with free access.","PeriodicalId":446472,"journal":{"name":"2016 IEEE 10th International Conference on Application of Information and Communication Technologies (AICT)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE 10th International Conference on Application of Information and Communication Technologies (AICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAICT.2016.7991803","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
The goal of this work is to automatically annotate manual and some non-manual features of sign language in video. To achieve this, we examine two techniques: one using the Microsoft Kinect 2.0 depth camera, and a second using a simple monocular RGB camera. We describe the strengths and weaknesses of both approaches. Finally, we propose a semi-automatic web-based annotation tool built on the second technique, which uses hand and face movement detection algorithms. The proposed algorithm can be used not only for annotating clean training data but also for automatic sign language recognition, as it runs in real time and is quite robust to variability in intensity and background. Results are presented in our corpus1, which is freely accessible.
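The abstract does not detail the segmentation step, but a semi-automatic annotation pipeline of this kind typically turns per-frame movement scores (e.g. from hand/face motion detection) into candidate annotation spans that a human annotator then verifies. The sketch below illustrates that grouping step only; the function name, threshold, and score values are hypothetical and not taken from the paper.

```python
def motion_spans(motion, threshold=0.2, min_len=3):
    """Group frames whose motion score exceeds `threshold` into candidate
    annotation spans (start_frame, end_frame) of at least `min_len` frames.

    Hypothetical sketch of the segmentation step; in practice the scores
    would come from hand/face movement detection on each video frame.
    """
    spans, start = [], None
    for i, score in enumerate(motion):
        if score > threshold and start is None:
            start = i                      # movement begins
        elif score <= threshold and start is not None:
            if i - start >= min_len:       # keep only sufficiently long spans
                spans.append((start, i - 1))
            start = None
    if start is not None and len(motion) - start >= min_len:
        spans.append((start, len(motion) - 1))  # span runs to the last frame
    return spans

# Illustrative per-frame motion scores for a short clip
scores = [0.0, 0.1, 0.5, 0.6, 0.7, 0.1, 0.0, 0.4, 0.5, 0.6, 0.7, 0.0]
print(motion_spans(scores))  # → [(2, 4), (7, 10)]
```

In a semi-automatic tool, spans like these would be surfaced in the web interface for the annotator to accept, merge, or relabel, rather than written to the corpus unreviewed.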