{"title":"Two Hand Indian Sign Language dataset for benchmarking classification models of Machine Learning","authors":"Leela Surya Teja Mangamuri, Lakshay Jain, Abhishek Sharmay","doi":"10.1109/ICICT46931.2019.8977713","DOIUrl":null,"url":null,"abstract":"Sign language recognition is currently an active area of research. Gesture recognition poses a serious challenge due to inconsistent illumination and background conditions, differing skin colours, and person-specific ways of performing each gesture. Recognition is even more difficult for Two Hand Indian Sign Language (THISL), where gestures are formed with both hands. As no proper THISL dataset is publicly available, we present one consisting of 26 gestures, one for each letter of the English alphabet. The dataset contains 9100 images of size 50x50 pixels, 350 per gesture, divided into a training set of 7020 images and a test set of 2080 images. In this paper, the THISL dataset is validated on various machine learning classification models, achieving an overall accuracy of 91.72%. The dataset is well suited for benchmarking machine learning algorithms and is freely available on request to the authors.","PeriodicalId":412668,"journal":{"name":"2019 International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICICT46931.2019.8977713","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Sign language recognition is currently an active area of research. Gesture recognition poses a serious challenge due to inconsistent illumination and background conditions, differing skin colours, and person-specific ways of performing each gesture. Recognition is even more difficult for Two Hand Indian Sign Language (THISL), where gestures are formed with both hands. As no proper THISL dataset is publicly available, we present one consisting of 26 gestures, one for each letter of the English alphabet. The dataset contains 9100 images of size 50x50 pixels, 350 per gesture, divided into a training set of 7020 images and a test set of 2080 images. In this paper, the THISL dataset is validated on various machine learning classification models, achieving an overall accuracy of 91.72%. The dataset is well suited for benchmarking machine learning algorithms and is freely available on request to the authors.
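To illustrate the dataset layout the abstract describes, below is a minimal Python sketch using NumPy. The pixel data here is random placeholder content (the real images are available only on request to the authors), and the nearest-centroid classifier is just an illustrative baseline, not one of the models the paper benchmarks. The per-class split of 270 train / 80 test images is an assumption inferred from the stated totals (7020/26 = 270, 2080/26 = 80, and 270 + 80 = 350 per gesture).

```python
import numpy as np

# THISL-style layout: 26 gesture classes (A-Z), 50x50 grayscale images,
# 350 images per gesture, split 270 train / 80 test per class (assumed
# from the paper's totals: 26*270 = 7020 train, 26*80 = 2080 test).
N_CLASSES, IMG_SIDE = 26, 50
PER_CLASS_TRAIN, PER_CLASS_TEST = 270, 80

rng = np.random.default_rng(0)

def make_split(per_class):
    # Random placeholder pixels standing in for the real images;
    # each 50x50 image is flattened to a 2500-dim feature vector.
    X = rng.random((N_CLASSES * per_class, IMG_SIDE * IMG_SIDE))
    y = np.repeat(np.arange(N_CLASSES), per_class)
    return X, y

X_train, y_train = make_split(PER_CLASS_TRAIN)
X_test, y_test = make_split(PER_CLASS_TEST)

# Sanity check against the split sizes reported in the abstract.
assert len(X_train) == 7020 and len(X_test) == 2080

# Illustrative nearest-centroid baseline: one mean image per gesture.
centroids = np.stack(
    [X_train[y_train == c].mean(axis=0) for c in range(N_CLASSES)]
)

# Squared distances via the expansion ||a-b||^2 = ||a||^2 - 2ab + ||b||^2,
# which avoids materialising a (2080, 26, 2500) broadcast array.
d2 = (
    (X_test ** 2).sum(axis=1)[:, None]
    - 2.0 * X_test @ centroids.T
    + (centroids ** 2).sum(axis=1)[None, :]
)
pred = d2.argmin(axis=1)  # predicted gesture class per test image
```

With real images in place of the random placeholders, `pred` would be compared against `y_test` to compute the benchmark accuracy (91.72% overall for the models evaluated in the paper).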