Parama Sridevi, Tahmida Islam, Urmi Debnath, Noor A Nazia, Rajat Chakraborty, C. Shahnaz
{"title":"Sign Language Recognition for Speech and Hearing Impaired by Image Processing in MATLAB","authors":"Parama Sridevi, Tahmida Islam, Urmi Debnath, Noor A Nazia, Rajat Chakraborty, C. Shahnaz","doi":"10.1109/R10-HTC.2018.8629823","DOIUrl":null,"url":null,"abstract":"The paper presents the model of a sign language interpreter that can verbalize American Sign Language (ASL). This robust model is based on creating a human-computer interface (HCI) using the user's hand gesture only. The combination of Hardware and software interfaces-webcam and MATLAB 2016a-performs the feature extraction process from the image captured from real-time video of hand signs. These features are compared with the features of the database images and after some image processing techniques in MATLAB, the system generates outputs depending on the prediction of highest resemblance. As the model is free from any other apparatus or accessories, it is solely practical and easy to use. This model provided satisfactory accuracy in our tests without any need of any constant or unicolor background. 
The proposed technique, together with a vast source database, will definitely be highly beneficial for mitigating the communication gap between the people with speaking and hearing abilities and those without them.","PeriodicalId":404432,"journal":{"name":"2018 IEEE Region 10 Humanitarian Technology Conference (R10-HTC)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Region 10 Humanitarian Technology Conference (R10-HTC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/R10-HTC.2018.8629823","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
The paper presents the model of a sign language interpreter that can verbalize American Sign Language (ASL). This model is based on creating a human-computer interface (HCI) driven by the user's hand gestures alone. A combination of hardware and software interfaces, a webcam and MATLAB 2016a, performs feature extraction on frames captured from real-time video of hand signs. These features are compared with the features of the database images, and after several image-processing steps in MATLAB, the system generates output corresponding to the prediction with the highest resemblance. Because the model requires no additional apparatus or accessories, it is practical and easy to use. The model achieved satisfactory accuracy in our tests without requiring a constant or unicolor background. The proposed technique, combined with a large source database, can help bridge the communication gap between people with speech and hearing impairments and those without.
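The recognition pipeline the abstract describes (extract features from a captured frame, compare against database features, output the closest match) can be sketched as follows. This is a minimal illustration, not the paper's MATLAB implementation: the histogram feature extractor, cosine-similarity measure, and all names (`extract_features`, `classify_sign`) are assumptions standing in for the paper's actual feature and matching choices.

```python
import numpy as np

def extract_features(image):
    # Placeholder feature extractor: a normalized grayscale intensity
    # histogram stands in for the features the paper derives in MATLAB.
    hist, _ = np.histogram(image, bins=16, range=(0, 256))
    return hist / max(hist.sum(), 1)

def classify_sign(frame, database):
    # Compare the captured frame's features against every database entry
    # and return the label with the highest resemblance (cosine similarity).
    query = extract_features(frame)
    best_label, best_score = None, -1.0
    for label, ref_image in database.items():
        ref = extract_features(ref_image)
        denom = np.linalg.norm(query) * np.linalg.norm(ref)
        score = float(query @ ref) / denom if denom else 0.0
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```

In a live system, `frame` would come from the webcam feed and `database` would map each ASL sign to its stored reference image; the predicted label would then be passed to a speech synthesizer to verbalize the sign.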