Advancing human-computer interaction: AI-driven translation of American Sign Language to Nepali using convolutional neural networks and text-to-speech conversion application
{"title":"Advancing human-computer interaction: AI-driven translation of American Sign Language to Nepali using convolutional neural networks and text-to-speech conversion application","authors":"Biplov Paneru , Bishwash Paneru , Khem Narayan Poudyal","doi":"10.1016/j.sasc.2024.200165","DOIUrl":null,"url":null,"abstract":"<div><div>Advanced technology that serves people with impairments is severely lacking in Nepal, especially when it comes to helping the hearing impaired communicate. Although sign language is one of the oldest and most organic ways to communicate, there aren't many resources available in Nepal to help with the communication gap between Nepali and American Sign Language (ASL). This study investigates the application of Convolutional Neural Networks (CNN) and AI-driven methods for translating ASL into Nepali text and speech to bridge the technical divide. Two pre-trained transfer learning models, ResNet50 and VGG16, were refined to classify ASL signs using extensive ASL image datasets. The system utilizes the Python gTTS package to translate signs into Nepali text and speech, integrating with an OpenCV video input TKinter-based Graphical User Interface (GUI). With both CNN architectures, the model's accuracy of over 99 % allowed for the smooth conversion of ASL to speech output. By providing a workable solution to improve inclusion and communication, the deployment of an AI-driven translation system represents a significant step in lowering the technological obstacles that disabled people in Nepal must overcome.</div></div>","PeriodicalId":101205,"journal":{"name":"Systems and Soft Computing","volume":"6 ","pages":"Article 200165"},"PeriodicalIF":0.0000,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Systems and Soft Computing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772941924000942","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Assistive technology for people with disabilities is severely lacking in Nepal, particularly technology that helps the hearing impaired communicate. Although sign language is one of the oldest and most natural forms of communication, few resources exist in Nepal to bridge the communication gap between Nepali and American Sign Language (ASL). This study investigates the application of Convolutional Neural Networks (CNNs) and AI-driven methods for translating ASL into Nepali text and speech. Two pre-trained transfer learning models, ResNet50 and VGG16, were fine-tuned to classify ASL signs using extensive ASL image datasets. The system uses the Python gTTS package to convert recognized signs into Nepali text and speech, integrated into a Tkinter-based graphical user interface (GUI) with OpenCV video input. Both CNN architectures achieved over 99% classification accuracy, enabling smooth conversion of ASL signs to speech output. By providing a workable solution to improve inclusion and communication, the AI-driven translation system represents a significant step toward lowering the technological barriers faced by disabled people in Nepal.
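The abstract does not report implementation details, but the first stage it describes, fine-tuning a pre-trained CNN to classify ASL signs, can be illustrated with a minimal Keras sketch. The class count, input size, and classification head below are assumptions for illustration (29 classes matching the common ASL alphabet dataset layout), not values taken from the paper.

```python
# Minimal transfer-learning sketch, assuming a 29-class ASL alphabet
# dataset (A-Z plus "space", "delete", "nothing"); the paper's exact
# class set and hyperparameters are not given in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 29          # assumption: ASL alphabet dataset layout
IMG_SIZE = (224, 224)     # VGG16's native input resolution

# Load ImageNet weights and freeze the convolutional base so only
# the new classification head is trained.
base = VGG16(weights="imagenet", include_top=False,
             input_shape=IMG_SIZE + (3,))
base.trainable = False

# Attach a small classification head for the ASL signs.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Freezing the base and training only the head is the standard transfer-learning recipe for small, domain-specific datasets; the same head can be swapped onto ResNet50 by replacing the `VGG16` import and constructor.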
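The second stage, turning a recognized sign into Nepali speech with gTTS, can be sketched as follows. The label-to-Nepali lookup table and the `speak_nepali` helper are hypothetical names introduced here for illustration; the paper's actual vocabulary, mapping, and GUI wiring are not described in the abstract.

```python
# Sketch of the sign-to-speech step using the Python gTTS package.
# The ASL_TO_NEPALI mapping below is a made-up placeholder, not the
# paper's vocabulary.
from gtts import gTTS

# Hypothetical mapping from classifier output labels to Nepali text.
ASL_TO_NEPALI = {
    "hello": "नमस्ते",
    "thanks": "धन्यवाद",
}

def speak_nepali(label: str, out_path: str = "sign.mp3") -> None:
    """Convert a predicted ASL label to Nepali speech and save it as MP3."""
    nepali_text = ASL_TO_NEPALI.get(label, label)
    tts = gTTS(text=nepali_text, lang="ne")  # "ne" selects the Nepali voice
    tts.save(out_path)

speak_nepali("hello")
```

In the system the abstract describes, a call like this would sit behind the Tkinter GUI: OpenCV supplies webcam frames, the fine-tuned CNN predicts a label, and the label is spoken aloud via gTTS.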