{"title":"American Sign Language Letter Recognition from Images Using CNN","authors":"Neeraj Singla","doi":"10.1109/ICEEICT56924.2023.10156922","DOIUrl":null,"url":null,"abstract":"American Sign Language (ASL) is a complex and diverse language used by millions of individuals with hearing impairments or disabilities. Accurate and efficient recognition of ASL letters from images is crucial for effective communication and accessibility.[1] However, this is a difficult task due to different hand shapes, orientations, and lighting conditions.In this study, we present a deep learning-based approach for accurately recognizing ASL characters from images. We trained three convolutional neural network (CNN) models, namely VGG16, InceptionV3, and MobileNetV2, on a large dataset of ASL letter images. These models were chosen because they have shown impressive performance in image classification tasks in various contexts. After training, we evaluated the models on a test set of ASL letter images, achieving classification accuracies of 90.7%, 95.7%, and 98% for VGG16, InceptionV3, and MobileNetV2 respectively.Our research provides significant contributions to the field of computer vision, particularly in the recognition of ASL letters from images. Our findings highlight the potential of deep learning-based research development for improving communication technology and accessibility for individuals with hearing impairments, by providing accurate and efficient recognition of ASL letters from images.","PeriodicalId":345324,"journal":{"name":"2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT)","volume":"155 1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEEICT56924.2023.10156922","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
American Sign Language (ASL) is a complex and diverse language used by millions of individuals with hearing impairments or disabilities. Accurate and efficient recognition of ASL letters from images is crucial for effective communication and accessibility [1]. However, this is a difficult task because of variations in hand shape, orientation, and lighting conditions. In this study, we present a deep learning-based approach for accurately recognizing ASL letters from images. We trained three convolutional neural network (CNN) models, namely VGG16, InceptionV3, and MobileNetV2, on a large dataset of ASL letter images; these models were chosen for their strong performance on image classification tasks across a range of domains. After training, we evaluated the models on a held-out test set of ASL letter images, achieving classification accuracies of 90.7% for VGG16, 95.7% for InceptionV3, and 98% for MobileNetV2. Our research contributes to the field of computer vision, particularly the recognition of ASL letters from images. The findings highlight the potential of deep learning to improve communication technology and accessibility for individuals with hearing impairments by providing accurate and efficient recognition of ASL letters from images.
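The abstract does not give implementation details, but the pipeline it describes (fine-tuning an ImageNet-pretrained CNN such as MobileNetV2 on a labelled ASL letter image dataset and reporting test-set classification accuracy) can be sketched as below. This is a minimal illustrative sketch, not the authors' code: the dataset directory names, the 26-class assumption, the input resolution, and all training hyperparameters are assumptions rather than values reported in the paper.

```python
# Minimal sketch (not the authors' code): transfer learning with MobileNetV2
# on ASL letter images using Keras. Assumes images stored in per-letter
# subfolders under hypothetical "asl_letters/{train,val,test}" directories.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # assumption: one class per ASL letter
IMG_SIZE = (224, 224)     # MobileNetV2's default input resolution

# Hypothetical dataset paths; replace with the actual ASL letter image directories.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_letters/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_letters/val", image_size=IMG_SIZE, batch_size=32)

# ImageNet-pretrained backbone with the classification head removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False    # freeze the backbone for initial transfer learning

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Evaluate test-set classification accuracy, the metric quoted in the abstract.
test_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_letters/test", image_size=IMG_SIZE, batch_size=32)
print(model.evaluate(test_ds, return_dict=True)["accuracy"])
```

The same scaffold applies to the other two backbones by swapping in tf.keras.applications.VGG16 or InceptionV3 (with its 299x299 input size); how the paper actually configured each model is not stated in the abstract.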