American Sign Language Recognition using Deep Learning
Anusha Puchakayala, Srivarshini Nalla, Pranathi K
2023 7th International Conference on Computing Methodologies and Communication (ICCMC), published 2023-02-23
DOI: 10.1109/ICCMC56507.2023.10084015
Citations: 0
Abstract
Communication plays a vital role in day-to-day life, but consider a scenario in which two people are unable to communicate because one of them does not comprehend what the other is attempting to say. Much of the deaf-mute community encounters this when conversing with hearing people: sign language is used by persons with hearing and speech impairments, while most hearing people do not know or understand it. This communication gap must be bridged. Therefore, a model has been developed to enable deaf-mute individuals and hearing people to communicate with one another. The model is a sign language detection system that uses a deep learning strategy to identify American Sign Language (ASL) gestures and output the corresponding alphabet letter in text format. A CNN model and a YOLOv5 model were built and compared against each other. The YOLOv5 model produced an accuracy of 84.96%, whereas the CNN model produced an accuracy of 80.59%.
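The abstract does not publish the CNN architecture, so the following is only a minimal illustrative sketch of the kind of classifier described: a small convolutional network that maps a hand-gesture image to one of the 26 ASL alphabet letters. All layer sizes, the 64x64 grayscale input, and the class name `ASLLetterCNN` are assumptions for illustration, not the authors' actual model.

```python
import torch
import torch.nn as nn


class ASLLetterCNN(nn.Module):
    """Hypothetical small CNN for 26-way ASL alphabet classification.

    Sketch only: the paper's real architecture and input size are not given.
    """

    def __init__(self, num_classes: int = 26):
        super().__init__()
        # Three conv blocks halve the spatial size each time: 64 -> 32 -> 16 -> 8
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),  # one logit per ASL letter A-Z
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


# Forward a batch of four fake 64x64 grayscale frames to check the shapes.
model = ASLLetterCNN()
batch = torch.randn(4, 1, 64, 64)
logits = model(batch)
print(logits.shape)  # torch.Size([4, 26])
```

In practice such a network would be trained with cross-entropy loss on a labeled ASL alphabet image dataset; the YOLOv5 comparison in the paper instead treats each letter sign as an object-detection class.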