{"title":"Imperative Methodology to Detect the Palm Gestures (American Sign Language) using Y010v5 and MediaPipe","authors":"Gouri Anilkumar, M. S. Fouzia, G. S. Anisha","doi":"10.1109/CONIT55038.2022.9847703","DOIUrl":null,"url":null,"abstract":"Humans place a high importance on the ability to interact. People with hearing or speaking difficulties had trouble expressing themselves. Despite the fact that sign language solved the problem, they were still unable to engage with the general populace., necessitating the development of sign language detectors. A variety of sign language detection algorithms are effectively open. This research investigates two well-known models for recognizing American Sign Language Gestures for alphabets: MediaPipe-LSTM and YOLO v5-PyTorch. They were given custom datasets., and the outcomes were inferred and compared to see how accurate and effective the models were.","PeriodicalId":270445,"journal":{"name":"2022 2nd International Conference on Intelligent Technologies (CONIT)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 2nd International Conference on Intelligent Technologies (CONIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CONIT55038.2022.9847703","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Humans place a high value on the ability to interact. People with hearing or speech impairments often have trouble expressing themselves. Although sign language addresses this problem, its users still struggle to communicate with the general public, which motivates the development of sign language detectors. A variety of sign language detection algorithms are openly available. This research investigates two well-known approaches for recognizing American Sign Language alphabet gestures: MediaPipe-LSTM and YOLOv5-PyTorch. Both were trained on custom datasets, and their outputs were compared to assess the accuracy and effectiveness of the two models.
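To make the two pipelines concrete, below is a minimal sketch of the MediaPipe-LSTM route: MediaPipe extracts 21 hand landmarks per frame, and a small LSTM classifies the resulting feature sequences over the 26 alphabet classes. The layer sizes, class count, sequence handling, and file names are illustrative assumptions, not the authors' exact configuration; the YOLOv5-PyTorch route, which instead trains a detector directly on labeled gesture images, is hinted at in a comment at the end.

```python
import cv2
import mediapipe as mp
import numpy as np
import torch
import torch.nn as nn

mp_hands = mp.solutions.hands

def extract_landmarks(image_bgr, hands):
    """Return a flat (63,) array of x, y, z for 21 hand landmarks, or zeros if no hand is found."""
    results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        return np.array([[p.x, p.y, p.z] for p in lm], dtype=np.float32).flatten()
    return np.zeros(21 * 3, dtype=np.float32)

class SignLSTM(nn.Module):
    """Illustrative LSTM over per-frame landmark vectors (hidden size and class count are assumptions)."""
    def __init__(self, n_classes=26, n_features=63, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):           # x: (batch, frames, 63)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])  # classify from the last time step

# Example usage ("frame.jpg" is a hypothetical input image):
hands = mp_hands.Hands(static_image_mode=True, max_num_hands=1)
features = extract_landmarks(cv2.imread("frame.jpg"), hands)            # (63,)
seq = torch.from_numpy(features).reshape(1, 1, -1)                      # (batch=1, frames=1, 63)
logits = SignLSTM()(seq)                                                # (1, 26) class scores

# YOLOv5 route (assumption: a custom-trained checkpoint best.pt exists):
# model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
# detections = model(cv2.imread("frame.jpg")).pandas().xyxy[0]
```

The design difference this sketch highlights is that the MediaPipe-LSTM pipeline classifies compact landmark features, whereas YOLOv5 localizes and labels the hand directly in the image, which is why the two models are compared on accuracy and effectiveness.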