An Expert System for Indian Sign Language Recognition using Spatial Attention based Feature and Temporal Feature
Soumen Das, Saroj Kr. Biswas, Biswajit Purkayastha
ACM Transactions on Asian and Low-Resource Language Information Processing
DOI: 10.1145/3643824
Published: 2024-02-03
Citations: 0
Abstract
Sign Language (SL) is the primary means of communication for hearing-impaired people. Hearing people often have difficulty understanding SL, resulting in a communication barrier between the hearing-impaired and the hearing community. Sign Language Recognition Systems (SLRS) help to bridge this gap. Many SLRS have been proposed for recognizing SL; however, only a limited number of works address Indian Sign Language (ISL). Most existing SLRS focus on global features rather than the Region of Interest (ROI), yet focusing on the hand region and extracting local features from the ROI improves system accuracy. The attention mechanism is a widely used technique for emphasizing the ROI, but only a few SLRS have used it: they employed the Convolutional Block Attention Module (CBAM) and temporal attention, while Spatial Attention (SA) has not been utilized in previous SLRS. Therefore, a novel SA-based SLRS named the Spatial Attention-based Sign Language Recognition Module (SASLRM) is proposed to recognize ISL words for emergency situations. SASLRM recognizes ISL words by combining convolutional features from a pretrained VGG-19 model with attention features from an SA module. The proposed model achieved an average accuracy of 95.627% on the ISL dataset. SASLRM was further validated on the LSA64, WLASL, and Cambridge Hand Gesture Recognition (HGR) datasets, where it reached accuracies of 97.84%, 98.86%, and 98.22%, respectively. The results indicate the effectiveness of the proposed SLRS in comparison with existing SLRS.
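The abstract does not give implementation details, but the spatial-attention idea it describes — pooling a convolutional feature map across channels, deriving a spatial weighting, and reweighting the features to emphasize the hand region — can be sketched as follows. This is a minimal NumPy illustration of CBAM-style spatial attention, not the authors' implementation; the usual learned 7×7 convolution over the pooled descriptors is replaced by a fixed 1×1 weighting to keep the sketch dependency-free, and all names and shapes are hypothetical.

```python
import numpy as np

def spatial_attention(features, w_avg=0.5, w_max=0.5):
    """Simplified CBAM-style spatial attention (illustrative sketch).

    features: (C, H, W) feature map, e.g. from a VGG-19 conv layer.
    Returns the attention-reweighted feature map of the same shape.
    """
    # Pool across the channel axis to get two (H, W) spatial descriptors.
    avg_pool = features.mean(axis=0)
    max_pool = features.max(axis=0)
    # A real module would apply a learned 7x7 conv over the stacked
    # descriptors; a fixed weighted sum stands in for it here.
    logits = w_avg * avg_pool + w_max * max_pool
    # Sigmoid squashes the logits into a (H, W) attention map in (0, 1).
    attention = 1.0 / (1.0 + np.exp(-logits))
    # Broadcast the spatial map over all channels to emphasize the ROI.
    return features * attention[np.newaxis, :, :]

# Example: a random 64-channel 7x7 feature map.
feats = np.random.default_rng(0).normal(size=(64, 7, 7))
out = spatial_attention(feats)
print(out.shape)  # (64, 7, 7)
```

In a full model, the attention-weighted map would be fused with the backbone's convolutional features before classification, as the abstract describes for SASLRM.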
Journal Introduction:
The ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP) publishes high-quality original archival papers and technical notes in the areas of computation and processing of information in Asian languages, low-resource languages of Africa, Australasia, Oceania and the Americas, and related disciplines. The subject areas covered by TALLIP include, but are not limited to:
- Computational Linguistics: including computational phonology, computational morphology, computational syntax (e.g., parsing), computational semantics, computational pragmatics, etc.
- Linguistic Resources: including computational lexicography, terminology, electronic dictionaries, cross-lingual dictionaries, electronic thesauri, etc.
- Hardware and software algorithms and tools for Asian or low-resource language processing, e.g., handwritten character recognition.
- Information Understanding: including text understanding, speech understanding, character recognition, discourse processing, dialogue systems, etc.
- Machine Translation involving Asian or low-resource languages.
- Information Retrieval: including natural language processing (NLP) for concept-based indexing, natural language query interfaces, semantic relevance judgments, etc.
- Information Extraction and Filtering: including automatic abstraction, user profiling, etc.
- Speech processing: including text-to-speech synthesis and automatic speech recognition.
- Multimedia Asian Information Processing: including speech, image, video, image/text translation, etc.
- Cross-lingual information processing involving Asian or low-resource languages.

Papers that deal with theory, systems design, evaluation, and applications in the aforesaid subjects are appropriate for TALLIP. Emphasis will be placed on the originality and practical significance of the reported research.