Title: EI-RNN-based text generation for the static and dynamic isolated sign language videos
Authors: S. Subburaj, S. Murugavalli, B. Muthusenthil
DOI: 10.3233/jifs-233610
Journal: Journal of Intelligent & Fuzzy Systems (IF 1.7, JCR Q3, Computer Science, Artificial Intelligence)
Publication date: 2023-10-26
EI-RNN-based text generation for the static and dynamic isolated sign language videos
Sign Language Recognition (SLR), which helps hearing-impaired people communicate with others through sign language, is considered a promising approach. However, because the features of some static signs can be identical to the features of a single frame of dynamic Isolated Sign Language (ISL), generating accurate text corresponding to the sign is necessary during SLR. Therefore, this article proposes Edge-directed Interpolation-based Recurrent Neural Network (EI-RNN)-centred text generation using varied features of static and dynamic isolated signs. First, ISL videos are converted to frames and pre-processed with key-frame extraction and illumination control. Next, the foreground is separated with the Symmetric Normalised Laplacian-centred Otsu Thresholding (SLOT) technique to locate accurate key points of the human pose. The pose key points are extracted with the MediaPipe Holistic (MPH) pipeline approach, and the resulting frame is fused with the depth image to enhance the features of the face and hand signs. Then, to differentiate static from dynamic actions, the action change across the fused frames is determined with a correlation matrix. Finally, features are extracted separately from the static and dynamic frames to generate the output text for the respective sign. The analysis shows that, compared with prevailing models, the proposed EI-RNN improves translation accuracy by 2.05% on the INCLUDE 50 Indian Sign Language dataset, and improves Top-1 accuracy by 2.44% and Top-10 accuracy by 1.71% on the WLASL 100 American Sign Language dataset.
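The correlation-based separation of static and dynamic actions described in the abstract can be sketched as follows. This is a minimal, pure-Python illustration, not the paper's implementation: frames are flattened grayscale intensity lists, the Pearson correlation between consecutive frames is computed, and the 0.98 threshold is an assumed illustrative value (the paper does not publish one here).

```python
# Hedged sketch of static/dynamic frame differentiation via inter-frame
# correlation. Frames are flattened grayscale pixel lists; the threshold
# value is an illustrative assumption, not taken from the paper.

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length pixel lists."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    if var_a == 0 or var_b == 0:
        # Constant frames: treat identical frames as perfectly correlated.
        return 1.0 if a == b else 0.0
    return cov / (var_a * var_b) ** 0.5

def classify_transitions(frames, threshold=0.98):
    """Label each consecutive frame pair 'static' (high correlation,
    i.e. little action change) or 'dynamic' (low correlation)."""
    labels = []
    for prev, curr in zip(frames, frames[1:]):
        corr = pearson(prev, curr)
        labels.append("static" if corr >= threshold else "dynamic")
    return labels

frames = [
    [10, 20, 30, 40],  # frame 1
    [10, 20, 30, 40],  # frame 2: unchanged -> static transition
    [40, 30, 20, 10],  # frame 3: large change -> dynamic transition
]
print(classify_transitions(frames))  # -> ['static', 'dynamic']
```

In a full pipeline, runs of "static" transitions would be routed to the static-feature branch and "dynamic" runs to the dynamic-feature branch before the RNN generates text.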
Journal overview:
The purpose of the Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology is to foster advancements of knowledge and help disseminate results concerning recent applications and case studies in the areas of fuzzy logic, intelligent systems, and web-based applications among working professionals and professionals in education and research, covering a broad cross-section of technical disciplines.