Afridi Ibn Rahman, Zebel-E.-Noor Akhand, Tasin Al Nahian Khan, Anirudh Sarda, Subhi Bhuiyan, Mma Rakib, Zubayer Ahmed Fahim, Indronil Kundu
{"title":"使用深度学习模型的连续手语文本解释","authors":"Afridi Ibn Rahman, Zebel-E.-Noor Akhand, Tasin Al Nahian Khan, Anirudh Sarda, Subhi Bhuiyan, Mma Rakib, Zubayer Ahmed Fahim, Indronil Kundu","doi":"10.1109/ICCIT57492.2022.10054721","DOIUrl":null,"url":null,"abstract":"The COVID-19 pandemic has obligated people to adopt the virtual lifestyle. Currently, the use of videoconferencing to conduct business meetings is prevalent owing to the numerous benefits it presents. However, a large number of people with speech impediment find themselves handicapped to the new normal as they cannot communicate their ideas effectively, especially in fast paced meetings. Therefore, this paper aims to introduce an enriched dataset using an action recognition method with the most common phrases translated into American Sign Language (ASL) that are routinely used in professional meetings. It further proposes a sign language detecting and classifying model employing deep learning architectures, namely, CNN and LSTM. The performances of these models are analysed by employing different performance metrics like accuracy, recall, F1- Score and Precision. CNN and LSTM models yield an accuracy of 93.75% and 96.54% respectively, after being trained with the dataset introduced in this study. Therefore, the incorporation of the LSTM model into different cloud services, virtual private networks and softwares will allow people with speech impairment to use sign language, which will automatically be translated into captions using moving camera circumstances in real time. This will in turn equip other people with the tool to understand and grasp the message that is being conveyed and easily discuss and effectuate the ideas.","PeriodicalId":255498,"journal":{"name":"2022 25th International Conference on Computer and Information Technology (ICCIT)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Continuous Sign Language Interpretation to Text Using Deep Learning Models\",\"authors\":\"Afridi Ibn Rahman, Zebel-E.-Noor Akhand, Tasin Al Nahian Khan, Anirudh Sarda, Subhi Bhuiyan, Mma Rakib, Zubayer Ahmed Fahim, Indronil Kundu\",\"doi\":\"10.1109/ICCIT57492.2022.10054721\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The COVID-19 pandemic has obligated people to adopt the virtual lifestyle. Currently, the use of videoconferencing to conduct business meetings is prevalent owing to the numerous benefits it presents. However, a large number of people with speech impediment find themselves handicapped to the new normal as they cannot communicate their ideas effectively, especially in fast paced meetings. Therefore, this paper aims to introduce an enriched dataset using an action recognition method with the most common phrases translated into American Sign Language (ASL) that are routinely used in professional meetings. It further proposes a sign language detecting and classifying model employing deep learning architectures, namely, CNN and LSTM. The performances of these models are analysed by employing different performance metrics like accuracy, recall, F1- Score and Precision. CNN and LSTM models yield an accuracy of 93.75% and 96.54% respectively, after being trained with the dataset introduced in this study. 
Therefore, the incorporation of the LSTM model into different cloud services, virtual private networks and softwares will allow people with speech impairment to use sign language, which will automatically be translated into captions using moving camera circumstances in real time. This will in turn equip other people with the tool to understand and grasp the message that is being conveyed and easily discuss and effectuate the ideas.\",\"PeriodicalId\":255498,\"journal\":{\"name\":\"2022 25th International Conference on Computer and Information Technology (ICCIT)\",\"volume\":\"52 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 25th International Conference on Computer and Information Technology (ICCIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCIT57492.2022.10054721\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 25th International Conference on Computer and Information Technology (ICCIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCIT57492.2022.10054721","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Continuous Sign Language Interpretation to Text Using Deep Learning Models
The COVID-19 pandemic has compelled people to adopt a virtual lifestyle. The use of videoconferencing for business meetings is now prevalent owing to the numerous benefits it presents. However, many people with speech impairments find themselves disadvantaged in this new normal because they cannot communicate their ideas effectively, especially in fast-paced meetings. This paper therefore introduces an enriched action-recognition dataset of the phrases most commonly used in professional meetings, translated into American Sign Language (ASL). It further proposes sign language detection and classification models based on two deep learning architectures, a CNN and an LSTM. The models are evaluated using standard metrics: accuracy, precision, recall, and F1-score. After training on the dataset introduced in this study, the CNN and LSTM models achieve accuracies of 93.75% and 96.54%, respectively. Incorporating the LSTM model into cloud services, virtual private networks, and other software would allow people with speech impairments to sign and have their signing automatically translated into captions in real time, even under moving-camera conditions. This, in turn, gives other participants the means to understand the message being conveyed and to discuss and act on the ideas with ease.
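The abstract does not include implementation details, so the following is only a minimal sketch of the kind of LSTM sequence classifier it describes, not the authors' code. It assumes each training sample is a fixed-length sequence of per-frame keypoint features extracted by some action-recognition pipeline; SEQ_LEN, N_FEATURES, N_CLASSES, and the layer sizes are illustrative placeholders, and the random arrays merely stand in for the dataset introduced in the paper.

```python
# Hypothetical LSTM classifier for sign-phrase sequences (a sketch, not the paper's model).
import numpy as np
import tensorflow as tf
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

SEQ_LEN = 30        # frames per clip (assumed)
N_FEATURES = 258    # keypoint features per frame (assumed)
N_CLASSES = 10      # number of ASL phrases (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # temporal modelling across frames
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data standing in for the meeting-phrase ASL dataset.
X_train = np.random.rand(200, SEQ_LEN, N_FEATURES).astype("float32")
y_train = np.random.randint(0, N_CLASSES, size=200)
X_test = np.random.rand(50, SEQ_LEN, N_FEATURES).astype("float32")
y_test = np.random.randint(0, N_CLASSES, size=50)

model.fit(X_train, y_train, epochs=5, batch_size=16, verbose=0)

# Evaluate with the metrics named in the abstract: accuracy, precision, recall, F1-score.
y_pred = model.predict(X_test, verbose=0).argmax(axis=1)
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_test, y_pred, average="macro", zero_division=0))
print("f1-score :", f1_score(y_test, y_pred, average="macro", zero_division=0))
```

If the reported results hold under this kind of setup, the LSTM's edge over the CNN would plausibly come from its ability to model temporal dependencies across frames of a continuous signing sequence, though the paper itself should be consulted for the actual architectures and training protocol.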