Computer Vision-Based Bengali Sign Language To Text Generation

Tonjih Tazalli, Zarin Anan Aunshu, Sumaya Sadbeen Liya, Magfirah Hossain, Zareen Mehjabeen, M. Ahmed, Muhammad Iqbal Hossain

2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS), 5 December 2022. DOI: 10.1109/IPAS55744.2022.10052928
Worldwide, around 7% of people have hearing and speech impairments, and they use sign language as their primary means of communication. In our country, many people are born with such impairments, so our primary focus is to serve them by converting Bangla sign language into text. Various projects on Bangla sign language already exist, but they have concentrated on individual alphabet letters and numerals. We instead focus on Bangla word signs, since everyday communication uses words and phrases rather than isolated letters. Because no proper database of Bangla word signs exists, we build one for our work using BDSL. Sign language recognition (SLR) typically falls into two scenarios: isolated SLR, which recognizes one word at a time, and continuous SLR, which translates an entire sentence at once. We work on isolated SLR. We introduce a method that uses PyTorch and YOLOv5 to build a video classification model converting Bangla sign language into text, where each video contains a single signed word. We achieve an accuracy of 76.29% on the training dataset and 51.44% on the testing dataset. We are working to build a system that makes it easier for hearing- and speech-disabled people to interact with the general public.
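One plausible way to realize the isolated-SLR pipeline the abstract describes is to run a fine-tuned YOLOv5 detector on each frame of a single-word clip and collapse the per-frame predictions into one word by majority vote. The sketch below is an assumption about the pipeline shape, not the paper's released code: the weights file name `bdsl_word_signs.pt` and the helper names are hypothetical, while the `torch.hub` loading call and the `results.pandas().xyxy` accessor are the standard Ultralytics YOLOv5 API.

```python
from collections import Counter

def majority_label(per_frame_labels):
    """Collapse a list of per-frame class predictions into one word
    by majority vote; return None if no frame produced a detection."""
    votes = Counter(per_frame_labels)
    return votes.most_common(1)[0][0] if votes else None

def classify_sign_video(video_path, model):
    """Hypothetical isolated-SLR step: detect the signed word in every
    frame with a YOLOv5 model, then majority-vote a single word label."""
    import cv2  # local import: only needed when actually decoding video

    cap = cv2.VideoCapture(video_path)
    labels = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame[..., ::-1])        # OpenCV gives BGR; YOLOv5 expects RGB
        detections = results.pandas().xyxy[0]    # one row per detected box
        if len(detections):
            # keep only the highest-confidence detection in this frame
            best = detections.sort_values("confidence").iloc[-1]
            labels.append(best["name"])
    cap.release()
    return majority_label(labels)

# Hypothetical usage (weights file is a placeholder, not a released artifact):
# import torch
# model = torch.hub.load("ultralytics/yolov5", "custom", path="bdsl_word_signs.pt")
# print(classify_sign_video("sign_clip.mp4", model))
```

Majority voting is one simple way to turn frame-level detections into a clip-level label; a temporal model over frame features would be a natural alternative for the same isolated-SLR setting.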