Two Way Communicator between Deaf and Dumb People and Normal People
Prashant G. Ahire, Kshitija B. Tilekar, Tejaswini A. Jawake, Pramod B. Warale
2015 International Conference on Computing Communication Control and Automation
DOI: 10.1109/ICCUBEA.2015.131 · Published: 2015-02-26
Citations: 26
Abstract
One of nature's most precious gifts to human beings is the ability to express oneself by responding to the events occurring in one's surroundings. Most people see, listen, and then react to a situation by speaking out, but some are deprived of this valuable gift, which creates a communication gap between them and the hearing population. This application helps both groups communicate with each other. The system consists of two modules: the first extracts Indian Sign Language (ISL) gestures from real-time video and maps them to human-understandable speech, and the second takes natural language as input and maps it to equivalent animated ISL gestures. Processing from video to speech involves forming frames from the video, finding the region of interest (ROI), matching the images against a language knowledge base using a correlation-based approach, and then generating the corresponding audio with the Google Text-to-Speech (TTS) API. In the other direction, natural language is mapped to equivalent ISL gestures by converting speech to text with the Google Speech-to-Text (STT) API and then mapping the text to the relevant animated gestures from the database.
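The forward (video-to-speech) path described in the abstract can be illustrated with a minimal sketch using common open-source tools; this is not the authors' implementation. It assumes a hypothetical directory `gesture_templates/` holding one template image per ISL gesture (named after its word), uses OpenCV's normalized cross-correlation (`cv2.matchTemplate`) as the correlation-based matcher, and uses the `gTTS` package as a stand-in for the Google TTS API. The fixed ROI coordinates and the 0.8 match threshold are likewise assumptions.

```python
# Minimal sketch of the video -> speech path (not the paper's code).
# Assumes OpenCV and gTTS are installed, and that each template image
# is smaller than the ROI (required by cv2.matchTemplate).
import glob
import os

import cv2
from gtts import gTTS

# Hypothetical knowledge base: word -> grayscale template image.
templates = {
    os.path.splitext(os.path.basename(p))[0]: cv2.imread(p, cv2.IMREAD_GRAYSCALE)
    for p in glob.glob("gesture_templates/*.png")
}

def recognize_gesture(frame, threshold=0.8):
    """Match the ROI of one frame against the template base using
    normalized cross-correlation; return the best word, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    roi = gray[100:400, 100:400]  # assumed fixed ROI for illustration
    best_word, best_score = None, threshold
    for word, templ in templates.items():
        result = cv2.matchTemplate(roi, templ, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

cap = cv2.VideoCapture(0)  # real-time video from the default camera
words = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    word = recognize_gesture(frame)
    if word and (not words or words[-1] != word):
        words.append(word)  # keep each gesture once per run
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop capturing
        break
cap.release()

if words:
    # Generate the relevant audio for the recognized word sequence.
    gTTS(text=" ".join(words), lang="en").save("speech.mp3")
```

The reverse (speech-to-gesture) path can be sketched the same way. The snippet below uses the `speech_recognition` package's `recognize_google` wrapper for the Google STT API; the `ISL_GESTURES` dictionary and its animation file paths are hypothetical placeholders for the paper's gesture database.

```python
# Minimal sketch of the speech -> ISL gesture path (not the paper's code).
import speech_recognition as sr

# Hypothetical gesture database: word -> animated gesture file.
ISL_GESTURES = {"hello": "gestures/hello.gif", "thank": "gestures/thank.gif"}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    audio = recognizer.listen(source)  # capture one utterance

try:
    text = recognizer.recognize_google(audio)  # Google Speech-to-Text
except sr.UnknownValueError:
    text = ""  # speech was unintelligible

# Map each recognized word to its animated gesture, if one exists.
clips = [ISL_GESTURES[w] for w in text.lower().split() if w in ISL_GESTURES]
print("Play in order:", clips)
```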