Design and Implementation of Vision Module for Visually Impaired People
R. Gatti, J. Avinash, N. Nataraja, G. Poornima, S. Santosh Kumar, K. S. Sunil Kumar
2020 International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), published 2020-11-12
DOI: 10.1109/RTEICT49044.2020.9315645
Citations: 9
Abstract
Several problems faced by visually impaired people have been addressed over the past three decades, including transportation, text-to-speech conversion, alarms, and internet access. There remain, however, several areas in which visually impaired people still depend on others for support; chief among these is access to daily essentials. This paper proposes the design and implementation of a vision system that supports visually impaired people using Artificial Neural Networks (ANNs). A deep learning technique identifies objects, while an ultrasonic sensor measures the distance to each object. The proposed methodology is well suited to converting visual scenes into voice messages that include the distinct location of each object. The accuracy of the proposed visual model depends on the data sets used to train the ANN: as the training data set grows, the prototype's performance improves and the processing delay in identifying objects decreases. The OpenCV platform, together with the Python programming language, is used to navigate the surroundings.