{"title":"基于android的视障对象识别应用程序","authors":"Akilesh Salunkhe, Manthan Raut, Shayantan Santra, Sumedha Bhagwat","doi":"10.1051/itmconf/20214003001","DOIUrl":null,"url":null,"abstract":"Detecting objects in real-time and converting them into an audio output was a challenging task. Recent advancement in computer vision has allowed the development of various real-time object detection applications. This paper describes a simple android app that would help the visually impaired people in understanding their surroundings. The information about the surrounding environment was captured through a phone’s camera where real-time object recognition through tensorflow’s object detection API was done. The detected objects were then converted into an audio output by using android’s text-to-speech library. Tensorflow lite made the offline processing of complex algorithms simple. The overall accuracy of the proposed system was found to be approximately 90%.","PeriodicalId":433898,"journal":{"name":"ITM Web of Conferences","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Android-based object recognition application for visually impaired\",\"authors\":\"Akilesh Salunkhe, Manthan Raut, Shayantan Santra, Sumedha Bhagwat\",\"doi\":\"10.1051/itmconf/20214003001\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Detecting objects in real-time and converting them into an audio output was a challenging task. Recent advancement in computer vision has allowed the development of various real-time object detection applications. This paper describes a simple android app that would help the visually impaired people in understanding their surroundings. The information about the surrounding environment was captured through a phone’s camera where real-time object recognition through tensorflow’s object detection API was done. The detected objects were then converted into an audio output by using android’s text-to-speech library. Tensorflow lite made the offline processing of complex algorithms simple. The overall accuracy of the proposed system was found to be approximately 90%.\",\"PeriodicalId\":433898,\"journal\":{\"name\":\"ITM Web of Conferences\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ITM Web of Conferences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1051/itmconf/20214003001\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ITM Web of Conferences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1051/itmconf/20214003001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Android-based object recognition application for visually impaired
Detecting objects in real time and converting the results into audio output is a challenging task. Recent advances in computer vision have enabled the development of various real-time object detection applications. This paper describes a simple Android app that helps visually impaired people understand their surroundings. Information about the surrounding environment is captured through the phone's camera, and real-time object recognition is performed using TensorFlow's Object Detection API. The detected objects are then converted into audio output using Android's text-to-speech library. TensorFlow Lite makes offline, on-device processing of the complex detection algorithms simple. The overall accuracy of the proposed system was found to be approximately 90%.
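The pipeline the abstract describes (camera frame → on-device TensorFlow Lite detection → spoken label) can be illustrated with a minimal Kotlin sketch. This is not the authors' code, which is not published with the abstract; it assumes a bundled SSD-style model named detect.tflite, the TensorFlow Lite Task Vision library (org.tensorflow:tensorflow-lite-task-vision) in place of the raw Object Detection API, and Android's built-in TextToSpeech engine.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.speech.tts.TextToSpeech
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.detector.ObjectDetector

// Sketch only: detect objects in one camera frame offline and speak the labels.
class SpokenObjectDetector(context: Context) {

    // Android's text-to-speech engine; the init status callback is omitted here.
    private val tts = TextToSpeech(context) { /* handle init status in a real app */ }

    // Load the on-device model once; inference then runs fully offline.
    private val detector = ObjectDetector.createFromFileAndOptions(
        context,
        "detect.tflite",                        // assumed asset name
        ObjectDetector.ObjectDetectorOptions.builder()
            .setMaxResults(3)                   // keep only the top few objects
            .setScoreThreshold(0.5f)            // drop low-confidence detections
            .build()
    )

    // Run detection on a single frame and announce the distinct labels found.
    fun describeFrame(frame: Bitmap) {
        val detections = detector.detect(TensorImage.fromBitmap(frame))
        val labels = detections
            .mapNotNull { it.categories.firstOrNull()?.label }
            .distinct()
        if (labels.isNotEmpty()) {
            tts.speak(labels.joinToString(", "), TextToSpeech.QUEUE_FLUSH, null, "frame")
        }
    }
}
```

In practice the app would feed frames from the camera preview (e.g. CameraX's image analysis callback) into describeFrame, throttling announcements so the speech output does not lag behind the live scene.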