IMPLEMENTING ASSISTIVE TECHNOLOGY FOR THE VISUALLY IMPAIRED USING DEEP LEARNING, COMPUTER VISION AND NLP

Rashi Agarwal, Vineet Jaruhar
Journal: BSSS Journal of Computer, Vol. 4, No. 1
DOI: 10.51767/jc1311 (https://doi.org/10.51767/jc1311)
Published: 2022-06-30
Citations: 0

Abstract

Visual impairment makes many daily activities difficult. Unfamiliar places, obstructions on known routes, water puddles, potholes, stray animals, sudden road incidents, and similar hazards reduce the confidence and independence of Visually Impaired People (VIPs), forcing them to rely on human assistance. We present a human-like autonomous assistant in the form of a stick, built with Computer Vision, NLP, GPS, Deep Learning, and Embedded Systems. The VIP communicates with the stick through a chatbot and can ask general questions (weather status, time of day, "What is my location?", "Where is point A?") as well as specific ones ("What is in front of me?", "Which room am I in?", "Navigate me from point A to point B"). When the stick detects an obstacle in the path, a pothole, ascending or descending stairs, or approaching people, it notifies the VIP and also suggests a clear path. The stick is accompanied by a smartphone application that tracks the user's location, which can be shared with anyone. We used a CNN to build an object-detection model that classifies 20 different objects, and included a day/night classifier to correctly determine the time of day. The stick is equipped with a camera, ultrasonic and wet sensors, and a GPS module with a controller. The controller captures real-time video from the camera, evaluates the feed with the object-detection model, and reports the outcome to the VIP through the chatbot. This enables safety, security, control, and greater independence for VIPs.
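The abstract describes a pipeline of camera capture, CNN object detection fused with ultrasonic distance readings, and chatbot alerts. A minimal sketch of that loop is below; all names (`Detection`, `detect`, `advise`) and the class labels are illustrative stand-ins, not the authors' actual API, and a real build would run the trained CNN on live camera frames (e.g. via OpenCV) instead of these stubs.

```python
from dataclasses import dataclass

# Illustrative labels only: the paper's model classifies 20 object types,
# but the exact class list is not given in the abstract.
CLASSES = ["person", "dog", "stairs", "pothole", "puddle"]

@dataclass
class Detection:
    label: str
    distance_m: float  # distance fused from the ultrasonic sensor reading

def detect(frame):
    """Stand-in for the CNN object-detection model (stub for illustration).

    A real implementation would run the trained network on a camera frame
    and pair each detection with an ultrasonic range measurement.
    """
    return [Detection("pothole", 1.2)]

def advise(detections, alert_within_m=2.0):
    """Turn detections into chatbot-style alerts for the user."""
    alerts = []
    for d in detections:
        if d.distance_m <= alert_within_m:
            alerts.append(f"{d.label} ahead at about {d.distance_m:.1f} m")
    return alerts or ["path is clear"]

print(advise(detect(None)))  # -> ['pothole ahead at about 1.2 m']
```

The alert threshold (`alert_within_m`) is an assumed parameter; the paper does not state at what range the stick warns the user.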