{"title":"Outdoor Obstacle Detection for Visually Impaired using AI Technique","authors":"Loubna Bougheloum, M. B. Salah, M. Bettayeb","doi":"10.1109/ICETSIS61505.2024.10459374","DOIUrl":null,"url":null,"abstract":"Obstacle detection is a crucial factor in ensuring the safety and mobility of visually impaired individuals. This paper introduces a comprehensive system designed to support individuals with visual impairments in outdoor environments, employing recent advancements in artificial intelligence (AI). The core of the system involves the use of YOLOv5 for efficient object recognition and Google Text-to-Speech (GTTS) for the conversion of detection results into clear and informative audio feedback. The model is trained on a customized dataset encompassing 10 specific outdoor object categories, in addition with the widely used MS COCO dataset. This strategic combination allows the system to attain heigh accuracy in obstacle detection, surpassing the performance of previous techniques. The model's ability to accurately identify and classify outdoor objects contributes to its efficacy in real-world scenarios. To ensure user accessibility, the system transforms output labels into text, which is then converted into an audio format. This audio feedback is seamlessly delivered to visually impaired users via earphones, providing real-time information about their surroundings. This approach represents a significant advancement in AI-driven outdoor obstacle detection, promising not only improved accuracy but also enhanced usability for individuals with visual impairments. By addressing the challenges of outdoor navigation, this new approach has the capacity to significantly enhance the autonomy and well-being of people with visual impairments in their everyday activities.","PeriodicalId":518932,"journal":{"name":"2024 ASU International Conference in Emerging Technologies for Sustainability and Intelligent Systems (ICETSIS)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 ASU International Conference in Emerging Technologies for Sustainability and Intelligent Systems (ICETSIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICETSIS61505.2024.10459374","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Obstacle detection is a crucial factor in ensuring the safety and mobility of visually impaired individuals. This paper introduces a comprehensive system designed to support individuals with visual impairments in outdoor environments, employing recent advances in artificial intelligence (AI). The core of the system uses YOLOv5 for efficient object recognition and Google Text-to-Speech (GTTS) to convert detection results into clear and informative audio feedback. The model is trained on a customized dataset encompassing 10 specific outdoor object categories, in addition to the widely used MS COCO dataset. This strategic combination allows the system to attain high accuracy in obstacle detection, surpassing the performance of previous techniques. The model's ability to accurately identify and classify outdoor objects contributes to its efficacy in real-world scenarios. To ensure user accessibility, the system transforms output labels into text, which is then converted into an audio format. This audio feedback is delivered to visually impaired users via earphones, providing real-time information about their surroundings. This approach represents a significant advance in AI-driven outdoor obstacle detection, promising not only improved accuracy but also enhanced usability for individuals with visual impairments. By addressing the challenges of outdoor navigation, the proposed approach has the capacity to significantly enhance the autonomy and well-being of people with visual impairments in their everyday activities.
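
The abstract describes a detection-to-speech pipeline: YOLOv5 identifies objects in a frame, the resulting class labels are formed into a text message, and GTTS turns that message into audio. The sketch below is a minimal illustration of such a pipeline, not the authors' implementation; the model variant (yolov5s), the COCO-pretrained weights, the message wording, and the function names are assumptions, and the paper instead uses a model fine-tuned on a 10-category custom outdoor dataset.

```python
# Minimal sketch of a YOLOv5 + gTTS obstacle-announcement pipeline (illustrative only).
import torch
from gtts import gTTS

# Load a pretrained YOLOv5 model from the Ultralytics hub (COCO classes).
# The paper's system would load weights fine-tuned on its custom outdoor dataset.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def describe_frame(image_path: str, audio_path: str = 'alert.mp3') -> None:
    """Detect objects in one frame and save a spoken description of the results."""
    results = model(image_path)                    # run inference on a single image
    detections = results.pandas().xyxy[0]          # DataFrame: one row per detection
    labels = detections['name'].unique().tolist()  # distinct detected class labels

    message = "Ahead: " + ", ".join(labels) if labels else "No obstacles detected."

    # Convert the text message to speech; a real-time system would stream this
    # to the user's earphones rather than writing a file to disk.
    gTTS(text=message, lang='en').save(audio_path)

# Hypothetical usage on a captured frame:
describe_frame('street_scene.jpg')
```

In a deployed system this loop would run continuously on camera frames, with the audio output prioritized or throttled so that announcements remain timely without overwhelming the user.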