PC-CS-YOLO: High-Precision Obstacle Detection for Visually Impaired Safety.

IF 3.5 · JCR Q2 (Chemistry, Analytical) · CAS Tier 3 (Multidisciplinary) · Sensors · Pub Date: 2025-01-17 · DOI: 10.3390/s25020534 · PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11768684/pdf/
Jincheng Li, Menglin Zheng, Danyang Dong, Xing Xie
Citations: 0

Abstract

The issue of obstacle avoidance and safety for visually impaired individuals has been a major topic of research. However, complex street environments still pose significant challenges for blind obstacle detection systems. Existing solutions often fail to provide real-time, accurate obstacle avoidance decisions. In this study, we propose a blind obstacle detection system based on the PC-CS-YOLO model. The system improves the backbone network by adopting the partial convolutional feed-forward network (PCFN) to reduce computational redundancy. Additionally, to enhance the network's robustness in multi-scale feature fusion, we introduce the Cross-Scale Attention Fusion (CSAF) mechanism, which integrates features from different sensory domains to achieve superior performance. Compared to state-of-the-art networks, our system shows improvements of 2.0%, 3.9%, and 1.5% in precision, recall, and mAP50, respectively. When evaluated on a GPU, the inference speed is 20.6 ms, which is 15.3 ms faster than YOLO11, meeting the real-time requirements for blind obstacle avoidance systems.
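The abstract attributes the reduced computational redundancy to partial convolution in the PCFN backbone, where only a fraction of the input channels is actually convolved and the remainder is passed through untouched. The paper's exact configuration is not given here; the sketch below, with an illustrative 1/4 channel fraction and hypothetical function names, shows why this cuts the multiply-accumulate cost quadratically relative to a dense convolution:

```python
# Sketch: FLOP comparison between a dense k x k convolution and a
# "partial convolution" that convolves only a fraction of the channels.
# The 1/4 fraction and all names are illustrative assumptions, not
# values taken from the paper.

def conv_flops(h, w, c_in, c_out, k=3):
    """Multiply-accumulate count of a dense k x k convolution
    over an h x w feature map (stride 1, same padding)."""
    return h * w * c_in * c_out * k * k

def pconv_flops(h, w, c, k=3, fraction=0.25):
    """Partial convolution: the k x k conv touches only
    `fraction` of the c channels; the rest are identity."""
    cp = int(c * fraction)
    return conv_flops(h, w, cp, cp, k)

full = conv_flops(40, 40, 256, 256)
partial = pconv_flops(40, 40, 256)
print(f"partial conv uses {partial / full:.2%} of the dense FLOPs")
# With a channel fraction r, the ratio is r^2: here (1/4)^2 = 6.25%.
```

The quadratic saving follows because both the input and output channel counts of the convolved slice shrink by the same fraction; spatial information in the untouched channels is recovered by the pointwise layers that follow in a PCFN-style block.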

Source journal: Sensors (Engineering & Technology – Electrochemistry)
CiteScore: 7.30
Self-citation rate: 12.80%
Annual article output: 8430
Review turnaround: 1.7 months
About the journal: Sensors (ISSN 1424-8220) provides an advanced forum for the science and technology of sensors and biosensors. It publishes reviews (including comprehensive reviews of complete sensor products), regular research papers, and short notes. Our aim is to encourage scientists to publish their experimental and theoretical results in as much detail as possible. There is no restriction on the length of the papers. Full experimental details must be provided so that the results can be reproduced.