Deep Learning for Automatic Road Marking Detection with Yolov5

Rung-Ching Chen, Yong-Cun Zhuang, Jeang-Kuo Chen, Christine Dewi
{"title":"Deep Learning for Automatic Road Marking Detection with Yolov5","authors":"Rung-Ching Chen, Yong-Cun Zhuang, Jeang-Kuo Chen, Christine Dewi","doi":"10.1109/ICMLC56445.2022.9941313","DOIUrl":null,"url":null,"abstract":"One of the most important responsibilities of a visual driver aid system is recognizing and tracking road signs. In recent years, tremendous progress has been made in both deep learning and the identification of road markings. Pedestrian crossings, directional arrows, zebra crossings, speed limit signs, and similar signs and text are all road surface markings. These markings are painted directly onto the surface of the road. This paper implements YOLOv5s and YOLOv5m to identify the road marking sign. We built a dataset and focused on the Taiwan road marking sign. According to the findings of our experiments, YOLOv5m contains eleven categories of whose training accuracy is superior to that of YOLOv5s. It has been discovered that the YOLOv5m model is the most accurate, scoring 87.30 percent overall throughout testing, while the YOLOv5s model scores an average of 83.60 percent.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"2018 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLC56445.2022.9941313","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

One of the most important responsibilities of a visual driver-assistance system is recognizing and tracking road signs. In recent years, tremendous progress has been made in both deep learning and the identification of road markings. Road surface markings include pedestrian crossings, directional arrows, zebra crossings, speed limit signs, and similar symbols and text painted directly onto the road surface. This paper applies YOLOv5s and YOLOv5m to identify road marking signs. We built a dataset focused on Taiwan road marking signs. According to our experiments, across the eleven categories in the dataset, YOLOv5m achieves higher training accuracy than YOLOv5s. The YOLOv5m model is the most accurate, averaging 87.30 percent in testing, while the YOLOv5s model averages 83.60 percent.
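
As a rough illustration of the workflow described in the abstract, the sketch below shows how a YOLOv5 model fine-tuned on a custom road-marking dataset might be trained and then loaded for inference with the ultralytics/yolov5 toolchain. The dataset YAML name, weight path, hyperparameters, and image file are hypothetical placeholders, not details taken from the paper.

```python
import torch

# Training on a custom Taiwan road-marking dataset would typically use the
# ultralytics/yolov5 repo's train.py, e.g. (dataset YAML and hyperparameters
# here are hypothetical, not the paper's actual configuration):
#   python train.py --img 640 --batch 16 --epochs 300 \
#       --data road_markings.yaml --weights yolov5m.pt
#
# road_markings.yaml would list the train/val image folders and the
# eleven road-marking class names.

# Load the fine-tuned weights through the official torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/exp/weights/best.pt")
model.conf = 0.25  # confidence threshold for reported detections

# Run inference on a street-scene photo (placeholder file name).
results = model("taiwan_street.jpg")
results.print()                   # per-class detection summary
boxes = results.pandas().xyxy[0]  # xmin, ymin, xmax, ymax, confidence, class, name
print(boxes)
```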