Real-time Traffic Monitoring System Based on Deep Learning and YOLOv8

ARO - The Scientific Journal of Koya University · IF 1.2 · Q3 (Multidisciplinary Sciences) · Pub Date: 2023-11-16 · DOI: 10.14500/aro.11327
Saif B. Neamah, Abdulamir A. Karim
{"title":"Real-time Traffic Monitoring System Based on Deep Learning and YOLOv8","authors":"Saif B. Neamah, Abdulamir A. Karim","doi":"10.14500/aro.11327","DOIUrl":null,"url":null,"abstract":"Computer vision applications are important nowadays because they provide solutions to critical problems that relate to traffic in a cost-effective manner to reduce accidents and preserve lives. This paper proposes a system for real-time traffic monitoring based on cutting-edge deep learning techniques through the state-of-the-art you-only-look-once v8 algorithm, benefiting from its functionalities to provide vehicle detection, classification, and segmentation. The proposed work provides various important traffic information, including vehicle counting, classification, speed estimation, and size estimation. This information helps enforce traffic laws. The proposed system consists of five stages: The preprocessing stage, which includes camera calibration, ROI calculation, and preparing the source video input; the vehicle detection stage, which uses the convolutional neural network model to localize vehicles in the video frames; the tracking stage, which uses the ByteTrack algorithm to track the detected vehicles; the speed estimation stage, which estimates the speed for the tracked vehicles; and the size estimation stage, which estimates the vehicle size. The results of the proposed system running on the Nvidia GTX 1070 GPU show that the detection and tracking stages have an average accuracy of 96.58% with an average error of 3.42%, the vehicle counting stage has an average accuracy of 97.54% with a 2.46% average error, the speed estimation stage has an average accuracy of 96.75% with a 3.25% average error, and the size estimation stage has an average accuracy of 87.28% with a 12.72% average error.","PeriodicalId":8398,"journal":{"name":"ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY","volume":null,"pages":null},"PeriodicalIF":1.2000,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14500/aro.11327","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

Computer vision applications are important because they offer cost-effective solutions to critical traffic problems, helping to reduce accidents and preserve lives. This paper proposes a system for real-time traffic monitoring based on cutting-edge deep learning techniques through the state-of-the-art You Only Look Once v8 (YOLOv8) algorithm, benefiting from its functionalities to provide vehicle detection, classification, and segmentation. The proposed work provides a range of important traffic information, including vehicle counting, classification, speed estimation, and size estimation. This information helps enforce traffic laws. The proposed system consists of five stages: the preprocessing stage, which includes camera calibration, ROI calculation, and preparing the source video input; the vehicle detection stage, which uses a convolutional neural network model to localize vehicles in the video frames; the tracking stage, which uses the ByteTrack algorithm to track the detected vehicles; the speed estimation stage, which estimates the speed of the tracked vehicles; and the size estimation stage, which estimates the vehicle size. The results of the proposed system running on the Nvidia GTX 1070 GPU show that the detection and tracking stages have an average accuracy of 96.58% with an average error of 3.42%, the vehicle counting stage has an average accuracy of 97.54% with a 2.46% average error, the speed estimation stage has an average accuracy of 96.75% with a 3.25% average error, and the size estimation stage has an average accuracy of 87.28% with a 12.72% average error.
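The abstract does not include implementation details, but the detect-track-estimate pipeline it describes can be approximated with publicly available tools. The following is a minimal sketch, assuming the ultralytics YOLOv8 package (with its built-in ByteTrack tracker) and OpenCV; the ground-plane scale METERS_PER_PIXEL, the counting line position, the input file name, and the vehicle class filter are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of a YOLOv8 + ByteTrack traffic-monitoring loop.
# Assumptions (not from the paper): ultralytics and OpenCV installed,
# a calibrated METERS_PER_PIXEL scale, and a virtual counting line.
from collections import defaultdict

import cv2
from ultralytics import YOLO

METERS_PER_PIXEL = 0.05         # hypothetical scale from camera calibration
VEHICLE_CLASSES = {2, 3, 5, 7}  # COCO ids: car, motorcycle, bus, truck
COUNT_LINE_Y = 400              # hypothetical counting line (pixels)

model = YOLO("yolov8n.pt")      # detection weights; the paper's model may differ
history = defaultdict(list)     # track id -> list of (frame_idx, cx, cy)
counted = set()

cap = cv2.VideoCapture("traffic.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1

    # Detection + ByteTrack tracking in one call.
    results = model.track(frame, persist=True, tracker="bytetrack.yaml", verbose=False)
    boxes = results[0].boxes
    if boxes.id is None:
        continue

    for box, tid, cls in zip(boxes.xyxy, boxes.id, boxes.cls):
        if int(cls) not in VEHICLE_CLASSES:
            continue
        x1, y1, x2, y2 = map(float, box)
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        tid = int(tid)
        history[tid].append((frame_idx, cx, cy))

        # Counting: first time this track's centre crosses the virtual line.
        if tid not in counted and len(history[tid]) > 1 and \
                history[tid][-2][2] < COUNT_LINE_Y <= cy:
            counted.add(tid)

        # Speed: displacement over the last few frames, scaled to metres.
        if len(history[tid]) >= 5:
            f0, x0, y0 = history[tid][-5]
            dist_m = ((cx - x0) ** 2 + (cy - y0) ** 2) ** 0.5 * METERS_PER_PIXEL
            speed_kmh = dist_m / ((frame_idx - f0) / fps) * 3.6

        # Size: bounding-box footprint converted with the same scale.
        width_m = (x2 - x1) * METERS_PER_PIXEL
        length_m = (y2 - y1) * METERS_PER_PIXEL

print(f"vehicles counted: {len(counted)}")
cap.release()
```

In the paper's pipeline, the pixel-to-metre scale comes from the camera calibration and ROI steps rather than a fixed constant, so the single METERS_PER_PIXEL value above stands in for that calibration.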