A framework for real-time vehicle counting and velocity estimation using deep learning
Wei-Chun Chen, Ming-Jay Deng, Ping-Yu Liu, Chun-Chi Lai, Yu-Hao Lin
Sustainable Computing: Informatics and Systems, Volume 40, Article 100927 (published 2023-11-07)
DOI: 10.1016/j.suscom.2023.100927
Citations: 0
Abstract
To better manage traffic and promote environmental sustainability, this study proposes a framework for monitoring vehicle counts and velocities in real time. First, the deep-learning-based You Only Look Once-v4 (Yolo-v4) algorithm greatly improves the accuracy of object detection in an image, while trackers such as Sort and Deepsort resolve the identity-switch problem and track multiple objects efficiently. Accordingly, this study combined Yolo-v4 with Sort and with Deepsort to develop two trajectory models, denoted YS and YDS, respectively. In addition, regions of interest (ROIs) with different pixel distances (PDs), named ROI-10 and ROI-14, were derived from road markings to calibrate the PD. Finally, one high-resolution benchmark video and two real-time low-resolution highway videos were used to validate the proposed framework. Results show that YDS with ROI-10 achieved 90% vehicle-counting accuracy against the actual vehicle count, outperforming YS with ROI-10, whereas YDS with ROI-14 produced relatively good vehicle-velocity estimates. On the real-time low-resolution videos, YDS with ROI-10 achieved 89.5% and 83.7% counting accuracy at the Nantun and Daya highway sites, respectively, and yielded reasonable velocity estimates. In future work, more bus and light-truck images could be collected to train Yolo-v4 more effectively and improve detection of those vehicle classes. Better mechanisms for precise vehicle-velocity estimation and for vehicle detection under different environmental conditions should also be investigated.
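The velocity-estimation step described above can be illustrated with a short sketch: once a tracked vehicle's trajectory is known, its speed follows from the ROI's calibrated real-world length and the number of frames the vehicle takes to cross it. The function below is a minimal, hypothetical illustration of that arithmetic (the names, the 30 fps frame rate, and the assumption that ROI-10 spans 10 m are ours, not taken from the paper).

```python
FPS = 30.0         # assumed video frame rate
ROI_METERS = 10.0  # assumed real-world length of the calibrated ROI (e.g. ROI-10)

def estimate_velocity_kmh(entry_frame: int, exit_frame: int,
                          fps: float = FPS,
                          roi_meters: float = ROI_METERS) -> float:
    """Speed of a tracked vehicle, given the frames at which its
    trajectory enters and leaves the ROI calibrated via road markings."""
    elapsed_s = (exit_frame - entry_frame) / fps
    if elapsed_s <= 0:
        raise ValueError("exit frame must come after entry frame")
    return roi_meters / elapsed_s * 3.6  # m/s -> km/h

# A vehicle crossing a 10 m ROI in 12 frames at 30 fps covers
# 10 m in 0.4 s, i.e. 25 m/s = 90 km/h.
print(estimate_velocity_kmh(100, 112))  # 90.0
```

In the paper's pipeline the entry and exit frames would come from the YS or YDS trajectories, where the tracker's persistent IDs also make counting a matter of tallying distinct IDs that cross the ROI.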
Journal description:
Sustainable computing is a rapidly expanding research area spanning computer science and engineering, electrical engineering, and other engineering disciplines. The aim of Sustainable Computing: Informatics and Systems (SUSCOM) is to publish research findings related to energy-aware and thermal-aware management of computing resources. Equally important is a spectrum of related research issues, such as applications of computing with ecological and societal impacts. SUSCOM publishes original, timely research papers and survey articles on power, energy, temperature, and environment-related topics of current importance to readers. SUSCOM has an editorial board of prominent researchers from around the world and selects competitively evaluated, peer-reviewed papers.