Vehicle trajectory extraction and integration from multi-direction video on urban intersection

IF 3.7 · CAS Tier 2 (Engineering & Technology) · JCR Q1 (Computer Science, Hardware & Architecture) · Displays · Pub Date: 2024-09-07 · DOI: 10.1016/j.displa.2024.102834
Jinjun Tang, Weihe Wang
Citations: 0

Abstract


With the gradual maturity of computer vision technology, extracting vehicle trajectories from intersection surveillance video has become a popular way to analyze vehicle conflicts and safety at urban intersections. However, many intersection surveillance videos have blind spots and fail to fully cover the entire intersection. Vehicles may also occlude each other, resulting in incomplete trajectories, and the camera angle of a surveillance video can likewise lead to inaccurate trajectory extraction. In response to these challenges, this study proposes a vehicle trajectory extraction and integration framework using surveillance videos collected from the four entrances of an urban intersection. The framework first employs an improved YOLOv5s model to detect vehicle positions. We then propose an object tracking model, MS-SORT, to extract the trajectories in each surveillance video. Subsequently, the trajectories from each video are mapped into the same coordinate system, and trajectory integration is achieved using space–time information and re-identification (ReID) methods. The framework extracts and integrates trajectories from four intersection surveillance videos, obtaining trajectories with significantly broader temporal and spatial coverage than those obtained from any single direction of surveillance video. Our detection model improved mAP by 1.3 percentage points over the baseline YOLOv5s, and our tracking model improved MOTA and IDF1 by 2.6 and 2.1 percentage points over DeepSORT. The trajectory integration method achieved an F1-score of 94.7 % and an RMSE of 0.51 m. The average length and number of the extracted trajectories increased by at least 47.6 % and 24.2 %, respectively, compared to trajectories extracted from a single video.
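The step of mapping per-camera trajectories into a shared coordinate system is commonly implemented with a planar homography; the abstract does not give the authors' exact procedure, so the following is a minimal sketch assuming a pre-calibrated 3×3 homography matrix `H` for each camera (hypothetical calibration, not from the paper):

```python
import numpy as np

def to_world(points_px: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Project an Nx2 array of pixel coordinates onto the shared
    ground plane using a 3x3 homography H (assumed pre-calibrated)."""
    # lift to homogeneous coordinates: (x, y) -> (x, y, 1)
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])
    mapped = pts @ H.T
    # de-homogenize: divide by the third component
    return mapped[:, :2] / mapped[:, 2:3]

# with the identity homography, points are unchanged
pts = np.array([[100.0, 200.0], [320.0, 240.0]])
world = to_world(pts, np.eye(3))
```

In practice each of the four cameras would have its own `H`, estimated from ground-plane correspondences, so that all trajectories land in one intersection-level frame before integration.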
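The integration of trajectories using space–time information and ReID can be thought of as an assignment problem: candidate track pairs from two cameras are gated by spatial distance in the shared frame and scored by appearance similarity. A simplified sketch of that idea (the weights, gate threshold, and helper names are illustrative assumptions, not the paper's method):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(pos_a, pos_b, feat_a, feat_b, max_dist=5.0):
    """Associate track endpoints from two cameras by combining a
    spatial gate (distance in the shared world frame, metres) with
    an appearance (cosine) cost from ReID embeddings."""
    # pairwise Euclidean distance between endpoints, shape (Na, Nb)
    d = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=-1)
    # cosine appearance cost from L2-normalized embeddings
    fa = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    fb = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    app = 1.0 - fa @ fb.T
    cost = d / max_dist + app
    cost[d > max_dist] = 1e6  # spatial gate: forbid distant pairs
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]
```

Matched pairs would then be stitched into one longer trajectory, which is what drives the reported gains in average trajectory length and count over any single-camera video.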

Source journal: Displays (Engineering: Electrical & Electronic)
CiteScore: 4.60
Self-citation rate: 25.60%
Articles per year: 138
Review time: 92 days
Journal introduction: Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including the display-human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human factors engineers new to the field, will also occasionally be featured.