Heterogeneous Driver-Aware Vehicle Trajectory Reconstruction and Fusion for Multiple Long-Range Traffic Detectors
Li Li, Yu-Tao Liu, Er-Long Tan, Li-Yong Zheng, Run-Min Wang
Traditional methods of vehicle trajectory reconstruction rely heavily on data from cross-sectional traffic detectors, but the effectiveness of existing methods is limited by the sparse information such data carry. In response, this study proposes a novel trajectory reconstruction method based on long-range traffic detectors. It builds upon Newell's car-following model and its derived inverse following model. Driver heterogeneity is accounted for by individually calibrating the model's key parameter, the spatial shift, which is optimised with the whale optimisation algorithm, improving reconstruction accuracy. Furthermore, to connect trajectories of the same vehicle reconstructed by adjacent long-range detectors covering the same area, a particle filter-based trajectory fusion method is developed: it fuses overlapping reconstructed trajectories and smoothly joins the sectional pieces into a single, seamless trajectory. The reconstruction method is evaluated on the NGSIM I-80 dataset, and the fusion method is tested on both the I-80 and the TRJD TS datasets. Results show that the reconstruction method generates complete vehicle trajectories across various traffic flow conditions, achieving an average 28.65% reduction in mean absolute error (MAE) compared with methods that do not account for driver heterogeneity. The MAE of the fused trajectories is reduced by 49.23% and 59.69% on average on the two datasets, respectively, compared with trajectories reconstructed from a single detector. The proposed method also outperforms a deep convolutional neural network and an improved adaptive smoothing method in reconstruction accuracy.
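For reference, Newell's simplified car-following model posits that a follower reproduces its leader's trajectory shifted by a time lag and a spatial shift; the forward and inverse forms below use notation assumed here for illustration, not taken from the paper.

```latex
% Newell's simplified car-following model: follower n reproduces the
% trajectory of leader n-1 shifted by a driver-specific time lag \tau_n
% and spatial shift d_n (the parameter calibrated per driver above).
\begin{align}
  x_n(t)     &= x_{n-1}(t - \tau_n) - d_n && \text{(forward: follower from leader)} \\
  x_{n-1}(t) &= x_n(t + \tau_n) + d_n     && \text{(inverse: leader from follower)}
\end{align}
```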
{"title":"Heterogeneous Driver-Aware Vehicle Trajectory Reconstruction and Fusion for Multiple Long-Range Traffic Detectors","authors":"Li Li, Yu-Tao Liu, Er-Long Tan, Li-Yong Zheng, Run-Min Wang","doi":"10.1049/itr2.70116","DOIUrl":"10.1049/itr2.70116","url":null,"abstract":"<p>Traditional methods of vehicle trajectory reconstruction heavily rely on the data of cross-sectional traffic detectors, but the effectiveness of existing methods is limited by insufficient information of the cross-sectional data. In response to this, this study proposes a novel trajectory reconstruction method based on long-range traffic detectors. It builds upon Newell's car-following model and its derived inverse following model. By taking driver heterogeneity into account through individualised calibration of the key parameter, namely the spatial shift, which is optimised by the whale optimisation algorithm, the accuracy of trajectory reconstruction is enhanced. Furthermore, to connect trajectories of the same vehicle in the same area that are reconstructed by adjacent long-range detectors, a particle filter-based trajectory fusion method is developed. It can fuse overlapped reconstructed trajectories and smoothly connect sectional trajectories into a complete and seamlessly connected one. The performance of the trajectory reconstruction method is evaluated on the NGSIM I-80 dataset, while the trajectory fusion method was tested on both the I-80 and the TRJD TS datasets. Results show that the reconstruction method generates complete vehicle trajectories across various traffic flow conditions, achieving an average of 28.65% reduction in mean absolute error compared to methods that do not account for driver heterogeneity. The mean absolute error of the fused trajectories was reduced by 49.23% and 59.69% on average for two datasets, respectively, compared to reconstructed trajectories using a single detector. The trajectory reconstruction accuracy of the proposed method also outperforms that of a deep convolutional neural network and an improved adaptive smoothing method.</p>","PeriodicalId":50381,"journal":{"name":"IET Intelligent Transport Systems","volume":"19 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/itr2.70116","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145619250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multi-Task ConvoBiLSTM Model With Self-Attention for Concurrent Forecasting of Traffic Accident Risk and Severity
Auwal Sagir Muhammad, Longbiao Chen, Cheng Wang
In this study, we introduce a multi-task deep learning framework that concurrently forecasts traffic accident risk and severity by integrating convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) units, and a self-attention mechanism. Unlike conventional single-task approaches, our model leverages shared spatiotemporal representations to capture complex patterns in traffic data, thereby enhancing both predictive accuracy and generalizability. Evaluations on large-scale datasets from New York City and Chicago demonstrate that our approach achieves high accuracy (up to 92% for accident risk and 89% for severity) and remains robust across diverse urban contexts. Moreover, an enhanced SHAP-based interpretability module provides granular insights into the influence of both observable and latent factors, such as driver behaviour or road surface conditions, on prediction outcomes. The self-attention mechanism further mitigates unobserved heterogeneity by highlighting critical time steps and feature interactions. With competitive real-time performance and throughput, our framework offers a practical solution for dynamic traffic safety applications. Future work will focus on extending evaluations to broader urban settings and integrating latent variable models to better quantify unobserved influences, ultimately advancing the development of safer, more efficient transportation systems.
{"title":"A Multi-Task ConvoBiLSTM Model With Self-Attention for Concurrent Forecasting of Traffic Accident Risk and Severity","authors":"Auwal Sagir Muhammad, Longbiao Chen, Cheng Wang","doi":"10.1049/itr2.70108","DOIUrl":"10.1049/itr2.70108","url":null,"abstract":"<p>In this study, we introduced a multi-task deep learning framework that concurrently forecasts traffic accident risk and severity by integrating convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) units, and a self-attention mechanism. Unlike conventional single-task approaches, our model leverages shared spatiotemporal representations to capture complex patterns in traffic data, thereby enhancing both predictive accuracy and generalizability. Evaluations on large-scale datasets from New York City and Chicago demonstrate that our approach achieves high accuracy (up to 92% for accident risk and 89% for severity) and remains robust across diverse urban contexts. Moreover, an enhanced SHAP-based interpretability module provides granular insights into the influence of both observable and latent factors, such as driver behaviour or road surface conditions, on prediction outcomes. The self-attention mechanism further mitigates unobserved heterogeneity by highlighting critical time steps and feature interactions. With competitive real-time performance and throughput, our framework offers a practical solution for dynamic traffic safety applications. Future work will focus on extending evaluations to broader urban settings and integrating latent variable models to better quantify unobserved influences, ultimately advancing the development of safer, more efficient transportation systems.</p>","PeriodicalId":50381,"journal":{"name":"IET Intelligent Transport Systems","volume":"19 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/itr2.70108","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145618887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Doreen Sebastian Sarwatt, Frank Kulwa, Huansheng Ning, Adamu Gaston Philipo, Xuanxia Yao, Jianguo Ding
Autonomous vehicles (AVs) depend critically on vision-based perception systems, with traffic sign classification (TSC) playing a crucial role in interpreting regulatory and warning signs for safe navigation. However, these systems are highly vulnerable to adversarial attacks: subtle input perturbations that deceive deep learning models while appearing benign to human drivers. While detection has been the primary focus of defense, recovery of adversarially perturbed signs remains significantly underexplored, despite its importance for real-time decision-making and operational safety. To bridge this gap, we present the first comprehensive benchmarking of state-of-the-art image classification recovery methods adapted to the traffic sign domain. We address three domain-specific challenges for autonomous driving: (1) robustness to real-world conditions (e.g., weather, occlusion), (2) latency compatible with real-time pipelines (a 100 ms budget), and (3) preservation of geometric/structural integrity. Our adaptations combine weather-resilient preprocessing, shape-preserving restoration, and latency-aware implementation. Under unified white-box attacks, we evaluate across the TSRD, BTSC, and GTSRB datasets using recovery rate (RR), structural similarity (SSIM), and recovery time (RT). To connect latency to function, we introduce the recovery-induced distance (RID), which maps RT to added travel distance. PuVAE, VAE, c-GAN, and CD-GAN achieve sub-few-millisecond RT.
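The abstract defines RID only as a mapping from recovery time to added travel distance. A minimal reading, assuming the simple kinematic form RID = v · RT at a cruise speed v (the paper's exact definition may differ):

```python
def recovery_induced_distance(rt_seconds: float, speed_kmh: float = 100.0) -> float:
    """Distance (m) travelled while a perturbed sign is being recovered:
    RID = v * RT, with v converted from km/h to m/s.
    The kinematic form here is an assumption, not the paper's formula."""
    return (speed_kmh / 3.6) * rt_seconds

# Example: a 5 ms recovery at 100 km/h adds ~0.14 m of blind travel,
# whereas exhausting the full 100 ms real-time budget adds ~2.8 m.
print(recovery_induced_distance(0.005))  # ~0.139
print(recovery_induced_distance(0.100))  # ~2.778
```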