Pub Date: 2025-10-22 · DOI: 10.1109/TIV.2025.3620581
Share Your Preprint Research with the World! IEEE Transactions on Intelligent Vehicles, vol. 10, no. 8, p. 4360. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11214313
Pub Date: 2025-10-22 · DOI: 10.1109/TIV.2025.3623575
Faizan M. Tariq;Zheng-Hang Yeh;Avinash Singh;David Isele;Sangjae Bae
Path-speed decomposition-based trajectory planning schemes have garnered widespread usage in real-world robotics applications due to their efficacy and computational efficiency. While a global route can be planned offline, generating a local path adaptive to real-time situations online remains essential. We propose a local path planning algorithm that prioritizes smoothness and low computational complexity, facilitating scalability to dense environments with various on-road entities. Our algorithm leverages a sparse graph structure to generate crucial obstacle-specific nodes and connect them via spline edges. Several conditional checks are introduced to maintain graph sparsity, boosting computational efficiency without compromising performance. The final path evaluation considers both the smoothness of the path and the risks to vulnerable road users. The effectiveness of the proposed algorithm is demonstrated through CARLA simulation studies and extensive comparative analysis against benchmarking methods. Finally, a scaled car demonstration with a dynamic vehicle on a curved road is presented to showcase the performance of the proposed method on a physical system.
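The sparse-graph idea in the abstract above can be illustrated with a toy sketch: obstacle-specific lateral offsets become graph nodes, spline segments become edges, and each edge is scored on bending energy plus clearance risk. Everything below (the Hermite segment, the cost weights, the candidate offsets) is an illustrative assumption, not the authors' implementation.

```python
def hermite(l0, l1, t):
    """Cubic Hermite segment from lateral offset l0 to l1 with zero end slopes."""
    h = 3 * t**2 - 2 * t**3          # smoothstep blending basis
    return l0 + (l1 - l0) * h

def second_deriv(l0, l1, t):
    """d^2 l / dt^2 of the segment above, used as a smoothness proxy."""
    return (l1 - l0) * (6 - 12 * t)

def edge_cost(l0, l1, obstacle_l=None, samples=20):
    """Average bending energy plus a soft penalty for passing near an obstacle."""
    cost = 0.0
    for i in range(samples):
        t = i / (samples - 1)
        cost += second_deriv(l0, l1, t) ** 2 / samples       # bending energy
        if obstacle_l is not None:
            gap = abs(hermite(l0, l1, t) - obstacle_l)
            cost += 50.0 / (gap + 0.1) / samples             # clearance risk
    return cost

# Pick the best terminal offset among sparse obstacle-specific candidates;
# the obstacle sits slightly to the left at l = 0.2 m, so swerving right wins.
candidates = [-1.5, 0.0, 1.5]        # candidate lateral node offsets (m)
best = min(candidates, key=lambda l1: edge_cost(0.0, l1, obstacle_l=0.2))
```

Keeping the candidate set sparse (one node per obstacle side rather than a dense lattice) is what keeps this kind of evaluation cheap enough for online replanning.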
Frenet Enveloping Planner: An Efficient Local Path Planning Framework for Autonomous Driving. IEEE Transactions on Intelligent Vehicles, vol. 11, no. 3, pp. 361-370.
Perception sensors, particularly camera and Lidar, are key elements of Autonomous Driving Systems (ADS) that enable them to comprehend their surroundings for informed driving and control decisions. Therefore, developing realistic simulation models for these sensors is essential for conducting effective simulation-based testing of ADS. Moreover, the rise of deep learning-based perception models has increased the utility of sensor simulation models for synthesising diverse training datasets. Traditional sensor simulation models rely on computationally expensive physics-based algorithms, especially in complex systems such as ADS. Hence, the current potential resides in data-driven approaches, fuelled by the exceptional performance of deep generative models in capturing high-dimensional data distributions and of volume renderers in accurately representing scenes. This paper reviews the current state-of-the-art data-driven camera and Lidar simulation models and their evaluation methods. It explores a spectrum of models from the novel perspective of generative models and volume renderers. Generative models are discussed in terms of their input-output types, while volume renderers are categorised based on their input encoding. Finally, the paper illustrates commonly used evaluation techniques for assessing sensor simulation models and highlights the existing research gaps in the area.
Pub Date: 2025-10-20 · DOI: 10.1109/TIV.2025.3624111
Data-Driven Camera and Lidar Simulation Models for Autonomous Driving: A Review From Generative Models to Volume Renderers. Hamed Haghighi;Xiaomeng Wang;Hao Jing;Mehrdad Dianati. IEEE Transactions on Intelligent Vehicles, vol. 11, no. 2, pp. 283-310.
Pub Date: 2025-10-15 · DOI: 10.1109/TIV.2025.3621906
Dhawal Salwan;Satyam Agarwal;Brijesh Kumbhani
In this article, we propose novel strategies for preamble detection and synchronization to detect the symbols from the phase shift keying-linear frequency modulated (PSK-LFM) joint sensing and communication waveform at the communication receiver of an uncrewed aerial vehicle (UAV). Since the radar waveform requires a large bandwidth (on the order of 100 MHz–2 GHz) for good range resolution, processing the same waveform at the UAV's communication receiver necessitates high analog-to-digital converter (ADC) sampling rates. To reduce the ADC sampling requirements, we propose two signal processing schemes for the communication receiver. In the first, a part of the received signal is filtered using a low pass filter (LPF) and sampled for preamble detection, symbol synchronization, and symbol detection. In the second, the entire signal undergoes undersampling for subsequent processing. Furthermore, we obtain bit error rate (BER) performance for all the cases by considering time, phase, and carrier frequency offsets. We show that processing the undersampled signal yields superior BER performance compared to the filtering approach, even when both operate at an equivalent sampling rate. Finally, for all cases, we compare the simulation results with and without offsets, along with the analytical results obtained without offsets.
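The undersampling scheme described above leans on a standard fact: sampling a bandpass signal below its carrier's Nyquist rate folds the carrier to a predictable alias frequency, so later processing can run at the reduced ADC rate. A minimal sketch with assumed frequencies (not the paper's parameters):

```python
import math

f_carrier = 90e6      # received carrier (Hz); an assumed value for illustration
f_s = 40e6            # undersampled ADC rate (Hz), below Nyquist for the carrier

# Alias frequency after undersampling: fold f_carrier toward baseband.
f_alias = abs(f_carrier - round(f_carrier / f_s) * f_s)

# Sanity check: samples of the carrier coincide with samples of the alias
# tone, so baseband processing sees the folded signal.
for n in range(8):
    t = n / f_s
    assert math.isclose(math.cos(2 * math.pi * f_carrier * t),
                        math.cos(2 * math.pi * f_alias * t), abs_tol=1e-6)
```

Here a 90 MHz tone sampled at 40 MHz folds to 10 MHz; the design task the paper studies is doing preamble detection and symbol decisions on such a folded signal rather than on the full-rate waveform.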
Design and Analysis of Low-Complexity Communication Receiver for PSK–LFM Joint Sensing and Communication Waveform. IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 236-248.
Pub Date: 2025-10-14 · DOI: 10.1109/TIV.2025.3621205
Pengzhi Zhong;Xiaoyu Guo;Defeng Huang;Xiaojun Peng;Yian Li;Qijun Zhao;Shuiwang Li
In recent years, the field of visual tracking has made significant progress with the application of large-scale training datasets. These datasets have supported the development of sophisticated algorithms, enhancing the accuracy and stability of visual object tracking. However, most research has primarily focused on favorable illumination conditions, neglecting the challenges of tracking in low-light environments. In low-light scenes, lighting may change dramatically, targets may lack distinct texture features, and in some scenarios, targets may not be directly observable. These factors can lead to a severe decline in tracking performance. To address this issue, we introduce LLOT, a benchmark specifically designed for Low-Light Object Tracking. LLOT comprises 269 challenging sequences with a total of over 132K frames, each carefully annotated with bounding boxes. This specially designed dataset aims to promote innovation and advancement in object tracking techniques for low-light conditions, addressing challenges not adequately covered by existing benchmarks. To assess the performance of existing methods on LLOT, we conducted extensive tests on 39 state-of-the-art tracking algorithms. The results highlight a considerable gap in low-light tracking performance. In response, we propose H-DCPT, a novel tracker that incorporates historical and darkness clue prompts to set a stronger baseline. H-DCPT outperformed all 39 evaluated methods in our experiments, demonstrating significant improvements. We hope that our benchmark and H-DCPT will stimulate the development of novel and accurate methods for tracking objects in low-light conditions.
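Benchmarks of this kind typically score trackers with an IoU-based success rate over annotated frames; the sketch below shows that common metric with toy boxes (LLOT's exact evaluation protocol is an assumption here, not taken from the abstract).

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))   # overlap height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(preds, gts, threshold=0.5):
    """Fraction of frames whose predicted box overlaps ground truth above threshold."""
    hits = sum(iou(p, g) > threshold for p, g in zip(preds, gts))
    return hits / len(gts)

# Toy 3-frame sequence: exact hit, partial overlap, complete miss.
preds = [(0, 0, 10, 10), (5, 5, 10, 10), (50, 50, 10, 10)]
gts   = [(0, 0, 10, 10), (0, 0, 10, 10), (0, 0, 10, 10)]
rate = success_rate(preds, gts)
```

Sweeping the threshold from 0 to 1 and plotting the resulting success rates gives the success curves usually reported when comparing trackers on such benchmarks.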
Low-Light Object Tracking: A Benchmark. IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 220-235.
Urban intersections are critical areas where traffic flows converge and conflict, significantly influencing traffic safety, economic benefits, and energy consumption. Effective management and control of intersections have become a central focus in transportation research. With the advancement of automation technology, intersection management methods combined with connected autonomous vehicles (CAVs) have been rapidly developed. However, a comprehensive analysis of these emerging approaches, from intersection design to management, remains lacking. This paper systematically reviews recent research on cooperative intersection management (CIM). Firstly, it explores the design of intersections. Secondly, various management objectives and evaluation methods are outlined. Thirdly, relevant research on vehicle trajectory control at the micro level, intersection management at the meso level, and arterial traffic flow regulation and network management at the macro level are discussed in detail. Based on this analysis, this paper identifies future research themes, emphasizing the need for trade-offs, integration, and coordination. Key areas for further study include enhancing the alignment between abstract models and real-world applications, balancing the performance of control methods with their implementation efficiency, and integrating various intersection control strategies to cooperatively enhance traffic efficiency, sustainability, and safety.
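As a concrete taste of the micro-level coordination such reviews cover, here is a minimal first-come-first-served conflict-point scheduler for CAVs. The headway value, conflict-point names, and arrival times are illustrative assumptions, not any specific surveyed method.

```python
HEADWAY = 2.0  # assumed minimum separation (s) between conflicting crossings

def schedule(requests):
    """Assign each vehicle the earliest crossing time that keeps the required
    headway at every intersection conflict point it occupies.

    requests: iterable of (vehicle_id, arrival_time, set_of_conflict_points).
    """
    last_use = {}            # conflict point -> last scheduled crossing time
    assigned = {}
    for vid, arrival, points in sorted(requests, key=lambda r: r[1]):
        t = arrival
        for p in points:     # wait out the headway at every shared point
            if p in last_use:
                t = max(t, last_use[p] + HEADWAY)
        for p in points:     # reserve the points at the chosen time
            last_use[p] = t
        assigned[vid] = t
    return assigned

# Toy scenario: B shares conflict point c1 with A and c2 with C, so it is
# delayed behind A, and C is in turn delayed behind B.
reqs = [("A", 0.0, {"c1"}), ("B", 0.5, {"c1", "c2"}), ("C", 1.0, {"c2"})]
assigned = schedule(reqs)
```

Reservation schemes of this flavor trade optimality for simplicity; much of the CIM literature the review surveys replaces the first-come-first-served rule with optimization over crossing orders and trajectories.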
Pub Date: 2025-10-14 · DOI: 10.1109/TIV.2025.3620807
A Review of Cooperative Intersection: From Design to Management. Zhihong Yao;Yingying Zhao;Haoran Jiang;Yangsheng Jiang. IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 197-219.