Pub Date : 2025-10-22  DOI: 10.1109/TIV.2025.3620581
"Share Your Preprint Research with the World!" IEEE Transactions on Intelligent Vehicles, vol. 10, no. 8, p. 4360. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11214313
Pub Date : 2025-10-20  DOI: 10.1109/TIV.2025.3624111
Hamed Haghighi;Xiaomeng Wang;Hao Jing;Mehrdad Dianati
Perception sensors, particularly camera and Lidar, are key elements of Autonomous Driving Systems (ADS), enabling them to comprehend their surroundings for informed driving and control decisions. Developing realistic simulation models for these sensors is therefore essential for effective simulation-based testing of ADS. Moreover, the rise of deep learning-based perception models has increased the utility of sensor simulation models for synthesising diverse training datasets. Traditional sensor simulation models rely on computationally expensive physics-based algorithms, particularly in complex systems such as ADS. Hence, attention has shifted to data-driven approaches, fuelled by the exceptional performance of deep generative models in capturing high-dimensional data distributions and of volume renderers in accurately representing scenes. This paper reviews the current state-of-the-art data-driven camera and Lidar simulation models and their evaluation methods. It explores a spectrum of models from the novel perspective of generative models and volume renderers. Generative models are discussed in terms of their input-output types, while volume renderers are categorised based on their input encoding. Finally, the paper illustrates commonly used evaluation techniques for assessing sensor simulation models and highlights the existing research gaps in the area.
"Data-Driven Camera and Lidar Simulation Models for Autonomous Driving: A Review From Generative Models to Volume Renderers," by Hamed Haghighi; Xiaomeng Wang; Hao Jing; Mehrdad Dianati. IEEE Transactions on Intelligent Vehicles, vol. 11, no. 2, pp. 283-310. DOI: 10.1109/TIV.2025.3624111. Published 2025-10-20.
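One family of evaluation techniques for camera simulation models compares the distribution of synthesised images against real ones, most commonly via the Fréchet Inception Distance (FID). The sketch below computes the underlying Fréchet distance between two Gaussians fitted to feature sets; as a simplifying assumption it uses diagonal covariances (real FID uses full covariances of Inception-v3 features), and the random features are stand-ins for network activations:

```python
import numpy as np

def frechet_distance_diag(feats_real, feats_sim):
    """Frechet distance between Gaussians fitted to two feature sets,
    under the simplifying assumption of diagonal covariances."""
    mu_r, mu_s = feats_real.mean(axis=0), feats_sim.mean(axis=0)
    var_r, var_s = feats_real.var(axis=0), feats_sim.var(axis=0)
    # ||mu_r - mu_s||^2 + Tr(C_r + C_s - 2 (C_r C_s)^{1/2}), diagonal case
    return float(np.sum((mu_r - mu_s) ** 2)
                 + np.sum(var_r + var_s - 2.0 * np.sqrt(var_r * var_s)))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 64))  # stand-in "real" features
sim = rng.normal(0.5, 1.0, size=(1000, 64))   # stand-in "simulated" features
print(frechet_distance_diag(real, real[:500]))  # small: same distribution
print(frechet_distance_diag(real, sim))         # larger: shifted distribution
```

A lower score indicates the simulated distribution sits closer to the real one, which is why such metrics appear throughout the evaluation literature the review surveys.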
Pub Date : 2025-10-16  DOI: 10.1109/TIV.2025.3617876
"Share Your Preprint Research with the World!" IEEE Transactions on Intelligent Vehicles, vol. 10, no. 7, p. 4126. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11205923
Pub Date : 2025-10-15  DOI: 10.1109/TIV.2025.3621906
Dhawal Salwan;Satyam Agarwal;Brijesh Kumbhani
In this article, we propose novel strategies for preamble detection and synchronization to detect the symbols of the phase shift keying-linear frequency modulated (PSK-LFM) joint sensing and communication waveform at the communication receiver of an uncrewed aerial vehicle (UAV). Since the radar waveform requires a large bandwidth (on the order of 100 MHz–2 GHz) for good range resolution, processing the same waveform at the UAV's communication receiver necessitates high analog-to-digital converter (ADC) sampling rates. To relax the ADC sampling requirements, we propose two signal processing schemes for the communication receiver. In the first, part of the received signal is filtered with a low-pass filter (LPF) and sampled for preamble detection, symbol synchronization, and symbol detection. In the second, the entire signal is undersampled for subsequent processing. Furthermore, we obtain the bit error rate (BER) performance for all cases, accounting for time, phase, and carrier frequency offsets. We show that processing the undersampled signal yields superior BER performance compared to the filtering approach, even when both operate at an equivalent sampling rate. Finally, for all cases, we compare the simulation results with and without offsets against the analytical results obtained without offsets.
"Design and Analysis of Low-Complexity Communication Receiver for PSK–LFM Joint Sensing and Communication Waveform," by Dhawal Salwan; Satyam Agarwal; Brijesh Kumbhani. IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 236-248. DOI: 10.1109/TIV.2025.3621906. Published 2025-10-15.
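To make the undersampling idea concrete, the sketch below builds a toy PSK-LFM baseband signal (one full LFM chirp per symbol, with the PSK symbol carried as a constant phase offset) and then decimates it. This is a hedged illustration, not the authors' exact waveform or detector: the construction, sampling rate, and decimation factor are assumptions chosen only to show how keeping every Dth sample cuts the effective ADC rate:

```python
import numpy as np

def psk_lfm(symbol_phases, fs=200e6, sym_dur=1e-6, sweep_bw=100e6):
    """Toy baseband PSK-LFM: one LFM chirp per symbol, with the PSK
    symbol applied as a constant phase offset on that chirp."""
    n = int(fs * sym_dur)
    t = np.arange(n) / fs
    k = sweep_bw / sym_dur                   # chirp rate (Hz/s)
    chirp = np.exp(1j * np.pi * k * t ** 2)  # unit-amplitude LFM sweep
    return np.concatenate([chirp * np.exp(1j * ph) for ph in symbol_phases])

# BPSK phases {0, pi} for bits [1, 0, 1, 1]
bits = np.array([1, 0, 1, 1])
sig = psk_lfm(np.pi * (1 - bits))

# Undersampling scheme: keep every 8th sample, cutting the effective
# ADC rate from 200 MHz to 25 MHz; the resulting aliasing is accepted
# and must be handled by the downstream detector.
undersampled = sig[::8]
print(len(sig), len(undersampled))
```

The trade-off the paper quantifies is between this deliberate aliasing and the alternative of low-pass filtering before sampling, which discards part of the chirp bandwidth instead.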
Pub Date : 2025-10-14  DOI: 10.1109/TIV.2025.3621205
Pengzhi Zhong;Xiaoyu Guo;Defeng Huang;Xiaojun Peng;Yian Li;Qijun Zhao;Shuiwang Li
In recent years, the field of visual tracking has made significant progress with the application of large-scale training datasets. These datasets have supported the development of sophisticated algorithms, enhancing the accuracy and stability of visual object tracking. However, most research has focused on favorable illumination conditions, neglecting the challenges of tracking in low-light environments. In low-light scenes, lighting may change dramatically, targets may lack distinct texture features, and in some scenarios targets may not be directly observable at all. These factors can cause a severe decline in tracking performance. To address this issue, we introduce LLOT, a benchmark specifically designed for Low-Light Object Tracking. LLOT comprises 269 challenging sequences totalling over 132K frames, each carefully annotated with bounding boxes. The dataset aims to promote innovation in object tracking techniques for low-light conditions, addressing challenges not adequately covered by existing benchmarks. To assess the performance of existing methods on LLOT, we conducted extensive tests of 39 state-of-the-art tracking algorithms. The results highlight a considerable gap in low-light tracking performance. In response, we propose H-DCPT, a novel tracker that incorporates historical and darkness clue prompts to set a stronger baseline. H-DCPT outperformed all 39 evaluated methods in our experiments, demonstrating significant improvements. We hope that our benchmark and H-DCPT will stimulate the development of novel and accurate methods for tracking objects in low-light conditions.
"Low-Light Object Tracking: A Benchmark," by Pengzhi Zhong; Xiaoyu Guo; Defeng Huang; Xiaojun Peng; Yian Li; Qijun Zhao; Shuiwang Li. IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 220-235. DOI: 10.1109/TIV.2025.3621205. Published 2025-10-14.
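Tracking benchmarks of this kind are typically scored with an IoU-based success curve: the fraction of frames whose predicted box overlaps the ground truth above a threshold, averaged over a threshold sweep. The sketch below shows that standard computation; the box format and threshold grid are common conventions rather than LLOT's documented protocol, and the toy boxes are illustrative only:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """Fraction of frames with IoU above each threshold, averaged over
    the threshold sweep (the usual area-under-curve success score)."""
    ious = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean([(ious > t).mean() for t in thresholds]))

gt = [(10, 10, 50, 50)] * 3
pred = [(10, 10, 50, 50), (20, 20, 50, 50), (100, 100, 50, 50)]
print(success_auc(pred, gt))
```

A per-tracker score like this, computed over all 269 sequences, is what makes a ranking of the 39 evaluated algorithms possible.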
Pub Date : 2025-10-14  DOI: 10.1109/TIV.2025.3620807
Zhihong Yao;Yingying Zhao;Haoran Jiang;Yangsheng Jiang
Urban intersections are critical areas where traffic flows converge and conflict, significantly influencing traffic safety, economic benefits, and energy consumption. Effective management and control of intersections has therefore become a central focus of transportation research. With the advancement of automation technology, intersection management methods built around connected autonomous vehicles (CAVs) have developed rapidly. However, a comprehensive analysis of these emerging approaches, from intersection design to management, remains lacking. This paper systematically reviews recent research on cooperative intersection management (CIM). First, it explores the design of intersections. Second, various management objectives and evaluation methods are outlined. Third, relevant research on vehicle trajectory control at the micro level, intersection management at the meso level, and arterial traffic flow regulation and network management at the macro level is discussed in detail. Based on this analysis, the paper identifies future research themes, emphasizing the need for trade-offs, integration, and coordination. Key areas for further study include improving the alignment between abstract models and real-world applications, balancing the performance of control methods against their implementation efficiency, and integrating various intersection control strategies to collectively enhance traffic efficiency, sustainability, and safety.
"A Review of Cooperative Intersection: From Design to Management," by Zhihong Yao; Yingying Zhao; Haoran Jiang; Yangsheng Jiang. IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 197-219. DOI: 10.1109/TIV.2025.3620807. Published 2025-10-14.
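A simple concrete instance of meso-level CAV intersection management is a first-come-first-served reservation scheme, in the spirit of classic autonomous intersection management baselines. The sketch below is a toy, not any specific method from the reviewed literature: it assumes a single conflict point and a fixed separation slot, and simply grants each vehicle the earliest free crossing time at or after its requested arrival:

```python
def fcfs_schedule(requests, slot=2.0):
    """Toy first-come-first-served reservation manager for one conflict
    point. `requests` is a list of (vehicle_id, requested_arrival_s);
    each vehicle gets the earliest crossing time at or after its
    request, with `slot` seconds of separation between grants."""
    granted = []
    next_free = 0.0
    for vid, arrival in sorted(requests, key=lambda r: r[1]):
        start = max(arrival, next_free)
        granted.append((vid, start))
        next_free = start + slot
    return granted

reqs = [("v1", 0.0), ("v2", 0.5), ("v3", 5.0)]
print(fcfs_schedule(reqs))  # v2 is delayed to 2.0 s; v3 passes on time
```

Real CIM methods replace this greedy rule with optimization over trajectories and multiple conflict points, which is exactly the micro/meso trade-off space the review maps out.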
Pub Date : 2025-10-13  DOI: 10.1109/TIV.2025.3620769
Keita Ishii;Mitsuhiro Nishida;Takeshi Masago;Teppei Mori;Shunsuke Ono
Intelligent tire systems have garnered considerable interest as a technology that enhances tire safety by monitoring tire characteristics and tire–road interactions, which directly influence vehicle dynamics. However, battery life limitations arising from computational demands constrain their practical implementation. This study focuses on studless winter tires to improve safety on icy and snowy roads, where freezing and snow accumulation increase braking distances and elevate the risk of accidents. Specifically, we developed computationally efficient features from tire acceleration signals to estimate longitudinal force, a key factor in tire–road interaction. Acceleration signals were analyzed to extract the features most effective for force estimation, and, to reduce power consumption, only the most relevant features were selected. The selected features were fed to a machine learning model (an ExtraTree regressor) to estimate longitudinal force. The method achieved high estimation accuracy, with a normalized root mean square error (NRMSE) of 3.3%, while greatly reducing computational load and power consumption: compared to transmitting raw signals, the proposed approach cut power consumption from 49.4 mW to 0.11 mW per second. Direct observations of the tire–road contact patch with a high-speed camera were conducted to validate the features. Time–frequency analysis of the acceleration signals further supported the features' effectiveness, revealing that they correspond to tread vibrations caused by the relaxation phenomenon, in which deformed tread elements recover after road contact. The proposed approach offers a promising way to enhance safety and efficiency in winter driving by providing accurate, real-time tire–road interaction data while conserving energy.
"Longitudinal Force Estimation in Intelligent Tires Using Key Features and Tread Dynamics Validation," by Keita Ishii; Mitsuhiro Nishida; Takeshi Masago; Teppei Mori; Shunsuke Ono. IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 185-196. DOI: 10.1109/TIV.2025.3620769. Published 2025-10-13. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11202615
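The headline figure above is an NRMSE of 3.3%. The sketch below shows how that metric is typically computed, normalising the RMSE by the range of the true values (normalisation conventions vary, so this is an assumption about the paper's definition). The features, force model, and the ordinary-least-squares stand-in for the ExtraTree regressor are all hypothetical, included only to exercise the metric:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root mean square error normalised by the range of the true
    values (one common convention; others divide by the mean or std)."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return float(rmse / (y_true.max() - y_true.min()))

# Toy data: longitudinal force as a noisy function of two
# acceleration-derived features (hypothetical stand-ins for the
# paper's selected features).
rng = np.random.default_rng(42)
feats = rng.uniform(-1, 1, size=(500, 2))
force = 800.0 * feats[:, 0] + 200.0 * feats[:, 1] + rng.normal(0, 20, 500)

# Stand-in predictor: ordinary least squares in place of the paper's
# ExtraTree regressor, fitted and evaluated on the toy data.
coef, *_ = np.linalg.lstsq(feats, force, rcond=None)
pred = feats @ coef
print(f"NRMSE: {100 * nrmse(force, pred):.1f}%")
```

In practice the regressor would be trained on features from instrumented test runs and validated against measured wheel forces, which is the pipeline the paper's 3.3% figure summarises.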