Drivable area segmentation and lane detection constitute crucial tasks for the visual perception system of an autonomous vehicle. Most approaches address these tasks as semantic segmentation problems, using heavy deep learning models that are computationally expensive. In this paper, a dual-task lightweight model is proposed, comprising a novel dual-task feature fusion mechanism that allows it to exploit global, high-level information while retaining useful low-level details for each task. The model excels not only in accuracy but also achieves real-time performance by solving the two tasks in a multi-task fashion. Our comparative study, conducted on the standard BDD100K dataset, shows that the proposed method compares favorably with the state of the art, offering an optimal trade-off between accuracy and efficiency.
Dual-Task Learning for Real-Time Semantic Segmentation in Autonomous Driving
Ilias Papadeas;Lazaros Tsochatzidis;Ioannis Pratikakis
Pub Date : 2025-06-16 DOI: 10.1109/TIV.2025.3579878
IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 94-103
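The abstract above does not spell out the fusion mechanism, but a common pattern for dual-task feature fusion is to inject a globally pooled high-level descriptor into each task's low-level feature map. A minimal illustrative sketch (function names are hypothetical, and the additive fusion is an assumption, not the paper's design):

```python
def global_context(feat_map):
    # global average pooling over a 2D feature map (list of rows of floats)
    n = sum(len(row) for row in feat_map)
    return sum(sum(row) for row in feat_map) / n

def fuse_low_high(low_level, high_level):
    # broadcast the pooled high-level descriptor onto every low-level cell,
    # so each task head sees both global context and local detail
    g = global_context(high_level)
    return [[v + g for v in row] for row in low_level]
```

Each task head (drivable area, lane) would then consume its own fused map, sharing the backbone computation.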
Pub Date : 2025-06-13 DOI: 10.1109/TIV.2025.3578939
Zafar Aziz;Odilbek Urmonov;Shoaib Sajid;HyungWon Kim
Self-training is a learning paradigm that generates pseudo-labels for unlabeled data, enabling deep learning models to be trained without human-labeled data. This article proposes a self-training with accurate labeling and class balancing (SALB) method that enhances pre-trained models through periodic multi-round self-training with pseudo-labeled data. In this context, we focus on generating high-quality labels by predicting the maximum possible detection labels using different augmented views of the same image. We consolidate all predictions using a modified version of Weighted Box Fusion (WBF) and validate the final pseudo-labels through adaptive confidence thresholding. Finally, we recover missing pseudo-labels through our bidirectional tracking technique. Owing to the class imbalance in most public training datasets, pre-trained models occasionally yield incorrect detections for minority object instances, resulting in a bias towards the majority classes. To tackle this issue, we use a copy-paste augmentation technique that copies minority instances from labeled or high-confidence pseudo-labeled data and pastes them into pseudo-labeled data to ensure class balance. Our experiments show that our self-training framework outperforms reference methods on the Waymo dataset, achieving an 8.7% mAP improvement over the initial pre-trained model while using only 10% labeled data during self-training.
Improving Object Detection of Intelligent Vehicles Through Self-Training With Accurate Labeling and Class Balancing
IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 76-93
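Weighted Box Fusion, mentioned in the abstract above, averages the coordinates of overlapping detections weighted by their confidence rather than discarding all but one, as NMS does. A minimal sketch of the idea (the paper's modified WBF will differ in detail; box format [x1, y1, x2, y2, score] is an assumption):

```python
def iou(a, b):
    # intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union else 0.0

def fuse_cluster(members):
    # confidence-weighted average of coordinates; score is the mean weight
    w = sum(m[4] for m in members)
    coords = [sum(m[i] * m[4] for m in members) / w for i in range(4)]
    return coords + [w / len(members)]

def weighted_box_fusion(boxes, iou_thr=0.55):
    clusters = []
    for box in sorted(boxes, key=lambda b: -b[4]):   # high confidence first
        for c in clusters:
            if iou(c["fused"][:4], box[:4]) >= iou_thr:
                c["members"].append(box)
                c["fused"] = fuse_cluster(c["members"])
                break
        else:
            clusters.append({"members": [box], "fused": list(box)})
    return [c["fused"] for c in clusters]
```

Fusing predictions from several augmented views of the same image then amounts to pooling all their boxes into one call.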
Current research aims to gain an in-depth understanding of mobility characteristics in order to accurately assess each mobility defect and maintain network performance. Therefore, this article investigates the use of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to examine the performance of a cluster within a mobility model. Six mobility models are used to evaluate the effectiveness of clustering in capturing the spatial structure of mobile models, including their spatio-temporal locations: Manhattan Grid Mobility Model (MGMM), Reference Point Group Mobility (RPGM), Nomadic Community Mobility Model (NCMM), PURSUE, Self-Similar Least Action Walk Mobility Model (SLAW), and SMOOTH. The algorithm measures various factors, such as the number of clusters, the points in each cluster, and cluster indices. We continue the study that validates synthetic mobility models, focusing on a metric known as “Mobile Neighbors Range”, which allows a detailed analysis of the interactions between moving neighbors over time. In this article, a new index called the “Global Index” is introduced, based on the variation of clustering, to evaluate the movement of nodes in synthetic mobility models over time using DBSCAN clustering. The goal is to analyze the dispersion and clustering of nodes over time in each mobility model to gain a more complete understanding of their behavior. This in-depth investigation, comprising 1,200,000 iterations, highlights the scale and precision of our efforts to characterize mobility models, thereby enriching the scientific discourse on mobile networks and systems. The need for this exploration was apparent, as it fills a gap in current scientific understanding.
Advanced Clustering Metric for Mobility Model Using DBSCAN
Siham Sadiki;Nisrine Ibadah;Hanae Belmajdoub;Khalid Minaoui
Pub Date : 2025-06-13 DOI: 10.1109/TIV.2024.3519999
IEEE Transactions on Intelligent Vehicles, vol. 10, no. 12, pp. 5249-5261
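For reference, DBSCAN itself is compact enough to sketch in full: a point with at least min_pts neighbors within radius eps is a core point, and clusters grow by expanding outward from core points; points reachable from no core point are noise. A minimal pure-Python version (illustrative only, not the authors' implementation):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dbscan(points, eps, min_pts):
    # labels: None = unvisited, -1 = noise, >= 0 = cluster id
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points))
                if dist(points[i], points[j]) <= eps]

    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise; may later join a cluster
            continue
        labels[i] = cid
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid     # border point reached from a core point
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:  # j is itself a core point: keep expanding
                seeds.extend(jn)
        cid += 1
    return labels
```

Per-snapshot cluster counts and memberships from such a run are exactly the quantities a clustering-variation index like the article's “Global Index” would be computed over.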
Pub Date : 2025-06-13 DOI: 10.1109/TIV.2025.3578923
Liang Zhao;Fang Wang;Mingye Zhang;Yong Bai
Efficient and reliable planning for unmanned surface vehicles (USVs) is essential to ensure prompt maritime rescue. However, existing methods for practical maritime rescue are limited in two key aspects. On the one hand, rescue operations are time-sensitive: the USVs must accomplish as many rescue tasks as possible during the early rounds to reduce the potential risks and losses caused by delays. On the other hand, the constrained visibility of USVs may leave inadequate time to complete the necessary avoidance maneuvers, preventing the avoidance strategy from being activated in time. To address these challenges, we introduce a planning framework that integrates time-sensitive rescue task allocation with a visually-compliant motion planner. The time-sensitive task allocation model uses an accumulated reward function to maximize early task completion, with a purpose-designed heuristic algorithm to find high-quality solutions. The motion planning framework combines a sampling-based global planner with an online planner based on quadratic programming. Both planners incorporate collision and visually-compliant Control Barrier Functions (CBFs) to ensure USV safety under constrained visibility. Extensive simulations show that our model quickly identifies high-quality solutions for both large- and small-scale problems, outperforming current state-of-the-art methods. Semi-physical USV simulations demonstrate its effectiveness in navigating and responding to unknown environments under constrained visibility.
Efficient Rescues at Sea: A Hierarchical Framework of Time-Sensitive Rescue Scheduling and Motion Planning for Unmanned Surface Vehicles
IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 27-41
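A CBF-based safety filter of the kind described above solves a small QP that minimally perturbs the nominal control so a barrier constraint holds. With a single affine constraint a·u ≥ b (for instance, from a distance barrier h with the condition ḣ + γh ≥ 0), the QP has a closed-form solution. An illustrative sketch (the single-constraint simplification and all names are assumptions, not the paper's formulation):

```python
def cbf_qp_filter(u_nom, a, b):
    # Closed-form solution of:  min ||u - u_nom||^2  s.t.  a . u >= b
    slack = b - sum(ai * ui for ai, ui in zip(a, u_nom))
    if slack <= 0:
        return list(u_nom)                    # nominal input is already safe
    aa = sum(ai * ai for ai in a)             # ||a||^2
    lam = slack / aa                          # active-constraint multiplier
    return [ui + lam * ai for ui, ai in zip(u_nom, a)]
```

The filter leaves the nominal command untouched whenever it already satisfies the barrier condition, which is why such planners degrade gracefully when visibility is not binding.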
Recently, the pseudo-point-based 3D object detection framework has demonstrated superior performance and attracted significant attention. Unfortunately, current pseudo point generators, the core of this framework, fail to consider accuracy, robustness, and efficiency simultaneously. To meet these challenges, we propose a novel pseudo point generator named VIL-PPGen, which leverages a visible light camera, an infrared camera, and a lidar to specifically cater to the requirements of pseudo-point-based detectors. VIL-PPGen mainly consists of three modules: the Dual-spectrum Depth Completion Module (DDCM), the Sparse Cost Volume Module (SCVM), and the Adaptive Depth Correction Module (ADCM). The DDCM adopts a dual structure for depth completion across the two spectra to maintain accuracy, the SCVM uses sparse operations for cost volume computation to improve efficiency, and the ADCM employs adaptive confidence and offset correction to reinforce accuracy and robustness. Ultimately, we acquire high-quality pseudo points under all-day lighting conditions, which directly improves the performance of subsequent detectors. To validate the efficacy of our designs, we construct a dataset from real driving scenarios and conduct extensive experiments. VIL-PPGen achieves 1.083 m MAE (improved by 0.581 m) for depth prediction and 53.10% mAP (improved by 3.75%) for multi-class detection in day scenes, and 1.025 m MAE (improved by 0.551 m) and 55.64% mAP (improved by 3.94%) in night scenes. It also exhibits excellent real-time performance, with a processing speed of 10 FPS.
VIL-PPGen: A Novel Pseudo Point Generator Based on Visible Light Camera, Infrared Camera and Lidar
Fuyuan Ai;Amjad Hussain;Zecheng Li;Yuying Song;Chunyi Song;Zhiwei Xu
Pub Date : 2025-06-13 DOI: 10.1109/TIV.2024.3511923
IEEE Transactions on Intelligent Vehicles, vol. 10, no. 11, pp. 5042-5054
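The final step in any pseudo point generator, independent of the paper's specific modules, is back-projecting the completed depth map through the camera intrinsics into 3D. A minimal sketch assuming a pinhole model (parameter names fx, fy, cx, cy are the usual intrinsics, not taken from the paper):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    # Back-project a dense depth map (rows of metres, indexed [v][u])
    # into camera-frame 3D points via the pinhole model.
    pts = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # skip invalid / empty depth
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            pts.append((x, y, z))
    return pts
```

The resulting point list can be fed to any lidar-style 3D detector, which is what makes depth quality (the MAE figures above) translate directly into detection mAP.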
This paper investigates a reach-avoid differential game between two groups of underactuated autonomous surface vehicles (ASVs) in a complex marine environment. All ASVs are assumed to have the same resultant velocity, and every two defending ASVs are assigned to intercept a single attacking ASV. An optimal safe reach-avoid differential game guidance strategy is proposed to enable the defending ASVs to capture, without collisions, the attacking ASVs that intend to enter a target area. First, a geometric approach is used to calculate the optimal capture position, and optimal strategies based on differential game theory are proposed for both teams of ASVs to reach it. In particular, defenders in the same group can decide online, according to an orthogonal-line-related condition, whether to cooperate with a groupmate or intercept the attacker individually, ensuring that the attackers are captured more efficiently. Second, to guarantee safety during the game, a quadratic programming problem based on control barrier functions (CBFs) is formulated subject to the velocity constraints of the ASVs. Finally, an optimal safe game guidance law is derived using an auxiliary variable technique so that an underactuated ASV can execute the optimal game strategy and achieve the target area protection task. Optimality and safety analyses indicate that the designed reach-avoid strategies are multilaterally optimal and ensure system safety. Simulation results validate the effectiveness of the proposed strategy for area protection tasks.
Collision-Free Reach-Avoid Differential Game of Multiple Underactuated Autonomous Surface Vehicles via Control Barrier Functions
Fangyuan Xu;Nan Gu;Zhouhua Peng;Bing Han;Weidong Zhang
Pub Date : 2025-06-13 DOI: 10.1109/TIV.2025.3578928
IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 42-51
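For the equal-speed case assumed above, the set of points a defender can reach no later than the attacker is bounded by the perpendicular bisector of the segment joining them, so a natural capture point is the projection of the protected target onto that bisector. An illustrative sketch of this standard geometric construction (not necessarily the paper's exact formula):

```python
def optimal_capture_point(defender, attacker, target):
    # midpoint of the defender-attacker segment
    mx = (defender[0] + attacker[0]) / 2
    my = (defender[1] + attacker[1]) / 2
    # the perpendicular bisector runs orthogonal to the segment direction
    dx = attacker[0] - defender[0]
    dy = attacker[1] - defender[1]
    bx, by = -dy, dx
    # project the target onto the bisector line through (mx, my)
    t = ((target[0] - mx) * bx + (target[1] - my) * by) / (bx * bx + by * by)
    return (mx + t * bx, my + t * by)
```

Intuitively, the defender steers for the bisector point closest to the protected area, since any point on its own side of the bisector is reached first.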
Pub Date : 2025-06-13 DOI: 10.1109/TIV.2025.3578935
Abdulrazzak Selman
In this paper, we propose a conceptually different backstepping approach to solve the global asymptotic stabilization problem for a class of nonlinear input-coupled systems with parameter uncertainties and both state and input constraints. The approach avoids both input-decoupling transformations and the cancellation of time derivatives of virtual control functions, steps that are typically required in conventional backstepping-based control designs for input-coupled systems. As a by-product, it broadens the applicability of existing backstepping techniques and significantly reduces the computational burden, a major obstacle to the real-time implementation of these methods. The approach relies on an innovative combination of control tools, including non-quadratic Lyapunov-like analysis, Input-to-State Stability (ISS), and the Invariance Principle, enabling the construction of a control law without quadratic (smooth) control Lyapunov functions; this is an advantage over standard Lyapunov-based designs, where constructing such functions is challenging in the presence of input constraints. Applied to the nonlinear lateral dynamics of autonomous vehicles, particularly in lane-keeping scenarios, the approach solves the lateral control and trajectory tracking problem, addresses key limitations of standard backstepping designs, and demonstrates clear advantages over a representative existing method, indicating its practical applicability in dynamic and complex driving environments such as lane-changing scenarios.
Lateral Control for Autonomous Vehicles: A Robust Bounded Back-Stepping Technique
IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 52-62
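As background for the backstepping recursion discussed above: the textbook procedure on a double integrator (x1' = x2, x2' = u) designs a virtual control for the x1-subsystem and then stabilizes x2 around it. The sketch below is that classical construction only, not the paper's constrained input-coupled design; gains k1 and k2 are arbitrary illustrative choices.

```python
def backstepping_u(x1, x2, k1=2.0, k2=2.0):
    alpha = -k1 * x1              # virtual control stabilizing the x1-subsystem
    z2 = x2 - alpha               # backstepping error of x2 against alpha
    alpha_dot = -k1 * x2          # time derivative of the virtual control
    return -x1 - k2 * z2 + alpha_dot

# Euler simulation of the double integrator under the backstepping law
x1, x2, dt = 1.0, 0.0, 0.01
for _ in range(2000):
    u = backstepping_u(x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * u
```

Note the explicit alpha_dot term: computing such derivatives of virtual controls at each recursion level is precisely the burden the paper's approach is designed to avoid.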