Pub Date: 2024-10-04 | DOI: 10.1109/TITS.2024.3460988
Simona Sacone
Summary form only: Abstracts of articles presented in this issue of the publication.
{"title":"Scanning the Issue","authors":"Simona Sacone","doi":"10.1109/TITS.2024.3460988","DOIUrl":"https://doi.org/10.1109/TITS.2024.3460988","url":null,"abstract":"Summary form only: Abstracts of articles presented in this issue of the publication.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"25 10","pages":"12846-12875"},"PeriodicalIF":7.9,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10705330","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142376679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-04 | DOI: 10.1109/TITS.2024.3461502
{"title":"IEEE Intelligent Transportation Systems Society Information","authors":"","doi":"10.1109/TITS.2024.3461502","DOIUrl":"https://doi.org/10.1109/TITS.2024.3461502","url":null,"abstract":"","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"25 10","pages":"C3-C3"},"PeriodicalIF":7.9,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10705327","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142376778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-04 | DOI: 10.1109/TITS.2024.3454768
Alexander Rasch;Carol Flannagan;Marco Dozza
For cyclists, being overtaken carries the safety risk of being side-swiped or cut off by the overtaking driver. For drivers, such maneuvers are challenging: not only must they decide when to initiate the maneuver, they must also time their return well to complete it. In the presence of oncoming traffic, completing an overtaking maneuver additionally requires balancing the risk of a head-on collision against that of a side-swipe. Active safety systems such as blind-spot or forward-collision warning systems, or, more recently, automated driving features, may assist drivers in avoiding such collisions and completing the maneuver successfully. However, such systems must interact carefully with the driver and avoid false-positive alerts that reduce the driver’s trust in the system. In this study, we developed a driver-behavior model of drivers’ return onset in cyclist-overtaking maneuvers that could improve such a safety system. To provide cumulative evidence about driver behavior, we used data from two sources: a test track and naturalistic driving. For the two datasets, we developed Bayesian survival models that predict the probability of a driver returning, given time-varying inputs describing the current situation. We evaluated the models both in-sample and out-of-sample. Both models showed that drivers use the displacement of the cyclist to time their return decision, and that the decision is accelerated when an oncoming vehicle is present and close. We discuss how the models could be integrated into an active-safety system to improve driver acceptance.
{"title":"When Is It Safe to Complete an Overtaking Maneuver? Modeling Drivers’ Decision to Return After Passing a Cyclist","authors":"Alexander Rasch;Carol Flannagan;Marco Dozza","doi":"10.1109/TITS.2024.3454768","DOIUrl":"https://doi.org/10.1109/TITS.2024.3454768","url":null,"abstract":"For cyclists, being overtaken represents a safety risk of possibly being side-swiped or cut in by overtaking drivers. For drivers, such maneuvers are challenging–not only do they need to decide when to initiate the maneuver, but they also need to time their return well to complete the maneuver. In the presence of oncoming traffic, the problem of completing an overtaking maneuver extends to balancing head-on with side-swipe collision risks. Active safety systems such as blind-spot or forward-collision warning systems, or, more recently, automated driving features, may assist drivers in avoiding such collisions and completing the maneuver successfully. However, such systems must interact carefully with the driver and prevent false-positive alerts that reduce the driver’s trust in the system. In this study, we developed a driver-behavior model of the drivers’ return onset in cyclist-overtaking maneuvers that could improve such a safety system. To provide cumulative evidence about driver behavior, we used data from two different sources: test track and naturalistic driving. We developed Bayesian survival models for the two datasets that can predict the probability of a driver returning, given time-varying inputs about the current situation. We evaluated the models in an in-sample and out-of-sample evaluation. Both models showed that drivers use the displacement of the cyclist to time their return decision, which is accelerated if an oncoming vehicle is present and close. We discuss how the models could be integrated into an active-safety system to improve driver acceptance.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"25 11","pages":"15587-15599"},"PeriodicalIF":7.9,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10705323","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-04 | DOI: 10.1109/TITS.2024.3465242
Hu Xiong;Ting Yao;Yaxin Zhao;Lingxiao Gong;Kuo-Hui Yeh
With the rise of intelligent transportation, various mobile value-added services can be provided by the service provider (SP) in the Internet of Vehicles (IoV). To guarantee the dependability of these services, it is essential to implement a mutual authentication protocol between the vehicles and the SP. Existing mutual authentication protocols for securing the communication between the SP and the vehicle face challenges such as providing fine-grained forward security for the SP and achieving backward security for the vehicle. To address these challenges, this paper proposes a conditional privacy-preserving mutual authentication protocol for IoV featuring fine-grained forward security and backward security, built from two building blocks we construct. Specifically, we present a new puncturable signature (PS) scheme that avoids false positives and does not require public-key updates, as well as the first proxy re-signature scheme with parallel key-insulation (PKI-PRS). Both the proposed PKI-PRS and PS schemes are of independent interest beyond this protocol. We then construct an anonymous mutual authentication protocol resistant to key leakage by incorporating the above signature schemes. The proposed protocol not only provides fine-grained forward security for the SP but also ensures both forward and backward security for the vehicles. In addition, the approach to anonymous authentication efficiently provides conditional privacy preservation for the vehicles. A formal security proof in the random oracle model and experimental simulations explicitly demonstrate the security and superiority of the proposed protocol.
{"title":"A Conditional Privacy-Preserving Mutual Authentication Protocol With Fine-Grained Forward and Backward Security in IoV","authors":"Hu Xiong;Ting Yao;Yaxin Zhao;Lingxiao Gong;Kuo-Hui Yeh","doi":"10.1109/TITS.2024.3465242","DOIUrl":"https://doi.org/10.1109/TITS.2024.3465242","url":null,"abstract":"With the rise of intelligent transportation, various mobile value-added services can be provided by the service provider (SP) in the Internet of Vehicles (IoV). To guarantee the dependability of services, it is essential to implement a mutual authentication protocol between the vehicles and the SP. Existing mutual authentication protocols to secure the communication between the SP and the vehicle raise challenges such as providing fine-grained forward security for the SP and achieving backward security for the vehicle. To handle these challenges, this paper proposes a conditional privacy-preserving mutual authentication protocol featured with fine-grained forward security and backward security for IoV, which can be implemented via two building blocks we have constructed. Specifically, we present a new puncturable signature (PS) scheme without false-positive probability and the update of the public key as well as the first proxy re-signature scheme with parallel key-insulation (PKI-PRS). What’s more, both the proposed PKI-PRS and PS still have interest beyond this protocol. Then, an anonymous mutual authentication protocol with resistance to key leakage is constructed by incorporating the above signature schemes. The proposed protocol not only provides fine-grained forward security for the SP, but also ensures forward security as well as backward security for the vehicles. Besides, the approach to achieving anonymous authentication can efficiently provide conditional privacy-preserving for the vehicles. With the support of the random oracle model and experimental simulations, the formal security proof and the superiority of the proposed protocol is explicitly given.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"25 11","pages":"15493-15511"},"PeriodicalIF":7.9,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-03 | DOI: 10.1109/TITS.2024.3465442
Zhaojie Wang;Guangquan Lu;Haitian Tan
In most studies on modeling driving behavior at uncontrolled intersections, multi-vehicle interaction scenarios are categorized and modeled separately as moving-across behavior and merging behavior. However, a single-behavior model cannot accurately represent general driving behavior at uncontrolled intersections. We therefore constructed a general driving behavior model for multi-vehicle interaction at uncontrolled intersections. First, an interacting multiple model (IMM) approach is employed to anticipate the movement of vehicles within the driver’s visual field. Risk field theory is then applied to assess the potential hazards the vehicle might confront. Drawing on risk homeostasis theory and preview-follower theory, the model determines a trajectory that aligns with drivers’ real-life actions while satisfying the risk constraints. Driver heterogeneity is reflected in the risk threshold. By adjusting risk thresholds when vehicles are caught in a deadlock, the model can also simulate driver behavior under traffic congestion at uncontrolled intersections. Results show that our model accurately reproduces the priority and trajectories of vehicles crossing the intersection and resolves multi-vehicle conflicts within a reasonable time. The model can be used for traffic simulation at uncontrolled intersections and to provide test validation for automated driving systems.
{"title":"Driving Behavior Model for Multi-Vehicle Interaction at Uncontrolled Intersections Based on Risk Field Considering Drivers’ Visual Field Characteristics","authors":"Zhaojie Wang;Guangquan Lu;Haitian Tan","doi":"10.1109/TITS.2024.3465442","DOIUrl":"https://doi.org/10.1109/TITS.2024.3465442","url":null,"abstract":"In most studies on modeling driving behavior at uncontrolled intersections, multi-vehicle interaction scenarios are usually categorized and modeled separately as moving-across behavior and merging behavior. However, it is inappropriate to use a single-behavior model to accurately represent general driving behavior in uncontrolled intersections. In this case, we constructed a general driving behavior model for multi-vehicle interaction at uncontrolled intersections. Initially, the IMM model is employed to anticipate the movement of the vehicle within the driver’s visual field. The risk field theory is applied to assess potential hazards that the vehicle might confront, drawing from the risk homeostasis theory and preview-follower theory, which aids in determining a trajectory that aligns with the drivers’ real-life actions while also meeting the risk constraints. Drivers’ heterogeneity is reflected by risk threshold. This model can simulate driver behavior in traffic congestion at uncontrolled intersections by adjusting risk thresholds when the vehicles are caught in a deadlock situation. Results show that our model can accurately reproduce the priority and trajectory of vehicles crossing the intersection and resolve multi-vehicle conflicts within a reasonable time. This model can be used for traffic simulation at uncontrolled intersections and to provide test validation for automated driving systems.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"25 11","pages":"15532-15546"},"PeriodicalIF":7.9,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-30 | DOI: 10.1109/TITS.2024.3436012
Zhengwei Bai;Guoyuan Wu;Matthew J. Barth;Yongkang Liu;Emrah Akin Sisbot;Kentaro Oguchi;Zhitong Huang
Perceiving the environment is one of the most fundamental requirements for enabling Cooperative Driving Automation, which is regarded as a revolutionary solution to the safety, mobility, and sustainability issues of contemporary transportation systems. Although computer vision for object perception is evolving at an unprecedented pace, state-of-the-art perception methods still struggle in sophisticated real-world traffic environments because of the inevitable physical occlusion and limited receptive field of single-vehicle systems. Built on multiple spatially separated perception nodes, Cooperative Perception (CP) has emerged to overcome this perception bottleneck for driving automation. In this paper, we comprehensively review and analyze the research progress on CP and propose a unified CP framework. We review the architectures and taxonomy of CP systems based on different types of sensors to give a high-level description of the workflow and the different structures of CP systems. The node structure, sensing modality, and fusion schemes are reviewed and analyzed in detail. A Hierarchical Cooperative Perception (HCP) framework is proposed, followed by a review of existing open-source tools that support CP development. The discussion highlights current opportunities, open challenges, and anticipated future trends.
{"title":"A Survey and Framework of Cooperative Perception: From Heterogeneous Singleton to Hierarchical Cooperation","authors":"Zhengwei Bai;Guoyuan Wu;Matthew J. Barth;Yongkang Liu;Emrah Akin Sisbot;Kentaro Oguchi;Zhitong Huang","doi":"10.1109/TITS.2024.3436012","DOIUrl":"https://doi.org/10.1109/TITS.2024.3436012","url":null,"abstract":"Perceiving the environment is one of the most fundamental keys to enabling Cooperative Driving Automation, which is regarded as the revolutionary solution to addressing the safety, mobility, and sustainability issues of contemporary transportation systems. Although an unprecedented evolution is now happening in the area of computer vision for object perception, state-of-the-art perception methods are still struggling with sophisticated real-world traffic environments due to the inevitable physical occlusion and limited receptive field of single-vehicle systems. Based on multiple spatially separated perception nodes, Cooperative Perception (CP) is born to unlock the bottleneck of perception for driving automation. In this paper, we comprehensively review and analyze the research progress on CP, and we propose a unified CP framework. The architectures and taxonomy of CP systems based on different types of sensors are reviewed to show a high-level description of the workflow and different structures for CP systems. The node structure, sensing modality, and fusion schemes are reviewed and analyzed with detailed explanations for CP. A Hierarchical Cooperative Perception (HCP) framework is proposed, followed by a review of existing open-source tools that support CP development. The discussion highlights the current opportunities, open challenges, and anticipated future trends.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"25 11","pages":"15191-15209"},"PeriodicalIF":7.9,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26 | DOI: 10.1109/TITS.2024.3438875
Weizhi Zhong;Lulu Zhang;Haowen Jin;Xin Liu;Qiuming Zhu;Yi He;Farman Ali;Zhipeng Lin;Kai Mao;Tariq S. Durrani
Effective beam alignment is essential for vehicle-to-infrastructure (V2I) millimeter wave (mmWave) communication systems, particularly in high-mobility vehicle scenarios. This paper considers a three-dimensional (3D) vehicle environment and introduces a novel deep learning (DL)-based beam search method that incorporates an image-based coding (IBC) technique. The mmWave beam search is approached as an image processing problem based on situational awareness. We propose IBC to leverage the locations, sizes, and information of vehicles, and use a convolutional neural network (CNN) trained on the resulting image dataset. Consequently, the optimal beam pair index (BPI) can be determined. Simulation results demonstrate that the proposed beam search method achieves satisfactory accuracy and robustness compared to conventional methods.
{"title":"Image-Based Beam Tracking With Deep Learning for mmWave V2I Communication Systems","authors":"Weizhi Zhong;Lulu Zhang;Haowen Jin;Xin Liu;Qiuming Zhu;Yi He;Farman Ali;Zhipeng Lin;Kai Mao;Tariq S. Durrani","doi":"10.1109/TITS.2024.3438875","DOIUrl":"https://doi.org/10.1109/TITS.2024.3438875","url":null,"abstract":"Effective beam alignment is essential for vehicle-to-infrastructure (V2I) millimeter wave (mmWave) communication systems, particularly in high-mobility vehicle scenarios. This paper explores a three-dimensional (3D) vehicle environment and introduces a novel deep learning (DL)-based beam search method that incorporates an image-based coding (IBC) technique. The mmWave beam search is approached as an image processing problem based on situational awareness. We propose IBC to leverage the locations, sizes, and information of vehicles, and utilize convolutional neural network (CNN) to train the image dataset. Consequently, the optimal beam pair index(BPI)can be determined. Simulation results demonstrate that the proposed beam search method achieves satisfactory performance in terms of accuracy and robustness compared to conventional methods.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"25 11","pages":"19110-19116"},"PeriodicalIF":7.9,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142587555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-25 | DOI: 10.1109/TITS.2024.3435995
Chaojie Li;Borui Zhang;Zeyu Wang;Yin Yang;Xiaojun Zhou;Shirui Pan;Xinghuo Yu
Traffic accident prediction plays a vital role in Intelligent Transportation Systems (ITS), where large volumes of streaming traffic data are generated daily for spatiotemporal big data analysis. The rarity of accidents and the absence of interconnection information make spatiotemporal modeling difficult. Moreover, the black-box nature of predictive models makes it hard to interpret the reliability and effectiveness of deep learning models. To address these issues, a novel self-explanatory spatial-temporal deep learning model, the Attention Spatial-Temporal Multi-Graph Convolutional Network (ASTMGCN), is proposed for traffic accident prediction. The originally recorded rare-accident data are formulated as a multivariate, irregularly interval-aligned dataset, and a temporal discretization method is used to transform them into regularly sampled time series. Multiple graphs are defined to construct edge features and represent spatial relationships when node-related information is missing. Multi-graph convolutional operators and attention mechanisms are integrated into a Sequence-to-Sequence (Seq2Seq) framework to effectively capture dynamic spatial and temporal features and correlations in multi-step prediction. Comparative experiments and interpretability analysis on a real-world dataset indicate that our model not only yields superior prediction performance but also offers the advantage of interpretability.
{"title":"Interpretable Traffic Accident Prediction: Attention Spatial–Temporal Multi-Graph Traffic Stream Learning Approach","authors":"Chaojie Li;Borui Zhang;Zeyu Wang;Yin Yang;Xiaojun Zhou;Shirui Pan;Xinghuo Yu","doi":"10.1109/TITS.2024.3435995","DOIUrl":"https://doi.org/10.1109/TITS.2024.3435995","url":null,"abstract":"Traffic accident prediction plays a vital role in Intelligent Transportation Systems (ITS), where a large number of traffic streaming data are generated on a daily basis for spatiotemporal big data analysis. The rarity of accidents and the absent interconnection information make it hard for spatiotemporal modeling. Moreover, the inherent characteristic of the black box predictive model makes it difficult to interpret the reliability and effectiveness of the deep learning model. To address these issues, a novel self-explanatory spatial-temporal deep learning model–Attention Spatial-Temporal Multi-Graph Convolutional Network (ASTMGCN) is proposed for traffic accident prediction. The original recorded rare accident data is formulated as a multivariate irregularly interval-aligned dataset, and the temporal discretization method is used to transfer into regularly sampled time series. Multiple graphs are defined to construct edge features and represent spatial relationships when node-related information is missing. Multi-graph convolutional operators and attention mechanisms are integrated into a Sequence-to-Sequence (Seq2Seq) framework to effectively capture dynamic spatial and temporal features and correlations in multi-step prediction. Comparative experiments and interpretability analysis are conducted on a real-world data set, and results indicate that our model can not only yield superior prediction performance but also has the advantage of interpretability.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"25 11","pages":"15574-15586"},"PeriodicalIF":7.9,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-24 | DOI: 10.1109/TITS.2024.3436530
Tristan Schneider;Matheus V. A. Pedrosa;Timo P. Gros;Verena Wolf;Kathrin Flaßkamp
Motion planning for autonomous vehicles is commonly implemented via graph-search methods, which limit the model accuracy and environmental complexity that can be handled under real-time constraints. In contrast, reinforcement learning, specifically the deep Q-learning (DQL) algorithm, provides an interesting alternative for real-time solutions. Some approaches, such as the deep Q-network (DQN), model the RL action space by quantizing the continuous control inputs. Here, we propose to use motion primitives, which encode continuous-time nonlinear system behavior, as the action space. The novel motion-primitive DQL planning methodology is evaluated in a numerical example using a single-track vehicle model and different planning scenarios. We show that our approach outperforms a state-of-the-art graph-search method in both computation time and probability of reaching the goal.
{"title":"Motion Primitives as the Action Space of Deep Q-Learning for Planning in Autonomous Driving","authors":"Tristan Schneider;Matheus V. A. Pedrosa;Timo P. Gros;Verena Wolf;Kathrin Flaßkamp","doi":"10.1109/TITS.2024.3436530","DOIUrl":"https://doi.org/10.1109/TITS.2024.3436530","url":null,"abstract":"Motion planning for autonomous vehicles is commonly implemented via graph-search methods, which pose limitations to the model accuracy and environmental complexity that can be handled under real-time constraints. In contrast, reinforcement learning, specifically the deep Q-learning (DQL) algorithm, provides an interesting alternative for real-time solutions. Some approaches, such as the deep Q-network (DQN), model the RL-action space by quantizing the continuous control inputs. Here, we propose to use motion primitives, which encode continuous-time nonlinear system behavior as the action space. The novel methodology of motion primitives-DQL planning is evaluated in a numerical example using a single-track vehicle model and different planning scenarios. We show that our approach outperforms a state-of-the-art graph-search method in computation time and probability of reaching the goal.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"25 11","pages":"17852-17864"},"PeriodicalIF":7.9,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142587612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}