Predicting Pedestrian Crossing Intentions in Adverse Weather With Self-Attention Models
Pub Date: 2025-02-07 | DOI: 10.1109/TITS.2024.3524117
Ahmed Elgazwy;Khalid Elgazzar;Alaa Khamis
Enhancing the vehicle perception model is crucial to the successful integration of assisted and automated driving. By improving the model's ability to accurately anticipate the actions of vulnerable road users, the overall driving experience can be significantly improved, ensuring higher levels of safety. Existing research on predicting pedestrians' crossing intentions has predominantly relied on vision-based deep learning models. However, these models continue to exhibit shortcomings in robustness under adverse weather conditions and in domain adaptation. Furthermore, little attention has been given to evaluating their real-time performance. To address these limitations, this study introduces a framework for pedestrian crossing intention prediction. The framework incorporates an image enhancement pipeline that detects and rectifies various defects that may arise during unfavorable weather conditions. A transformer-based network featuring a self-attention mechanism then predicts the crossing intentions of target pedestrians, improving the model's resilience and classification accuracy. Evaluated on the Joint Attention in Autonomous Driving (JAAD) dataset, our framework attains state-of-the-art performance while maintaining a notably low inference time. Moreover, a deployment environment is established to assess the real-time performance of the model; the results demonstrate that our approach achieves the shortest model inference time and the lowest end-to-end prediction time, accounting for the processing duration of the selected inputs.
{"title":"Predicting Pedestrian Crossing Intentions in Adverse Weather With Self-Attention Models","authors":"Ahmed Elgazwy;Khalid Elgazzar;Alaa Khamis","doi":"10.1109/TITS.2024.3524117","DOIUrl":"https://doi.org/10.1109/TITS.2024.3524117","url":null,"abstract":"The enhancement of the vehicle perception model represents a crucial undertaking in the successful integration of assisted and automated vehicle driving. By enhancing the perceptual capabilities of the model to accurately anticipate the actions of vulnerable road users, the overall driving experience can be significantly improved, ensuring higher levels of safety. Existing research efforts focusing on the prediction of pedestrians’ crossing intentions have predominantly relied on vision-based deep learning models. However, these models continue to exhibit shortcomings in terms of robustness when faced with adverse weather conditions and domain adaptation challenges. Furthermore, little attention has been given to evaluating the real-time performance of these models. To address these aforementioned limitations, this study introduces an innovative framework for pedestrian crossing intention prediction. The framework incorporates an image enhancement pipeline, which enables the detection and rectification of various defects that may arise during unfavorable weather conditions. Subsequently, a transformer-based network, featuring a self-attention mechanism, is employed to predict the crossing intentions of target pedestrians. This augmentation enhances the model’s resilience and accuracy in classification tasks. Through evaluation on the Joint Attention in Autonomous Driving (JAAD) dataset, our framework attains state-of-the-art performance while maintaining a notably low inference time. Moreover, a deployment environment is established to assess the real-time performance of the model. The results of this evaluation demonstrate that our approach exhibits the shortest model inference time and the lowest end-to-end prediction time, accounting for the processing duration of the selected inputs.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 3","pages":"3250-3261"},"PeriodicalIF":7.9,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143535539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Control Authority Allocation in Indirect Shared Control for Steering Assistance
Pub Date: 2025-02-05 | DOI: 10.1109/TITS.2024.3520107
Yutao Chen;Hongliang Zhang;Haocong Chen;Jie Huang;Bin Wang;Zixiang Xiong;Yuyi Wang;Xiwen Yuan
The concept of shared control has garnered significant attention within the realm of human-machine hybrid intelligence research. This study introduces a novel approach, specifically a dynamic control authority allocation method, for implementing shared control in autonomous vehicles. Unlike conventional mixed-initiative control techniques that blend human and vehicle inputs with weights determined by a predefined index, the proposed method uses optimization to obtain an optimal dynamic allocation of human and vehicle inputs that satisfies safety constraints. Specifically, a convex quadratic program (QP) is constructed incorporating control barrier functions (CBF) for safety and control Lyapunov functions (CLF) for satisfying automated control objectives. The cost function of the QP is designed such that the human weight increases with the magnitude of the human input. A smooth control authority transition is obtained by optimizing over the rate of change of the weight instead of the weight itself. The proposed method is verified in lane-changing scenarios with human-in-the-loop (HmIL) and hardware-in-the-loop (HdIL) experiments. Results show that the proposed method outperforms an index-based control authority allocation method in terms of agility, safety, and comfort.
{"title":"Dynamic Control Authority Allocation in Indirect Shared Control for Steering Assistance","authors":"Yutao Chen;Hongliang Zhang;Haocong Chen;Jie Huang;Bin Wang;Zixiang Xiong;Yuyi Wang;Xiwen Yuan","doi":"10.1109/TITS.2024.3520107","DOIUrl":"https://doi.org/10.1109/TITS.2024.3520107","url":null,"abstract":"The concept of shared control has garnered significant attention within the realm of human-machine hybrid intelligence research. This study introduces a novel approach, specifically a dynamic control authority allocation method, for implementing shared control in autonomous vehicles. Unlike conventional mixed-initiative control techniques that blend human and vehicle inputs with weights determined by predefined index, the proposed method utilizes optimization-based techniques to obtain an optimal dynamic allocation for human and vehicle inputs that satisfies safety constraints. Specifically, a convex quadratic programm (QP) is constructed incorporating control barrier functions (CBF) for safety and control Lyapunov functions (CLF) for satisfying automated control objectives. The cost function of the QP is designed such that human weight increases with the magnitude of human input. A smooth control authority transition is obtained by optimizing over the change rate of the weight instead of the weight itself. The proposed method is verified in lane-changing scenarios with human-in-the-loop (HmIL) and hardware-in-the-loop (HdIL) experiments. Results show that the proposed method outperforms index-based control authority allocation method in terms of agility, safety and comfort.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 3","pages":"3458-3470"},"PeriodicalIF":7.9,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143535415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DAGCAN: Decoupled Adaptive Graph Convolution Attention Network for Traffic Forecasting
Pub Date: 2025-02-05 | DOI: 10.1109/TITS.2025.3531665
Qing Yuan;Junbo Wang;Yu Han;Zhi Liu;Wanquan Liu
It is necessary to establish a spatio-temporal correlation model of traffic data to predict the state of a transportation system. Existing research has focused on traditional graph neural networks, which use predefined graphs and shared parameters. However, intuitively predefined graphs introduce biases into prediction tasks, and fine-grained spatio-temporal information cannot be captured by a parameter-sharing model. In this paper, we argue that it is crucial to learn node-specific parameters and adaptive graphs with complete edge information. To this end, we design a graph-based model that decouples nodes and edges into two modules, each of which extracts temporal and spatial features simultaneously. The adaptive node optimization module learns the specific parameter patterns of all nodes, and the adaptive edge optimization module mines the interdependencies among different nodes. Building on these two modules, we propose the Decoupled Adaptive Graph Convolution Attention Network for Traffic Forecasting (DAGCAN), which dynamically captures fine-grained spatio-temporal relationships in traffic data. Experimental results on four public transportation datasets demonstrate that our model further improves the accuracy of traffic prediction.
{"title":"DAGCAN: Decoupled Adaptive Graph Convolution Attention Network for Traffic Forecasting","authors":"Qing Yuan;Junbo Wang;Yu Han;Zhi Liu;Wanquan Liu","doi":"10.1109/TITS.2025.3531665","DOIUrl":"https://doi.org/10.1109/TITS.2025.3531665","url":null,"abstract":"It is necessary to establish a spatio-temporal correlation model in the traffic data to predict the state of the transportation system. Existing research has focused on traditional graph neural networks, which use predefined graphs and have shared parameters. But intuitive predefined graphs introduce biases into prediction tasks and the fine-grained spatio-temporal information can not be obtained by the parameter sharing model. In this paper, we consider it is crucial to learn node-specific parameters and adaptive graphs with complete edge information. To show this, we design a model based on graph structure that decouples nodes and edges into two modules. Each module extracts temporal and spatial features simultaneously. The adaptive node optimization module is used to learn the specific parameter patterns of all nodes, and the adaptive edge optimization module aims to mine the interdependencies among different nodes. Then we propose a Decoupled Adaptive Graph Convolution Attention Network for Traffic Forecasting (DAGCAN), which relies on the above two modules to dynamically capture the fine-grained spatio-temporal relationships in traffic data. Experimental results on four public transportation datasets, demonstrate that our model can further improve the accuracy of traffic prediction.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 3","pages":"3513-3526"},"PeriodicalIF":7.9,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143564226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scanning the Issue
Pub Date: 2025-02-04 | DOI: 10.1109/TITS.2025.3528298
Simona Sacone
Summary form only: Abstract of article "Scanning the Issue."
{"title":"Scanning the Issue","authors":"Simona Sacone","doi":"10.1109/TITS.2025.3528298","DOIUrl":"https://doi.org/10.1109/TITS.2025.3528298","url":null,"abstract":"Summary form only: Abstract of article \"Scanning the Issue.\"","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 2","pages":"1354-1374"},"PeriodicalIF":7.9,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10871230","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143183820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE Intelligent Transportation Systems Society Information
Pub Date: 2025-02-04 | DOI: 10.1109/TITS.2025.3527807
{"title":"IEEE Intelligent Transportation Systems Society Information","authors":"","doi":"10.1109/TITS.2025.3527807","DOIUrl":"https://doi.org/10.1109/TITS.2025.3527807","url":null,"abstract":"","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 2","pages":"C3-C3"},"PeriodicalIF":7.9,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10871219","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143183818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Sparse Cross Attention-Based Graph Convolution Network With Auxiliary Information Awareness for Traffic Flow Prediction
Pub Date: 2025-02-04 | DOI: 10.1109/TITS.2025.3533560
Lingqiang Chen;Qinglin Zhao;Guanghui Li;Mengchu Zhou;Chenglong Dai;Yiming Feng;Xiaowei Liu;Jinjiang Li
Deep graph convolutional networks (GCNs) have shown promising performance in traffic prediction tasks, but their practical deployment on resource-constrained devices faces challenges. First, few models consider the potential influence of historical and future auxiliary information, such as weather and holidays, on complex traffic patterns. Second, the computational complexity of dynamic graph convolution operations grows quadratically with the number of traffic nodes, limiting model scalability. To address these challenges, this study proposes a deep encoder-decoder model named AIMSAN, which comprises an auxiliary information-aware module (AIM) and a sparse cross-attention-based graph convolutional network (SAN). From historical or future perspectives, AIM prunes multi-attribute auxiliary data into diverse time frames and embeds them into one tensor. SAN employs a cross-attention mechanism to merge traffic data with the historical embedded data in each encoder layer, forming dynamic adjacency matrices. Subsequently, it applies diffusion GCN to capture rich spatial-temporal dynamics from the traffic data. Additionally, AIMSAN uses the spatial sparsity of traffic nodes as a mask to mitigate the quadratic computational complexity of SAN, thereby improving overall computational efficiency. In the decoder layer, future embedded data are fused with feed-forward traffic data to generate prediction results. Experimental evaluations on three public traffic datasets demonstrate that AIMSAN achieves competitive performance compared to state-of-the-art algorithms, while reducing GPU memory consumption by 41.24%, training time by 62.09%, and validation time by 65.17% on average.
{"title":"A Sparse Cross Attention-Based Graph Convolution Network With Auxiliary Information Awareness for Traffic Flow Prediction","authors":"Lingqiang Chen;Qinglin Zhao;Guanghui Li;Mengchu Zhou;Chenglong Dai;Yiming Feng;Xiaowei Liu;Jinjiang Li","doi":"10.1109/TITS.2025.3533560","DOIUrl":"https://doi.org/10.1109/TITS.2025.3533560","url":null,"abstract":"Deep graph convolutional networks (GCNs) have shown promising performance in traffic prediction tasks, but their practical deployment on resource-constrained devices faces challenges. First, few models consider the potential influence of historical and future auxiliary information, such as weather and holidays, on complex traffic patterns. Second, the computational complexity of dynamic graph convolution operations grows quadratically with the number of traffic nodes, limiting model scalability. To address these challenges, this study proposes a deep encoder-decoder model named AIMSAN, which comprises an auxiliary information-aware module (AIM) and a sparse cross-attention-based graph convolutional network (SAN). From historical or future perspectives, AIM prunes multi-attribute auxiliary data into diverse time frames, and embeds them into one tensor. SAN employs a cross-attention mechanism to merge traffic data with historical embedded data in each encoder layer, forming dynamic adjacency matrices. Subsequently, it applies diffusion GCN to capture rich spatial-temporal dynamics from the traffic data. Additionally, AIMSAN utilizes the spatial sparsity of traffic nodes as a mask to mitigate the quadratic computational complexity of SAN, thereby improving overall computational efficiency. In the decoder layer, future embedded data are fused with feed-forward traffic data to generate prediction results. Experimental evaluations on three public traffic datasets demonstrate that AIMSAN achieves competitive performance compared to state-of-the-art algorithms, while reducing GPU memory consumption by 41.24%, training time by 62.09%, and validation time by 65.17% on average.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 3","pages":"3210-3222"},"PeriodicalIF":7.9,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143535523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Binocular-Separated Modeling for Efficient Binocular Stereo Matching
Pub Date: 2025-02-04 | DOI: 10.1109/TITS.2025.3531115
Yeping Peng;Jianrui Xu;Guangzhong Cao;Runhao Zeng
Binocular stereo matching is a crucial task in autonomous driving for accurately estimating the depth of objects and scenes. The task is challenging, however, due to various ill-posed regions within binocular image pairs, such as repeated and weak textures, which present complex correspondences between points. Existing methods extract features from binocular input images mainly by relying on deep convolutional neural networks with a substantial number of convolutional layers, which incur high memory and computation costs and are therefore hard to deploy in real-world applications. Additionally, previous methods do not consider the correlation between view unary features during construction of the cost volume, leading to inferior results. To address these issues, a novel lightweight binocular-separated feature extraction module is proposed that includes a view-shared multi-dilation fusion module and a view-specific feature extractor. Our method leverages a shallow neural network with a multi-dilation modeling module to provide receptive fields similar to those of deep networks but with fewer parameters and better computational efficiency. Furthermore, we propose incorporating the correlations of view-shared features to dynamically select view-specific features during construction of the cost volume. Extensive experiments on two public benchmark datasets show that our proposed method outperforms the deep model-based baseline method (13.6% improvement on Scene Flow and 2.0% on KITTI 2015) while using 29.7% fewer parameters. Ablation experiments show that our method achieves superior matching performance in weak-texture and edge regions. The source code will be made publicly available.
{"title":"Binocular-Separated Modeling for Efficient Binocular Stereo Matching","authors":"Yeping Peng;Jianrui Xu;Guangzhong Cao;Runhao Zeng","doi":"10.1109/TITS.2025.3531115","DOIUrl":"https://doi.org/10.1109/TITS.2025.3531115","url":null,"abstract":"Binocular stereo matching is a crucial task in autonomous driving for accurately estimating the depth information of objects and scenes. This task, however, is challenging due to various ill-posed regions within binocular image pairs, such as repeated textures and weak textures which present complex correspondences between the points. Existing methods extract features from binocular input images mainly by relying on deep convolutional neural networks with a substantial number of convolutional layers, which may incur high memory and computation costs, thus making it hard to deploy in real-world applications. Additionally, previous methods do not consider the correlation between view unary features during the construction of the cost volume, thus leading to inferior results. To address these issues, a novel lightweight binocular-separated feature extraction module is proposed that includes a view-shared multi-dilation fusion module and a view-specific feature extractor. Our method leverages a shallow neural network with a multi-dilation modeling module to provide similar receptive fields as deep neural networks but with fewer parameters and better computational efficiency. Furthermore, we propose incorporating the correlations of view-shared features to dynamically select view-specific features during the construction of the cost volume. Extensive experiments conducted on two public benchmark datasets show that our proposed method outperforms the deep model-based baseline method (i.e., 13.6% improvement on Scene Flow and 2.0% on KITTI 2015) while using 29.7% fewer parameters. Ablation experiments show that our method achieves superior matching performance in weak texture and edge regions. The source code will be made publicly available.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 3","pages":"3028-3038"},"PeriodicalIF":7.9,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143535550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Cyclist Safety Through Driver Gaze Analysis at Intersections With Cycle Lanes
Pub Date: 2025-02-04 | DOI: 10.1109/TITS.2025.3530872
Jibran A. Abbasi;Ashkan Parsi;Nicolas Ringelstein;Patrice Reilhac;Edward Jones;Martin Glavin
In urban areas, roads with dedicated cycle lanes play a vital role in cyclist safety. However, accidents can still occur when vehicles cross the cycle lane at intersections, mostly because the driver fails to see a cyclist in the cycle lane, particularly when the cyclist is going straight through the intersection and the vehicle is turning. For safe driving, it is critical that drivers visually scan the area in the vicinity of the junction and the car, particularly using the wing mirror, prior to making turns. This paper describes results from a set of test drives using non-invasive in-vehicle eye tracking and in-vehicle CAN bus sensors to determine driver behaviour. In total, 20 drivers were monitored through 5 different intersections with cycle lanes. The study found that approximately 83% of drivers did not check their wing mirror prior to, or during, their turning manoeuvre, potentially putting pedestrians, cyclists, and scooter and hoverboard users in danger. An algorithm was developed to analyse driver gaze during the turning manoeuvre and identify cases where drivers failed to look at the wing mirror. The gaze pattern and gaze concentration on the mirror help to identify safe and unsafe driving behaviour. This information can then be used to improve Advanced Driver-Assistance Systems (ADAS) to create a safer environment for all road users.
{"title":"Enhancing Cyclist Safety Through Driver Gaze Analysis at Intersections With Cycle Lanes","authors":"Jibran A. Abbasi;Ashkan Parsi;Nicolas Ringelstein;Patrice Reilhac;Edward Jones;Martin Glavin","doi":"10.1109/TITS.2025.3530872","DOIUrl":"https://doi.org/10.1109/TITS.2025.3530872","url":null,"abstract":"In urban areas, roads with dedicated cycle lanes play a vital role in cyclist safety. However, accidents can still occur when vehicles cross the cycle lane at intersections. Accidents mostly occur due to failure of the driver to see a cyclist on the cycle lane, particularly when the cyclist is going straight through the intersection, and the vehicle is turning. For safe driving, it is critical that the drivers visually scan the area in the vicinity of the junction and the car, particularly using the wing-mirror, prior to making turns. This paper describes results from a set of test drives using in-vehicle non-invasive eye-tracking and in-vehicle CAN bus sensors to determine driver behaviour. In total, 20 drivers were monitored through 5 different intersections with cycle lanes. The study found that approximately 83% of drivers did not check their wing mirror prior to, or during their turning manoeuvre, potentially putting pedestrian, cyclists, scooter and hoverboard users in danger. An algorithm was developed to analyse driver gaze during the turning manoeuvre to identify cases where they failed to look at the wing mirror. The gaze pattern and gaze concentration on the mirror helps to identify safe and unsafe driving behaviour. This information can then be used to improve Advanced Driver-Assistance Systems (ADAS) to create a safer environment for all road users.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 3","pages":"3175-3184"},"PeriodicalIF":7.9,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10871193","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143535568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Corrections to “Toward Infotainment Services in Vehicular Named Data Networking: A Comprehensive Framework Design and Its Realization”
Pub Date: 2025-02-04 | DOI: 10.1109/TITS.2025.3527256
Huhnkuk Lim;Sajjad Ahmad Khan
Presents corrections to the paper “Toward Infotainment Services in Vehicular Named Data Networking: A Comprehensive Framework Design and Its Realization.”
{"title":"Corrections to “Toward Infotainment Services in Vehicular Named Data Networking: A Comprehensive Framework Design and Its Realization”","authors":"Huhnkuk Lim;Sajjad Ahmad Khan","doi":"10.1109/TITS.2025.3527256","DOIUrl":"https://doi.org/10.1109/TITS.2025.3527256","url":null,"abstract":"Presents corrections to the paper, Corrections to “Toward Infotainment Services in Vehicular Named Data Networking: A Comprehensive Framework Design and Its Realization”.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 2","pages":"2811-2811"},"PeriodicalIF":7.9,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10871176","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143183771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}