
Latest Articles from IEEE Transactions on Intelligent Transportation Systems

Predicting Pedestrian Crossing Intentions in Adverse Weather With Self-Attention Models
IF 7.9 CAS Tier 1 (Engineering & Technology) Q1 ENGINEERING, CIVIL Pub Date: 2025-02-07 DOI: 10.1109/TITS.2024.3524117
Ahmed Elgazwy;Khalid Elgazzar;Alaa Khamis
Enhancing the vehicle perception model is a crucial undertaking in the successful integration of assisted and automated driving. By improving the model's ability to accurately anticipate the actions of vulnerable road users, the overall driving experience can be significantly improved, ensuring higher levels of safety. Existing research on predicting pedestrians' crossing intentions has predominantly relied on vision-based deep learning models. However, these models continue to exhibit shortcomings in robustness when faced with adverse weather conditions and domain adaptation challenges. Furthermore, little attention has been given to evaluating their real-time performance. To address these limitations, this study introduces a novel framework for pedestrian crossing intention prediction. The framework incorporates an image enhancement pipeline that detects and rectifies various defects that may arise under unfavorable weather conditions. Subsequently, a transformer-based network featuring a self-attention mechanism is employed to predict the crossing intentions of target pedestrians. This augmentation enhances the model's resilience and accuracy in classification tasks. Evaluated on the Joint Attention in Autonomous Driving (JAAD) dataset, our framework attains state-of-the-art performance while maintaining a notably low inference time. Moreover, a deployment environment is established to assess the real-time performance of the model. The results of this evaluation demonstrate that our approach exhibits the shortest model inference time and the lowest end-to-end prediction time, accounting for the processing duration of the selected inputs.
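The abstract does not detail the transformer architecture, but a minimal sketch of the core scaled dot-product self-attention operation such a network relies on (identity Q/K/V projections for brevity, pure Python, purely illustrative) might look like:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of feature vectors.

    X: list of n vectors (each a list of d floats). Returns n attended vectors.
    Identity projections stand in for the learned Q, K, V matrices to keep the
    sketch minimal; a real model learns those projections per attention head.
    """
    n, d = len(X), len(X[0])
    scale = math.sqrt(d)
    out = []
    for i in range(n):
        # Similarity of token i to every token j, scaled by sqrt(d).
        scores = [sum(X[i][k] * X[j][k] for k in range(d)) / scale for j in range(n)]
        w = softmax(scores)
        # Weighted sum of value vectors.
        out.append([sum(w[j] * X[j][k] for j in range(n)) for k in range(d)])
    return out
```

In the paper's setting, the sequence would be per-frame pedestrian features; here any list of vectors works.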
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 3, pp. 3250-3261.
Citations: 0
Dynamic Control Authority Allocation in Indirect Shared Control for Steering Assistance
IF 7.9 CAS Tier 1 (Engineering & Technology) Q1 ENGINEERING, CIVIL Pub Date: 2025-02-05 DOI: 10.1109/TITS.2024.3520107
Yutao Chen;Hongliang Zhang;Haocong Chen;Jie Huang;Bin Wang;Zixiang Xiong;Yuyi Wang;Xiwen Yuan
The concept of shared control has garnered significant attention within the realm of human-machine hybrid intelligence research. This study introduces a novel approach, specifically a dynamic control authority allocation method, for implementing shared control in autonomous vehicles. Unlike conventional mixed-initiative control techniques that blend human and vehicle inputs with weights determined by a predefined index, the proposed method uses optimization-based techniques to obtain an optimal dynamic allocation of human and vehicle inputs that satisfies safety constraints. Specifically, a convex quadratic program (QP) is constructed incorporating control barrier functions (CBF) for safety and control Lyapunov functions (CLF) for satisfying automated control objectives. The cost function of the QP is designed such that the human weight increases with the magnitude of the human input. A smooth control authority transition is obtained by optimizing over the change rate of the weight instead of the weight itself. The proposed method is verified in lane-changing scenarios with human-in-the-loop (HmIL) and hardware-in-the-loop (HdIL) experiments. Results show that the proposed method outperforms the index-based control authority allocation method in terms of agility, safety, and comfort.
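To illustrate the idea of optimizing the change rate of the weight rather than the weight itself, here is a scalar sketch: in one dimension the rate-limited step has a closed-form clipping solution. The gains `k` and `max_rate` are invented for the example and are not the paper's values, and the real method additionally enforces CBF/CLF constraints inside the QP:

```python
def clip(x, lo, hi):
    return max(lo, min(hi, x))

def allocate_authority(w, u_human, k=0.5, max_rate=0.1):
    """One step of dynamic authority allocation (scalar, illustrative only).

    The target human weight grows with |u_human| (saturating at 1), and the
    realized weight moves toward it under a rate limit: optimizing the change
    rate dw instead of w itself, which yields a smooth authority transition.
    k and max_rate are hypothetical gains, not values from the paper.
    """
    w_target = clip(k * abs(u_human), 0.0, 1.0)
    # In 1-D the rate-limited quadratic step reduces to clipping the gap.
    dw = clip(w_target - w, -max_rate, max_rate)
    return clip(w + dw, 0.0, 1.0)

def blend(w, u_human, u_auto):
    """Mixed-initiative steering command: convex blend of the two inputs."""
    return w * u_human + (1.0 - w) * u_auto
```

Iterating `allocate_authority` while the driver applies a large input gradually hands control to the human instead of switching abruptly.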
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 3, pp. 3458-3470.
Citations: 0
DAGCAN: Decoupled Adaptive Graph Convolution Attention Network for Traffic Forecasting
IF 7.9 CAS Tier 1 (Engineering & Technology) Q1 ENGINEERING, CIVIL Pub Date: 2025-02-05 DOI: 10.1109/TITS.2025.3531665
Qing Yuan;Junbo Wang;Yu Han;Zhi Liu;Wanquan Liu
It is necessary to establish a spatio-temporal correlation model over traffic data to predict the state of the transportation system. Existing research has focused on traditional graph neural networks, which use predefined graphs and shared parameters. However, hand-crafted predefined graphs introduce bias into prediction tasks, and parameter-sharing models cannot capture fine-grained spatio-temporal information. In this paper, we argue that it is crucial to learn node-specific parameters and adaptive graphs with complete edge information. To this end, we design a graph-structured model that decouples nodes and edges into two modules, each of which extracts temporal and spatial features simultaneously. The adaptive node optimization module learns the specific parameter patterns of all nodes, and the adaptive edge optimization module mines the interdependencies among different nodes. We then propose a Decoupled Adaptive Graph Convolution Attention Network for Traffic Forecasting (DAGCAN), which relies on these two modules to dynamically capture the fine-grained spatio-temporal relationships in traffic data. Experimental results on four public transportation datasets demonstrate that our model further improves the accuracy of traffic prediction.
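The abstract does not spell out how the adaptive graph is built. A common construction in this line of work learns an adjacency matrix from trainable node embeddings (as in Graph WaveNet-style models); DAGCAN's exact form may differ, so the sketch below is illustrative:

```python
import math

def relu(x):
    return x if x > 0 else 0.0

def adaptive_adjacency(E):
    """Build a learned adjacency from node embeddings E (n lists of d floats).

    A[i][j] = softmax_j(relu(E_i . E_j)): pairwise embedding similarity,
    rectified, then row-normalized so each row is a probability distribution
    over neighbors. In training, E would be a learnable parameter updated by
    backpropagation; here it is just a fixed input.
    """
    n = len(E)
    A = []
    for i in range(n):
        scores = [relu(sum(a * b for a, b in zip(E[i], E[j]))) for j in range(n)]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        A.append([e / z for e in exps])
    return A
```

A graph convolution then propagates node features with this learned A instead of a predefined road-network graph, removing the bias of a hand-crafted topology.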
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 3, pp. 3513-3526.
Citations: 0
Scanning the Issue
IF 7.9 CAS Tier 1 (Engineering & Technology) Q1 ENGINEERING, CIVIL Pub Date: 2025-02-04 DOI: 10.1109/TITS.2025.3528298
Simona Sacone
Summary form only: Abstract of article "Scanning the Issue."
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 2, pp. 1354-1374.
Citations: 0
IEEE Intelligent Transportation Systems Society Information
IF 7.9 CAS Tier 1 (Engineering & Technology) Q1 ENGINEERING, CIVIL Pub Date: 2025-02-04 DOI: 10.1109/TITS.2025.3527807
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 2, pp. C3-C3.
Citations: 0
IEEE INTELLIGENT TRANSPORTATION SYSTEMS SOCIETY
IF 7.9 CAS Tier 1 (Engineering & Technology) Q1 ENGINEERING, CIVIL Pub Date: 2025-02-04 DOI: 10.1109/TITS.2025.3527805
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 2, pp. C2-C2.
Citations: 0
A Sparse Cross Attention-Based Graph Convolution Network With Auxiliary Information Awareness for Traffic Flow Prediction
IF 7.9 CAS Tier 1 (Engineering & Technology) Q1 ENGINEERING, CIVIL Pub Date: 2025-02-04 DOI: 10.1109/TITS.2025.3533560
Lingqiang Chen;Qinglin Zhao;Guanghui Li;Mengchu Zhou;Chenglong Dai;Yiming Feng;Xiaowei Liu;Jinjiang Li
Deep graph convolutional networks (GCNs) have shown promising performance in traffic prediction tasks, but their practical deployment on resource-constrained devices faces challenges. First, few models consider the potential influence of historical and future auxiliary information, such as weather and holidays, on complex traffic patterns. Second, the computational complexity of dynamic graph convolution operations grows quadratically with the number of traffic nodes, limiting model scalability. To address these challenges, this study proposes a deep encoder-decoder model named AIMSAN, which comprises an auxiliary information-aware module (AIM) and a sparse cross-attention-based graph convolutional network (SAN). From historical or future perspectives, AIM prunes multi-attribute auxiliary data into diverse time frames, and embeds them into one tensor. SAN employs a cross-attention mechanism to merge traffic data with historical embedded data in each encoder layer, forming dynamic adjacency matrices. Subsequently, it applies diffusion GCN to capture rich spatial-temporal dynamics from the traffic data. Additionally, AIMSAN utilizes the spatial sparsity of traffic nodes as a mask to mitigate the quadratic computational complexity of SAN, thereby improving overall computational efficiency. In the decoder layer, future embedded data are fused with feed-forward traffic data to generate prediction results. Experimental evaluations on three public traffic datasets demonstrate that AIMSAN achieves competitive performance compared to state-of-the-art algorithms, while reducing GPU memory consumption by 41.24%, training time by 62.09%, and validation time by 65.17% on average.
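A sketch of cross-attention restricted by a node-activity mask, in the spirit of AIMSAN's use of spatial sparsity to avoid quadratic cost over all nodes; the paper's actual attention kernel and masking scheme may differ, so this is illustrative only:

```python
import math

def masked_cross_attention(Q, K, V, active):
    """Cross-attention where each query attends only to nodes flagged active.

    Q: queries (lists of d floats); K, V: keys/values per node; active: list of
    booleans marking which nodes participate. Skipping inactive nodes before
    computing scores is what turns the quadratic all-pairs cost into cost
    proportional to the number of active nodes.
    """
    d = len(Q[0])
    scale = math.sqrt(d)
    idx = [j for j, a in enumerate(active) if a]  # sparsity mask applied up front
    out = []
    for q in Q:
        scores = [sum(q[k] * K[j][k] for k in range(d)) / scale for j in idx]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        w = [e / z for e in exps]
        out.append([sum(w[t] * V[idx[t]][k] for t in range(len(idx)))
                    for k in range(d)])
    return out
```

In AIMSAN's encoder the queries would come from traffic features and the keys/values from the embedded auxiliary data, with the attention weights forming the dynamic adjacency.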
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 3, pp. 3210-3222.
Citations: 0
Binocular-Separated Modeling for Efficient Binocular Stereo Matching
IF 7.9 CAS Tier 1 (Engineering & Technology) Q1 ENGINEERING, CIVIL Pub Date: 2025-02-04 DOI: 10.1109/TITS.2025.3531115
Yeping Peng;Jianrui Xu;Guangzhong Cao;Runhao Zeng
Binocular stereo matching is a crucial task in autonomous driving for accurately estimating the depth information of objects and scenes. This task, however, is challenging due to various ill-posed regions within binocular image pairs, such as repeated textures and weak textures which present complex correspondences between the points. Existing methods extract features from binocular input images mainly by relying on deep convolutional neural networks with a substantial number of convolutional layers, which may incur high memory and computation costs, thus making it hard to deploy in real-world applications. Additionally, previous methods do not consider the correlation between view unary features during the construction of the cost volume, thus leading to inferior results. To address these issues, a novel lightweight binocular-separated feature extraction module is proposed that includes a view-shared multi-dilation fusion module and a view-specific feature extractor. Our method leverages a shallow neural network with a multi-dilation modeling module to provide similar receptive fields as deep neural networks but with fewer parameters and better computational efficiency. Furthermore, we propose incorporating the correlations of view-shared features to dynamically select view-specific features during the construction of the cost volume. Extensive experiments conducted on two public benchmark datasets show that our proposed method outperforms the deep model-based baseline method (i.e., 13.6% improvement on Scene Flow and 2.0% on KITTI 2015) while using 29.7% fewer parameters. Ablation experiments show that our method achieves superior matching performance in weak texture and edge regions. The source code will be made publicly available.
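To make the cost-volume step concrete, here is a toy correlation cost volume over a single scanline pair with winner-takes-all disparity selection. Real stereo networks build the volume over learned 2-D feature maps and regress disparity softly, so this 1-D intensity version only illustrates the structure:

```python
def cost_volume(left, right, max_disp):
    """Correlation cost volume for one rectified scanline pair.

    left, right: lists of floats (pixel intensities standing in for features).
    cost[d][x] = left[x] * right[x - d], zero-padded where x - d < 0: how well
    left pixel x matches the right pixel shifted by candidate disparity d.
    """
    W = len(left)
    vol = []
    for d in range(max_disp + 1):
        row = []
        for x in range(W):
            row.append(left[x] * right[x - d] if x - d >= 0 else 0.0)
        vol.append(row)
    return vol

def winner_takes_all(vol):
    """Per pixel, pick the disparity with the highest correlation score."""
    W = len(vol[0])
    return [max(range(len(vol)), key=lambda d: vol[d][x]) for x in range(W)]
```

A feature that appears at x in the left view and x-1 in the right view scores highest at disparity 1, which is exactly the pixel shift the depth estimate is derived from.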
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 3, pp. 3028-3038.
Citations: 0
Enhancing Cyclist Safety Through Driver Gaze Analysis at Intersections With Cycle Lanes
IF 7.9 CAS Tier 1 (Engineering & Technology) Q1 ENGINEERING, CIVIL Pub Date: 2025-02-04 DOI: 10.1109/TITS.2025.3530872
Jibran A. Abbasi;Ashkan Parsi;Nicolas Ringelstein;Patrice Reilhac;Edward Jones;Martin Glavin
In urban areas, roads with dedicated cycle lanes play a vital role in cyclist safety. However, accidents can still occur when vehicles cross the cycle lane at intersections. Accidents mostly occur when the driver fails to see a cyclist on the cycle lane, particularly when the cyclist is going straight through the intersection and the vehicle is turning. For safe driving, it is critical that drivers visually scan the area in the vicinity of the junction and the car, particularly using the wing mirror, prior to making turns. This paper describes results from a set of test drives using in-vehicle non-invasive eye-tracking and in-vehicle CAN bus sensors to determine driver behaviour. In total, 20 drivers were monitored through 5 different intersections with cycle lanes. The study found that approximately 83% of drivers did not check their wing mirror prior to, or during, their turning manoeuvre, potentially endangering pedestrians, cyclists, and scooter and hoverboard users. An algorithm was developed to analyse driver gaze during the turning manoeuvre and identify cases where drivers failed to look at the wing mirror. The gaze pattern and gaze concentration on the mirror help to identify safe and unsafe driving behaviour. This information can then be used to improve Advanced Driver-Assistance Systems (ADAS) to create a safer environment for all road users.
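The mirror-check logic can be illustrated with a simple region-of-interest hit test over gaze samples during the turn window. The ROI shape, sample format, and `min_hits` threshold here are invented for illustration; the paper's gaze-pattern and gaze-concentration analysis is richer than a plain hit count:

```python
def checked_mirror(gaze_samples, mirror_roi, turn_start, turn_end, min_hits=3):
    """Flag whether the driver looked at the wing mirror during a turn.

    gaze_samples: list of (t, x, y) gaze points from the eye tracker;
    mirror_roi: (x0, y0, x1, y1) bounding box of the wing mirror in the same
    coordinate frame; [turn_start, turn_end]: manoeuvre window from CAN data.
    Returns True if at least min_hits samples land on the mirror in the window.
    """
    x0, y0, x1, y1 = mirror_roi
    hits = sum(
        1 for t, x, y in gaze_samples
        if turn_start <= t <= turn_end and x0 <= x <= x1 and y0 <= y <= y1
    )
    return hits >= min_hits

```

Drives where this returns False for a turning manoeuvre would correspond to the unsafe "no mirror check" cases the study counts.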
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 3, pp. 3175-3184.
Citations: 0
Corrections to “Toward Infotainment Services in Vehicular Named Data Networking: A Comprehensive Framework Design and Its Realization”
IF 7.9 CAS Tier 1 (Engineering & Technology) Q1 ENGINEERING, CIVIL Pub Date: 2025-02-04 DOI: 10.1109/TITS.2025.3527256
Huhnkuk Lim;Sajjad Ahmad Khan
Presents corrections to the paper "Toward Infotainment Services in Vehicular Named Data Networking: A Comprehensive Framework Design and Its Realization".
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 2, p. 2811.
Citations: 0