A time-embedded attention-based transformer for crash likelihood prediction at intersections using connected vehicle data

IF 7.6 | CAS Region 1 (Engineering & Technology) | Q1 TRANSPORTATION SCIENCE & TECHNOLOGY | Transportation Research Part C: Emerging Technologies | Pub Date: 2024-09-11 | DOI: 10.1016/j.trc.2024.104831
B M Tazbiul Hassan Anik, Zubayer Islam, Mohamed Abdel-Aty
Citations: 0

Abstract

Real-time crash likelihood prediction models are an essential component of proactive traffic safety management systems. Over the years, numerous studies have attempted to construct crash likelihood prediction models to enhance traffic safety, but mostly for freeways. In the majority of existing studies, researchers have primarily used deep learning-based frameworks to identify crash potential. Recently, the Transformer has emerged as a promising deep neural network architecture that operates fundamentally through attention mechanisms. Transformers offer distinct functional benefits over established deep learning models such as Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and Convolutional Neural Networks (CNNs). First, they employ attention mechanisms to accurately weigh the significance of different parts of the input data, a dynamic capability not available in RNNs, LSTMs, or CNNs. Second, they are well equipped to handle dependencies over long-range data sequences, a feat RNNs typically struggle with. Lastly, unlike RNNs, LSTMs, and CNNs, which process data sequentially, Transformers can process data elements in parallel during training and inference, thereby enhancing their efficiency. Recognizing the immense potential of Transformers, this paper proposes the inTersection-Transformer (inTformer), a time-embedded attention-based Transformer model that can effectively predict intersection crash likelihood in real time. The inTformer is a binary classification model that predicts the occurrence or non-occurrence of crashes at intersections in the near future (i.e., the next 15 min). The proposed model was developed using traffic data extracted from connected vehicles.
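The attention mechanism described above can be illustrated with a minimal NumPy sketch of scaled dot-product self-attention, the core operation that lets a Transformer weigh every time step of an input sequence against every other, regardless of distance. This is a generic illustration, not the paper's implementation; the toy dimensions and feature interpretation are assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query position scores every key position, so the model
    can attend to any part of the sequence regardless of distance."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # (n_q, n_k) similarity scores
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights            # weighted sum of value vectors

# Toy input: 4 time steps of 8-dimensional traffic features
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V
```

Each row of `w` sums to 1 and expresses how strongly that time step attends to every other step; unlike an RNN, all rows are computed in one parallel matrix product.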
Acknowledging the complex traffic operation mechanisms at intersections, this study developed zone-specific models by dividing the intersection region into two distinct zones: within-intersection and approach zones, each representing the intricate traffic flow unique to the type of intersection (i.e., three-legged and four-legged intersections). In the 'within-intersection' zone, the inTformer models attained a sensitivity of up to 73%, while in the 'approach' zone, sensitivity peaked at 74%. Moreover, benchmarking the optimal zone-specific inTformer models against earlier studies on crash likelihood prediction at intersections, as well as against several established deep learning models trained on the same connected vehicle dataset, confirmed the superiority of the proposed inTformer. Further, to quantify the impact of features on crash likelihood at intersections, the SHAP (SHapley Additive exPlanations) method was applied to the best-performing inTformer models. The most critical predictors were average and maximum approach speeds, average and maximum control delays, average and maximum travel times, split failure percentage and count, and percent arrival on green.
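The "time-embedded" component of the model name can be illustrated with the standard sinusoidal encoding, which maps each time step (e.g., a traffic-data slice preceding the 15-min prediction window) to a vector the Transformer can use to distinguish when each observation occurred. This is a conventional sketch under that assumption; the paper's exact embedding scheme may differ.

```python
import numpy as np

def time_embedding(n_steps, d_model):
    """Sinusoidal encoding: position t at even dims gets sin terms,
    odd dims get cos terms, at geometrically spaced frequencies."""
    pos = np.arange(n_steps)[:, None]            # (n_steps, 1)
    i = np.arange(d_model // 2)[None, :]         # (1, d_model // 2)
    angles = pos / (10000 ** (2 * i / d_model))  # broadcast to (n_steps, d_model // 2)
    emb = np.zeros((n_steps, d_model))
    emb[:, 0::2] = np.sin(angles)
    emb[:, 1::2] = np.cos(angles)
    return emb

# Hypothetical setup: six time slices leading up to the prediction window
E = time_embedding(6, 16)
```

Adding `E` to the per-step traffic feature vectors gives the attention layers access to temporal order, which plain (order-invariant) attention would otherwise discard.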

Source journal
CiteScore: 15.80
Self-citation rate: 12.00%
Annual publication volume: 332
Review time: 64 days
About the journal: Transportation Research: Part C (TR_C) is dedicated to showcasing high-quality, scholarly research that delves into the development, applications, and implications of transportation systems and emerging technologies. Our focus lies not solely on individual technologies, but rather on their broader implications for the planning, design, operation, control, maintenance, and rehabilitation of transportation systems, services, and components. In essence, the intellectual core of the journal revolves around the transportation aspect rather than the technology itself. We actively encourage the integration of quantitative methods from diverse fields such as operations research, control systems, complex networks, computer science, and artificial intelligence. Join us in exploring the intersection of transportation systems and emerging technologies to drive innovation and progress in the field.
Latest articles from this journal
Dynamic characteristics of commercial Adaptive Cruise Control across driving situations: Response time, string stability, and asymmetric behavior
Household activity pattern problem with automated vehicle-enabled intermodal trips
Dynamic lane management for emerging mixed traffic with semi-autonomous vehicles
Reinforced stable matching for Crowd-Sourced Delivery Systems under stochastic driver acceptance behavior
A human factors-based modeling framework to mimic bus driver behavior