{"title":"GPT4TFP: Spatio-temporal fusion large language model for traffic flow prediction","authors":"Yiwu Xu, Mengchi Liu","doi":"10.1016/j.neucom.2025.129562","DOIUrl":null,"url":null,"abstract":"<div><div>Traffic flow prediction aims to anticipate the future usage levels of transportation, and is a pivotal component of intelligent transportation systems. Previous studies have mainly employed deep learning technologies to decode traffic flow data. These methods process the spatial and temporal embeddings of traffic flow data in a sequential, parallel, or single-feature manner. Although the structures of these models are becoming more and more complex, their accuracy has not improved. Recently, large language models (LLMs) have made significant progress in traffic flow prediction tasks due to their superior performance. However, although the spatio-temporal dependencies of traffic flow prediction can be captured by LLMs, they ignore the cross-relationships between spatio-temporal embeddings. To this end, we propose a spatio-temporal fusion large language model (GPT4TFP) for traffic flow prediction, which is divided into four components: the spatio-temporal embedding layer, the spatio-temporal fusion layer, the frozen pre-trained LLM layer, and the output linear layer. The spatio-temporal embedding layer embeds traffic flow data into the spatio-temporal representations required by traffic flow prediction. In the spatio-temporal fusion layer, we propose a spatio-temporal fusion strategy based on multi-head cross-attention to capture the cross-relationships between spatio-temporal embeddings. In addition, we introduce a frozen pre-trained strategy to fine-tune the LLM to improve the accuracy of traffic flow prediction. 
The experimental results on two traffic flow datasets show that the proposed model outperforms a set of state-of-the-art baseline models.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"625 ","pages":"Article 129562"},"PeriodicalIF":6.5000,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225002346","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/27 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citation count: 0
Abstract
Traffic flow prediction aims to anticipate future levels of transportation usage, and it is a pivotal component of intelligent transportation systems. Previous studies have mainly employed deep learning techniques to decode traffic flow data. These methods process the spatial and temporal embeddings of traffic flow data in a sequential, parallel, or single-feature manner. Although the structures of these models have become increasingly complex, their accuracy has not improved. Recently, large language models (LLMs) have made significant progress on traffic flow prediction tasks owing to their superior performance. However, although LLMs can capture the spatio-temporal dependencies in traffic flow data, they ignore the cross-relationships between spatial and temporal embeddings. To this end, we propose a spatio-temporal fusion large language model (GPT4TFP) for traffic flow prediction, which consists of four components: a spatio-temporal embedding layer, a spatio-temporal fusion layer, a frozen pre-trained LLM layer, and an output linear layer. The spatio-temporal embedding layer embeds traffic flow data into the spatio-temporal representations required for traffic flow prediction. In the spatio-temporal fusion layer, we propose a fusion strategy based on multi-head cross-attention to capture the cross-relationships between spatio-temporal embeddings. In addition, we introduce a frozen pre-training strategy to fine-tune the LLM and improve the accuracy of traffic flow prediction. Experimental results on two traffic flow datasets show that the proposed model outperforms a set of state-of-the-art baseline models.
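The abstract describes the fusion layer as multi-head cross-attention between spatial and temporal embeddings, but gives no equations. The NumPy sketch below is therefore only an illustration of standard multi-head cross-attention applied to that setting, not the paper's implementation: the shapes (12 time steps, 207 sensor nodes, model width 64) and the random projection matrices `Wq`, `Wk`, `Wv`, `Wo` are hypothetical assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_cross_attention(query, key_value, num_heads, rng):
    """Fuse two embedding sequences: queries attend over keys/values.

    query:     (Lq, d) e.g. temporal embeddings, one per time step
    key_value: (Lk, d) e.g. spatial embeddings, one per sensor node
    Returns a (Lq, d) fused representation.
    """
    Lq, d = query.shape
    assert d % num_heads == 0
    dh = d // num_heads
    # Randomly initialized projections stand in for learned weights.
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d)
                      for _ in range(4))
    # Project, then split the width into num_heads heads: (h, L, dh).
    Q = (query @ Wq).reshape(Lq, num_heads, dh).transpose(1, 0, 2)
    K = (key_value @ Wk).reshape(-1, num_heads, dh).transpose(1, 0, 2)
    V = (key_value @ Wv).reshape(-1, num_heads, dh).transpose(1, 0, 2)
    # Scaled dot-product attention per head: (h, Lq, Lk).
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(dh)
    out = softmax(scores) @ V                     # (h, Lq, dh)
    # Re-concatenate heads and apply the output projection.
    out = out.transpose(1, 0, 2).reshape(Lq, d)
    return out @ Wo

rng = np.random.default_rng(0)
temporal = rng.standard_normal((12, 64))    # 12 time steps
spatial = rng.standard_normal((207, 64))    # 207 sensor nodes
fused = multi_head_cross_attention(temporal, spatial, num_heads=8, rng=rng)
```

The fused output keeps the query-side shape (here, one vector per time step), with each position now a spatially-weighted mixture, which is one plausible way such a layer could feed the downstream frozen LLM.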
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing, covering neurocomputing theory, practice, and applications.