{"title":"GPT4TFP: Spatio-temporal fusion large language model for traffic flow prediction","authors":"Yiwu Xu, Mengchi Liu","doi":"10.1016/j.neucom.2025.129562","DOIUrl":null,"url":null,"abstract":"<div><div>Traffic flow prediction aims to anticipate the future usage levels of transportation, and is a pivotal component of intelligent transportation systems. Previous studies have mainly employed deep learning technologies to decode traffic flow data. These methods process the spatial and temporal embeddings of traffic flow data in a sequential, parallel, or single-feature manner. Although the structures of these models are becoming more and more complex, their accuracy has not improved. Recently, large language models (LLMs) have made significant progress in traffic flow prediction tasks due to their superior performance. However, although the spatio-temporal dependencies of traffic flow prediction can be captured by LLMs, they ignore the cross-relationships between spatio-temporal embeddings. To this end, we propose a spatio-temporal fusion large language model (GPT4TFP) for traffic flow prediction, which is divided into four components: the spatio-temporal embedding layer, the spatio-temporal fusion layer, the frozen pre-trained LLM layer, and the output linear layer. The spatio-temporal embedding layer embeds traffic flow data into the spatio-temporal representations required by traffic flow prediction. In the spatio-temporal fusion layer, we propose a spatio-temporal fusion strategy based on multi-head cross-attention to capture the cross-relationships between spatio-temporal embeddings. In addition, we introduce a frozen pre-trained strategy to fine-tune the LLM to improve the accuracy of traffic flow prediction. The experimental results on two traffic flow datasets show that the proposed model outperforms a set of state-of-the-art baseline models.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"625 ","pages":"Article 129562"},"PeriodicalIF":5.5000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225002346","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Traffic flow prediction aims to anticipate future levels of transportation usage and is a pivotal component of intelligent transportation systems. Previous studies have mainly employed deep learning techniques to model traffic flow data, processing its spatial and temporal embeddings in a sequential, parallel, or single-feature manner. Although these models have grown increasingly complex, their accuracy has not improved accordingly. Recently, large language models (LLMs) have made significant progress on traffic flow prediction tasks due to their superior performance. However, although LLMs can capture the spatio-temporal dependencies required for traffic flow prediction, they ignore the cross-relationships between spatial and temporal embeddings. To this end, we propose a spatio-temporal fusion large language model (GPT4TFP) for traffic flow prediction, which consists of four components: a spatio-temporal embedding layer, a spatio-temporal fusion layer, a frozen pre-trained LLM layer, and an output linear layer. The spatio-temporal embedding layer maps traffic flow data into the spatio-temporal representations required for prediction. In the spatio-temporal fusion layer, we propose a fusion strategy based on multi-head cross-attention to capture the cross-relationships between spatio-temporal embeddings. In addition, we introduce a frozen pre-trained strategy for fine-tuning the LLM to further improve prediction accuracy. Experimental results on two traffic flow datasets show that the proposed model outperforms a set of state-of-the-art baseline models.
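The abstract describes a four-stage pipeline (spatio-temporal embedding, cross-attention fusion, frozen pre-trained LLM, linear output) but gives no implementation details. The sketch below is a hypothetical, minimal PyTorch rendering of that pipeline; the layer sizes, the use of PyTorch, the query/key/value assignment in the cross-attention, and the generic frozen transformer standing in for the pre-trained LLM backbone are all assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the GPT4TFP pipeline described in the abstract.
# A generic frozen TransformerEncoder stands in for the pre-trained LLM layer.
import torch
import torch.nn as nn

class GPT4TFPSketch(nn.Module):
    def __init__(self, num_nodes, in_steps, d_model=64, n_heads=4, out_steps=12):
        super().__init__()
        # (1) Spatio-temporal embedding layer: temporal encoding per node plus a
        #     learnable spatial (node) encoding.
        self.temporal_embed = nn.Linear(in_steps, d_model)
        self.spatial_embed = nn.Embedding(num_nodes, d_model)
        # (2) Spatio-temporal fusion layer: multi-head cross-attention between the
        #     temporal and spatial embeddings.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # (3) Frozen backbone (stand-in for the frozen pre-trained LLM layer).
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2)
        for p in self.backbone.parameters():
            p.requires_grad = False  # kept frozen, as in the abstract
        # (4) Output linear layer mapping fused features to future flow values.
        self.head = nn.Linear(d_model, out_steps)

    def forward(self, x):  # x: (batch, num_nodes, in_steps)
        t = self.temporal_embed(x)                                   # (B, N, d)
        s = self.spatial_embed.weight.unsqueeze(0).expand(x.size(0), -1, -1)
        # Temporal embeddings query the spatial embeddings (one plausible choice).
        fused, _ = self.cross_attn(query=t, key=s, value=s)          # (B, N, d)
        z = self.backbone(fused)                                     # frozen pass
        return self.head(z)                                          # (B, N, out_steps)

# Usage on dummy data: 207 sensors, 12 past steps -> 12 future steps.
model = GPT4TFPSketch(num_nodes=207, in_steps=12)
pred = model(torch.randn(8, 207, 12))
print(pred.shape)  # torch.Size([8, 207, 12])
```

Freezing the backbone while training only the embedding, fusion, and output layers mirrors the "frozen pre-trained strategy" mentioned in the abstract: the pre-trained weights supply general sequence-modeling capacity while the lightweight surrounding layers adapt it to traffic data.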
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.