MVCV-Traffic: multiview road traffic state estimation via cross-view learning

IF 4.3 · CAS Tier 1 (Earth Science) · JCR Q1 (Computer Science, Information Systems)
International Journal of Geographical Information Science · Pub Date: 2023-08-28 · DOI: 10.1080/13658816.2023.2249968
M. Deng, Kaiqi Chen, Kaiyuan Lei, Yuanfang Chen, Yan Shi
Volume 37, Issue 1, pp. 2205–2237 · Journal Article · Open access: No · Citations: 0

Abstract

Fine-grained urban traffic data are often incomplete owing to limitations in sensor technology and economic cost. However, data-driven traffic analysis methods in intelligent transportation systems (ITSs) rely heavily on the quality of input data. Accurately estimating missing traffic observations is therefore an essential data engineering task in ITSs. The complexity of the underlying node-wise correlation structures and the variety of missing-data scenarios present a significant challenge to high-precision estimation. This study proposes a novel multiview neural network, termed MVCV-Traffic, equipped with a cross-view learning mechanism to improve traffic estimation. The contributions of this model can be summarized in two parts: multiview learning and cross-view fusing. For multiview learning, several specialized neural networks are adopted to fit the diverse correlation structures of different views. For cross-view fusing, a new information fusion strategy merges multiview messages at both the feature and output levels to enhance the learning of joint correlations. Experiments on two real-world datasets demonstrate that the proposed model significantly outperforms existing traffic speed estimation methods across different types and rates of missing data.
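The abstract gives no implementation details, but the general idea of fusing multiview messages at both the feature and output levels can be illustrated with a minimal sketch. This is not the authors' architecture: the two views, the encoder form, and the fusion weight `alpha` are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, feat_dim = 8, 16  # hypothetical road-network size and feature width

def view_encoder(x, w):
    """Stand-in for one specialized per-view network (e.g. a spatial or temporal view)."""
    return np.tanh(x @ w)

def view_head(h, w_out):
    """Per-view estimation head producing one speed estimate per node."""
    return h @ w_out

# Two hypothetical views of the same (partially observed) traffic data.
x_spatial = rng.normal(size=(n_nodes, feat_dim))
x_temporal = rng.normal(size=(n_nodes, feat_dim))

w1, w2 = rng.normal(size=(feat_dim, feat_dim)), rng.normal(size=(feat_dim, feat_dim))
wo1, wo2 = rng.normal(size=(feat_dim, 1)), rng.normal(size=(feat_dim, 1))

# Feature-level fusion: merge hidden representations across views.
h1, h2 = view_encoder(x_spatial, w1), view_encoder(x_temporal, w2)
h_fused = np.concatenate([h1, h2], axis=1)  # shape (n_nodes, 2 * feat_dim)

# Output-level fusion: combine per-view estimates with a fusion weight
# (fixed here; in a trained model it would be learned).
y1, y2 = view_head(h1, wo1), view_head(h2, wo2)
alpha = 0.5
y_fused = alpha * y1 + (1 - alpha) * y2  # shape (n_nodes, 1)

print(h_fused.shape, y_fused.shape)
```

The sketch only shows where the two fusion points sit relative to the per-view networks; the paper's actual message-passing and loss design are not reproduced here.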
Source journal metrics

CiteScore: 11.00 · Self-citation rate: 7.00% · Annual articles: 81 · Review time: 9 months
About the journal: International Journal of Geographical Information Science provides a forum for the exchange of original ideas, approaches, methods and experiences in the rapidly growing field of geographical information science (GIScience). It is intended to interest those who research fundamental and computational issues of geographic information, as well as issues related to the design, implementation and use of geographical information for monitoring, prediction and decision making. Published research covers innovations in GIScience and novel applications of GIScience in natural resources, social systems and the built environment, as well as relevant developments in computer science, cartography, surveying, geography and engineering in both developed and developing countries.
Latest articles in this journal

- Visual attention-guided augmented representation of geographic scenes: a case of bridge stress visualization
- A multi-hierarchical method to extract spatial network structures from large-scale origin-destination flow data
- A deep learning approach to recognizing fine-grained expressway location reference from unstructured texts in Chinese
- A knowledge-guided visualization framework of disaster scenes for helping the public cognize risk information
- A methodology to Geographic Cellular Automata model accounting for spatial heterogeneity and adaptive neighborhoods