Graph and tensor-train recurrent neural networks for high-dimensional models of limit order books

Jacobo Roa-Vicens, Y. Xu, Ricardo Silva, D. Mandic
{"title":"Graph and tensor-train recurrent neural networks for high-dimensional models of limit order books","authors":"Jacobo Roa-Vicens, Y. Xu, Ricardo Silva, D. Mandic","doi":"10.1145/3533271.3561710","DOIUrl":null,"url":null,"abstract":"Recurrent neural networks (RNNs) have proven to be particularly effective for the paradigms of learning and modelling time series. However, sequential data of high dimensions are considerably more difficult and computationally expensive to model, as the number of parameters required to train the RNN grows exponentially with data dimensionality. This is also the case with time series from limit order books, the electronic registries where prices of securities are formed in public markets. To this end, tensorization of neural networks provides an efficient method to reduce the number of model parameters, and has been applied successfully to high-dimensional series such as video sequences and financial time series, for example, using tensor-train RNNs (TTRNNs). However, such TTRNNs suffer from a number of shortcomings, including: (i) model sensitivity to the ordering of core tensor contractions; (ii) training sensitivity to weight initialization; and (iii) exploding or vanishing gradient problems due to the recurrent propagation through the tensor-train topology. Recent studies showed that embedding a multi-linear graph filter to model RNN states (Recurrent Graph Tensor Network, RGTN) provides enhanced flexibility and expressive power to tensor networks, while mitigating the shortcomings of TTRNNs. In this paper, we demonstrate the advantages arising from the use of graph filters to model limit order book sequences of high dimension as compared with the state-of-the-art benchmarks. It is shown that the combination of the graph module (to mitigate problematic gradients) with the radial structure (to make the tensor network architecture flexible) results in substantial improvements in output variance, training time and number of parameters required, without any sacrifice in accuracy.","PeriodicalId":134888,"journal":{"name":"Proceedings of the Third ACM International Conference on AI in Finance","volume":"97 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Third ACM International Conference on AI in Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3533271.3561710","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recurrent neural networks (RNNs) have proven particularly effective for learning and modelling time series. However, high-dimensional sequential data are considerably more difficult and computationally expensive to model, as the number of parameters required to train an RNN grows exponentially with data dimensionality. This is also the case for time series from limit order books, the electronic registries where the prices of securities are formed in public markets. To address this, tensorization of neural networks provides an efficient way to reduce the number of model parameters, and has been applied successfully to high-dimensional series such as video sequences and financial time series, for example through tensor-train RNNs (TTRNNs). However, TTRNNs suffer from a number of shortcomings, including: (i) model sensitivity to the ordering of core tensor contractions; (ii) training sensitivity to weight initialization; and (iii) exploding or vanishing gradients due to recurrent propagation through the tensor-train topology. Recent studies have shown that embedding a multi-linear graph filter to model RNN states (the Recurrent Graph Tensor Network, RGTN) gives tensor networks enhanced flexibility and expressive power, while mitigating the shortcomings of TTRNNs. In this paper, we demonstrate the advantages of using graph filters to model high-dimensional limit order book sequences, as compared with state-of-the-art benchmarks. It is shown that combining the graph module (to mitigate problematic gradients) with the radial structure (to make the tensor network architecture flexible) yields substantial improvements in output variance, training time and the number of parameters required, without any sacrifice in accuracy.
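To make the parameter-count argument concrete, the sketch below illustrates the tensor-train (TT) idea the abstract relies on. This is a minimal numpy illustration, not the authors' implementation: the function names (`tt_matrix_cores`, `tt_to_dense`) and the mode/rank choices are assumptions made for the example. A dense weight matrix over a factorized index space costs prod(m) * prod(n) parameters, while its TT factorization stores one small four-way core per mode, so the cost grows linearly rather than exponentially in the number of modes.

```python
import numpy as np

def tt_matrix_cores(m_dims, n_dims, ranks, rng):
    """Random TT-matrix cores; core k has shape (r_{k-1}, m_k, n_k, r_k)."""
    assert len(ranks) == len(m_dims) + 1 and ranks[0] == ranks[-1] == 1
    return [rng.standard_normal((ranks[k], m_dims[k], n_dims[k], ranks[k + 1]))
            for k in range(len(m_dims))]

def tt_to_dense(cores):
    """Contract the TT cores back into the dense (prod(m), prod(n)) matrix."""
    full = cores[0][0]                      # drop boundary rank r_0 = 1 -> (m_1, n_1, r_1)
    for core in cores[1:]:
        # full: (M, N, r), core: (r, m_k, n_k, r') -> (M, N, m_k, n_k, r')
        full = np.tensordot(full, core, axes=([2], [0]))
        M, N, m_k, n_k, r = full.shape
        full = full.transpose(0, 2, 1, 3, 4).reshape(M * m_k, N * n_k, r)
    return full[..., 0]                     # drop boundary rank r_d = 1

rng = np.random.default_rng(0)
m_dims = n_dims = [8, 8, 8, 8]              # a 4096 x 4096 map, factorized mode-wise
ranks = [1, 4, 4, 4, 1]                     # small TT ranks (assumed for illustration)
cores = tt_matrix_cores(m_dims, n_dims, ranks, rng)

dense_params = int(np.prod(m_dims)) * int(np.prod(n_dims))
tt_params = sum(core.size for core in cores)
print(f"dense: {dense_params:,} params, TT: {tt_params:,} params")  # 16,777,216 vs 2,560
assert tt_to_dense(cores).shape == (4096, 4096)
```

Here the parameter count drops from roughly 16.8M to 2,560; in a TTRNN, cores of this kind parameterize the recurrent weights, which is also where the contraction-ordering and gradient issues listed above arise. The graph module of the RGTN filters hidden states over a graph connecting the data dimensions. Continuing the snippet above, the following is a generic polynomial graph filter, shown only to convey the mechanism, not the paper's exact multi-linear filter; the path-graph topology over price levels and the filter taps are assumed.

```python
def graph_filter(S, X, taps):
    """Polynomial graph filter: y = sum_k taps[k] * (S^k @ X)."""
    out = np.zeros_like(X)
    shifted = X
    for h in taps:
        out = out + h * shifted
        shifted = S @ shifted
    return out

# Path graph linking 10 adjacent price levels (an assumed topology).
A = np.diag(np.ones(9), 1) + np.diag(np.ones(9), -1)
S = A / np.abs(np.linalg.eigvalsh(A)).max()   # spectrally normalized shift operator
H = rng.standard_normal((10, 32))             # hidden states, one row per price level
H_filtered = graph_filter(S, H, taps=[0.5, 0.3, 0.2])
```

Because the filter only mixes states along graph edges with fixed taps, it sidesteps the deep multiplicative chains through TT cores that cause exploding or vanishing gradients.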