
Proceedings of the Third ACM International Conference on AI in Finance: Latest Publications

Dark-Pool Smart Order Routing: a Combinatorial Multi-armed Bandit Approach
Pub Date : 2022-10-26 DOI: 10.1145/3533271.3561728
Martino Bernasconi, S. Martino, Edoardo Vittori, F. Trovò, Marcello Restelli
We study the problem of developing a Smart Order Routing algorithm that learns how to optimize the dollar volume, i.e., the total value of the traded shares, gained from slicing an order across multiple dark pools. Our work is motivated by two distinct issues: (i) the surge in liquidity fragmentation caused by the rising popularity of electronic trading and by the increasing number of trading venues, and (ii) the growth in popularity of dark pools, an exchange venue characterised by a lack of transparency. This paper critically discusses the known dark pool literature and proposes a novel algorithm, namely the DP-CMAB algorithm, that extends existing solutions by allowing the agent to specify the desired limit price when placing orders. Specifically, we frame the problem of dollar volume optimization in a multi-venue setting as a Combinatorial Multi-Armed Bandit (CMAB) problem, representing a generalization of the well-studied MAB framework. Drawing from the rich MAB and CMAB literature, we present multiple strategies that our algorithm may adopt to select the best allocation options. Furthermore, we analyze how exploiting financial domain knowledge improves the agents’ performance. Finally, we evaluate the DP-CMAB performance in an environment built from real market data and show that our algorithm outperforms state-of-the-art solutions.
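The combinatorial allocation step described above can be illustrated with a toy loop. The sketch below is not the paper's DP-CMAB algorithm (it ignores limit prices and the paper's specific estimators); it is a minimal CUCB-style allocator over simulated dark pools, in which the number of venues, the fill rates, and the exploration constant are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_venues, order_size, horizon = 4, 100, 2000
# Invented ground truth: probability that a share routed to each dark pool gets filled.
true_fill_rate = np.array([0.15, 0.35, 0.55, 0.25])

counts = np.zeros(n_venues)      # rounds in which each venue received a slice
mean_fill = np.zeros(n_venues)   # empirical fill-rate estimates

for t in range(1, horizon + 1):
    # Optimistic (UCB) index per venue; unexplored venues get the maximal index 1.0.
    bonus = np.sqrt(1.5 * np.log(t + 1) / np.maximum(counts, 1))
    ucb = np.where(counts > 0, np.minimum(mean_fill + bonus, 1.0), 1.0)
    # Combinatorial step: split the parent order across venues in proportion to the index.
    alloc = np.floor(order_size * ucb / ucb.sum()).astype(int)
    # Observe the (simulated) executed volume at each venue.
    fills = rng.binomial(alloc, true_fill_rate)
    # Update the per-venue fill-rate estimates where an order was actually sent.
    sent = alloc > 0
    counts += sent
    frac = np.where(sent, fills / np.maximum(alloc, 1), 0.0)
    mean_fill += sent * (frac - mean_fill) / np.maximum(counts, 1)

print("estimated fill rates:", np.round(mean_fill, 3))
print("final allocation    :", alloc)
```

Venues with higher realised fill rates gradually receive larger slices of the parent order, which is the behaviour a dollar-volume objective rewards.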
Citations: 0
Market Making under Order Stacking Framework: A Deep Reinforcement Learning Approach
Pub Date : 2022-10-26 DOI: 10.1145/3533271.3561789
G. Chung, Munki Chung, Yongjae Lee, W. Kim
Market making is one of the most popular high-frequency trading strategies: a market maker continuously quotes on both the bid and ask sides of the limit order book to profit from capturing the bid-ask spread and to provide liquidity to the market. A market maker should consider three types of risk: 1) inventory risk, 2) adverse selection risk, and 3) non-execution risk. While there have been many studies on market making via deep reinforcement learning, most of them focus on the first risk. However, in highly competitive markets, controlling the latter two risks is essential for making a stable profit from market making. For better control of these risks, it is important for a market maker to secure good queue positions for its resting limit orders. For this purpose, practitioners frequently adopt an order stacking framework in which their limit orders are quoted at multiple price levels beyond the best limit price. To the best of our knowledge, there have been no studies that adopt an order stacking framework for market making. In this regard, we develop a deep reinforcement learning model for market making under the order stacking framework. We use a modified state representation to efficiently encode the queue positions of the resting limit orders. We conduct a comprehensive ablation study to show that, by utilizing deep reinforcement learning, a market making agent under the order stacking framework successfully learns to improve the P&L while reducing various risks. For the training and testing of our model, we use complete limit order book data of KOSPI200 Index Futures from November 1, 2019 to January 31, 2020, comprising 61 trading days.
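The "modified state representation" is not specified in the abstract, so the sketch below only illustrates the general idea of encoding the queue positions of stacked quotes; the field names, number of levels, and normalisation are assumptions for the example, not the paper's design.

```python
import numpy as np

def encode_stacked_quotes(book_depth, our_orders, n_levels=5):
    """Encode where our stacked limit orders sit in the queues of a limit order book.

    book_depth : (n_levels, 2) resting volume at each price level, column 0 = bid, 1 = ask.
    our_orders : list of dicts with keys 'side' (0=bid, 1=ask), 'level' (0 = best), and
                 'ahead' (volume queued in front of our order at that level).
    Returns a flat vector in [0, 1]: relative queue position per (level, side);
    1.0 means either back of the queue or no resting order at that slot.
    """
    feat = np.ones((n_levels, 2))
    for o in our_orders:
        depth = max(float(book_depth[o["level"], o["side"]]), 1.0)
        feat[o["level"], o["side"]] = min(o["ahead"] / depth, 1.0)
    return feat.ravel()

# Invented example: our bid at the best level is 30 lots deep in a 120-lot queue,
# and our ask two levels away has only 5 lots queued ahead of it.
book = np.array([[120, 90], [80, 100], [60, 70], [40, 55], [30, 25]], dtype=float)
orders = [{"side": 0, "level": 0, "ahead": 30},
          {"side": 1, "level": 2, "ahead": 5}]
print(encode_stacked_quotes(book, orders).round(2))
```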
Citations: 1
Guided Self-Training based Semi-Supervised Learning for Fraud Detection
Pub Date : 2022-10-26 DOI: 10.1145/3533271.3561783
Awanish Kumar, Soumyadeep Ghosh, Janu Verma
Semi-supervised learning has attracted the attention of AI researchers in the recent past, especially after the advent of deep learning methods and their success in several real-world applications. Most deep learning models require large amounts of labelled data, which is expensive to obtain. Fraud detection is a very important problem for several industries, and large amounts of data are often available. However, obtaining labelled data is cumbersome, and hence semi-supervised learning is perfectly positioned to aid us in building robust and accurate supervised models. In this work, we consider different kinds of fraud detection paradigms and show that a self-training based semi-supervised learning approach can produce significant improvements over a model that has been trained on a limited set of labelled data. We propose a novel self-training approach that uses a guided sharpening technique based on a pair of autoencoders, which provide useful cues for incorporating unlabelled data in the training process. We conduct thorough experiments and analysis on three different real-world databases to showcase the effectiveness of the approach. On the Elliptic Bitcoin fraud dataset, we show that utilizing unlabelled data improves the F1 score of the model trained on limited labelled data by around 10%.
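As a point of reference for the self-training idea (not the paper's guided sharpening with a pair of autoencoders), a minimal confidence-thresholded pseudo-labelling loop looks like this; the classifier, the synthetic dataset, and the 0.95 threshold are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a fraud table: a small labelled set and a large unlabelled pool.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_lab, X_unlab, y_lab, _ = train_test_split(X, y, train_size=0.05, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

for _ in range(5):                           # self-training rounds
    proba = clf.predict_proba(X_unlab)
    conf = proba.max(axis=1)
    keep = conf > 0.95                       # accept only high-confidence pseudo-labels
    if not keep.any():
        break
    X_lab = np.vstack([X_lab, X_unlab[keep]])
    y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    X_unlab = X_unlab[~keep]
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

print("labelled rows after self-training:", len(y_lab))
```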
Citations: 0
An Interpretable Deep Classifier for Counterfactual Generation
Pub Date : 2022-10-26 DOI: 10.1145/3533271.3561722
Wei Zhang, Brian Barr, J. Paisley
Counterfactual explanation has been at the core of interpretable machine learning, which requires a trained model to be able to not only infer but also justify its inference. This problem is crucial in many fields, such as fintech and the healthcare industry, where accurate decisions and their justifications are equally important. Many studies have leveraged the power of deep generative models for counterfactual generation. However, most focus on vision data and leave the latent space unsupervised. In this paper, we propose a new and general framework that uses a supervised extension to the Variational Auto-Encoder (VAE) with Normalizing Flow (NF) for simultaneous classification and counterfactual generation. We report experiments on two tabular financial datasets, Lending Club (LCD) and Give Me Some Credit (GMC), which show that the model can achieve state-of-the-art prediction accuracy while also producing meaningful counterfactual examples to interpret and justify the classifier's decision.
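A full supervised VAE with normalizing flows is beyond a short sketch, but the counterfactual-generation objective (flip the prediction while staying close to the input) can be shown on a toy logistic classifier; the weights, the proximity penalty lam, and the step size below are invented for the example.

```python
import numpy as np

# Toy stand-in for a trained credit classifier: a logistic model with fixed weights.
w, b = np.array([1.5, -2.0, 0.8]), -0.2
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, target=1.0, lam=0.1, lr=0.5, steps=300):
    """Find x' near x whose predicted class flips to `target`.

    Minimises BCE(f(x'), target) + lam * ||x' - x||^2 by plain gradient descent.
    """
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        grad = (p - target) * w + 2.0 * lam * (x_cf - x)   # dBCE/dx' plus proximity term
        x_cf -= lr * grad
    return x_cf

x = np.array([-1.0, 1.0, 0.0])        # a "rejected" applicant (predicted class 0)
x_cf = counterfactual(x)
print("original  p(accept):", round(float(sigmoid(w @ x + b)), 3))
print("counterfactual x'  :", np.round(x_cf, 3),
      "p(accept):", round(float(sigmoid(w @ x_cf + b)), 3))
```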
Citations: 2
Computationally Efficient Feature Significance and Importance for Predictive Models
Pub Date : 2022-10-26 DOI: 10.1145/3533271.3561713
Enguerrand Horel, K. Giesecke
We develop a simple and computationally efficient significance test for the features of a predictive model. Our forward-selection approach applies to any model specification, learning task and variable type. The test is non-asymptotic, straightforward to implement, and does not require model refitting. It identifies the statistically significant features as well as feature interactions of any order in a hierarchical manner, and generates a model-free notion of feature importance. This testing procedure can be used for model and variable selection. Experimental and empirical results illustrate its performance.
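The paper's test statistic is not reproduced here; as a rough, refit-free analogue of assessing feature relevance without model refitting, the sketch below measures how much a fitted model's test error grows when one feature is permuted, on a synthetic dataset in which only the first five features carry signal.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Friedman #1 data: only the first five of the eight features are informative.
X, y = make_friedman1(n_samples=2000, n_features=8, noise=1.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
base_err = np.mean((model.predict(X_te) - y_te) ** 2)
for j in range(X.shape[1]):
    errs = []
    for _ in range(20):                       # repeat the permutation to reduce noise
        Xp = X_te.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        errs.append(np.mean((model.predict(Xp) - y_te) ** 2))
    print(f"feature {j}: error increase when permuted = {np.mean(errs) - base_err:+.3f}")
```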
Citations: 1
Core Matrix Regression and Prediction with Regularization
Pub Date : 2022-10-26 DOI: 10.1145/3533271.3561709
D. Zhou, Ajim Uddin, Zuofeng Shang, C. Sylla, Dantong Yu
Many finance time-series analyses often track a matrix of variables at each time and study their co-evolution over a long time. The matrix time series is overly sparse, involves complex interactions among latent matrix factors, and demands advanced models to extract dynamic temporal patterns from these interactions. This paper proposes a Core Matrix Regression with Regularization algorithm (CMRR) to capture spatiotemporal relations in sparse matrix-variate time series. The model decomposes each matrix into three factor matrices of row entities, column entities, and interactions between row entities and column entities, respectively. Subsequently, it applies recurrent neural networks on interaction matrices to extract temporal patterns. Given the sparse matrix, we design an element-wise orthogonal matrix factorization that leverages the Stochastic Gradient Descent (SGD) in a deep learning platform to overcome the challenge of the sparsity and large volume of complex data. The experiment confirms that combining orthogonal matrix factorization with recurrent neural networks is highly effective and outperforms existing graph neural networks and tensor-based time series prediction methods. We apply CMRR in three real-world financial applications: firm earning forecast, predicting firm fundamentals, and firm characteristics, and demonstrate its consistent performance superiority: reducing error by 23%-53% over other state-of-the-art high-dimensional time series prediction algorithms.
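A faithful CMRR implementation (orthogonality constraints plus recurrent networks over the interaction factors) is out of scope here; the sketch below only shows the core idea of factorising a sparsely observed matrix as U S V^T by SGD over the observed cells. The matrix sizes, rank, learning rate, and regularisation strength are chosen arbitrarily for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols, rank = 50, 40, 5

# Invented sparse observation: a low-rank matrix with roughly 30% of entries observed.
truth = rng.normal(size=(n_rows, rank)) @ rng.normal(size=(rank, n_cols))
mask = rng.random((n_rows, n_cols)) < 0.3
obs = truth + 0.01 * rng.normal(size=truth.shape)

# Factorise X ~ U @ S @ V.T on observed entries only; S plays the role of the "core"
# interaction matrix between row and column factors.
U = 0.3 * rng.normal(size=(n_rows, rank))
V = 0.3 * rng.normal(size=(n_cols, rank))
S = np.eye(rank)
idx = np.argwhere(mask)
lr, lam = 0.01, 1e-3

for epoch in range(400):
    rng.shuffle(idx)
    for i, j in idx:
        e = obs[i, j] - U[i] @ S @ V[j]                 # residual on one observed cell
        gU = -e * (S @ V[j]) + lam * U[i]
        gV = -e * (S.T @ U[i]) + lam * V[j]
        gS = -e * np.outer(U[i], V[j]) + lam * S
        U[i] -= lr * gU
        V[j] -= lr * gV
        S -= lr * gS

pred = U @ S @ V.T
rmse = float(np.sqrt(np.mean((pred[~mask] - truth[~mask]) ** 2)))
print("RMSE on held-out entries:", round(rmse, 3))
```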
Citations: 0
Addressing Extreme Market Responses Using Secure Aggregation
Pub Date : 2022-10-26 DOI: 10.1145/3533271.3561776
Sahar Mazloom, Antigoni Polychroniadou, T. Balch
An investor short sells when he/she borrows a security and sells it on the open market, planning to buy it back later at a lower price. That said, short-sellers profit from a drop in the price of the security. If the shares of the security instead increase in price, short sellers can bear large losses. Short interest stock market data provide crucial information on short selling in the market for data mining, by publishing the number of shares that have been sold short. Short interest reports are compiled and published by the regulators at a high cost. In particular, brokers and market participants must report their positions on a daily basis to the Financial Industry Regulatory Authority (FINRA). Then, FINRA processes the data and provides aggregated feeds to potential clients at a high cost. Third-party data providers offer the same service at a lower cost, given that the brokers contribute their data to the aggregated data feeds. However, the aggregated feeds do not cover 100% of the market, since the brokers are not willing to submit and entrust their individual data to the data providers. Not to mention that brokers and market participants do not wish to reveal such information on a daily basis to a third party. In this paper, we show how to publish short interest stock market data using Secure Multiparty Computation: in our process, brokers and market participants submit to a data provider their short selling information, including the symbol of the security and its volume, in encrypted messages on a daily basis. The messages are encrypted in a way that the data provider cannot decrypt them and therefore cannot learn about any individual participant's input. Then, the data provider can compute an aggregation over the encrypted data and publish the aggregated volume per security. It is important to note that the individual volumes are not revealed to the data provider; only the aggregated volume is published.
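The cryptographic machinery of a production secure-aggregation protocol is not reproduced here; the sketch below simulates the basic pairwise-masking idea, in which each broker's report looks random on its own but the masks cancel in the sum, so the data provider learns only the aggregate. The broker volumes, modulus, and mask exchange are stand-ins for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
MOD = 2**61 - 1                        # arithmetic is done modulo a large prime

# Invented short-interest volumes (in shares) reported by three brokers for one symbol.
broker_volumes = [120_000, 45_500, 230_250]
n = len(broker_volumes)

# Each pair of brokers agrees on a random mask; the lower-indexed broker adds it and
# the higher-indexed broker subtracts it, so all masks cancel in the sum.
pair_masks = {(i, j): int(rng.integers(0, MOD)) for i in range(n) for j in range(i + 1, n)}

def masked_report(i):
    """What broker i sends to the data provider: its volume blinded by the pairwise masks."""
    m = broker_volumes[i]
    for (a, b), r in pair_masks.items():
        if a == i:
            m += r
        elif b == i:
            m -= r
    return m % MOD

reports = [masked_report(i) for i in range(n)]
aggregate = sum(reports) % MOD         # individual reports look random; only the total is learned
print("masked reports :", reports)
print("published total:", aggregate)
assert aggregate == sum(broker_volumes)
```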
Citations: 0
Asset Price and Direction Prediction via Deep 2D Transformer and Convolutional Neural Networks
Pub Date : 2022-10-26 DOI: 10.1145/3533271.3561738
Tuna Tuncer, Uygar Kaya, Emre Sefer, Onur Alacam, Tugcan Hoser
Artificial intelligence-based algorithmic trading has recently started to attract more attention. Among the techniques, deep learning-based methods such as transformers, convolutional neural networks, and patch embedding approaches have become quite popular among computer vision researchers. In this research, inspired by state-of-the-art computer vision methods, we propose two approaches, DAPP (Deep Attention-based Price Prediction) and DPPP (Deep Patch-based Price Prediction), based on vision transformers and patch embedding-based convolutional neural networks respectively, to predict asset price and direction from historical price data by capturing the image properties of the historical time-series dataset. Before applying the attention-based architecture, we transform the historical time-series price dataset into two-dimensional images using a number of different technical indicators. Each indicator creates data for a fixed number of days. Thus, we construct two-dimensional images of various dimensions. Then, we use the original images' valleys and hills to label each image as Hold, Buy, or Sell. We find that our trained attention-based models frequently provide better results for ETFs than the baseline convolutional architectures, in terms of both accuracy and financial analysis metrics, over longer testing periods. Our code and processed datasets are available at https://github.com/seferlab/SPDPvCNN
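The DAPP/DPPP networks themselves are not sketched here; the snippet below only illustrates the preprocessing step the abstract describes, turning a price series into a small indicators-by-days "image". The choice of indicators, look-backs, and scaling is an assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
closes = 100 * np.exp(np.cumsum(0.001 * rng.standard_normal(300)))   # synthetic close prices

def sma(x, n):       return x[-n:].mean()
def momentum(x, n):  return x[-1] - x[-n]
def rsi(x, n=14):
    d = np.diff(x[-(n + 1):])
    up, dn = d[d > 0].sum(), -d[d < 0].sum()
    return 100.0 if dn == 0 else 100.0 - 100.0 / (1.0 + up / dn)

def price_image(x, n_days=15):
    """Rows = technical indicators, columns = the last n_days trading days."""
    cols = [[sma(x[: t + 1], 10), momentum(x[: t + 1], 10), rsi(x[: t + 1])]
            for t in range(len(x) - n_days, len(x))]
    img = np.array(cols).T                                 # shape (n_indicators, n_days)
    lo, hi = img.min(axis=1, keepdims=True), img.max(axis=1, keepdims=True)
    return (img - lo) / np.maximum(hi - lo, 1e-9)          # row-wise min-max scaling

img = price_image(closes[:100])
print(img.shape)        # (3, 15): ready to be fed to a CNN or vision transformer
```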
Citations: 1
Portfolio Selection: A Statistical Learning Approach
Pub Date : 2022-10-26 DOI: 10.1145/3533271.3561707
Yiming Peng, V. Linetsky
We propose a new portfolio optimization framework, partially egalitarian portfolio selection (PEPS). Inspired by the celebrated LASSO regression, we regularize the mean-variance portfolio optimization by adding two regularizing terms that essentially zero out portfolio weights of some of the assets in the portfolio and select and shrink the portfolio weights of the remaining assets towards the equal weights to hedge against parameter estimation risk. We solve our PEPS formulations by applying recent advances in mixed integer optimization that allow us to tackle large-scale portfolio problems. We also build a predictive regression model for expected return using two cross-sectional factors, the short-term reversal factor and the medium-term momentum factor, that are shown to be the more significant predictive factors among the hundreds of factors tested in the empirical finance literature. We then incorporate our predictive regression into PEPS by replacing the historical mean. We test our PEPS formulations against an array of classical portfolio optimization strategies on a number of datasets in the US equity markets. The PEPS portfolios enhanced with the predictive regression estimates of the expected stock returns exhibit the highest out-of-sample Sharpe ratios in all instances.
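The paper's mixed-integer formulation is not reproduced here; the sketch below is a two-step heuristic in the spirit of "partially egalitarian" selection: zero out most assets, then shrink the survivors toward equal weights. The universe size, cardinality k, risk aversion gamma, and shrinkage delta are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, gamma, delta = 20, 8, 5.0, 0.5      # universe size, cardinality, risk aversion, shrinkage

mu = rng.normal(0.08, 0.05, n)            # invented expected returns
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n + 0.05 * np.eye(n)    # invented covariance matrix

# Step 1 (selection): keep the k assets with the highest estimated reward-to-risk ratio;
# every other asset gets a weight of exactly zero.
sel = np.argsort(mu / np.sqrt(np.diag(Sigma)))[-k:]

# Step 2 (egalitarian shrinkage): blend the selected assets' long-only mean-variance
# weights with the 1/k equal-weight portfolio to hedge against estimation error.
mu_s, Sig_s = mu[sel], Sigma[np.ix_(sel, sel)]
w_mv = np.maximum(np.linalg.solve(gamma * Sig_s, mu_s), 0.0)
w_mv = w_mv / w_mv.sum() if w_mv.sum() > 0 else np.ones(k) / k
w_sel = (1.0 - delta) * w_mv + delta * np.ones(k) / k

weights = np.zeros(n)
weights[sel] = w_sel
order = np.argsort(sel)
print("selected assets:", sel[order])
print("their weights  :", np.round(w_sel[order], 3))
```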
Citations: 1
Achieving Mean–Variance Efficiency by Continuous-Time Reinforcement Learning
Pub Date : 2022-10-26 DOI: 10.1145/3533271.3561760
Yilie Huang, Yanwei Jia, X. Zhou
We conduct an extensive empirical analysis to evaluate the performance of the recently developed reinforcement learning algorithms by Jia and Zhou [11] in asset allocation tasks. We propose an efficient implementation of the algorithms in a dynamic mean-variance portfolio selection setting. We compare it with the conventional plug-in estimator and two state-of-the-art deep reinforcement learning algorithms, deep deterministic policy gradient (DDPG) and proximal policy optimization (PPO), with both simulated and real market data. On both data sets, our algorithm significantly outperforms the others. In particular, using the US stocks data from Jan 2000 to Dec 2019, we demonstrate the effectiveness of our algorithm in reaching the target return and maximizing the Sharpe ratio for various periods under consideration, including the period of the financial crisis in 2007-2008. By contrast, the plug-in estimator performs poorly on real data sets, and PPO performs better than DDPG but still has lower Sharpe ratio than the market. Our algorithm also outperforms two well-diversified portfolios: the market and equally weighted portfolios.
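The continuous-time RL algorithms of Jia and Zhou are not implemented here; the sketch below is only a minimal harness for the kind of out-of-sample comparison the abstract reports, contrasting a plug-in mean-variance portfolio with an equally weighted one by Sharpe ratio on simulated returns. The asset count, return parameters, and leverage normalisation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_days = 5, 252 * 8
mu_true = rng.uniform(0.02, 0.12, n_assets) / 252          # invented daily expected returns
A = 0.01 * rng.normal(size=(n_assets, n_assets))
Sigma_true = A @ A.T + 1e-5 * np.eye(n_assets)              # invented daily covariance
rets = rng.multivariate_normal(mu_true, Sigma_true, size=n_days)

train, test = rets[: n_days // 2], rets[n_days // 2:]

def sharpe(r):
    """Annualised Sharpe ratio of a daily return series (zero risk-free rate)."""
    return np.sqrt(252) * r.mean() / r.std()

# Plug-in mean-variance weights estimated on the training window, scaled to unit gross exposure.
mu_hat, Sig_hat = train.mean(axis=0), np.cov(train.T)
w_plugin = np.linalg.solve(Sig_hat, mu_hat)
w_plugin /= np.abs(w_plugin).sum()

w_eq = np.ones(n_assets) / n_assets                          # equally weighted benchmark

for name, w in [("plug-in MV  ", w_plugin), ("equal weight", w_eq)]:
    print(f"{name} out-of-sample Sharpe: {sharpe(test @ w):.2f}")
```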
Citations: 1