The aim of this paper is to study a new methodological framework for systemic risk measures by applying deep learning methods as a tool to compute the optimal strategy of capital allocations. Under this new framework, systemic risk measures can be interpreted as the minimal amount of cash that secures the aggregated system by allocating capital to the individual institutions before aggregating the individual risks. This problem has no explicit solution except in very limited situations. Deep learning is increasingly receiving attention in financial modeling and risk management, and we propose deep-learning-based algorithms to solve both the primal and dual problems of the risk measures, and thus to learn the fair risk allocations. In particular, our method for the dual problem involves a training philosophy inspired by the well-known Generative Adversarial Networks (GAN) approach and a newly designed direct estimation of the Radon-Nikodym derivative. We close the paper with substantial numerical studies of the subject and provide interpretations of the risk allocations associated with the systemic risk measures. In the particular case of exponential preferences, numerical experiments demonstrate the excellent performance of the proposed algorithm when compared with the optimal explicit solution as a benchmark.
"Deep Learning for Systemic Risk Measures." Yichen Feng, Ming Min, J. Fouque. Proceedings of the Third ACM International Conference on AI in Finance, 2022-07-02. DOI: 10.1145/3533271.3561669
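For the exponential-preference benchmark mentioned in the abstract, the minimal total capital has a closed form that is easy to check by Monte Carlo. A minimal sketch (the three-institution Gaussian model, risk aversion `alpha`, and acceptance threshold `B` are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, B = 1.0, 1.0                          # risk aversion and acceptance threshold (assumed)
X = rng.normal(0.0, 0.5, size=(100_000, 3))  # simulated P&L of three institutions

# Acceptability of the secured system: E[exp(-alpha * (sum_i X_i + m))] <= B.
# Solving for the smallest such m gives the risk measure in closed form:
#   m = (1/alpha) * log(E[exp(-alpha * sum_i X_i)] / B)
m = np.log(np.exp(-alpha * X.sum(axis=1)).mean() / B) / alpha
```

With independent N(0, 0.5^2) positions the exact value is 3 * 0.25 / 2 = 0.375, so the Monte Carlo estimate should land close to that; it is exactly this kind of explicit solution that serves as the benchmark for the deep learning algorithm.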
Z. Fan, F. J. Cossío, B. Altschuler, He Sun, Xintong Wang, D. Parkes
Decentralized exchanges (DEXs) provide a means for users to trade pairs of assets on-chain without the need for a trusted third party to effectuate a trade. Amongst these, constant function market maker (CFMM) DEXs such as Uniswap handle the largest volume of trades between ERC-20 tokens. With the introduction of Uniswap v3, liquidity providers are given the option to differentially allocate liquidity to be used for trades that occur within specific price intervals. In this paper, we formalize the profit and loss that liquidity providers can earn when providing specific liquidity positions to a contract. With this in hand, we are able to compute optimal liquidity allocations for liquidity providers who hold beliefs over how prices evolve over time. Ultimately, we use this tool to shed light on the design question of how v3 contracts should partition price space for permissible liquidity allocations. Our results show that a richer space of potential partitions can simultaneously benefit both liquidity providers and traders.
"Differential Liquidity Provision in Uniswap v3 and Implications for Contract Design." Proceedings of the Third ACM International Conference on AI in Finance, 2022-04-01. DOI: 10.1145/3533271.3561775
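The profit-and-loss formalization rests on how a v3 position's holdings depend on price. A hedged sketch using the standard Uniswap v3 reserve formulas for a position with liquidity L on a price interval [pa, pb] (the liquidity value, range, and price move below are illustrative numbers, not taken from the paper):

```python
import math

def v3_position_value(L, pa, pb, p):
    """Value (in the quote asset) of a Uniswap v3 liquidity position
    with liquidity L on the price interval [pa, pb] at spot price p."""
    sa, sb = math.sqrt(pa), math.sqrt(pb)
    if p <= pa:                      # entirely in the base asset
        x, y = L * (1 / sa - 1 / sb), 0.0
    elif p >= pb:                    # entirely in the quote asset
        x, y = 0.0, L * (sb - sa)
    else:                            # mixed holdings inside the range
        sp = math.sqrt(p)
        x, y = L * (1 / sp - 1 / sb), L * (sp - sa)
    return x * p + y

# P&L (ignoring fees) of holding the position while the price moves 1.0 -> 1.1
pnl = v3_position_value(100.0, 0.8, 1.25, 1.1) - v3_position_value(100.0, 0.8, 1.25, 1.0)
```

Outside the range the position sits entirely in one asset, so its quote-asset exposure changes discontinuously at the interval boundaries, which is exactly what makes the choice of interval an allocation decision.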
We introduce a method for pricing consumer credit using recent advances in offline deep reinforcement learning. This approach relies on a static dataset and, unlike commonly used pricing approaches, requires no assumptions on the functional form of demand. Using both real and synthetic data on consumer credit applications, we demonstrate that our approach, using the conservative Q-learning algorithm, is capable of learning an effective personalized pricing policy without any online interaction or price experimentation. In particular, using historical data on online auto loan applications, we estimate an increase in expected profit of 21% with a less than 15% average change in prices relative to the original pricing policy.
"Offline Deep Reinforcement Learning for Dynamic Pricing of Consumer Credit." Raad Khraishi, Ramin Okhrati. Proceedings of the Third ACM International Conference on AI in Finance, 2022-03-06. DOI: 10.1145/3533271.3561682
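Conservative Q-learning works offline by penalizing Q-values for actions that are rare in the static dataset, which is what makes pricing without experimentation viable. A toy single-state (bandit-style) sketch of that mechanism; the price grid, profit function, and logging policy are invented for illustration and are not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
prices = np.array([1.0, 1.2, 1.4])          # candidate price points (illustrative)
true_profit = np.array([0.30, 0.45, 0.35])  # unknown expected profit at each price

# offline log: quotes cluster at the middle price, no online experimentation
logged_a = rng.choice(3, size=2000, p=[0.1, 0.8, 0.1])
logged_r = true_profit[logged_a] + rng.normal(0, 0.05, size=2000)

Q = np.zeros(3)
lr, cql_alpha = 0.05, 0.5
for a, r in zip(logged_a, logged_r):
    # gradient of a one-step CQL loss: the conservative term (softmax over Q)
    # pushes all Q-values down, while the logged action is pushed back up
    # and regressed toward its observed profit
    soft = np.exp(Q - Q.max())
    grad = cql_alpha * soft / soft.sum()
    grad[a] += (Q[a] - r) - cql_alpha
    Q -= lr * grad

best_price = prices[int(np.argmax(Q))]
```

Because the conservative term depresses under-observed prices, the learned policy stays close to the well-supported region of the log instead of extrapolating optimistically.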
We extend the research into cross-sectional momentum trading strategies. Our main result is our novel ranking algorithm, the naive Bayes asset ranker (nbar), which we use to select subsets of assets to trade from the S&P 500 index. We perform feature representation transfer from radial basis function networks to a curds and whey (caw) multivariate regression model that takes advantage of the correlations between the response variables to improve predictive accuracy. The nbar ranks this regression output by forecasting the one-step-ahead sequential posterior probability that individual assets will be ranked higher than other portfolio constituents. Earlier algorithms, such as weighted majority, deal with nonstationarity by ensuring that the weight assigned to each expert never falls below a minimum threshold, but without ever increasing a weight once it has been reduced. Our ranking algorithm instead allows experts who previously performed poorly to regain weight when they start performing well. Our algorithm outperforms a strategy that would hold the long-only S&P 500 index with hindsight, despite the index appreciating by 205% during the test period. It also outperforms a regress-then-rank baseline, the caw model.
"Sequential asset ranking in nonstationary time series." Gabriel Borrageiro, Nikan B. Firoozye, P. Barucca. Proceedings of the Third ACM International Conference on AI in Finance, 2022-02-24. DOI: 10.1145/3533271.3561666
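The sequential-posterior idea can be illustrated with a much-simplified Beta-Bernoulli ranker: track, for each asset, the posterior probability that it beats the cross-sectional median, updating after every period. This is only a sketch of the flavour of nbar (the actual algorithm ranks the caw regression forecasts), and the return data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 250, 4
r = rng.normal(0.0, 0.01, size=(T, n))   # synthetic daily returns
r[:, 0] += 0.005                         # asset 0 has a small positive drift

wins = np.zeros(n)
for t in range(T):
    wins += r[t] > np.median(r[t])       # did the asset beat the cross-section today?
post = (wins + 1) / (T + 2)              # Beta(wins+1, losses+1) posterior mean

ranking = np.argsort(-post)              # trade the top-ranked assets
```

Because the posterior mean moves in both directions, an asset whose score collapsed during a bad stretch can recover, the property the abstract contrasts with weighted-majority-style updates.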
David Byrd, Vaikkunth Mugunthan, Antigoni Polychroniadou, T. Balch
Federated learning enables a population of distributed clients to jointly train a shared machine learning model with the assistance of a central server. The finance community has shown interest in its potential to allow inter-firm and cross-silo collaborative models for problems of common interest (e.g. fraud detection), even when customer data use is heavily regulated. Prior works on federated learning have employed cryptographic techniques to keep individual client model parameters private even when the central server is not trusted. However, there is an important gap in the literature: efficient protection against attacks in which other parties collude to expose an honest client’s model parameters, and therefore potentially protected customer data. We aim to close this collusion gap by presenting an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion, including the “Sybil” attack in which a server generates or selects compromised client devices to gain additional information. We leverage this novel privacy mechanism to construct an improved secure federated learning protocol and prove the security of that protocol. We conclude with empirical analysis of the protocol’s execution speed, learning accuracy, and privacy performance on two data sets within a realistic simulation of 5,000 distributed network clients.
"Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy." Proceedings of the Third ACM International Conference on AI in Finance, 2022-02-20. DOI: 10.1145/3533271.3561754
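The general mechanism class the paper builds on can be sketched in a few lines: pairwise masks hide each client's update from the server but cancel in the sum, while each client contributes a share of the overall Gaussian differential-privacy noise. This is an illustration of mask-based secure aggregation with distributed noise, not the paper's oblivious, collusion-resistant protocol; all sizes and noise levels are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n_clients, dim = 5, 8
updates = rng.normal(0, 1, size=(n_clients, dim))   # local model updates

# pairwise additive masks that cancel in the sum (secure-aggregation style)
masks = np.zeros((n_clients, dim))
for i in range(n_clients):
    for j in range(i + 1, n_clients):
        m = rng.normal(0, 10, size=dim)
        masks[i] += m
        masks[j] -= m

# each client adds its own share of the distributed Gaussian DP noise
sigma = 0.1
shares = rng.normal(0, sigma / np.sqrt(n_clients), size=(n_clients, dim))
noisy = updates + masks + shares

aggregate = noisy.sum(axis=0)   # masks cancel; only the small DP noise survives
```

Each masked upload looks like noise on its own, yet the server recovers the (differentially private) sum exactly because every mask appears once with each sign.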
The introduction of electronic trading platforms effectively changed the organisation of traditional systematic trading from quote-driven markets into order-driven markets. Its convenience led to an exponentially increasing amount of financial data, which is however hard to use for the prediction of future prices, due to the low signal-to-noise ratio and the non-stationarity of financial time series. Simpler classification tasks — where the goal is to predict the directions of future price movement via supervised learning algorithms — need sufficiently reliable labels to generalise well. Labelling financial data is however less well defined than in other domains: did the price go up because of noise or because of a signal? Existing labelling methods have limited countermeasures against noise, as well as limited effects in improving learning algorithms. This work takes inspiration from image classification in trading [6] and from the success of self-supervised learning in computer vision (e.g., [16]). We investigate the idea of applying these techniques to financial time series to reduce the noise exposure and hence generate correct labels. We look at label generation as the pretext task of a self-supervised learning approach and compare the naive (and noisy) labels, commonly used in the literature, with the labels generated by a denoising autoencoder for the same downstream classification task. Our results demonstrate that these denoised labels improve the performance of the downstream learning algorithm, for both small and large datasets, while preserving the market trends. These findings suggest that, with our proposed techniques, self-supervised learning constitutes a powerful framework for generating “better” financial labels that are useful for studying the underlying patterns of the market.
"Denoised Labels for Financial Time Series Data via Self-Supervised Learning." Yanqing Ma, Carmine Ventre, M. Polukarov. Proceedings of the Third ACM International Conference on AI in Finance, 2021-12-19. DOI: 10.1145/3533271.3561687
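The naive-versus-denoised label comparison can be mimicked with a toy experiment: label the direction of the next move on the raw series versus on a denoised one. Here a simple moving average stands in for the trained denoising autoencoder, and the trending price path is synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 500
# slow upward drift buried in much larger per-step noise
price = 100 + np.cumsum(np.full(T, 0.1)) + rng.normal(0, 1.0, T)

# naive labels: direction of the next raw price move
naive = np.sign(np.diff(price))

# "denoised" labels: direction of the next move of a smoothed series
w = 20
smooth = np.convolve(price, np.ones(w) / w, mode="valid")
denoised = np.sign(np.diff(smooth))
```

On this path the denoised labels agree with the true upward trend far more often than the naive ones, which are dominated by per-step noise, the same effect the paper exploits with a learned denoiser.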
Multi-agent simulation is commonly used across multiple disciplines and, in recent years, especially in artificial intelligence, where it creates an environment for downstream machine learning or reinforcement learning tasks. In many practical scenarios, however, only the output series that result from the interactions of simulation agents are observable. Therefore, simulators need to be calibrated so that the simulated output series resemble the historical ones – which amounts to solving a complex simulation optimization problem. In this paper, we propose a simple and efficient framework for calibrating simulator parameters from historical output series observations. First, we consider the novel concept of an eligibility set to bypass the potential non-identifiability issue. Second, we generalize the two-sample Kolmogorov-Smirnov (K-S) test with Bonferroni correction to test the similarity between two high-dimensional distributions, which gives a simple yet effective distance metric between the output series sample sets. Third, we suggest using Bayesian optimization (BO) and trust-region BO (TuRBO) to minimize the aforementioned distance metric. Finally, we demonstrate the efficiency of our framework using numerical experiments on a multi-agent financial market simulator.
"Efficient Calibration of Multi-Agent Simulation Models from Output Series with Bayesian Optimization." Yuanlu Bai, H. Lam, T. Balch, Svitlana Vyetrenko. Proceedings of the Third ACM International Conference on AI in Finance, 2021-12-03. DOI: 10.1145/3533271.3561755
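The distance metric in the second step is easy to prototype: a per-dimension two-sample K-S statistic with Bonferroni-corrected asymptotic p-values, minimized over candidate parameters. In this sketch a plain grid search stands in for BO/TuRBO, and the one-parameter Gaussian "simulator" is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulator(sigma, n=4000):
    # stand-in simulator: each run's output series summarized as 2 features
    return rng.normal(0.0, sigma, size=(n, 2))

def ks_stat(a, b):
    # two-sample Kolmogorov-Smirnov statistic (max gap between empirical CDFs)
    pts = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), pts, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), pts, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

def distance(sim, obs):
    # max K-S statistic across dimensions, plus asymptotic p-values with a
    # Bonferroni correction for testing every dimension
    d = obs.shape[1]
    stats = [ks_stat(sim[:, k], obs[:, k]) for k in range(d)]
    n_eff = len(sim) * len(obs) / (len(sim) + len(obs))
    p_adj = min(1.0, d * min(2.0 * np.exp(-2.0 * n_eff * s**2) for s in stats))
    return max(stats), p_adj

observed = simulator(1.0)               # "historical" output, true sigma = 1.0
candidates = [0.5, 0.7, 1.0, 1.5, 2.0]  # grid search stands in for BO/TuRBO
best = min(candidates, key=lambda s: distance(simulator(s), observed)[0])
```

Replacing the final grid search with a surrogate-based optimizer is exactly where BO and TuRBO enter in the paper's framework.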
The task we consider is portfolio construction in a speculative market, a fundamental problem in modern finance. While various empirical works now exist exploring deep learning in finance, the theory side is almost non-existent. In this work, we focus on developing a theoretical framework for understanding the use of data augmentation for deep-learning-based approaches to quantitative finance. The proposed theory clarifies the role and necessity of data augmentation for finance; moreover, our theory implies that a simple algorithm of injecting random noise of a given strength into the observed return rt is better than not injecting any noise, and better than a few other financially irrelevant data augmentation techniques.
"Theoretically Motivated Data Augmentation and Regularization for Portfolio Construction." Liu Ziyin, Kentaro Minami, Kentaro Imajo. Proceedings of the Third ACM International Conference on AI in Finance, 2021-06-08. DOI: 10.1145/3533271.3561720
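The noise-injection augmentation the theory supports is essentially one line of numpy. A hedged sketch (the return panel is synthetic, and the noise strength and number of copies are arbitrary choices, not values prescribed by the paper):

```python
import numpy as np

rng = np.random.default_rng(6)
returns = rng.normal(0.0005, 0.01, size=(252, 10))   # observed daily returns (synthetic)

def augment(r, noise_strength=0.005, copies=4, rng=rng):
    """Replicate the return panel, adding i.i.d. Gaussian noise of the
    given strength to each copy (simple noise-injection augmentation)."""
    reps = [r + rng.normal(0, noise_strength, size=r.shape) for _ in range(copies)]
    return np.concatenate([r] + reps, axis=0)

train = augment(returns)   # enlarged training set with the same mean structure
```

Because the injected noise has zero mean, the augmented panel preserves the average return while regularizing the downstream model, which is the behaviour the theory singles out against "financially irrelevant" augmentations.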
Anastasios Petropoulos, Vassilis Siakoulis, Konstantinos P. Panousis, T. Christophides, S. Chatzis
In the aftermath of the financial crisis, supervisory authorities have considerably altered the mode of operation of financial stress testing. Despite these efforts, significant concerns and extensive criticism have been raised by market participants regarding the unrealistic methodological assumptions and simplifications involved. Current stress testing methodologies attempt to simulate the risks underlying a financial institution’s balance sheet by using several satellite models. This renders their integration a genuinely challenging task, leading to significant estimation errors. Moreover, advanced statistical techniques that could potentially capture the non-linear nature of adverse shocks are still ignored. This work aims to address these criticisms and shortcomings by proposing a novel approach based on recent advances in Deep Learning towards a principled method for Dynamic Balance Sheet Stress Testing. Experimental results on a newly collected financial/supervisory dataset provide strong empirical evidence that our paradigm significantly outperforms traditional approaches; thus, it is capable of more accurately and efficiently simulating real-world scenarios.
"A Deep Learning Approach for Dynamic Balance Sheet Stress Testing." Proceedings of the Third ACM International Conference on AI in Finance, 2020-09-23. DOI: 10.1145/3533271.3561656