Apostolos G. Katsafados, Ion Androutsopoulos, Ilias Chalkidis, Manos Fergadiotis, George N. Leledakis, Emmanouil G. Pyrgiotakis
This study examines the predictive power of textual information from S-1 filings in explaining initial public offering (IPO) underpricing. The authors’ approach differs from previous research in that they use several machine learning algorithms to predict both whether an IPO will be underpriced and the magnitude of the underpricing. Using a sample of 2,481 US IPOs, they find that textual information can effectively complement financial variables in terms of prediction accuracy, because models that use both sources of data produce more accurate estimates. In particular, the best-performing model using only financial variables achieves 67.5% accuracy, whereas the best model with both textual and financial data delivers a substantial improvement (6.1%). Moreover, the use of sophisticated machine learning models increases predictive accuracy by 2.5% relative to the traditional logistic regression model. The authors attribute these findings to the fact that textual information can reduce the ex ante valuation uncertainty of IPO firms. Finally, they create a portfolio of IPOs based on the out-of-sample machine learning predictions, which achieves a remarkable 27.90% average return. The portfolio earns sizable abnormal returns over various horizons (both in the short and long run), outperforming the benchmark by up to 30%.
"Textual Information and IPO Underpricing: A Machine Learning Approach," Apostolos G. Katsafados, Ion Androutsopoulos, Ilias Chalkidis, Manos Fergadiotis, George N. Leledakis, Emmanouil G. Pyrgiotakis. The Journal of Financial Data Science, 2023-03-14. DOI: 10.3905/jfds.2023.1.121
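The core modeling idea above, combining sparse text features from filings with dense financial variables in a single classifier, can be sketched as follows. This is a minimal illustration on synthetic data; the texts, feature choices, and model are stand-ins, not the authors' dataset or pipeline.

```python
# Hypothetical sketch: classify IPOs as underpriced or not using S-1 text
# features concatenated with financial variables. All data are synthetic.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy S-1 snippets and two financial variables (e.g., offer size, firm age).
texts = ["risk factors uncertain revenue", "established profitable firm",
         "novel unproven technology risk", "stable cash flows dividends"] * 25
financials = rng.normal(size=(100, 2))
underpriced = rng.integers(0, 2, size=100)  # binary label: 1 = underpriced

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(texts)
# Concatenate sparse text features with the dense financial features.
X_both = hstack([X_text, csr_matrix(financials)]).tocsr()

clf = LogisticRegression(max_iter=1000).fit(X_both, underpriced)
in_sample_acc = clf.score(X_both, underpriced)
```

In practice one would evaluate out of sample, as the article does; the sketch only shows the feature-fusion step.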
Advances in machine learning (ML) are having a profound influence on many fields. In this article, the authors present a curated version of a panel discussion that they moderated at Applied Machine Learning Days 2022 on the impact of recent advancements in ML on decision making, data-driven analysis, and time-series modeling in finance. The panel consisted of industry and academic experts in finance and ML: Robert Almgren, Matthew Dixon, Lisa Huang, Fabrizio Lillo, Mathieu Rosenbaum, and Nicholas Westray. In the discussions with the panelists, the authors focused on (1) recent developments in deep learning, such as transformers and physics-informed neural networks; (2) common misconceptions and challenges in applying ML in finance; and (3) opportunities and new research directions.
"Advances of Machine Learning Approaches for Financial Decision Making and Time-Series Analysis: A Panel Discussion," Nino Antulov-Fantulin, Petter N. Kolm. The Journal of Financial Data Science, 2023-03-14. DOI: 10.3905/jfds.2023.1.123
Making reliable causal inferences is integral to both explaining past events and forecasting the future. Although there are various theories of economic causality, machine learning techniques for causal inference have not yet been widely adopted within finance. One recently developed framework, double machine learning, is an approach to causal inference specifically designed to correct for bias in statistical analysis. In doing so, it allows for a more precise evaluation of treatment effects in the presence of confounders. In this article, the author uses double machine learning to study market contagion. He defines the treatment as the weekly return of the S&P 500 Index falling below a specific threshold and the outcome as the weekly return of a single major non-US market. In analyzing each non-US market, the other non-US markets under consideration serve as confounders. The author presents two case studies. In the first, outcomes are observed in the same week as the treatment; in the second, in the week after. His results show that, in the first case study, sizable and statistically significant contagion effects are observed, although somewhat diluted by the presence of confounders. In contrast, in the second case study, more ambiguous contagion effects are observed and the level of statistical significance is measurably lower than in the first case study, indicating that contagion effects are most clearly transmitted in the same week that the dislocation in the S&P 500 occurs.
"A Causal Analysis of Market Contagion: A Double Machine Learning Approach," Joseph Simonian. The Journal of Financial Data Science, 2023-03-14. DOI: 10.3905/jfds.2023.1.122
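The bias-correction mechanism behind double machine learning can be illustrated with its partialling-out step: fit ML models for the outcome and the treatment given the confounders, then regress residual on residual. The sketch below uses a synthetic linear setup loosely mirroring the article's variables (Y a non-US market return, D a treatment proxy from S&P 500 returns, X confounding markets); the variable names and data are illustrative, not the author's.

```python
# Minimal partialling-out sketch of double machine learning (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 4))                                   # confounders: other markets
D = X @ np.array([0.5, -0.3, 0.2, 0.1]) + rng.normal(scale=0.5, size=n)  # treatment
theta_true = 0.8                                              # true treatment effect
Y = theta_true * D + X @ np.array([0.4, 0.4, -0.2, 0.3]) + rng.normal(scale=0.5, size=n)

# Cross-fitted nuisance estimates of E[Y|X] and E[D|X].
m_hat = cross_val_predict(RandomForestRegressor(n_estimators=50, random_state=0), X, Y, cv=2)
g_hat = cross_val_predict(RandomForestRegressor(n_estimators=50, random_state=0), X, D, cv=2)

# Regress the residualized outcome on the residualized treatment.
Y_res, D_res = Y - m_hat, D - g_hat
theta_hat = (D_res @ Y_res) / (D_res @ D_res)
```

Because the confounders are partialled out of both variables, `theta_hat` is an approximately unbiased estimate of the treatment effect even though the nuisance models are flexible ML fits.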
In this article, the authors present a new deep trend-following strategy that selectively buys constituents of the S&P 500 Index that are estimated to be trending upward. To this end, they construct a binary momentum indicator based on a recursive algorithm and then train a convolutional neural network combined with a long short-term memory model to classify periods defined as upward trends. The strategy, which can be used as an alternative to traditional quantitative momentum ranking models, generates returns of up to 27.3% per annum over the out-of-sample period from January 2010 to December 2019 and achieves a Sharpe ratio of 1.3 after accounting for transaction costs on daily data. The authors show that volatility scaling can further improve the risk–return profile and lower the maximum drawdown of the strategy.
"A Deep Trend-Following Trading Strategy for Equity Markets," P. Eggebrecht, E. Lütkebohmert. The Journal of Financial Data Science, 2023-03-09. DOI: 10.3905/jfds.2023.1.120
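The volatility-scaling overlay mentioned in the abstract is a standard technique and easy to sketch: lever the strategy's daily return by the ratio of a target volatility to trailing realized volatility, with a leverage cap. The numbers and cap below are illustrative, not the authors' parameters.

```python
# Illustrative volatility targeting on a synthetic daily return series.
import numpy as np

rng = np.random.default_rng(2)
raw_returns = rng.normal(loc=0.0005, scale=0.02, size=1000)  # daily strategy returns

target_vol = 0.01   # daily volatility target (assumed for illustration)
window = 60         # lookback for realized volatility

scaled = np.empty_like(raw_returns)
scaled[:window] = raw_returns[:window]          # warm-up: unscaled
for t in range(window, len(raw_returns)):
    realized = raw_returns[t - window:t].std()  # trailing realized vol
    leverage = min(target_vol / realized, 2.0)  # cap leverage at 2x
    scaled[t] = leverage * raw_returns[t]

post_vol = scaled[window:].std()                # should sit near target_vol
```

Scaling down in turbulent periods is what lowers the maximum drawdown; the cap prevents excessive leverage in calm periods.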
Meta-labeling is a recently developed tool for determining the position size of a trade. It involves applying a secondary model to produce an output that can be interpreted as the estimated probability of a profitable trade, which can then be used to size positions. Before sizing the position, probability calibration can be applied to bring the model’s estimates closer to true posterior probabilities. This article investigates the use of these estimated probabilities, both uncalibrated and calibrated, in six position sizing algorithms. The algorithms used in this article include established methods used in practice and variations thereon, as well as a novel method called sigmoid optimal position sizing. The position sizing methods are evaluated and compared using strategy metrics such as the Sharpe ratio and maximum drawdown. The results indicate that the performance of fixed position sizing methods is significantly improved by calibration, whereas methods that estimate their functions from the training data do not gain any significant advantage from probability calibration.
"Meta-Labeling: Calibration and Position Sizing," Michael Meyer, Illya Barziy, J. Joubert. The Journal of Financial Data Science, 2023-03-08. DOI: 10.3905/jfds.2023.1.119
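The meta-labeling pipeline described above, a secondary model estimating the probability that a trade is profitable, followed by calibration and a sizing rule, can be sketched as below. The data are synthetic, and the linear sizing rule is one simple fixed choice, not necessarily the article's "sigmoid optimal position sizing."

```python
# Sketch of meta-labeling with Platt-style probability calibration (synthetic data).
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 5))                                    # features of primary-model signals
y = (X[:, 0] + rng.normal(scale=1.0, size=500) > 0).astype(int)  # 1 = trade was profitable

# Secondary (meta) model with sigmoid calibration of its probabilities.
meta = CalibratedClassifierCV(RandomForestClassifier(n_estimators=50, random_state=0),
                              method="sigmoid", cv=3)
meta.fit(X, y)
p = meta.predict_proba(X)[:, 1]          # calibrated P(profitable trade)

# Fixed linear sizing rule: no position below p = 0.5, full size at p = 1.
size = np.clip(2.0 * (p - 0.5), 0.0, 1.0)
```

Calibration matters for fixed rules like this one precisely because the rule consumes the probability directly rather than re-fitting a mapping on training data.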
The question of how to diversify an investment portfolio is one with many possible answers. Over the past couple of years, the industry and academic literature have been shifting focus from an asset-driven answer to a factor-driven one, sparking special interest in the use of implicit factors identified through unsupervised learning. However, issues around the stability and implementation of these factors, in the context of diversification, have left a gap between an academic exercise and an implementable methodology. This article aims to fill this gap by presenting a diversification-focused portfolio construction methodology that takes advantage of singular value decomposition to identify implicit factors and uses hierarchical agglomerative clustering to address some of the challenges surrounding its implementation. In out-of-sample Monte Carlo simulations, this methodology provides better risk-adjusted performance than other commonly used portfolio diversification approaches.
"Diversified Spectral Portfolios: An Unsupervised Learning Approach to Diversification," Francisco A. Ibanez. The Journal of Financial Data Science, 2023-03-07. DOI: 10.3905/jfds.2023.1.118
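The two building blocks named in the abstract, SVD-based implicit factors and hierarchical agglomerative clustering, can be sketched together as follows. The returns matrix is synthetic and the pipeline is a simplification, not the article's full methodology.

```python
# Sketch: implicit factors via SVD of a returns panel, then hierarchical
# clustering of assets by their factor loadings (synthetic data).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(4)
T, N, K = 250, 20, 3                       # days, assets, latent factors
factors = rng.normal(size=(T, K))
exposures = rng.normal(size=(K, N))
returns = factors @ exposures + rng.normal(scale=0.5, size=(T, N))

# SVD of demeaned returns: right singular vectors give implicit factor loadings.
R = returns - returns.mean(axis=0)
U, s, Vt = np.linalg.svd(R, full_matrices=False)
loadings = Vt[:K].T                        # N x K loading matrix

# Ward-linkage agglomerative clustering on loadings to form diversification groups.
labels = fcluster(linkage(loadings, method="ward"), t=4, criterion="maxclust")
```

Assets sharing a cluster load on similar implicit factors, so spreading risk across clusters is a proxy for spreading risk across factors.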
Pub Date: 2023-02-20. DOI: 10.48550/arXiv.2302.10175
Wee Ling Tan, S. Roberts, S. Zohren
The authors introduce spatio-temporal momentum strategies, a class of models that unify time-series and cross-sectional momentum strategies by trading assets based on their cross-sectional momentum features over time. Although both time-series and cross-sectional momentum strategies are designed to systematically capture momentum risk premiums, these strategies are regarded as distinct implementations and do not consider the concurrent relationship and predictability between temporal and cross-sectional momentum features of different assets. The authors model spatio-temporal momentum with neural networks of varying complexities and demonstrate that a simple neural network with only a single fully connected layer learns to simultaneously generate trading signals for all assets in a portfolio by incorporating both their time-series and cross-sectional momentum features. Backtesting on portfolios of 46 actively traded US equities and 12 equity index futures contracts, they demonstrate that the model retains its performance over benchmarks in the presence of high transaction costs of up to 5–10 basis points. In particular, they find that the model, when coupled with least absolute shrinkage and turnover regularization, delivers the best performance over various transaction cost scenarios.
"Spatio-Temporal Momentum: Jointly Learning Time-Series and Cross-Sectional Strategies," Wee Ling Tan, S. Roberts, S. Zohren. The Journal of Financial Data Science, 2023-02-20. DOI: 10.48550/arXiv.2302.10175
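A stylized analogue of the single-fully-connected-layer model with least absolute shrinkage is an L1-penalized linear map from momentum features to signals. The sketch below fits a lasso regression on synthetic data as a stand-in; the actual model is trained on a trading objective with turnover regularization, which is not reproduced here.

```python
# Stylized stand-in: one linear layer with L1 (least absolute shrinkage)
# mapping momentum features to per-asset signals (synthetic data).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n_obs, n_feat = 600, 8                   # asset-day observations x momentum features
X = rng.normal(size=(n_obs, n_feat))     # time-series + cross-sectional momentum features
w_true = np.array([0.5, -0.4, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0])
y = X @ w_true + rng.normal(scale=0.5, size=n_obs)  # next-period returns

model = Lasso(alpha=0.05).fit(X, y)
signals = model.predict(X)               # per-asset trading signals
n_active = int(np.sum(model.coef_ != 0)) # L1 penalty zeroes out weak features
```

The L1 penalty's feature sparsity is one reason such models can stay robust under transaction costs: fewer active features means more stable, lower-turnover signals.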
The Black–Litterman model extends the framework of the Markowitz modern portfolio theory to incorporate investor views. The authors consider a case in which multiple view estimates, including uncertainties, are given for the same underlying subset of assets at a point in time. This motivates their consideration of data fusion techniques for combining information from multiple sources. In particular, they consider consistency-based methods that yield fused view and uncertainty pairs; such methods are not common to the quantitative finance literature. They show a relevant, modern case of incorporating machine learning model-derived view and uncertainty estimates, and the impact on portfolio allocation, with an example subsuming arbitrage pricing theory. Hence, they show the value of the Black–Litterman model in combination with information fusion and artificial intelligence–grounded prediction methods.
"View Fusion Vis-à-Vis a Bayesian Interpretation of Black–Litterman for Portfolio Allocation," Trent Spears, S. Zohren, S. Roberts. The Journal of Financial Data Science, 2023-01-31. DOI: 10.3905/jfds.2023.1.132
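The simplest instance of fusing view-and-uncertainty pairs is precision-weighted (inverse-variance) combination: each source's view is weighted by the inverse of its variance, and the fused variance is tighter than either input. The numbers below are invented for illustration and this is only the elementary Gaussian case, not the article's full consistency-based machinery.

```python
# Toy inverse-variance fusion of two views on the same asset's expected return.
view_a, var_a = 0.04, 0.02 ** 2   # e.g., an ML model's estimate and its variance
view_b, var_b = 0.06, 0.04 ** 2   # e.g., an analyst view with larger variance

prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
fused_var = 1.0 / (prec_a + prec_b)                       # tighter than either input
fused_view = fused_var * (prec_a * view_a + prec_b * view_b)
```

Here the fused view lands closer to the more certain source (0.044, near `view_a`), and the fused pair can then enter Black–Litterman as a single view with uncertainty `fused_var`.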
Jiarui Chen, Qi Tong, H. Verma, Avinash Sharma, A. Dahbura, J. Liew
Blockchains have ushered in the next stage in the evolution of the Internet, transitioning us from Web 2.0 to 3.0. However, given the complex nature of this innovative technology and the inability to clearly measure and properly communicate the risks in blockchains, this murkiness has arguably hampered the development, growth, proper regulation, and, ultimately, the truly beneficial societal contributions of blockchains. In an attempt to clear the confusion, the authors propose the Johns Hopkins Blockchain Risk Map. They present their risk map prototype, their current multidimensional exhibit of risks across the various stakeholders, and their modest progress to date, with some data on their current risk measures. The authors are attempting to create a safe space in which blockchain risks are defined, displayed, debated, researched, fine-tuned, standardized, and freely shared. They believe that such a platform would be an ideal mechanism for education, networking, and collaboration for the next generation, specifically those who are underrepresented in the current blockchain development community. By increasing transparency and debating risk issues in a safe academic environment, the authors hope that this risk map will help move blockchain adoption forward and spur more entrepreneurial activity across this industry. In this article, the authors lay out their initial thoughts, current progress, and challenges. Although this article is in no way exhaustive, the authors provide several categorizations of blockchain risks: operational, decentralization, security, social sentiment, investment, and systemic, to name a few.
"The Complexity of Blockchain Risks Simplified and Displayed: Introduction of the Johns Hopkins Blockchain Risk Map," Jiarui Chen, Qi Tong, H. Verma, Avinash Sharma, A. Dahbura, J. Liew. The Journal of Financial Data Science, 2022-12-17. DOI: 10.3905/jfds.2022.1.117