Stochastic Earned Value Analysis using Monte Carlo Simulation and Statistical Learning Techniques
Fernando Acebes, M Pereda, David Poza, Javier Pajares, Jose M Galan
The aim of this paper is to describe a new integrated methodology for project control under uncertainty. The proposal builds on Earned Value Methodology and risk analysis and refines previous methodologies in several ways. More specifically, the approach uses extensive Monte Carlo simulation to obtain information about the expected behavior of the project. This dataset is exploited in several ways using different statistical learning methodologies in a structured fashion. First, Anomaly Detection algorithms applied to the simulations determine whether project deviations are a consequence of the expected variability. If the project follows this expected variability, the probabilities of success in cost and time, as well as the expected total cost and duration of the project, can be estimated using classification and regression approaches.
{"title":"Stochastic Earned Value Analysis using Monte Carlo Simulation and Statistical Learning Techniques","authors":"Fernando Acebes, M Pereda, David Poza, Javier Pajares, Jose M Galan","doi":"arxiv-2406.02589","DOIUrl":"https://doi.org/arxiv-2406.02589","url":null,"abstract":"The aim of this paper is to describe a new an integrated methodology for\u0000project control under uncertainty. This proposal is based on Earned Value\u0000Methodology and risk analysis and presents several refinements to previous\u0000methodologies. More specifically, the approach uses extensive Monte Carlo\u0000simulation to obtain information about the expected behavior of the project.\u0000This dataset is exploited in several ways using different statistical learning\u0000methodologies in a structured fashion. Initially, simulations are used to\u0000detect if project deviations are a consequence of the expected variability\u0000using Anomaly Detection algorithms. If the project follows this expected\u0000variability, probabilities of success in cost and time and expected cost and\u0000total duration of the project can be estimated using classification and\u0000regression approaches.","PeriodicalId":501128,"journal":{"name":"arXiv - QuantFin - Risk Management","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141528275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond probability-impact matrices in project risk management: A quantitative methodology for risk prioritisation
Fernando Acebes, José Manuel González-Varona, Adolfo López-Paredes, Javier Pajares
Project managers who deal with risk management often face the difficult task of determining the relative importance of the various sources of risk that affect the project. This prioritisation is crucial for directing management efforts and ensuring higher project profitability. Risk matrices are tools widely recognised by academics and practitioners in various sectors for assessing and ranking risks according to their likelihood of occurrence and impact on project objectives. However, the existing literature highlights several limitations of the risk matrix, and in response to these weaknesses this paper proposes a novel approach for prioritising project risks. Monte Carlo Simulation (MCS) is used to perform a quantitative prioritisation of risks with the simulation software MCSimulRisk. Together with the definition of project activities, the simulation incorporates the identified risks by modelling their probability and their impact on cost and duration. This methodology provides a quantitative assessment of each risk, measured by the effect it would have on project duration and total cost. It thus distinguishes the risks that are critical for the duration objective from those that are critical for the cost objective, which may differ. The proposal is interesting for project managers because they will, on the one hand, know the absolute impact of each risk on their duration and cost objectives and, on the other hand, be able to discriminate the impact of each risk on the duration objective and on the cost objective independently.
{"title":"Beyond probability-impact matrices in project risk management: A quantitative methodology for risk prioritisation","authors":"Fernando Acebes, José Manuel González-Varona, Adolfo López-Paredes, Javier Pajares","doi":"arxiv-2405.20679","DOIUrl":"https://doi.org/arxiv-2405.20679","url":null,"abstract":"The project managers who deal with risk management are often faced with the\u0000difficult task of determining the relative importance of the various sources of\u0000risk that affect the project. This prioritisation is crucial to direct\u0000management efforts to ensure higher project profitability. Risk matrices are\u0000widely recognised tools by academics and practitioners in various sectors to\u0000assess and rank risks according to their likelihood of occurrence and impact on\u0000project objectives. However, the existing literature highlights several\u0000limitations to use the risk matrix. In response to the weaknesses of its use,\u0000this paper proposes a novel approach for prioritising project risks. Monte\u0000Carlo Simulation (MCS) is used to perform a quantitative prioritisation of\u0000risks with the simulation software MCSimulRisk. Together with the definition of\u0000project activities, the simulation includes the identified risks by modelling\u0000their probability and impact on cost and duration. With this novel methodology,\u0000a quantitative assessment of the impact of each risk is provided, as measured\u0000by the effect that it would have on project duration and its total cost. This\u0000allows the differentiation of critical risks according to their impact on\u0000project duration, which may differ if cost is taken as a priority objective.\u0000This proposal is interesting for project managers because they will, on the one\u0000hand, know the absolute impact of each risk on their project duration and cost\u0000objectives and, on the other hand, be able to discriminate the impacts of each\u0000risk independently on the duration objective and the cost objective.","PeriodicalId":501128,"journal":{"name":"arXiv - QuantFin - Risk Management","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141254974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Worst-cases of distortion riskmetrics and weighted entropy with partial information
Baishuai Zuo, Chuancun Yin
In this paper, we discuss the worst case of distortion riskmetrics for general distributions when only partial information (mean and variance) is known. The result is applicable to a general class of distortion risk measures and variability measures. Furthermore, we consider the worst case of weighted entropy for general distributions when only partial information is available. Specifically, we provide applications for entropies, weighted entropies and risk measures. The entropies covered include the Gini functional, cumulative residual entropy, tail-Gini functional, cumulative Tsallis past entropy and the extended Gini coefficient, among others. The risk measures include certain premium principles and entropy-based shortfalls, namely the Gini shortfall, extended Gini shortfall, shortfall of cumulative residual entropy and shortfall of cumulative residual Tsallis entropy of order $\alpha$.
{"title":"Worst-cases of distortion riskmetrics and weighted entropy with partial information","authors":"Baishuai Zuo, Chuancun Yin","doi":"arxiv-2405.19075","DOIUrl":"https://doi.org/arxiv-2405.19075","url":null,"abstract":"In this paper, we discuss the worst-case of distortion riskmetrics for\u0000general distributions when only partial information (mean and variance) is\u0000known. This result is applicable to general class of distortion risk measures\u0000and variability measures. Furthermore, we also consider worst-case of weighted\u0000entropy for general distributions when only partial information is available.\u0000Specifically, we provide some applications for entropies, weighted entropies\u0000and risk measures. The commonly used entropies include Gini functional,\u0000cumulative residual entropy, tail-Gini functional, cumulative Tsallis past\u0000entropy, extended Gini coefficient and so on. The risk measures contain some\u0000premium principles and shortfalls based on entropy. The shortfalls include the\u0000Gini shortfall, extended Gini shortfall, shortfall of cumulative residual\u0000entropy and shortfall of cumulative residual Tsallis entropy with order\u0000$alpha$.","PeriodicalId":501128,"journal":{"name":"arXiv - QuantFin - Risk Management","volume":"181 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141190125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Asymptotic CVaR Measure of Risk for Markov Chains
Shivam Patel, Vivek Borkar
Risk-sensitive decision making finds important applications in present-day use cases. Existing risk measures, however, consider a single or finite collection of random variables and do not account for the asymptotic behaviour of the underlying system. Conditional Value at Risk (CVaR) is the most commonly used risk measure and has been extensively utilized for modelling rare events in finite-horizon scenarios. Naive extensions of existing risk criteria to asymptotic regimes face fundamental challenges, where the basic assumptions of existing risk measures fail. We present a complete simulation-based approach for sequentially computing the Asymptotic CVaR (ACVaR), a risk measure we define on limiting empirical averages of Markovian rewards. Large deviations theory, density estimation, and two-time-scale stochastic approximation are utilized to define a 'tilted' probability kernel on the underlying state space that facilitates ACVaR simulation. Our algorithm enjoys theoretical guarantees, and we numerically evaluate its performance over a variety of test cases.
{"title":"An Asymptotic CVaR Measure of Risk for Markov Chains","authors":"Shivam Patel, Vivek Borkar","doi":"arxiv-2405.13513","DOIUrl":"https://doi.org/arxiv-2405.13513","url":null,"abstract":"Risk sensitive decision making finds important applications in current day\u0000use cases. Existing risk measures consider a single or finite collection of\u0000random variables, which do not account for the asymptotic behaviour of\u0000underlying systems. Conditional Value at Risk (CVaR) is the most commonly used\u0000risk measure, and has been extensively utilized for modelling rare events in\u0000finite horizon scenarios. Naive extension of existing risk criteria to\u0000asymptotic regimes faces fundamental challenges, where basic assumptions of\u0000existing risk measures fail. We present a complete simulation based approach\u0000for sequentially computing Asymptotic CVaR (ACVaR), a risk measure we define on\u0000limiting empirical averages of markovian rewards. Large deviations theory,\u0000density estimation, and two-time scale stochastic approximation are utilized to\u0000define a 'tilted' probability kernel on the underlying state space to\u0000facilitate ACVaR simulation. Our algorithm enjoys theoretical guarantees, and\u0000we numerically evaluate its performance over a variety of test cases.","PeriodicalId":501128,"journal":{"name":"arXiv - QuantFin - Risk Management","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141147628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resilience Analysis of Multi-modal Logistics Service Network Through Robust Optimization with Budget-of-Uncertainty
Yaxin Pang, Shenle Pan, Eric Ballot (CGS i3)
Supply chain resilience analysis aims to identify the critical elements in the supply chain, measure its reliability, and analyze solutions for remedying vulnerabilities. While stochastic approaches have been dominant, robust optimization, which is widely applied to robust planning under uncertainty when no specific probability distribution is available, remains relatively underexplored for this research problem. This paper employs robust optimization with a budget of uncertainty to analyze the resilience of multi-modal logistics service networks under time uncertainty. We examine the interactive effects of three critical factors: network size, disruption scale, and disruption degree. The computational experiments offer valuable managerial insights for practitioners and researchers.
{"title":"Resilience Analysis of Multi-modal Logistics Service Network Through Robust Optimization with Budget-of-Uncertainty","authors":"Yaxin PangCGS i3, Shenle PanCGS i3, Eric BallotCGS i3","doi":"arxiv-2405.12565","DOIUrl":"https://doi.org/arxiv-2405.12565","url":null,"abstract":"Supply chain resilience analysis aims to identify the critical elements in\u0000the supply chain, measure its reliability, and analyze solutions for improving\u0000vulnerabilities. While extensive methods like stochastic approaches have been\u0000dominant, robust optimization-widely applied in robust planning under\u0000uncertainties without specific probability distributions-remains relatively\u0000underexplored for this research problem. This paper employs robust optimization\u0000with budget-of-uncertainty as a tool to analyze the resilience of multi-modal\u0000logistics service networks under time uncertainty. We examine the interactive\u0000effects of three critical factors: network size, disruption scale, disruption\u0000degree. The computational experiments offer valuable managerial insights for\u0000practitioners and researchers.","PeriodicalId":501128,"journal":{"name":"arXiv - QuantFin - Risk Management","volume":"61 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141147630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Risk, utility and sensitivity to large losses
Martin Herdegen, Nazem Khan, Cosimo Munari
Risk and utility functionals are fundamental building blocks in economics and finance. In this paper we investigate under which conditions a risk or utility functional is sensitive to the accumulation of losses, in the sense that any sufficiently large multiple of a position that exposes an agent to future losses has positive risk or negative utility. We call this property sensitivity to large losses and provide necessary and sufficient conditions for it that are easy to check for a very large class of risk and utility functionals. In particular, our results do not rely on convexity and can therefore also be applied to most examples discussed in the recent literature, including (non-convex) star-shaped risk measures and the $S$-shaped utility functions encountered in prospect theory. As expected, Value at Risk generally fails to be sensitive to large losses. More surprisingly, this is also true of Expected Shortfall. By contrast, expected utility functionals as well as (optimized) certainty equivalents are proved to be sensitive to large losses for many standard choices of concave and non-concave utility functions, including $S$-shaped ones. We also show that Value at Risk and Expected Shortfall become sensitive to large losses if they are either properly adjusted or if the property is suitably localized.
{"title":"Risk, utility and sensitivity to large losses","authors":"Martin Herdegen, Nazem Khan, Cosimo Munari","doi":"arxiv-2405.12154","DOIUrl":"https://doi.org/arxiv-2405.12154","url":null,"abstract":"Risk and utility functionals are fundamental building blocks in economics and\u0000finance. In this paper we investigate under which conditions a risk or utility\u0000functional is sensitive to the accumulation of losses in the sense that any\u0000sufficiently large multiple of a position that exposes an agent to future\u0000losses has positive risk or negative utility. We call this property sensitivity\u0000to large losses and provide necessary and sufficient conditions thereof that\u0000are easy to check for a very large class of risk and utility functionals. In\u0000particular, our results do not rely on convexity and can therefore also be\u0000applied to most examples discussed in the recent literature, including\u0000(non-convex) star-shaped risk measures or S-shaped utility functions\u0000encountered in prospect theory. As expected, Value at Risk generally fails to\u0000be sensitive to large losses. More surprisingly, this is also true of Expected\u0000Shortfall. By contrast, expected utility functionals as well as (optimized)\u0000certainty equivalents are proved to be sensitive to large losses for many\u0000standard choices of concave and nonconcave utility functions, including\u0000$S$-shaped utility functions. We also show that Value at Risk and Expected\u0000Shortfall become sensitive to large losses if they are either properly adjusted\u0000or if the property is suitably localized.","PeriodicalId":501128,"journal":{"name":"arXiv - QuantFin - Risk Management","volume":"2013 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141147627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Risk-neutral valuation of options under arithmetic Brownian motions
Qiang Liu, Yuhan Jiao, Shuxin Guo
On April 22, 2020, the CME Group switched to Bachelier pricing for a group of oil futures options. The Bachelier model, or more generally the arithmetic Brownian motion (ABM), is nevertheless not widely used in finance. This paper provides the first comprehensive survey of option pricing under ABM. Using risk-neutral valuation, we derive formulas for European options on three underlying types: an underlying that pays no dividends, an underlying that pays a continuous dividend yield, and futures. Further, we derive Black-Scholes-Merton-like partial differential equations, which can in principle be used to price American options numerically via finite differences.
{"title":"Risk-neutral valuation of options under arithmetic Brownian motions","authors":"Qiang Liu, Yuhan Jiao, Shuxin Guo","doi":"arxiv-2405.11329","DOIUrl":"https://doi.org/arxiv-2405.11329","url":null,"abstract":"On April 22, 2020, the CME Group switched to Bachelier pricing for a group of\u0000oil futures options. The Bachelier model, or more generally the arithmetic\u0000Brownian motion (ABM), is not so widely used in finance, though. This paper\u0000provides the first comprehensive survey of options pricing under ABM. Using the\u0000risk-neutral valuation, we derive formulas for European options for three\u0000underlying types, namely an underlying that does not pay dividends, an\u0000underlying that pays a continuous dividend yield, and futures. Further, we\u0000derive Black-Scholes-Merton-like partial differential equations, which can in\u0000principle be utilized to price American options numerically via finite\u0000difference.","PeriodicalId":501128,"journal":{"name":"arXiv - QuantFin - Risk Management","volume":"75 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141153867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is the annualized compounded return of Medallion over 35%?
Shuxin Guo, Qiang Liu
Estimating fund performance by compounded returns is a challenge. Arguably, it is incorrect to compound yearly returns directly, which yields a reported annualized return above 60% for Medallion for the 31 years up to 2018. We propose an estimation based on fund sizes and trading profits and obtain a compounded return of 32.6% before fees, assuming a 3% financing rate. Alternatively, we suggest using the manager's wealth as a proxy and arrive at a compounded growth rate of 25.6% for Simons for the 33 years up to 2020. We conclude that the annualized compounded return of Medallion before fees is probably under 35%. Our findings have implications for how to compute fund performance correctly.
{"title":"Is the annualized compounded return of Medallion over 35%?","authors":"Shuxin Guo, Qiang Liu","doi":"arxiv-2405.10917","DOIUrl":"https://doi.org/arxiv-2405.10917","url":null,"abstract":"It is a challenge to estimate fund performance by compounded returns.\u0000Arguably, it is incorrect to use yearly returns directly for compounding, with\u0000reported annualized return of above 60% for Medallion for the 31 years up to\u00002018. We propose an estimation based on fund sizes and trading profits and\u0000obtain a compounded return of 32.6% before fees with a 3% financing rate.\u0000Alternatively, we suggest using the manager's wealth as a proxy and arriving at\u0000a compounded growth rate of 25.6% for Simons for the 33 years up to 2020. We\u0000conclude that the annualized compounded return of Medallion before fees is\u0000probably under 35%. Our findings have implications for how to compute fund\u0000performance correctly.","PeriodicalId":501128,"journal":{"name":"arXiv - QuantFin - Risk Management","volume":"54 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141147621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-generating process and time-series asset pricing
Shuxin Guo, Qiang Liu
We study the data-generating processes for factors expressed as return differences, which the literature on time-series asset pricing seems to have overlooked. For the factors' data-generating processes, or long-short zero-cost portfolios, a meaningful definition of returns is impossible; further, the compounded market factor (MF) significantly underestimates the return difference between the market and the risk-free rate when each is compounded separately. Surprisingly, if MF were treated coercively as a periodically rebalanced long-short portfolio (i.e., the same as size and value), the Fama-French three-factor model (FF3) would be economically unattractive for lacking compounding and irrelevant for suffering from a small "size of an effect." Otherwise, FF3 might be misspecified if MF were buy-and-hold long-short. Finally, we show that OLS with net returns for single-index models leads to inflated alphas, exaggerated t-values, and overestimated Sharpe ratios (SR); worse, net returns may lead to pathological alphas and SRs. We propose defining factors (and SRs) with non-difference compound returns.
{"title":"Data-generating process and time-series asset pricing","authors":"Shuxin Guo, Qiang Liu","doi":"arxiv-2405.10920","DOIUrl":"https://doi.org/arxiv-2405.10920","url":null,"abstract":"We study the data-generating processes for factors expressed in return\u0000differences, which the literature on time-series asset pricing seems to have\u0000overlooked. For the factors' data-generating processes or long-short zero-cost\u0000portfolios, a meaningful definition of returns is impossible; further, the\u0000compounded market factor (MF) significantly underestimates the return\u0000difference between the market and the risk-free rate compounded separately.\u0000Surprisingly, if MF were treated coercively as periodic-rebalancing long-short\u0000(i.e., the same as size and value), Fama-French three-factor (FF3) would be\u0000economically unattractive for lacking compounding and irrelevant for suffering\u0000from the small \"size of an effect.\" Otherwise, FF3 might be misspecified if MF\u0000were buy-and-hold long-short. Finally, we show that OLS with net returns for\u0000single-index models leads to inflated alphas, exaggerated t-values, and\u0000overestimated Sharpe ratios (SR); worse, net returns may lead to pathological\u0000alphas and SRs. We propose defining factors (and SRs) with non-difference\u0000compound returns.","PeriodicalId":501128,"journal":{"name":"arXiv - QuantFin - Risk Management","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141147643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on Credit Risk Early Warning Model of Commercial Banks Based on Neural Network Algorithm
Yu Cheng, Qin Yang, Liyang Wang, Ao Xiang, Jingyu Zhang
In globalized financial markets, commercial banks face credit risk of growing magnitude, which places greater demands on the security of bank assets and on financial stability. This study harnesses advanced neural network techniques, notably the backpropagation (BP) neural network, to develop a novel early-warning model for credit risk in commercial banks. The paper first reviews conventional financial risk early-warning models, such as ARMA, ARCH, and logistic regression models, and critically analyzes their real-world applications. It then details the construction of the BP neural network model, covering network architecture design, activation function selection, parameter initialization, and objective function construction. A comparative analysis demonstrates the superiority of neural network models for anticipating credit risk in commercial banks. The experimental section uses data from specific banks to validate the model's predictive accuracy and practicality. The findings show that the model effectively improves the foresight and precision of credit risk management.
{"title":"Research on Credit Risk Early Warning Model of Commercial Banks Based on Neural Network Algorithm","authors":"Yu Cheng, Qin Yang, Liyang Wang, Ao Xiang, Jingyu Zhang","doi":"arxiv-2405.10762","DOIUrl":"https://doi.org/arxiv-2405.10762","url":null,"abstract":"In the realm of globalized financial markets, commercial banks are confronted\u0000with an escalating magnitude of credit risk, thereby imposing heightened\u0000requisites upon the security of bank assets and financial stability. This study\u0000harnesses advanced neural network techniques, notably the Backpropagation (BP)\u0000neural network, to pioneer a novel model for preempting credit risk in\u0000commercial banks. The discourse initially scrutinizes conventional financial\u0000risk preemptive models, such as ARMA, ARCH, and Logistic regression models,\u0000critically analyzing their real-world applications. Subsequently, the\u0000exposition elaborates on the construction process of the BP neural network\u0000model, encompassing network architecture design, activation function selection,\u0000parameter initialization, and objective function construction. Through\u0000comparative analysis, the superiority of neural network models in preempting\u0000credit risk in commercial banks is elucidated. The experimental segment selects\u0000specific bank data, validating the model's predictive accuracy and\u0000practicality. Research findings evince that this model efficaciously enhances\u0000the foresight and precision of credit risk management.","PeriodicalId":501128,"journal":{"name":"arXiv - QuantFin - Risk Management","volume":"68 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141147623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}