Understanding the Importance and Practices of Operational Risk Management in Ghanaian Banks
Pub Date : 2021-02-05 DOI: 10.37227/jibm-2020-04-124/
Maxwell Dela Yao Gakpo
This quantitative study examines the importance and practices of operational risk management (ORM) in Ghanaian banks. The objective of this paper is to investigate practitioners' understanding of the difference (if any) between the importance and the practices of operational risk management programs in Ghanaian banks, using a survey research design. The survey instrument, used to collect data from 170 respondents, was duly validated. The results confirmed a high level of ORM awareness among the respondents. The banks deploy different risk management solutions to control and mitigate operational risk. The study also concluded that there was a clear difference between perceived ORM importance and ORM practices, meaning that a gap exists between ORM awareness and ORM program practices. The lack of effective implementation of ORM programs is likely to cause an operational risk contagion among banks, with a catastrophic impact on the whole financial sector. The study therefore recommends that bank managers match ORM awareness creation with the practical implementation of ORM programs and policies in their banks to mitigate operational risk hazards.
{"title":"Understanding the Importance and Practices of Operational Risk Management in Ghanaian Banks","authors":"Maxwell Dela Yao Gakpo","doi":"10.37227/jibm-2020-04-124/","DOIUrl":"https://doi.org/10.37227/jibm-2020-04-124/","url":null,"abstract":"This quantitative study examines the importance and practices of operational risk management in Ghanaian banks. The objective of this paper is to investigate the practitioner’s understanding of the difference (if any) between importance and practices of operational risk management programs in Ghanaian banks using a survey research design. The survey instrument, used in collecting data from 170 respondents, was duly validated. The results confirmed that there exists a high level of ORM awareness among the respondents. The banks deploy different risk management solutions to control and mitigate operational risk. The study also concluded that there was a clear difference between perceive ORM importance and ORM practices. This meant that there exists a gap between ORM awareness and ORM program practices. The lack of effective implementation of ORM programs are likely to cause an operational risk contagion among banks with very catastrophic impact on the whole financial sector. The study therefore, recommends that bank mangers commensurate ORM awareness creation with practical implementation of ORM programs and policies in their banks to mitigate operational risk hazards.","PeriodicalId":306152,"journal":{"name":"Risk Management eJournal","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131038072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Three-Factor Commodity Forward Curve Model and Its Joint P and Q Dynamics
Pub Date : 2021-02-05 DOI: 10.2139/ssrn.3780286
S. Ladokhin, S. Borovkova
In this paper, we propose a new framework for modelling commodity forward curves. The proposed model describes the dynamics of the fundamental driving factors simultaneously under the physical (P) and risk-neutral (Q) probability measures. Our model extends the forward curve model of Borovkova and Geman (2007) in several directions. It is a three-factor model, incorporating the synthetic spot price based on liquidly traded futures, a stochastic level of mean reversion and an analogue of the stochastic convenience yield. We develop an innovative calibration mechanism based on the Kalman filtering technique and apply it to a large set of Brent oil futures. Additionally, we investigate the properties of the time-dependent market price of risk in oil markets. We apply the proposed modelling framework to derivatives pricing, risk management and counterparty credit risk. Finally, we outline a way of adjusting the proposed model to account for the negative oil futures prices observed recently due to the coronavirus pandemic.
An Explanation Framework for Interpretable Credit Scoring
Pub Date : 2021-01-31 DOI: 10.5121/IJAIA.2021.12102
Lara Marie Demajo, Vince Vella, A. Dingli
With the recent surge of enthusiasm in Artificial Intelligence (AI) and Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. However, despite the ever-growing achievements, the biggest obstacle in most AI systems is their lack of interpretability. This lack of transparency limits their application in different domains, including credit scoring. Credit scoring systems help financial experts make better decisions regarding whether or not to accept a loan application, so that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the `right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. A recently introduced concept is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides different explanations (i.e., global, local feature-based and local instance-based) that are required by different people in different situations. Evaluation through functionally grounded, application-grounded and human-grounded analysis shows that the explanations provided are simple and consistent, as well as correct, effective, easy to understand, sufficiently detailed and trustworthy.
{"title":"An Explanation Framework for Interpretable Credit Scoring","authors":"Lara Marie Demajo, Vince Vella, A. Dingli","doi":"10.5121/IJAIA.2021.12102","DOIUrl":"https://doi.org/10.5121/IJAIA.2021.12102","url":null,"abstract":"With the recent boosted enthusiasm in Artificial Intelligence (AI) and Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. However, despite the ever growing achievements, the biggest obstacle in most AI systems is their lack of interpretability. This deficiency of transparency limits their application in different domains including credit scoring. Credit scoring systems help financial experts make better decisions regarding whether or not to accept a loan application so that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the `right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. A recently introduced concept is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) Datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides different explanations (i.e. global, local feature-based and local instance- based) that are required by different people in different situations. Evaluation through the use of functionally-grounded, application-grounded and human-grounded analysis shows that the explanations provided are simple and consistent as well as correct, effective, easy to understand, sufficiently detailed and trustworthy.","PeriodicalId":306152,"journal":{"name":"Risk Management eJournal","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122464123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reinventing Pareto: Fits for All Losses, Small and Large
Pub Date : 2021-01-28 DOI: 10.2139/ssrn.3775007
Michael Fackler
Fitting loss distributions in insurance is sometimes a dilemma: either you get a good fit for the small/medium losses or for the very large losses. To be able to get both at the same time, this paper studies generalizations and extensions of the Pareto distribution. This leads not only to a classification of potentially suitable, piecewise defined distribution functions, but also to new insights into tail behavior and exposure rating.
Does Mixed Frequency Information Help To Forecast the Value at Risk of the Crude Oil Market?
Pub Date : 2021-01-28 DOI: 10.2139/ssrn.3774891
Yongjian Lyu, Mengzhen Kong, Rui Ke, Yu Wei
We test the value at risk (VaR) forecasting accuracy of seven generalised autoregressive conditional heteroskedasticity (GARCH)-mixed data sampling (MIDAS) models, which can potentially provide superior forecast accuracy to traditional GARCH models by capturing different forms of mixed-frequency information from the market. The main empirical results are as follows. First, most traditional GARCH models have difficulty forecasting the VaR of the crude oil market. Second, although GARCH-MIDAS models generally produce more accurate forecasts than the traditional GARCH models, some specific GARCH-MIDAS models have poor forecasting accuracy. Third, we find that the mixed-frequency information on the demand side of the crude oil market is most helpful for forecasting the VaR. The model that integrates the world industrial production index (GARCH-MIDAS-IP) robustly demonstrates good forecasting performance.
Three-Layer Problems and the Generalized Pareto Distribution
Pub Date : 2021-01-24 DOI: 10.2139/ssrn.3772372
Michael Fackler
The classical way to get an analytical model for the (supposedly heavy) tail of a loss severity distribution is via parameter inference from empirical large losses. However, in insurance practice it often occurs that one has much less information, but nevertheless needs such a model, say for reinsurance pricing or capital modeling.
We use the Generalized Pareto distribution to build consistent underlying models from very scarce data, such as the frequencies at three thresholds, the risk premiums of three layers, or a mixture of both. It turns out that for typical real-world data situations such GPD “fits” exist and are unique. We also provide a scheme enabling practitioners to construct reasonable models in situations where one has even less, or somewhat more, than three such bits of information.
Finally, we take a look at model risk by applying some parameter-free inequalities for distribution tails and a particular representation for loss count distributions. It turns out that, in the data situation given above, the uncertainty about the severity can be surprisingly low, so that the overall uncertainty is driven by the loss count.
To VaR, or Not to VaR, That is the Question
Pub Date : 2021-01-21 DOI: 10.2139/ssrn.3770615
Victor Olkhov
We consider the core problems of conventional value-at-risk (VaR), which is based on the price probability determined by the frequencies of trades at a price p during an averaging time interval Δ. To protect investors from the risks of market price change, VaR should use a price probability determined by the market trade time series. To match the market's stochasticity, we introduce a new market-based price probability measure entirely determined by the probabilities of the random market time series of trade value and volume. The distinctions between the market-based and frequency-based price probabilities result in different assessments of VaR and thus can cause excess losses. Predictions of the market-based price probability at horizon T equal the forecasts of the market trade value and volume probability measures.
The Network Structure of Overnight Index Swap Rates
Pub Date : 2021-01-12 DOI: 10.2139/ssrn.3764557
Ming Fang, Stephen Michael Taylor, Ajim Uddin
Graph theoretical techniques are utilized to examine the centrality structure of overnight index swap (OIS) networks. Correlation-based graphs are constructed to encode pairwise relationships between distinct OIS rates. Multiple notions of graph centrality are considered, and the time evolution of these measures is studied. A principal component analysis based centrality measure is constructed to examine comovements between full OIS curves. Numerical examples demonstrating these ideas are provided.
The New International Regulation of Market Risk: Roles of VaR and CVaR in Model Validation
Pub Date : 2021-01-12 DOI: 10.2139/ssrn.3766511
Samir Saissi Hassani, G. Dionne
We model the new quantitative aspects of market risk management for banks that Basel established in 2016 and that came into effect in January 2019. Market risk is measured by Conditional Value at Risk (CVaR), or Expected Shortfall, at a confidence level of 97.5%. The regulatory backtest remains largely based on 99% VaR. As additional statistical procedures, in line with the Basel recommendations, supplementary VaR and CVaR backtests must be performed at different confidence levels. We apply these tests to various parametric distributions and use non-parametric measures of CVaR, including CVaR- and CVaR+, to supplement the model validation. Our data relate to a period of extreme market turbulence. After testing eight parametric distributions on these data, we find that the information obtained on their empirical performance is closely tied to the backtesting conclusions regarding the competing models.
Asset-Liability Management of Life Insurers in the Negative Interest Rate Environment
Pub Date : 2021-01-12 DOI: 10.2139/ssrn.3840643
Yi-Jia Lin, Sheen X. Liu, K. S. Tan, Xun Zhang
This study investigates the asset-liability management (ALM) of life insurers in markets with negative interest rates. Using a sample of Japanese life insurers between 1999 and 2018, we provide initial evidence that the negative interest rate environment has much more serious consequences for insurers than the positive interest rate environment. Given that duration and convexity are two common measures widely used by insurers to manage their assets and liabilities, we highlight that the assumption of a flat yield curve underlying the traditional measures (e.g. the Macaulay and modified durations and convexities) is problematic when interest rates turn negative. To address this issue, we propose an ALM framework using duration and convexity measures based on the Vasicek stochastic model. Our results show that the strategy based on the Vasicek model outperforms the strategy using the modified duration and convexity in the negative interest rate environment.