In this paper, we propose a copula approach to measuring the dependence between inflation and the exchange rate. To uncover this dependence, we first estimate the best-fitting GARCH model for each of the two variables. We then derive the marginal distributions of the standardised residuals from the GARCH fits. The Laplace and generalised t distributions best modelled the residuals of the GARCH(1,1) models for inflation and the exchange rate, respectively. These marginals were then used to transform the standardised residuals into uniform random variables on the unit interval [0, 1] for estimating the copulas. Our results show that the dependence between inflation and the exchange rate in Ghana is approximately 7%.
{"title":"Modelling the Dependency between Inflation and Exchange Rate Using Copula","authors":"C. Kwofie, I. Akoto, K. Opoku-Ameyaw","doi":"10.1155/2020/2345746","DOIUrl":"https://doi.org/10.1155/2020/2345746","url":null,"abstract":"In this paper, we propose a copula approach in measuring the dependency between inflation and exchange rate. In unveiling this dependency, we first estimated the best GARCH model for the two variables. Then, we derived the marginal distributions of the standardised residuals from the GARCH. The Laplace and generalised t distributions best modelled the residuals of the GARCH(1,1) models, respectively, for inflation and exchange rate. These marginals were then used to transform the standardised residuals into uniform random variables on a unit interval [0, 1] for estimating the copulas. Our results show that the dependency between inflation and exchange rate in Ghana is approximately 7%.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/2345746","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44829899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Like most commodities, the price of silver is driven by supply, demand, and speculation, which makes it notoriously volatile owing to the smaller market, lower market liquidity, and fluctuations in demand between industrial and store-of-value uses. This article models and forecasts the volatility dynamics of silver prices on the Ethiopian market with GARCH family models, using data from January 1998 to January 2014. The silver price return series shows characteristics typical of financial time series, such as leptokurtic distributions, and can therefore suitably be modelled with GARCH family models. Among the GARCH family models considered in this study, an ARMA(1,3)-EGARCH(3,2) model with normally distributed residuals was found to fit the price volatility of silver best. Among the exogenous variables considered, the saving interest rate and the general inflation rate have a statistically significant effect on monthly silver price volatility. In the EGARCH(3,2) volatility model, the asymmetric term was found to be positive and significant, an indication that unanticipated price increases had a greater impact on price volatility than unanticipated price decreases. Concerned stakeholders such as portfolio managers, planners, bankers, and investors should therefore pay due attention to these factors in the formulation of financial and related market policy.
{"title":"Generalized Autoregressive Conditional Heteroskedastic Model to Examine Silver Price Volatility and Its Macroeconomic Determinant in Ethiopia Market","authors":"A. W. Ayele, Emmanuel Gabreyohannes, Hayimro Edmealem","doi":"10.1155/2020/5095181","DOIUrl":"https://doi.org/10.1155/2020/5095181","url":null,"abstract":"Like most commodities, the price of silver is driven by supply and demand speculation, which makes the price of silver notoriously volatile due to the smaller market, lower market liquidity, and fluctuations in demand between industrial and store value use. The concern of this article was to model and forecast the silver price volatility dynamics on the Ethiopian market using GARCH family models using data from January 1998 to January 2014. The price return series of silver shows the characteristics of financial time series such as leptokurtic distributions and thus can suitably be modeled using GARCH family models. An empirical investigation was conducted to model price volatility using GARCH family models. Among the GARCH family models considered in this study, ARMA (1, 3)-EGARCH (3, 2) model with the normal distributional assumption of residuals was found to be a better fit for price volatility of silver. Among the exogenous variables considered in this study, saving interest rate and general inflation rate have a statistically significant effect on monthly silver price volatility. In the EGARCH (3, 2) volatility model, the asymmetric term was found to be positive and significant. This is an indication that the unanticipated price increase had a greater impact on price volatility than the unanticipated price decrease in silver. Then, concerned stockholders such as portfolio managers, planners, bankers, and investors should intervene and pay due attention to these factors in the formulation of financial and related market policy.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/5095181","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44013260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we derive the cumulative distribution functions (CDF) and probability density functions (PDF) of the ratio and product of two independent Weibull and Lindley random variables. The moment generating functions (MGF) and the k-th moments are derived for both the ratio and the product. In these derivations, we use some special functions, for instance, generalized hypergeometric functions, confluent hypergeometric functions, and parabolic cylinder functions. Finally, we plot the PDF and CDF for several values of the parameters.
{"title":"Distributions of the Ratio and Product of Two Independent Weibull and Lindley Random Variables","authors":"N. J. Hassan, A. Nasar, J. M. Hadad","doi":"10.1155/2020/5693129","DOIUrl":"https://doi.org/10.1155/2020/5693129","url":null,"abstract":"In this paper, we derive the cumulative distribution functions (CDF) and probability density functions (PDF) of the ratio and product of two independent Weibull and Lindley random variables. The moment generating functions (MGF) and the k -moment are driven from the ratio and product cases. In these derivations, we use some special functions, for instance, generalized hypergeometric functions, confluent hypergeometric functions, and the parabolic cylinder functions. Finally, we draw the PDF and CDF in many values of the parameters.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/5693129","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48721318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, various distributions have been generalized using the T-R{Y} framework, but the possibility of using the Dagum distribution has not been assessed. The T-R{Y} framework combines three distributions, one serving as the baseline, so that the strengths of the three distributions jointly shape the newly generated distribution. A generated distribution of this kind has more parameters but is highly flexible in handling bimodality in datasets, and its hazard function is a weighted hazard function of the baseline distribution. This paper therefore generalizes the Dagum distribution using the quantile function of the Lomax distribution. A member of the T-Dagum class of distributions, called the exponentiated-exponential-Dagum{Lomax} (EEDL) distribution, is proposed. The distribution will be useful in survival analysis and reliability studies. Different characterizations of the distribution are derived, including its asymptotes, stochastic ordering, stress-strength analysis, moments, Shannon entropy, and quantile function. The distribution is applied to simulated and real data and compares favourably with existing distributions in the literature.
{"title":"T-Dagum: A Way of Generalizing Dagum Distribution Using Lomax Quantile Function","authors":"M. Ekum, M. Adamu, E. Akarawak","doi":"10.1155/2020/1641207","DOIUrl":"https://doi.org/10.1155/2020/1641207","url":null,"abstract":"Recently, different distributions have been generalized using the - R { Y } framework but the possibility of using Dagum distribution has not been assessed. The - R { Y } combines three distributions, with one as a baseline distribution, with the strength of each distribution combined to produce greater effect on the new generated distribution. The new generated distributions would have more parameters but would have high flexibility in handling bimodality in datasets and it is a weighted hazard function of the baseline distribution. This paper therefore generalized the Dagum distribution using the quantile function of Lomax distribution. A member of - Dagum class of distribution called exponentiated-exponential-Dagum {Lomax} (EEDL) distribution was proposed. The distribution will be useful in survival analysis and reliability studies. Different characterizations of the distribution are derived, such as the asymptotes, stochastic ordering, stress-strength analysis, moment, Shannon entropy, and quantile function. Simulated and real data are used and compared favourably with existing distributions in the literature.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/1641207","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48686053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonresponse is a potential source of error in sample surveys. It introduces bias and large variance into the estimation of finite population parameters. Regression models are recognized as one technique for reducing the bias and variance due to random nonresponse using auxiliary data. In this study, random nonresponse is assumed to occur in the survey variable in the second stage of cluster sampling, with full auxiliary information available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random nonresponse. In particular, it is used via an improved Nadaraya–Watson kernel regression technique to compensate for random nonresponse. The asymptotic bias and mean squared error of the proposed estimator are derived. Moreover, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error than existing estimators of a finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at the nominal coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of a finite population mean in demographic sample surveys.
{"title":"Estimation of a Finite Population Mean under Random Nonresponse Using Kernel Weights","authors":"Nelson Kiprono Bii, C. O. Onyango, J. Odhiambo","doi":"10.1155/2020/8090381","DOIUrl":"https://doi.org/10.1155/2020/8090381","url":null,"abstract":"Nonresponse is a potential source of errors in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques of reducing bias and variance due to random nonresponse using auxiliary data. In this study, it is assumed that random nonresponse occurs in the survey variable in the second stage of cluster sampling, assuming full auxiliary information is available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random nonresponse. In particular, auxiliary information is used via an improved Nadaraya–Watson kernel regression technique to compensate for random nonresponse. The asymptotic bias and mean squared error of the estimator proposed are derived. Besides, a simulation study conducted indicates that the proposed estimator has smaller values of the bias and smaller mean squared error values compared to existing estimators of a finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at coverage rate. The results obtained in this study are useful for instance in choosing efficient estimators of a finite population mean in demographic sample surveys.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/8090381","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45992648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The generalised Pareto distribution (GPD) offers a family of probability distributions that support threshold exceedances and is thus suitable for modelling high-end actuarial risks. Nonetheless, its continuity presents a critical limitation in characterising data of discrete form. Discretising the GPD therefore yields a derived distribution which accommodates count data while maintaining the essential tail-modelling properties of the GPD. In this paper, we model non-life insurance claims under the three-parameter discrete generalised Pareto (DGP) distribution. Data on reported and settled claims, spanning the period 2012–2016, were obtained from the National Insurance Commission, Ghana. The maximum likelihood estimation (MLE) principle was adopted in fitting the DGP to yearly and aggregated data. The estimation involved two steps. First, we propose a modification to the μ and μ + 1 frequency method in the literature; the proposal provides an alternative routine for generating initial estimates for MLE when count intervals vary, as is characteristic of the claim data under study. Second, a bootstrap algorithm is implemented to obtain standard errors of the estimators of the DGP parameters. The performance of the DGP is compared to the negative binomial distribution in modelling the claim data using the Akaike and Bayesian information criteria. The results show that the DGP is appropriate for modelling the count of non-life insurance claims and provides a better fit to the regulatory claim data considered.
{"title":"Assessing the Performance of the Discrete Generalised Pareto Distribution in Modelling Non-Life Insurance Claims","authors":"S. K. Dzidzornu, R. Minkah","doi":"10.1155/2021/5518583","DOIUrl":"https://doi.org/10.1155/2021/5518583","url":null,"abstract":"The generalised Pareto distribution (GPD) offers a family of probability spaces which support threshold exceedances and is thus suitable for modelling high-end actuarial risks. Nonetheless, its distributional continuity presents a critical limitation in characterising data of discrete forms. Discretising the GPD, therefore, yields a derived distribution which accommodates the count data while maintaining the essential tail modelling properties of the GPD. In this paper, we model non-life insurance claims under the three-parameter discrete generalised Pareto (DGP) distribution. Data for the study on reported and settled claims, spanning the period 2012–2016, were obtained from the National Insurance Commission, Ghana. The maximum likelihood estimation (MLE) principle was adopted in fitting the DGP to yearly and aggregated data. The estimation involved two steps. First, we propose a modification to the \u0000 \u0000 μ\u0000 \u0000 and \u0000 \u0000 \u0000 \u0000 μ\u0000 +\u0000 1\u0000 \u0000 \u0000 \u0000 frequency method in the literature. The proposal provides an alternative routine for generating initial estimators for MLE, in cases of varied count intervals, as is a characteristic of the claim data under study. Second, a bootstrap algorithm is implemented to obtain standard errors of estimators of the DGP parameters. The performance of the DGP is compared to the negative binomial distribution in modelling the claim data using the Akaike and Bayesian information criteria. The results show that the DGP is appropriate for modelling the count of non-life insurance claims and provides a better fit to the regulatory claim data considered.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49431851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A compounded method exploiting the searching capabilities of an operations research algorithm and the power of bootstrap techniques is presented. The resulting algorithm has been successfully tested to predict the turning point reached by the epidemic curve followed by the CoViD19 virus in Italy. Future lines of research, which include the generalization of the method to a broad set of distributions, are finally given.
{"title":"CoViD19 Meta heuristic optimization based forecast method on time dependent bootstrapped data","authors":"L. Fenga, Carlo Del Castello","doi":"10.1101/2020.04.02.20050153","DOIUrl":"https://doi.org/10.1101/2020.04.02.20050153","url":null,"abstract":"A compounded method, exploiting the searching capabilities of an operation research algorithm and the power of bootstrap techniques, is presented. The resulting algorithm has been successfully tested to predict the turning point reached by the epidemic curve followed by the CoViD19 virus in Italy. Futures lines of research, which include the generalization of the method to a broad set of distribution, will be finally given.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48269932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Forecasting the covolatility of asset return series has become the subject of extensive research among academics, practitioners, and portfolio managers. This paper estimates a variety of multivariate GARCH models using the weekly closing price of Brent crude oil (in USD/barrel) and the weekly closing price of Coffee Arabica (in USD/pound), and compares the forecasting performance of these models against high-frequency intraday data, which allow a more precise measurement of realized volatility. The study used weekly price data to explicitly model the covolatility and employed high-frequency intraday data to assess model forecasting performance. The analysis points to the conclusion that the varying conditional correlation (VCC) model with Student's t distributed innovation terms is the most accurate volatility forecasting model in the context of our empirical setting. We recommend that future researchers studying the forecasting performance of MGARCH models pay particular attention to the measurement of realized volatility and employ high-frequency data whenever feasible.
{"title":"Forecasting the Covolatility of Coffee Arabica and Crude Oil Prices: A Multivariate GARCH Approach with High-Frequency Data","authors":"Dawit Yeshiwas, Yebelay Berelie","doi":"10.1155/2020/1424020","DOIUrl":"https://doi.org/10.1155/2020/1424020","url":null,"abstract":"Forecasting the covolatility of asset return series is becoming the subject of extensive research among academics, practitioners, and portfolio managers. This paper estimates a variety of multivariate GARCH models using weekly closing price (in USD/barrel) of Brent crude oil and weekly closing prices (in USD/pound) of Coffee Arabica and compares the forecasting performance of these models based on high-frequency intraday data which allows for a more precise realized volatility measurement. The study used weekly price data to explicitly model covolatility and employed high-frequency intraday data to assess model forecasting performance. The analysis points to the conclusion that the varying conditional correlation (VCC) model with Student’s t distributed innovation terms is the most accurate volatility forecasting model in the context of our empirical setting. We recommend and encourage future researchers studying the forecasting performance of MGARCH models to pay particular attention to the measurement of realized volatility and employ high-frequency data whenever feasible.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/1424020","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43603875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Among several variable selection methods, LASSO is the most desirable estimation procedure for handling regularization and variable selection simultaneously in high-dimensional linear regression models when multicollinearity exists among the predictor variables. Since LASSO is unstable under high multicollinearity, the elastic-net (Enet) estimator has been used to overcome this issue. According to the literature, the estimation of regression parameters can be improved by adding prior information about the regression coefficients to the model, available in the form of exact or stochastic linear restrictions. In this article, we propose a stochastic restricted LASSO-type estimator (SRLASSO) that incorporates stochastic linear restrictions. Furthermore, we compare the performance of SRLASSO with LASSO and Enet under the root mean square error (RMSE) and mean absolute prediction error (MAPE) criteria in a Monte Carlo simulation study. Finally, a real-world example is used to demonstrate the performance of SRLASSO.
{"title":"Stochastic Restricted LASSO-Type Estimator in the Linear Regression Model","authors":"Kayanan Manickavasagar, P. Wijekoon","doi":"10.1155/2020/7352097","DOIUrl":"https://doi.org/10.1155/2020/7352097","url":null,"abstract":"Among several variable selection methods, LASSO is the most desirable estimation procedure for handling regularization and variable selection simultaneously in the high-dimensional linear regression models when multicollinearity exists among the predictor variables. Since LASSO is unstable under high multicollinearity, the elastic-net (Enet) estimator has been used to overcome this issue. According to the literature, the estimation of regression parameters can be improved by adding prior information about regression coefficients to the model, which is available in the form of exact or stochastic linear restrictions. In this article, we proposed a stochastic restricted LASSO-type estimator (SRLASSO) by incorporating stochastic linear restrictions. Furthermore, we compared the performance of SRLASSO with LASSO and Enet in root mean square error (RMSE) criterion and mean absolute prediction error (MAPE) criterion based on a Monte Carlo simulation study. Finally, a real-world example was used to demonstrate the performance of SRLASSO.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/7352097","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43099626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A risk measure commonly used in financial risk management, Value-at-Risk (VaR), is studied. In particular, we find a VaR forecast for heteroscedastic processes such that its (conditional) coverage probability is close to the nominal level. To do so, we pay attention to the effect of estimator variability, such as asymptotic bias and mean square error. A numerical analysis is carried out to illustrate this calculation for the Autoregressive Conditional Heteroscedastic (ARCH) model, a model with observable volatility. For comparison, we find the VaR for a latent volatility model, i.e., the Stochastic Volatility Autoregressive (SVAR) model. It is found that accounting for estimator variability is important for obtaining a VaR forecast with better coverage. In addition, we may only be able to assess the unconditional coverage probability for the VaR forecast of the SVAR model, because the volatility process of that model is unobservable.
{"title":"The Improved Value-at-Risk for Heteroscedastic Processes and Their Coverage Probability","authors":"Khreshna Syuhada","doi":"10.1155/2020/7638517","DOIUrl":"https://doi.org/10.1155/2020/7638517","url":null,"abstract":"A risk measure commonly used in financial risk management, namely, Value-at-Risk (VaR), is studied. In particular, we find a VaR forecast for heteroscedastic processes such that its (conditional) coverage probability is close to the nominal. To do so, we pay attention to the effect of estimator variability such as asymptotic bias and mean square error. Numerical analysis is carried out to illustrate this calculation for the Autoregressive Conditional Heteroscedastic (ARCH) model, an observable volatility type model. In comparison, we find VaR for the latent volatility model i.e., the Stochastic Volatility Autoregressive (SVAR) model. It is found that the effect of estimator variability is significant to obtain VaR forecast with better coverage. In addition, we may only be able to assess unconditional coverage probability for VaR forecast of the SVAR model. This is due to the fact that the volatility process of the model is unobservable.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/7638517","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45992320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}