ROBUST ESTIMATION OF LOSS MODELS FOR LOGNORMAL INSURANCE PAYMENT SEVERITY DATA
Chudamani Poudyal. ASTIN Bulletin, 2021-03-02. doi:10.1017/asb.2021.4

Abstract. The primary objective of this work is to develop two estimation procedures – the maximum likelihood estimator (MLE) and the method of trimmed moments (MTM) – for the mean and variance of lognormal insurance payment severity data affected by the loss control mechanisms common in the insurance and financial industries, such as truncation (due to deductibles), censoring (due to policy limits), and scaling (due to coinsurance proportions). Maximum likelihood estimating equations are derived for both payment-per-payment and payment-per-loss data sets; these can be solved readily by standard iterative numerical methods. The asymptotic distributions of the resulting estimators are established via Fisher information matrices. Further, with the goal of balancing efficiency and robustness and of removing point masses at certain data points, we develop dynamic MTM estimation procedures for lognormal claim severity models under the above-mentioned data transformations. The asymptotic distributional properties of these MTM estimators, and their comparison with the corresponding MLEs, are established alongside extensive simulation studies. Purely for illustrative purposes, numerical examples based on 1500 US indemnity losses demonstrate the practical performance of the results established in this paper.
A GROUP REGULARISATION APPROACH FOR CONSTRUCTING GENERALISED AGE-PERIOD-COHORT MORTALITY PROJECTION MODELS
Dilan SriDaran, M. Sherris, Andrés M. Villegas, Jonathan Ziveyi. ASTIN Bulletin, 2021-02-23. doi:10.1017/asb.2021.29

Abstract. Given the rapid reductions in human mortality observed over recent decades and the uncertainty associated with their future evolution, actuaries and demographers have proposed a large number of mortality projection models in recent years. Many of these, however, suffer from being overly complex, thereby producing spurious forecasts, particularly over long horizons and for small, noisy data sets. In this paper, we exploit statistical learning tools, namely group regularisation and cross-validation, to provide a robust framework for constructing discrete-time mortality models by automatically selecting the most appropriate functions to best describe and forecast particular data sets. Most importantly, this approach produces bespoke models using a trade-off between complexity (to draw as much insight as possible from limited data sets) and parsimony (to prevent over-fitting to noise), with the trade-off designed to have specific regard to the forecasting horizon of interest. This is illustrated using both empirical data from the Human Mortality Database and simulated data, using code that has been made available within the user-friendly open-source R package StMoMo.
POINT AND INTERVAL FORECASTS OF DEATH RATES USING NEURAL NETWORKS
Simon Schnürch, R. Korn. ASTIN Bulletin, 2021-02-23. doi:10.1017/asb.2021.34

Abstract. The Lee–Carter model has become a benchmark in stochastic mortality modeling. However, its forecasting performance can be significantly improved upon by modern machine learning techniques. We propose a convolutional neural network (NN) architecture for mortality rate forecasting, empirically compare this model as well as other NN models to the Lee–Carter model, and find that lower forecast errors are achievable for many countries in the Human Mortality Database. We provide details on the errors and forecasts of our model to make it more understandable and, thus, more trustworthy. As NNs by default yield only point estimates, previous works applying them to mortality modeling have not investigated prediction uncertainty. We address this gap in the literature by implementing a bootstrapping-based technique and demonstrate that it yields highly reliable prediction intervals for our NN model.
ESTIMATION OF HIGH CONDITIONAL TAIL RISK BASED ON EXPECTILE REGRESSION
Jie Hu, Yu Chen, Keqi Tan. ASTIN Bulletin, 2021-02-15. doi:10.1017/asb.2021.3

Abstract. Assessing conditional tail risk at very high or low levels is of great interest in numerous applications. Due to data sparsity in high tails, the widely used quantile regression method can suffer from high variability at the tails, especially for heavy-tailed distributions. As an alternative to quantile regression, we consider expectile regression, which relies on minimization of an asymmetric L2-norm and is more sensitive to the magnitudes of extreme losses than quantile regression. In this article, we develop a new estimation method for high conditional tail risk by first estimating the intermediate conditional expectiles in a regression framework and then estimating the underlying tail index via weighted combinations of the top order conditional expectiles. The resulting conditional tail index estimators are then used as the basis for extrapolating the intermediate conditional expectiles to high tails under reasonable assumptions on tail behavior. Finally, we use these high conditional tail expectiles to estimate alternative risk measures such as Value at Risk (VaR) and Expected Shortfall (ES), both in high tails. The asymptotic properties of the proposed estimators are investigated. Simulation studies and real data analysis show that the proposed method outperforms alternative approaches.
A DOUBLE COMMON FACTOR MODEL FOR MORTALITY PROJECTION USING BEST-PERFORMANCE MORTALITY RATES AS REFERENCE
Jackie Li, Maggie Lee, S. Guthrie. ASTIN Bulletin, 2021-02-01. doi:10.1017/asb.2020.44

Abstract. We construct a double common factor model for projecting the mortality of a population using as a reference the minimum death rate at each age among a large number of countries. In particular, the female and male minimum death rates, described as best-performance or best-practice rates, are first modelled by a common factor model structure with both common and sex-specific parameters. The differences between the death rates of the population under study and the best-performance rates are then modelled by another common factor model structure. An important result of using our proposed model is that the projected death rates of the population being considered are coherent with the projected best-performance rates in the long term, the latter of which serves as a very useful reference for the projection based on the collective experience of multiple countries. Our out-of-sample analysis shows that the new model has potential to outperform some conventional approaches in mortality projection.
Measuring non-exchangeable tail dependence using tail copulas
Takaaki Koike, Shogo Kato, M. Hofert. ASTIN Bulletin, 2021-01-28. doi:10.1017/asb.2023.4

Abstract. Quantifying tail dependence is an important issue in insurance and risk management. The prevalent tail dependence coefficient (TDC), however, is known to underestimate the degree of tail dependence, and it does not capture non-exchangeable tail dependence since it evaluates the limiting tail probability only along the main diagonal. To overcome these issues, two novel tail dependence measures called the maximal tail concordance measure (MTCM) and the average tail concordance measure (ATCM) are proposed. Both measures are constructed based on tail copulas and possess clear probabilistic interpretations in that the MTCM evaluates the largest limiting probability among all comparable rectangles in the tail, and the ATCM is a normalized average of these limiting probabilities. In contrast to the TDC, the proposed measures can capture non-exchangeable tail dependence. Analytical forms of the proposed measures are also derived for various copulas. A real data analysis reveals striking tail dependence and tail non-exchangeability of the return series of stock indices, particularly in periods of financial distress.
ESTIMATION OF FUTURE DISCRETIONARY BENEFITS IN TRADITIONAL LIFE INSURANCE
F. Gach, Simon Hochgerner. ASTIN Bulletin, 2021-01-15. doi:10.1017/asb.2022.16

Abstract. In the context of life insurance with profit participation, the future discretionary benefits (FDB), which are a central item for Solvency II reporting, are generally calculated by computationally expensive Monte Carlo algorithms. We derive analytic formulas to estimate lower and upper bounds for the FDB. This yields an estimation interval for the FDB, and the average of the lower and upper bounds is a simple estimator. These formulas are designed for real-world applications, and we compare the results to publicly available reporting data.
OPTIMAL CONTROL OF THE DECUMULATION OF A RETIREMENT PORTFOLIO WITH VARIABLE SPENDING AND DYNAMIC ASSET ALLOCATION
P. Forsyth, K. Vetzal, G. Westmacott. ASTIN Bulletin, 2021-01-07. doi:10.1017/asb.2021.19

Abstract. We extend the Annually Recalculated Virtual Annuity (ARVA) spending rule for retirement savings decumulation (Waring and Siegel (2015) Financial Analysts Journal, 71(1), 91–107) to include a cap and a floor on withdrawals. With a minimum withdrawal constraint, the ARVA strategy runs the risk of depleting the investment portfolio. We determine the dynamic asset allocation strategy which maximizes a weighted combination of expected total withdrawals (EW) and expected shortfall (ES), defined as the average of the worst 5% of the outcomes of real terminal wealth. We compare the performance of our dynamic strategy to simpler alternatives which maintain constant asset allocation weights over time, accompanied by either our modified ARVA spending rule or withdrawals that are constant over time in real terms. Tests are carried out using both a parametric model of historical asset returns and bootstrap resampling of historical data. Consistent with previous literature that has used different measures of reward and risk than EW and ES, we find that allowing some variability in withdrawals leads to large improvements in efficiency. However, unlike the prior literature, we also demonstrate that further significant enhancements are possible by incorporating a dynamic asset allocation strategy rather than simply keeping asset allocation weights constant throughout retirement.