TARGET VOLATILITY STRATEGIES FOR GROUP SELF-ANNUITY PORTFOLIOS
A. Olivieri, Samuel Thirurajah, Jonathan Ziveyi
ASTIN Bulletin, pp. 591–617. DOI: 10.1017/asb.2022.7. Published 2022-04-11.
Abstract While the current pandemic is causing mortality shocks globally, the management of longevity risk remains a major challenge for both individuals and institutions. There is a pressing need for private market solutions designed for efficient longevity risk transfer among stakeholders such as individuals, pension funds and annuity providers. From an individual's point of view, appealing features of post-retirement solutions include stable and satisfactory benefit levels, flexibility, meeting bequest preferences and low fees. This paper proposes a dynamic target volatility strategy for group self-annuitization (GSA) schemes aimed at enhancing living benefits for pool participants. More specifically, we suggest investing GSA funds in a portfolio consisting of equity and cash, continuously rebalanced to maintain a target volatility level. The performance of the dynamic target volatility strategy is assessed against a static case with no portfolio rebalancing. Benefit profiles are assessed by analysing quantiles and alternative strategies involving varying equity compositions. The case of death benefits is included, and the fund dynamics are analysed through the resulting investment returns and mortality credits. Overall, higher living benefit profiles are obtained under the dynamic target volatility strategy. The analysis reveals a trade-off between the equity proportion and the impact on the lower quantile of the living benefit amount, suggesting an optimal equity composition.
Bridging the gap between pricing and reserving with an occurrence and development model for non-life insurance claims
Jonas Crevecoeur, Katrien Antonio, S. Desmedt, Alexandre Masquelein
ASTIN Bulletin, pp. 185–212. DOI: 10.1017/asb.2023.14. Published 2022-03-14.
Abstract Due to the presence of reporting and settlement delays, claim data sets collected by non-life insurance companies are typically incomplete, featuring right-censored claim count and claim severity observations. Current practice in non-life insurance pricing tackles these right-censored data via a two-step procedure. First, best estimates are computed for the number of claims that occurred in past exposure periods and for the ultimate claim severities, using the incomplete, historical claim data. Second, pricing actuaries build predictive models to estimate technical, pure premiums for new contracts by treating these best estimates as actual observed outcomes, thereby neglecting their inherent uncertainty. We propose an alternative approach that brings valuable insights for both non-life pricing and reserving, effectively bridging two key actuarial tasks that have traditionally been discussed in silos. To this end, we develop a granular occurrence and development model for non-life claims that tackles reserving and at the same time resolves the inconsistency in traditional pricing techniques between actual observations and imputed best estimates. We illustrate our proposed model on an insurance as well as a reinsurance portfolio. The advantages of our proposed strategy are most compelling in the reinsurance illustration, where large uncertainties in the best estimates originate from long reporting and settlement delays, low claim frequencies and heavy (even extreme) claim sizes.
FUNCTIONAL PROFILE TECHNIQUES FOR CLAIMS RESERVING
M. Maciak, I. Mizera, M. Pešta
ASTIN Bulletin, pp. 449–482. DOI: 10.1017/asb.2022.4. Published 2022-03-10.
Abstract One of the most fundamental tasks in non-life insurance, performed on a regular basis, is loss reserving, which amounts to stochastically predicting the overall loss reserves needed to cover possible claims. The most common reserving methods are based on different parametric approaches using aggregated data structured in run-off triangles. In this paper, we propose a non-parametric approach, which handles the underlying loss development triangles as functional profiles and predicts the claim reserve distribution through a permutation bootstrap. Three competing functional-based reserving techniques, each with a slightly different scope, are presented; their theoretical and practical advantages – in particular, effortless implementation, robustness against outliers, and wide-range applicability – are discussed. Theoretical justifications of the methods are derived as well. An evaluation of the empirical performance of the designed methods and a full-scale comparison with standard (parametric) reserving techniques are carried out on several hundred real run-off triangles against the known real loss outcomes. An important objective of the paper is also to promote the natural usefulness of functional reserving methods among reserving practitioners.
A SIMPLE AND NEARLY OPTIMAL INVESTMENT STRATEGY TO MINIMIZE THE PROBABILITY OF LIFETIME RUIN
Xiaoqing Liang, V. Young
ASTIN Bulletin, pp. 619–643. DOI: 10.1017/asb.2022.3. Published 2022-02-16.
Abstract We study the optimal investment strategy to minimize the probability of lifetime ruin under a general mortality hazard rate. We explore the error between the minimum probability of lifetime ruin and the achieved probability of lifetime ruin if one follows a simple investment strategy inspired by earlier work in this area. We also include numerical examples to illustrate the estimation. We show that the nearly optimal probability of lifetime ruin under the simplified investment strategy is quite close to the original minimum probability of lifetime ruin under reasonable parameter values.
A NEW MULTIVARIATE ZERO-INFLATED HURDLE MODEL WITH APPLICATIONS IN AUTOMOBILE INSURANCE
Pengcheng Zhang, David G. W. Pitt, Xueyuan Wu
ASTIN Bulletin, pp. 393–416. DOI: 10.1017/asb.2021.39. Published 2022-01-07.
Abstract The fact that a large proportion of insurance policyholders make no claims during a one-year period highlights the importance of zero-inflated count models when analyzing the frequency of insurance claims. There is a vast literature focused on the univariate case of zero-inflated count models, while work in the area of multivariate models is considerably less advanced. Given that insurance companies write multiple lines of insurance business, where the claim counts on these lines of business are often correlated, there is a strong incentive to analyze multivariate claim count models. Motivated by the idea of Liu and Tian (Computational Statistics and Data Analysis, 83, 200–222; 2015), we develop a multivariate zero-inflated hurdle model to describe multivariate count data with extra zeros. This generalization offers more flexibility in modeling the behavior of individual claim counts while also incorporating a correlation structure between claim counts for different lines of insurance business. We develop an application of the expectation–maximization (EM) algorithm to enable the statistical inference necessary to estimate the parameters associated with our model. Our model is then applied to an automobile insurance portfolio from a major insurance company in Spain. We demonstrate that the model performance for the multivariate zero-inflated hurdle model is superior when compared to several alternatives.
IMPROVING AUTOMOBILE INSURANCE CLAIMS FREQUENCY PREDICTION WITH TELEMATICS CAR DRIVING DATA
Shengwang Meng, He Wang, Yanlin Shi, Guangyuan Gao
ASTIN Bulletin, pp. 363–391. DOI: 10.1017/asb.2021.35. Published 2021-12-27.
Abstract Novel navigation applications provide a driving behavior score, mainly based on experts’ domain knowledge, for each finished trip in order to promote safe driving. In this paper, using automobile insurance claims data and the associated telematics car driving data, we propose a supervised driving risk scoring neural network model. This one-dimensional convolutional neural network takes time series of individual car driving trips as input and returns a risk score in the unit range (0,1). By incorporating the credibility average risk score of each driver, the classical Poisson generalized linear model for automobile insurance claims frequency prediction can be improved significantly. Hence, compared with non-telematics-based insurers, telematics-based insurers can discover more heterogeneity in their portfolios and attract safer drivers with premium discounts.
MEAN–VARIANCE INSURANCE DESIGN WITH COUNTERPARTY RISK AND INCENTIVE COMPATIBILITY
T. Boonen, Wenjun Jiang
ASTIN Bulletin, pp. 645–667. DOI: 10.1017/asb.2021.36. Published 2021-12-13.
Abstract This paper studies optimal insurance design from the perspective of an insured when there is a possibility that the insurer defaults on its promised indemnity. Default of the insurer leads to limited liability, and the promised indemnity is only partially recovered in case of default. To alleviate potential ex post moral hazard, an incentive compatibility condition is added to restrict the permissible indemnity functions. Assuming that the premium is determined as a function of the expected coverage, and under the mean–variance preference of the insured, we derive the explicit structure of the optimal indemnity function through the marginal indemnity function formulation of the problem. It is shown that the optimal indemnity function depends on the first- and second-order expectations of the random recovery rate conditional on the realized insurable loss. The methodology and results in this article complement the literature on optimal insurance subject to default risk and provide new insights into problems of similar types.