Abstract Survivor funds are financial arrangements in which participants agree to share the proceeds of a collective investment pool in a pre-specified way depending on their survival. This offers investors a way to benefit from mortality credits, boosting financial returns. Following Denuit (2019, ASTIN Bulletin, 49, 591–617), participants are assumed to adopt the conditional mean risk sharing rule introduced in Denuit and Dhaene (2012, Insurance: Mathematics and Economics, 51, 265–270) to assess their respective shares in the mortality credits. This paper considers pools of individuals that are heterogeneous in terms of both their survival probability and their contributions. Under mild conditions, we show that individual risk can be fully diversified as the size of the group tends to infinity. For large groups, we derive simple, hierarchical approximations of the conditional mean risk sharing rule.
"MORTALITY CREDITS WITHIN LARGE SURVIVOR FUNDS" by M. Denuit, P. Hieber and C. Robert. ASTIN Bulletin, journal article, published 2022-06-15. DOI: https://doi.org/10.1017/asb.2022.13
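The conditional mean risk sharing rule allocates the pooled total S among participants as E[X_i | S]. A minimal Monte Carlo sketch (with made-up survival probabilities and contributions, not the paper's hierarchical approximations) illustrates the rule and its full-allocation property:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pool (illustrative numbers): heterogeneous survival
# probabilities and contributions, as in the paper's setting.
p = np.array([0.90, 0.95, 0.80, 0.85])      # one-period survival probabilities
c = np.array([100.0, 150.0, 120.0, 80.0])   # individual contributions

n_sim = 200_000
alive = rng.random((n_sim, p.size)) < p     # simulated survival indicators
X = c * (~alive)                            # X_i = c_i is forfeited if i dies
S = X.sum(axis=1)                           # total forfeited amount in the pool

# Conditional mean risk sharing: participant i's share of S is E[X_i | S].
# On the simulated sample, the conditional mean is computed exactly for each
# observed value of the (discrete) total S.
shares_by_total = {s: X[S == s].mean(axis=0) for s in np.unique(S)}
```

Summing the shares recovers S exactly (full allocation), which is the defining property of the rule exploited in the paper.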
Abstract The choice of a copula model from limited data is a hard but important task. Motivated by the visual patterns that different copula models produce in smoothed density heatmaps, we consider copula model selection as an image recognition problem. We extract image features from heatmaps using the pre-trained AlexNet and present workflows for model selection that combine image features with statistical information. We employ dimension reduction via Principal Component and Linear Discriminant Analyses and use a Support Vector Machine classifier. Simulation studies show that the use of image data improves the accuracy of the copula model selection task, particularly in scenarios where sample sizes and correlations are low. This finding indicates that transfer learning can support statistical procedures of model selection. We demonstrate the application of the proposed approach to the joint modelling of weekly returns of the MSCI and RISX indices.
"SELECTING BIVARIATE COPULA MODELS USING IMAGE RECOGNITION" by A. Tsanakas and Rui Zhu. ASTIN Bulletin, journal article, published 2022-05-24. DOI: https://doi.org/10.1017/asb.2022.12
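The workflow above can be sketched in simplified form. Here, raw 2D-histogram heatmap features stand in for the AlexNet features used in the paper, and simulated Gaussian versus Clayton copula samples are classified with PCA followed by an SVM; all parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def sample_gaussian_copula(n, rho):
    # Bivariate normal pushed through the normal CDF gives Gaussian copula data.
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    return norm.cdf(z)

def sample_clayton_copula(n, theta):
    # Conditional inversion: V | U = u has an explicit inverse for Clayton.
    u1, t = rng.random(n), rng.random(n)
    u2 = ((t ** (-theta / (theta + 1)) - 1) * u1 ** (-theta) + 1) ** (-1 / theta)
    return np.column_stack([u1, u2])

def heatmap_features(uv, bins=8):
    # Density heatmap on the unit square, flattened into a feature vector.
    h, _, _ = np.histogram2d(uv[:, 0], uv[:, 1], bins=bins, range=[[0, 1], [0, 1]])
    return (h / h.sum()).ravel()

# Labelled "image" data set: 0 = Gaussian copula, 1 = Clayton copula.
X, y = [], []
for _ in range(200):
    X.append(heatmap_features(sample_gaussian_copula(500, rho=0.5))); y.append(0)
    X.append(heatmap_features(sample_clayton_copula(500, theta=2.0))); y.append(1)
X, y = np.array(X), np.array(y)

# Dimension reduction plus SVM classifier, as in the paper's workflow.
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)
clf = make_pipeline(PCA(n_components=10), SVC()).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

The lower-tail clustering of the Clayton copula shows up clearly in the heatmap features, so even this crude stand-in for the transfer-learned features separates the two families well.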
Abstract Machine learning has recently entered the mortality literature in order to improve the forecasts of stochastic mortality models. This paper proposes two pure, tree-based machine learning models, random forests and gradient boosting, applied to the differenced log-mortality rates, to produce more accurate mortality forecasts. These forecasts are compared with forecasts from traditional, stochastic mortality models and with forecasts from random forest and gradient boosting variants of the stochastic models. The comparisons are based on the Model Confidence Set procedure. The results show that the pure, tree-based models significantly outperform all other models in the majority of cases considered. To address the lack of interpretability associated with machine learning models, we demonstrate how to extract information about the relationships uncovered by the tree-based models. For this purpose, we consider variable importance, partial dependence plots, and variable split conditions. Results from the in-sample fit suggest that tree-based models can be very useful tools for detecting patterns within and between variables that are not commonly identifiable with traditional methods.
"TREE-BASED MACHINE LEARNING METHODS FOR MODELING AND FORECASTING MORTALITY" by D. S. Bjerre. ASTIN Bulletin, journal article, published 2022-05-20. DOI: https://doi.org/10.1017/asb.2022.11
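A minimal sketch of this idea: a random forest fitted to differenced log-mortality rates on a synthetic mortality surface. The surface, features and hyperparameters below are made up for illustration; the paper's models and data are richer:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

ages = np.arange(60, 90)
years = np.arange(1980, 2020)

# Synthetic log-mortality surface (made-up Lee-Carter-like structure):
# an age effect, a linear improvement trend, and noise.
log_m = (-9.0 + 0.09 * (ages[:, None] - 60)
         - 0.015 * (years[None, :] - 1980)
         + rng.normal(0, 0.02, (ages.size, years.size)))

# Model the differenced log-rates, as in the paper's setup.
d = np.diff(log_m, axis=1)                   # shape (n_ages, n_years - 1)

# Features: age and previous year's difference; target: current difference.
age_feat = np.repeat(ages, d.shape[1] - 1)
lag_feat = d[:, :-1].ravel()
Xfeat = np.column_stack([age_feat, lag_feat])
ytgt = d[:, 1:].ravel()

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xfeat, ytgt)

# One-step-ahead forecast: feed each age's last observed difference.
Xnew = np.column_stack([ages, d[:, -1]])
d_next = rf.predict(Xnew)
log_m_next = log_m[:, -1] + d_next
```

After fitting, `rf.feature_importances_` gives the variable-importance measures of the kind discussed in the abstract.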
Abstract This paper investigates risk aggregation and capital allocation problems for an insurance portfolio consisting of several lines of business. The class of multivariate INAR(1) processes is proposed to model different sources of dependence between the number of claims of the portfolio. The total capital required for the whole portfolio is evaluated under the TVaR risk measure, and the contribution of each line of business is derived under the TVaR-based allocation rule. We provide the risk aggregation and capital allocation formulas in the general case of continuous and strictly positive claim sizes and then in the case of mixed Erlang claim sizes. The impact of both time dependence and cross-dependence on the behavior of risk aggregation and capital allocation is numerically illustrated.
"MULTIVARIATE DISTRIBUTIONS WITH TIME AND CROSS-DEPENDENCE: AGGREGATION AND CAPITAL ALLOCATION" by Xiang Hu and Lianzeng Zhang. ASTIN Bulletin, journal article, published 2022-05-01. DOI: https://doi.org/10.1017/asb.2022.8
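The aggregation and allocation steps can be illustrated empirically. Below, a common Poisson shock stands in for the paper's multivariate INAR(1) dependence, and Gamma severities stand in for mixed Erlang claim sizes; TVaR and the TVaR-based contributions are then estimated by simulation:

```python
import numpy as np

rng = np.random.default_rng(11)
n_sim, p = 20_000, 0.95

# Two lines of business with cross-dependence via a common Poisson shock
# (an illustrative stand-in for the paper's multivariate INAR(1) dynamics).
common = rng.poisson(1.0, n_sim)
n1 = rng.poisson(2.0, n_sim) + common
n2 = rng.poisson(3.0, n_sim) + common

# Gamma(shape=2, scale=50) severities per claim, summed per scenario.
x1 = np.array([rng.gamma(2.0, 50.0, k).sum() for k in n1])
x2 = np.array([rng.gamma(2.0, 50.0, k).sum() for k in n2])
s = x1 + x2                              # aggregate portfolio loss

var_p = np.quantile(s, p)                # VaR of the aggregate loss
tail = s > var_p
tvar = s[tail].mean()                    # TVaR_p(S) = E[S | S > VaR_p(S)]

# TVaR-based allocation: line i contributes E[X_i | S > VaR_p(S)].
alloc1, alloc2 = x1[tail].mean(), x2[tail].mean()
```

By construction the two contributions sum to the portfolio TVaR, which is the full-allocation property of the TVaR-based rule.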
Abstract While the current pandemic is causing mortality shocks globally, the management of longevity risk remains a major challenge for both individuals and institutions. There is a pressing need for private market solutions designed for efficient longevity risk transfer among stakeholders such as individuals, pension funds and annuity providers. From individuals’ point of view, appealing features of post-retirement solutions include stable and satisfactory benefit levels, flexibility, meeting bequest preferences and low fees. This paper proposes a dynamic target volatility strategy for group self-annuitization (GSA) schemes aimed at enhancing living benefits for pool participants. More specifically, we suggest investing GSA funds in a portfolio consisting of equity and cash, continuously rebalanced to maintain a target volatility level. The performance of the dynamic target volatility strategy is assessed against the static case, which does not involve portfolio rebalancing. Benefit profiles are assessed by analysing quantiles and alternative strategies involving varying equity compositions. The case of death benefits is included, and the fund dynamics are analysed by assessing the resulting investment returns and mortality credits. Overall, higher living benefit profiles are obtained under the dynamic target volatility strategy. From the analysis performed, a trade-off between the equity proportion and the impact on the lower quantile of the living benefit amount emerges, suggesting an optimal proportion of equity composition.
"TARGET VOLATILITY STRATEGIES FOR GROUP SELF-ANNUITY PORTFOLIOS" by A. Olivieri, Samuel Thirurajah and Jonathan Ziveyi. ASTIN Bulletin, journal article, published 2022-04-11. DOI: https://doi.org/10.1017/asb.2022.7
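A minimal sketch of the dynamic target volatility mechanism described above, assuming a constant-parameter equity return process and a simple rolling-window volatility estimate; all figures are illustrative, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(3)

n_days, mu, sigma = 2520, 0.06, 0.20        # ten years of daily equity returns
dt = 1 / 252
eq_ret = rng.normal(mu * dt, sigma * np.sqrt(dt), n_days)
cash_rate = 0.02 * dt                        # constant daily cash return

target_vol, window = 0.10, 60                # annualised target, lookback length
w = np.ones(n_days)                          # equity weight, start fully invested
port_ret = np.empty(n_days)
for t in range(n_days):
    if t >= window:
        # Rebalance so expected portfolio volatility matches the target,
        # capping the equity weight at 100% (no leverage).
        realised = eq_ret[t - window:t].std() * np.sqrt(252)
        w[t] = min(1.0, target_vol / realised)
    port_ret[t] = w[t] * eq_ret[t] + (1 - w[t]) * cash_rate

realised_port_vol = port_ret[window:].std() * np.sqrt(252)
```

With true equity volatility at 20% and a 10% target, the strategy holds roughly half equity and the realised portfolio volatility settles near the target, illustrating the stabilising effect on benefit profiles.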
Abstract Due to the presence of reporting and settlement delays, claim data sets collected by non-life insurance companies are typically incomplete, with right-censored claim count and claim severity observations. Current practice in non-life insurance pricing tackles these right-censored data via a two-step procedure. First, best estimates are computed for the number of claims that occurred in past exposure periods and for the ultimate claim severities, using the incomplete, historical claim data. Second, pricing actuaries build predictive models to estimate technical, pure premiums for new contracts by treating these best estimates as actual observed outcomes, thereby neglecting their inherent uncertainty. We propose an alternative approach that brings valuable insights for both non-life pricing and reserving, effectively bridging two key actuarial tasks that have traditionally been discussed in silos. To this end, we develop a granular occurrence and development model for non-life claims that tackles reserving and at the same time resolves the inconsistency in traditional pricing techniques between actual observations and imputed best estimates. We illustrate our proposed model on an insurance as well as a reinsurance portfolio. The advantages of our proposed strategy are most compelling in the reinsurance illustration, where large uncertainties in the best estimates originate from long reporting and settlement delays, low claim frequencies and heavy (even extreme) claim sizes.
"Bridging the gap between pricing and reserving with an occurrence and development model for non-life insurance claims" by Jonas Crevecoeur, Katrien Antonio, S. Desmedt and Alexandre Masquelein. ASTIN Bulletin, journal article, published 2022-03-14. DOI: https://doi.org/10.1017/asb.2023.14
Abstract One of the most fundamental tasks in non-life insurance, performed on a regular basis, is reserving risk assessment, which amounts to stochastically predicting the overall loss reserves needed to cover possible claims. The most common reserving methods are based on different parametric approaches using aggregated data structured in run-off triangles. In this paper, we propose a non-parametric approach, which handles the underlying loss development triangles as functional profiles and predicts the claim reserve distribution through permutation bootstrap. Three competitive functional-based reserving techniques, each with a slightly different scope, are presented; their theoretical and practical advantages, in particular effortless implementation, robustness against outliers, and wide-range applicability, are discussed. Theoretical justifications of the methods are derived as well. An evaluation of the empirical performance of the designed methods and a full-scale comparison with standard (parametric) reserving techniques are carried out on several hundred real run-off triangles against the known real loss outcomes. An important objective of the paper is also to promote the natural usefulness of functional reserving methods among reserving practitioners.
"FUNCTIONAL PROFILE TECHNIQUES FOR CLAIMS RESERVING" by M. Maciak, I. Mizera and M. Pešta. ASTIN Bulletin, journal article, published 2022-03-10. DOI: https://doi.org/10.1017/asb.2022.4
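A simplified illustration of the distribution-free bootstrap idea: rather than the paper's functional profiles, this sketch resamples observed link ratios of a toy cumulative run-off triangle to obtain a predictive distribution of the claim reserve (all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy cumulative run-off triangle (rows: accident years, cols: development years).
tri = np.array([
    [100., 160., 185., 195., 200.],
    [110., 175., 205., 215., np.nan],
    [120., 190., 220., np.nan, np.nan],
    [130., 210., np.nan, np.nan, np.nan],
    [140., np.nan, np.nan, np.nan, np.nan],
])
n = tri.shape[0]

# Observed link ratios per development period (across accident years).
ratios = [tri[: n - 1 - j, j + 1] / tri[: n - 1 - j, j] for j in range(n - 1)]

B = 2000
reserves = np.empty(B)
for b in range(B):
    total = 0.0
    for i in range(1, n):                    # open accident years
        latest = tri[i, n - 1 - i]           # latest observed cumulative amount
        ult = latest
        for j in range(n - 1 - i, n - 1):    # project to ultimate
            ult *= rng.choice(ratios[j])     # resample an observed link ratio
        total += ult - latest
    reserves[b] = total

# Bootstrap predictive distribution of the total reserve.
reserve_mean = reserves.mean()
q75 = np.quantile(reserves, 0.75)
```

The whole bootstrap distribution (not only a point estimate) is available, which mirrors the claim reserve distribution targeted by the functional profile techniques.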
Abstract We study the optimal investment strategy to minimize the probability of lifetime ruin under a general mortality hazard rate. We explore the error between the minimum probability of lifetime ruin and the achieved probability of lifetime ruin if one follows a simple investment strategy inspired by earlier work in this area. We also include numerical examples to illustrate the estimation. We show that the nearly optimal probability of lifetime ruin under the simplified investment strategy is quite close to the original minimum probability of lifetime ruin under reasonable parameter values.
"A SIMPLE AND NEARLY OPTIMAL INVESTMENT STRATEGY TO MINIMIZE THE PROBABILITY OF LIFETIME RUIN" by Xiaoqing Liang and V. Young. ASTIN Bulletin, journal article, published 2022-02-16. DOI: https://doi.org/10.1017/asb.2022.3