Jiandong Ren's Discussion on “Size-Biased Risk Measures of Compound Sums,” by Michel Denuit, January 2020
Pub Date: 2021-06-03 | DOI: 10.1080/10920277.2021.1914666
Jiandong Ren
2. SIZE-BIASED TRANSFORM FOR DISTRIBUTIONS IN THE (a, b, 0) CLASS
The concept of (a, b, 0) class distributions is well known to actuaries, mainly because of the popularity of Panjer's recursive formulas for calculating the distribution of the corresponding compound sums. For detailed introductions and applications, refer to Klugman, Panjer, and Willmot (2019) and Sundt and Vernic (2009). In this section, we present a result for the size-biased transform of distributions in the class. For completeness, we begin with two definitions.
Definition 1. Let $p_N(k)$ denote the probability function of a discrete random variable $N$; it is a member of the $(a, b, 0)$ class of distributions if there exist constants $a$ and $b$ such that
$$\frac{p_N(k)}{p_N(k-1)} = a + \frac{b}{k}, \qquad k = 1, 2, \ldots$$
{"title":"Jiandong Ren's Discussion on “Size-Biased Risk Measures of Compound Sums,” by Michel Denuit, January 2020","authors":"Jiandong Ren","doi":"10.1080/10920277.2021.1914666","DOIUrl":"https://doi.org/10.1080/10920277.2021.1914666","url":null,"abstract":"2. SIZE-BIASED TRANSFORM FOR DISTRIBUTIONS IN ða,b, 0Þ CLASS The concept of ða, b, 0Þ class distributions is well known to actuaries, mainly because of the popularity of Panjer’s recursive formulas for calculating the distribution of the corresponding compound sums. For detailed introductions and applications, refer to Klugman, Panjer, and Willmot (2019) and Sundt and Vernic (2009). In this section, we present a result for the sizebiased transform of distributions in the class. For completeness, we begin with two definitions. Definition 1. Let PNðkÞ denote the probability function of a discrete random variable N; it is a member of the ða, b, 0Þ class of distributions if there exist constants a and b such that","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2021-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10920277.2021.1914666","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41323897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Systematic Review and Qualitative Assessment of Fraud Detection Methodologies in Health Care
Pub Date: 2021-06-02 | DOI: 10.1080/10920277.2021.1895843
Jing Ai, Jennifer Russomanno, Skyla Guigou, Rachel Allan
Health care fraud is a costly, challenging problem in health insurance. This study provides a systematic evaluation and synthesis of the methodologies and data samples used in current peer-reviewed studies, drawn from different academic fields, that characterize health care fraud. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was used to guide the literature review. In addition, a qualitative case study approach was employed to assess the studies included in the review, in order to independently confirm the conclusions of the systematic review. Of the 450 articles identified by the search criteria, 27 studies were deemed relevant and included in the analysis. Using 24 variables designed from the literature to synthesize the fraud detection methodologies, the systematic review showed an inability to compare studies quantitatively because few studies reported the accuracy of their detection methods or the overall rate of fraud. The qualitative assessment independently confirmed that prior studies are highly diverse, with the only common characteristic being widespread use of data mining methods. Applying a previously validated approach not taken by prior health care fraud reviews, our qualitative method showed high validity in terms of reviewers' agreement on the classification of fraud detection methods (r = 93%). Two limitations of this study are that the strength of the evidence relies on the quality and number of studies previously performed on the topic, and that our systematic review and qualitative results were limited to the text of the final studies as published in peer-reviewed journals. The main gaps we identified are the need to validate existing methods, the lack of proof of intent to commit fraud, the absence of a fraud rate estimate in the studies analyzed, and the inability to use prior evidence to select the best fraud detection method(s). Additional research designed to address these gaps would be of value to researchers, policymakers, and health care practitioners who aim to select the best fraud detection methods for their specific area of practice.
{"title":"A Systematic Review and Qualitative Assessment of Fraud Detection Methodologies in Health Care","authors":"Jing Ai, Jennifer Russomanno, Skyla Guigou, Rachel Allan","doi":"10.1080/10920277.2021.1895843","DOIUrl":"https://doi.org/10.1080/10920277.2021.1895843","url":null,"abstract":"Health care fraud is a costly, challenging problem in health insurance. This study provides a systematic evaluation and synthesis of the methodologies and data samples used in current peer-reviewed studies from different academic fields on characterizing health care fraud. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was used to guide reviewing the literature. In addition, a qualitative case study approach was employed to assess the studies included in the review in order to independently confirm the conclusions of the systematic review. Out of the 450 articles that were identified by the search criteria, 27 studies were deemed as relevant and included in the analysis. Using 24 variables designed from the literature to synthesize the fraud detection methodologies, the systematic review showed an inability to compare studies quantitatively because few studies reported the accuracy of their detection methods or the overall rate of fraud. The qualitative assessment independently confirmed that prior studies are highly diverse, with the only common characteristic being widespread use of data mining methods. Applying a previously validated approach that has not been taken by prior health care fraud reviews, our qualitative method showed high validity in terms of reviewers’ agreement on the classification of fraud detection methods (r = 93%). Two limitations of this study are that the strength of the evidence is reliant on the quality and number of studies previously performed on the topic, and our systematic review and qualitative results were limited to the text of the final studies as published in peer-reviewed journals. The main gaps we identified are the need to validate existing methods, lack of proof of intent to commit fraud, absence of a fraud rate estimate in the studies analyzed, and inability to use prior evidence to select the best fraud detection method(s). Additional research designed to address these gaps would be of value to researchers, policymakers, and health care practitioners who aim to select the best fraud detection methods for their specific area of practice.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2021-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10920277.2021.1895843","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48266263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Much Telematics Information Do Insurers Need for Claim Classification?
Pub Date: 2021-05-28 | DOI: 10.1080/10920277.2021.2022499
Francis Duval, J. Boucher, M. Pigeon
It has been shown several times in the literature that telematics data collected in motor insurance help insurers better understand an insured's driving risk. Insurers who use these data reap several benefits, such as a better estimate of the pure premium, more segmented pricing, and less adverse selection. The flip side of the coin is that collected telematics information is often sensitive and can therefore compromise policyholders' privacy. Moreover, due to its large volume, this type of data is costly to store and hard to manipulate. These factors, combined with the fact that insurance regulators tend to issue more and more recommendations regarding the collection and use of telematics data, make it important for an insurer to determine the right amount of telematics information to collect. In addition to traditional contract information such as the age and gender of the insured, we have access to a telematics dataset in which information is summarized by trip. We first derive several features of interest from these trip summaries before building a claim classification model using both traditional and telematics features. Comparing a few classification algorithms, we find that logistic regression with a lasso penalty is the most suitable for our problem. Using this model, we develop a method to determine how much information about policyholders' driving an insurer should keep. Using real data from a North American insurance company, we find that telematics data become redundant after about 3 months or 4,000 km of observation, at least from a claim classification perspective.
{"title":"How Much Telematics Information Do Insurers Need for Claim Classification?","authors":"Francis Duval, J. Boucher, M. Pigeon","doi":"10.1080/10920277.2021.2022499","DOIUrl":"https://doi.org/10.1080/10920277.2021.2022499","url":null,"abstract":"It has been shown several times in the literature that telematics data collected in motor insurance help to better understand an insured’s driving risk. Insurers who use these data reap several benefits, such as a better estimate of the pure premium, more segmented pricing, and less adverse selection. The flip side of the coin is that collected telematics information is often sensitive and can therefore compromise policyholders’ privacy. Moreover, due to their large volume, this type of data is costly to store and hard to manipulate. These factors, combined with the fact that insurance regulators tend to issue more and more recommendations regarding the collection and use of telematics data, make it important for an insurer to determine the right amount of telematics information to collect. In addition to traditional contract information such as the age and gender of the insured, we have access to a telematics dataset where information is summarized by trip. We first derive several features of interest from these trip summaries before building a claim classification model using both traditional and telematics features. By comparing a few classification algorithms, we find that logistic regression with lasso penalty is the most suitable for our problem. Using this model, we develop a method to determine how much information about policyholders’ driving should be kept by an insurer. Using real data from a North American insurance company, we find that telematics data become redundant after about 3 months or 4000 km of observation, at least from a claim classification perspective.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2021-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46495534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Breach CAT Bonds: Modeling and Pricing
Pub Date: 2021-05-04 | DOI: 10.1080/10920277.2021.1886948
Maochao Xu, Yiying Zhang
Data breaches cause millions of dollars in financial losses each year, and the insurance industry has been exploring ways to transfer such extreme risks. In this work, we investigate data breach catastrophe (CAT) bonds by developing a multiperiod pricing model. We find that a nonstationary extreme value model captures the statistical pattern of the monthly maximum of data breach size very well and, in particular, reveals a positive time trend. For the financial risks, we propose data-driven time series approaches to model the complex patterns exhibited by the financial data, which differ from those in the literature. Simulation studies are performed to determine the bond prices and cash flows. Our results show that the data breach CAT bond can be an attractive financial product and an effective instrument for transferring extreme data breach risk.
{"title":"Data Breach CAT Bonds: Modeling and Pricing","authors":"Maochao Xu, Yiying Zhang","doi":"10.1080/10920277.2021.1886948","DOIUrl":"https://doi.org/10.1080/10920277.2021.1886948","url":null,"abstract":"Data breaches cause millions of dollars in financial losses each year. The insurance industry has been exploring the ways to transfer such extreme risk. In this work, we investigate data breach catastrophe (CAT) bonds via developing a multiperiod pricing model. It is found that the nonstationary extreme value model can capture the statistical pattern of the monthly maximum of data breach size very well and, in particular, a positive time trend is discovered. For the financial risks, data-driven time series approaches are proposed to model the complex patterns exhibited by the financial data, which are different from those in the literature. Simulation studies are performed to determine the bond prices and cash flows. Our results show that the data breach CAT bond can be an attractive financial product and an effective instrument for transferring the extreme data breach risk.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10920277.2021.1886948","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46250931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Short- and Long-Term Dynamics of Cause-Specific Mortality Rates Using Cointegration Analysis
Pub Date: 2021-04-15 | DOI: 10.1080/10920277.2021.1874421
Séverine Arnold, V. Glushko
This article applies cointegration analysis and vector error correction models to capture the short- and long-run relationships between cause-specific mortality rates. We work with data from five developed countries (the United States, Japan, France, England and Wales, and Australia) and split the mortality rates into five main causes of death (infectious and parasitic diseases, cancer, circulatory diseases, respiratory diseases, and external causes). We successively adopt short- and long-term perspectives and analyze how each cause-specific mortality rate impacts, and reacts to, the shocks received from the other causes. We observe that the cause-specific mortality rates are closely linked to each other, apart from external causes, which show entirely independent behavior and hence could be considered truly exogenous. We summarize our findings with the aim of helping practitioners set more informed assumptions concerning the future development of mortality.
{"title":"Short- and Long-Term Dynamics of Cause-Specific Mortality Rates Using Cointegration Analysis","authors":"Séverine Arnold, V. Glushko","doi":"10.1080/10920277.2021.1874421","DOIUrl":"https://doi.org/10.1080/10920277.2021.1874421","url":null,"abstract":"This article applies cointegration analysis and vector error correction models to model the short- and long-run relationships between cause-specific mortality rates. We work with the data from five developed countries (the United States, Japan, France, England and Wales, and Australia) and split the mortality rates into five main causes of death (infectious and parasitic, cancer, circulatory diseases, respiratory diseases, and external causes). We successively adopt short- and long-term perspectives, and analyze how each cause-specific mortality rate impacts and reacts to the shocks received from the rest of the causes. We observe that the cause-specific mortality rates are closely linked to each other, apart from the external causes that show an entirely independent behavior and hence could be considered as truly exogenous. We summarize our findings with the aim to help practitioners set more informed assumptions concerning the future development of mortality.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10920277.2021.1874421","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44554246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Price Subsidies and the Demand for Automobile Insurance
Pub Date: 2021-03-17 | DOI: 10.1080/10920277.2022.2082986
Boheng Su, Sharon Tennyson
This article tests for regulation-induced adverse selection in the Massachusetts automobile insurance market during the 1990–2004 period of fix-and-establish rate regulation. We demonstrate the application of the test for adverse selection in Finkelstein and Poterba (Journal of Risk and Insurance 81 (4):709–34, 2014) to a regulated insurance market using group-level panel data on purchase amounts and loss costs. Differences between rates that incorporate state-mandated restrictions and those based on actuarial estimates provide a proxy for the unused observables needed to implement the test. Consistent with regulation-induced adverse selection, proxy values indicating higher unpriced risk are statistically significant and positively related to both insurance purchases and loss costs.
{"title":"Price Subsidies and the Demand for Automobile Insurance","authors":"Boheng Su, Sharon Tennyson","doi":"10.1080/10920277.2022.2082986","DOIUrl":"https://doi.org/10.1080/10920277.2022.2082986","url":null,"abstract":"This article tests for regulation-induced adverse selection in the Massachusetts automobile insurance market during the 1990–2004 period of fix-and-establish rate regulation. We demonstrate the application of the test for adverse selection in Finkelstein and Poterba (Journal of Risk and Insurance 81 (4):709–34, 2014) to a regulated insurance market using group-level panel data on purchase amounts and loss costs. Differences between rates that incorporate state-mandated restrictions and those based on actuarial estimates provide a proxy for the unused observables needed to implement the test. Consistent with regulation-induced adverse selection, proxy values indicating higher unpriced risk are statistically significant and positively related to both insurance purchases and loss costs.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2021-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46134324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixture Composite Regression Models with Multi-type Feature Selection
Pub Date: 2021-03-12 | DOI: 10.1080/10920277.2022.2099426
Tsz Chai Fung, G. Tzougas, M. Wüthrich
The aim of this article is to present a mixture composite regression model for claim severity modeling. Claim severity modeling poses several challenges, such as multimodality, tail-heaviness, and systematic effects in the data. We tackle this modeling problem by studying a mixture composite regression model that simultaneously models attritional and large claims and accommodates systematic effects in both the mixture components and the mixing probabilities. For model fitting, we present a group-fused regularization approach that allows us to select the explanatory variables that significantly impact the mixing probabilities and the different mixture components, respectively. We develop an asymptotic theory for this regularized estimation approach, and fitting is performed using a novel generalized expectation-maximization algorithm. We exemplify our approach on a real motor insurance dataset.
{"title":"Mixture Composite Regression Models with Multi-type Feature Selection","authors":"Tsz Chai Fung, G. Tzougas, M. Wüthrich","doi":"10.1080/10920277.2022.2099426","DOIUrl":"https://doi.org/10.1080/10920277.2022.2099426","url":null,"abstract":"The aim of this article is to present a mixture composite regression model for claim severity modeling. Claim severity modeling poses several challenges such as multimodality, tail-heaviness, and systematic effects in data. We tackle this modeling problem by studying a mixture composite regression model for simultaneous modeling of attritional and large claims and for considering systematic effects in both the mixture components as well as the mixing probabilities. For model fitting, we present a group-fused regularization approach that allows us to select the explanatory variables that significantly impact the mixing probabilities and the different mixture components, respectively. We develop an asymptotic theory for this regularized estimation approach, and fitting is performed using a novel generalized expectation-maximization algorithm. We exemplify our approach on a real motor insurance dataset.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2021-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47941457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On a Family of Log-Gamma-Generated Archimedean Copulas
Pub Date: 2021-02-25 | DOI: 10.1080/10920277.2020.1856687
Yaming Yang, Shuanming Li
Modeling the dependence structure among various risks, especially measuring tail dependence and aggregating risks, is crucial for risk management. In this article, we present an extension of the traditional one-parameter Archimedean copulas that integrates log-gamma-generated (LGG) margins. This novel class of multivariate distributions better captures tail dependence. The distortion effect on the classic one-parameter Archimedean copulas is exhibited, and an analytical expression for the sum of bivariate margins is proposed. The model provides a flexible way to capture tail risks and aggregate portfolio losses. Sufficient conditions for constructing a legitimate d-dimensional LGG Archimedean copula are also given, together with a simulation framework. Furthermore, two applications of the model are presented using concrete insurance datasets.
{"title":"On a Family of Log-Gamma-Generated Archimedean Copulas","authors":"Yaming Yang, Shuanming Li","doi":"10.1080/10920277.2020.1856687","DOIUrl":"https://doi.org/10.1080/10920277.2020.1856687","url":null,"abstract":"Modeling dependence structure among various risks, especially the measure of tail dependence and the aggregation of risks, is crucial for risk management. In this article, we present an extension to the traditional one-parameter Archimedean copulas by integrating the log-gamma-generated (LGG) margins. This class of novel multivariate distribution can better capture the tail dependence. The distortion effect on the classic one-parameter Archimedean copulas is well exhibited and the analytical expression of the sum of bivariate margins is proposed. The model provides a flexible way to capture tail risks and aggregate portfolio losses. Sufficient conditions for constructing a legitimate d-dimensional LGG Archimedean copula as well as the simulation framework are also proposed. Furthermore, two applications of this model are presented using concrete insurance datasets.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2021-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10920277.2020.1856687","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49415183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collaborative Insurance with Stop-Loss Protection and Team Partitioning
Pub Date: 2021-02-23 | DOI: 10.1080/10920277.2020.1855199
M. Denuit, C. Robert
Denuit (2019, 2020a) demonstrated that the conditional mean risk sharing rule introduced by Denuit and Dhaene (2012) is the appropriate theoretical tool for sharing losses in collaborative peer-to-peer insurance schemes. Denuit and Robert (2020a, 2020b, 2021) studied this risk sharing mechanism and established several attractive properties, including linear approximations when total losses or the number of participants grow large. It is also shown that the conditional expectation defining the conditional mean risk sharing is asymptotically increasing in the total loss (under mild technical assumptions). This ensures that the risk exchange is Pareto-optimal and that all participants have an interest in keeping total losses as small as possible. In this article, we design a flexible system in which entry prices can be made attractive compared to the premium of a regular, commercial insurance contract, and participants are awarded cash-backs in case of favorable experience while being protected by a stop-loss treaty in the opposite case. Members can also be grouped according to meaningful criteria, resulting in a hierarchical decomposition of the community. The particular case where realized losses are allocated in proportion to the pure premiums is studied.
{"title":"Collaborative Insurance with Stop-Loss Protection and Team Partitioning","authors":"M. Denuit, C. Robert","doi":"10.1080/10920277.2020.1855199","DOIUrl":"https://doi.org/10.1080/10920277.2020.1855199","url":null,"abstract":"Denuit (2019, 2020a) demonstrated that conditional mean risk sharing introduced by Denuit and Dhaene (2012) is the appropriate theoretical tool to share losses in collaborative peer-to-peer insurance schemes. Denuit and Robert (2020a, 2020b, 2021) studied this risk sharing mechanism and established several attractive properties including linear approximations when total losses or the number of participants get large. It is also shown there that the conditional expectation defining the conditional mean risk sharing is asymptotically increasing in the total loss (under mild technical assumptions). This ensures that the risk exchange is Pareto-optimal and that all participants have an interest to keep total losses as small as possible. In this article, we design a flexible system where entry prices can be made attractive compared to the premium of a regular, commercial insurance contract and participants are awarded cash-backs in case of favorable experience while being protected by a stop-loss treaty in the opposite case. Members can also be grouped according to some meaningful criteria, resulting in a hierarchical decomposition of the community. The particular case where realized losses are allocated in proportion to the pure premiums is studied.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2021-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10920277.2020.1855199","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47463162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Backcasting Mortality in England and Wales, 1600–1840
Pub Date: 2021-02-19 | DOI: 10.1080/10920277.2020.1853574
Di Wang, W. Chan
There have been significant developments in the literature in using extrapolative stochastic models for mortality forecasting (forward projection). However, little attention has been devoted to mortality backcasting (backward projection). This article proposes a simple mortality backcasting framework that can be used in practice. Research and analysis of English demography in the 17th and 18th centuries have suffered from a lack of mortality data. We attempt to alleviate this problem by developing a technique that runs backward in time and produces estimates of mortality before the time at which such data became available. After confirming the time reversibility of the mortality data, we compare the backcasting performance of some commonly used stochastic mortality models on the England and Wales data. The original Lee–Carter model is selected for backcasting this dataset. Finally, we examine the longevity of British artists between the 17th and the 20th centuries using the backcasted population mortality as a benchmark. The results show that artists living in Britain from 1600 to the mid 1800s had life expectancies similar to those of the general population, with a marked increase in longevity after the Industrial Revolution.
{"title":"Backcasting Mortality in England and Wales, 1600–1840","authors":"Di Wang, W. Chan","doi":"10.1080/10920277.2020.1853574","DOIUrl":"https://doi.org/10.1080/10920277.2020.1853574","url":null,"abstract":"There have been significant developments in using extrapolative stochastic models for mortality forecasting (forward projection) in the literature. However, little attention has been devoted to mortality backcasting (backward projection). This article proposes a simple mortality backcasting framework that can be used in practice. Research and analysis of English demography in the 17th and 18th centuries have suffered from a lack of mortality data. We attempt to alleviate this problem by developing a technique that runs backward in time and produces estimates of mortality data before the time at which such data became available. After confirming the time reversibility of the mortality data, we compare the backcasting performance of some commonly used stochastic mortality models for the England and Wales data. The original Lee–Carter model is selected for backcasting purpose of this dataset. Finally, we examine the longevity of British artists between the 17th and the 20th centuries using the backcasted population mortality as benchmarks. The results show that artists living in Britain from 1600 to the mid 1800s had life expectancies similar to those of the general population, with a marked increase in longevity after the Industrial Revolution.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2021-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10920277.2020.1853574","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48430451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}