A Two-Part Beta Regression Approach for Modeling Surrenders and Withdrawals in a Life Insurance Portfolio
Pub Date: 2022-08-05 | DOI: 10.1080/10920277.2022.2087679
Fabio Baione, D. Biancalana, Paolo De Angelis
Beta regression is a flexible tool for modeling proportions and rates but is rarely applied in the actuarial field. In this article, we propose its application in the context of policyholder behavior, and particularly to model surrenders and withdrawals. Surrender implies the expiration of the contract and denotes the payment of the surrender value, which is contractually defined. Withdrawal does not imply the termination of the contract and denotes the payment of a cash amount, left to the discretion of the policyholder, within the limits of the surrender value. Moreover, Actuarial Standard of Practice 52 states that, for surrender and withdrawal estimation, the actuary should take into account several risk factors that could influence the phenomenon. To this aim, we introduce a two-part Beta regression model, where the first part consists of estimating the number of surrenders and withdrawals by means of a multinomial regression, as an extension of the logistic regression model frequently used in the empirical literature to estimate surrenders only. Then, to capture the uncertainty in the amount withdrawn, we express it as a proportion of the surrender value; in this way, it takes values continuously in the unit interval and is consistent with a Beta distribution. Therefore, in the second part, we propose a Beta regression approach to model the proportion of the surrender value withdrawn. Our final goal is to apply the model to a real-life insurance portfolio, providing estimates of the number of surrenders and withdrawals as well as the corresponding cash amounts for each risk class considered.
{"title":"A Two-Part Beta Regression Approach for Modeling Surrenders and Withdrawals in a Life Insurance Portfolio","authors":"Fabio Baione, D. Biancalana, Paolo De Angelis","doi":"10.1080/10920277.2022.2087679","DOIUrl":"https://doi.org/10.1080/10920277.2022.2087679","url":null,"abstract":"Beta regression is a flexible tool in modeling proportions and rates, but is rarely applied in th actuarial field. In this article, we propose its application in the context of policyholder behavior and particularly to model surrenders and withdrawals. Surrender implies the expiration of the contract and denotes the payment of the surrender value, which is contractually defined. Withdrawal does not imply the termination of the contract and denotes the payment of a cash amount, left to the discretion of the policyholder, within the limits of the surrender value. Moreover, the Actuarial Standard of Practice 52 states that, for surrender and withdrawal estimation, the actuary should take into account several risk factors that could influence the phenomenon. To this aim, we introduce a two-part Beta regression model, where the first part consists in the estimate of the number of surrenders and withdrawals by means of a multinomial regression, as an extension of the logistic regression model frequently used in the empirical literature just to estimate surrender. Then, considering the uncertainty on the amount withdrawn, we express it as a proportion of surrender value; in this way, it assumes values continuously in the interval and it is compliant with a Beta distribution. Therefore, in the second part, we propose the adoption of a Beta regression approach to model the proportion withdrawn of the surrender value. Our final goal is to apply our model on a real-life insurance portfolio providing the estimates of the number of surrenders and withdrawals as well as the corresponding cash amount for each risk class considered.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47065086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discussion on “The Discriminating (Pricing) Actuary,” by Edward W. (Jed) Frees and Fei Huang
Pub Date: 2022-07-27 | DOI: 10.1080/10920277.2022.2078373
R. Thomas
I congratulate the authors on this enjoyable and timely article, which touches on several of my interests. I would like to offer some comments on nonrisk price discrimination, that is, individual price variations that do not reflect expected costs (sometimes described as “price optimization”).
{"title":"Discussion on “The Discriminating (Pricing) Actuary,” by Edward W. (Jed) Frees and Fei Huang","authors":"R. Thomas","doi":"10.1080/10920277.2022.2078373","DOIUrl":"https://doi.org/10.1080/10920277.2022.2078373","url":null,"abstract":"I congratulate the authors on this enjoyable and timely article, which touches on several of my interests. I would like to offer some comments on nonrisk price discrimination, that is, individual price variations that do not reflect expected costs (sometimes described as “ price optimization ” )","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43339698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smoothed Quantiles for Measuring Discrete Risks
Pub Date: 2022-07-15 | DOI: 10.1080/10920277.2022.2071741
V. Brazauskas, Ponmalar Ratnam
Many risk measures can be defined through the quantile function of the underlying loss variable (e.g., a class of distortion risk measures). When the loss variable is discrete or mixed, however, the definition of risk measures has to be broadened, which makes statistical inference trickier. To facilitate a straightforward transition from the risk measurement literature on continuous loss variables to that on discrete ones, in this article we study smoothing of quantiles for discrete variables. Smoothed quantiles are defined using the theory of fractional, or imaginary, order statistics, which originated with Stigler (1977). To prove consistency and asymptotic normality of sample estimators of smoothed quantiles, we utilize the results of Wang and Hutson (2011) and generalize them to vectors of smoothed quantiles. Further, we thoroughly investigate extensions of this methodology to discrete populations with infinite support (e.g., Poisson and zero-inflated Poisson distributions). Large- and small-sample properties of the newly designed estimators are investigated theoretically and through Monte Carlo simulations. Finally, applications of smoothed quantiles to risk measurement (e.g., estimation of distortion risk measures such as Value at Risk, conditional tail expectation, and the proportional hazards transform) are discussed and illustrated using automobile accident data. Comparisons between the classical (and linearly interpolated) quantiles and smoothed quantiles are performed as well.
{"title":"Smoothed Quantiles for Measuring Discrete Risks","authors":"V. Brazauskas, Ponmalar Ratnam","doi":"10.1080/10920277.2022.2071741","DOIUrl":"https://doi.org/10.1080/10920277.2022.2071741","url":null,"abstract":"Many risk measures can be defined through the quantile function of the underlying loss variable (e.g., a class of distortion risk measures). When the loss variable is discrete or mixed, however, the definition of risk measures has to be broadened, which makes statistical inference trickier. To facilitate a straightforward transition from the risk measurement literature of continuous loss variables to that of discrete, in this article we study smoothing of quantiles for discrete variables. Smoothed quantiles are defined using the theory of fractional or imaginary order statistics, which was originated by Stigler (1977). To prove consistency and asymptotic normality of sample estimators of smoothed quantiles, we utilize the results of Wang and Hutson (2011) and generalize them to vectors of smoothed quantiles. Further, we thoroughly investigate extensions of this methodology to discrete populations with infinite support (e.g., Poisson and zero-inflated Poisson distributions). Furthermore, large- and small-sample properties of the newly designed estimators are investigated theoretically and through Monte Carlo simulations. Finally, applications of smoothed quantiles to risk measurement (e.g., estimation of distortion risk measures such as Value at Risk, conditional tail expectation, and proportional hazards transform) are discussed and illustrated using automobile accident data. Comparisons between the classical (and linearly interpolated) quantiles and smoothed quantiles are performed as well.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43253060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Fitting Probability Distribution to Univariate Grouped Actuarial Data with Both Group Mean and Relative Frequencies
Pub Date: 2022-07-08 | DOI: 10.1080/10920277.2022.2069124
Gaurav Khemka, David G. W. Pitt, Jinhui Zhang
Many publicly available datasets relevant to actuarial work contain data grouped in various ways. For example, operational loss data are often reported in a grouped format that includes group boundaries, loss frequency, and the average or total amount of loss for each group. The process of fitting a parametric distribution to grouped data becomes more complex, but potentially more accurate, when additional information, such as group means, is incorporated in the estimation process. This article compares the relative performance of three methods of inference using distributions suitable for actuarial applications, particularly those that are right-skewed, heavy-tailed, and left-truncated. We compare the traditional maximum likelihood method, which considers only the group limits and the frequency of observations in each group, to two research innovations: a modified maximum likelihood method and a modified generalized method of moments approach, both of which incorporate additional group mean information in the estimation process. We perform a simulation study in which the proposed methods outperform the traditional maximum likelihood method and the maximum entropy approach, both when the true underlying distribution is known and when it is unknown. Further, we apply the methods to three actuarial datasets: operational loss data, pension fund data, and car insurance claims data. Here we compare the performance of the three methods along with the maximum entropy distribution (under the traditional maximum likelihood and the modified maximum likelihood methods) and find that for all three datasets the proposed methods outperform the traditional maximum likelihood method. We conclude that there is merit in considering the proposed methods when fitting a parametric distribution to grouped data.
{"title":"On Fitting Probability Distribution to Univariate Grouped Actuarial Data with Both Group Mean and Relative Frequencies","authors":"Gaurav Khemka, David G. W. Pitt, Jinhui Zhang","doi":"10.1080/10920277.2022.2069124","DOIUrl":"https://doi.org/10.1080/10920277.2022.2069124","url":null,"abstract":"Many publicly available datasets relevant to actuarial work contain data grouped in various ways. For example, operational loss data are often reported in a grouped format that includes group boundaries, loss frequency, and average or total amount of loss for each group. The process of fitting a parametric distribution to grouped data becomes more complex but potentially more accurate when additional information, such as group means, is incorporated in the estimation process. This article compares the relative performance of three methods of inference using distributions suitable for actuarial applications, particularly those that are right-skewed, heavy-tailed, and left-truncated. We compare the traditional maximum likelihood method, which only considers the group limits and frequency of observations in each group, to two research innovations: a modified maximum likelihood method and a modified generalized method of moments approach, both of which incorporate additional group mean information in the estimation process. We perform a simulation study where the proposed methods outperform the traditional maximum likelihood method and the maximum entropy when the true underlying distribution is both known and unknown. Further, we apply the methods to three actuarial datasets: operational loss data, pension fund data, and car insurance claims data. Here we compare the performance of the three methods along with the maximum entropy distribution (under the traditional maximum likelihood and the modified maximum likelihood methods) and find that for all three datasets the proposed methods outperform the traditional maximum likelihood method. We conclude that there is merit in considering the proposed methods while fitting a parametric distribution to grouped data.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42860682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Doubly Enhanced Medicaid Partnership Annuities (DEMPANs): A New Tool for Providing Long Term Care to Retired U.S. Seniors in the Medicaid Penumbra
Pub Date: 2022-05-03 | DOI: 10.1080/10920277.2022.2036198
C. Ramsay, V. I. Oguledo
A major problem facing many U.S. retirees is accessing and paying for long-term care. The 2019 National Association of Insurance Commissioners (NAIC) guide on long-term care insurance estimates that, of the individuals living in the United States who reach age 65, about 70% are expected to need some form of long-term care at least once in their lifetime and about 35% are expected to enter a nursing home at least once in their lifetime. Although Medicare covers most of a U.S. retiree’s medical care, Medicare does not ordinarily pay for long-term care. U.S. retirees can often access long-term care services via the Medicaid program, which is a means-tested program geared to lower-income Americans. But, to quickly qualify for Medicaid, many retirees take drastic steps, such as transferring their assets to family members. When access to long-term care is not urgent and long-term planning is an option, most U.S. states have developed so-called Partnership for Long-Term Care (PLTC) Program insurance policies that provide access to Medicaid services while sheltering some or all of a retiree’s assets. In this article, we propose a hybrid annuity product called a doubly enhanced Medicaid partnership annuity (DEMPAN) that combines an annuity with a long-term care rider integrated within the framework of a qualified partnership policy. (Outside the United States, bundled retirement products similar to DEMPANs are often called life-care annuities.) To analyze our DEMPANs, we use a multistate model of long-term care with health states based on a retiree’s ability to perform activities of daily living (ADLs) and instrumental activities of daily living (IADLs) and on cognitive ability. A significant contribution of this article is to explicitly model how the quality of long-term care a retiree receives affects the retiree’s health state transition probabilities used in the multistate model. As higher quality of care usually comes at a higher cost but with better health outcomes, we provide an example that explores the optimal DEMPAN choice of a retiree who maximizes expected discounted utility. Our example shows that it may be optimal for retirees who purchase DEMPANs to simply buy average-quality long-term care. We hope DEMPANs fill a gap in the long-term care market by providing an important tool for elder care planning for those in the Medicaid penumbra (i.e., in the middle- and lower-middle-income classes). Retirees who purchase DEMPANs have the benefits of an annuity, private long-term care, Medicaid assistance with paying their long-term care bills, and some degree of asset protection from Medicaid estate recovery.
{"title":"Doubly Enhanced Medicaid Partnership Annuities (DEMPANs): A New Tool for Providing Long Term Care to Retired U.S. Seniors in the Medicaid Penumbra","authors":"C. Ramsay, V. I. Oguledo","doi":"10.1080/10920277.2022.2036198","DOIUrl":"https://doi.org/10.1080/10920277.2022.2036198","url":null,"abstract":"A major problem facing many U.S. retirees is accessing and paying for long term care. The 2019 National Association of Insurance Commissioners (NAIC) guide on long-term care insurance estimates that, of the individuals living in the United States who reach age 65, about 70% are expected to need some form of long-term care at least once in their lifetime and about 35% are expected to enter a nursing home at least once in their lifetime. Although Medicare covers most of a U.S. retiree’s medical care, Medicare does not ordinarily pay for long-term care. U.S. retirees often can access long-term care services via the Medicaid program, which is a means-tested program geared to lower income Americans. But, to quickly qualify for Medicaid, many retirees take drastic steps, such as transferring their assets to family members. When access to long-term care is not urgent and long-term planning is an option, most U.S. states have developed so-called Partnership for Long-Term Care (PLTC) Program insurance policies that provide access to Medicaid services while sheltering some or all of a retiree’s assets. In this article, we propose a hybrid annuity product called a doubly enhanced Medicaid partnership annuity (DEMPAN) that combines an annuity with a long-term care rider that is integrated within the framework of a qualified partnership policy. (Outside the United States, bundled retirement products similar to DEMPANs are often called life-care annuities.) To analyze our DEMPANs, we use a multistate model of long-term care with health states that are based on a retiree’s ability to perform activities of daily living (ADLs) and instrumental activities of daily living (IADLs) and cognitive ability. A significant contribution of this article is to explicitly model how the quality of long-term care a retiree receives affects the retiree’s health state transition probabilities used in the multistate model. As higher quality of care usually comes at a higher cost but with better health outcomes, we provide an example that explores an expected discounted utility maximizing retiree’s optimal choice of DEMPAN. Our example showed that it may be optimal for retirees who purchase DEMPANs to simply buy average quality long-term care. We hope DEMPANs fill a gap in the long-term care market by providing an important tool for elder care planning for those in the Medicaid penumbra (i.e., in the middle- and lower-middle-income classes). 
Retirees who purchase DEMPANs have the benefits of an annuity, private long-term care, Medicaid assistance with paying their long-term care bills, and some degree of asset protection from Medicaid estate recovery.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47077339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
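As a toy illustration of the multistate idea, and of letting care quality shift the transition probabilities, consider the following sketch; the states, probabilities, costs, and utility function are all invented placeholders and are much simpler than the ADL/IADL-based model in the article.

```python
import numpy as np

# Toy health states: 0 = independent, 1 = needs long-term care, 2 = dead
def transition_matrix(quality):
    """One-year transition matrix; higher care quality (in [0, 1]) is assumed to
    improve recovery and lower mortality in the LTC state (illustrative numbers only)."""
    p_recover = 0.05 + 0.10 * quality
    p_die_ltc = 0.30 - 0.10 * quality
    return np.array([
        [0.88, 0.08, 0.04],
        [p_recover, 1.0 - p_recover - p_die_ltc, p_die_ltc],
        [0.00, 0.00, 1.00],
    ])

def expected_discounted_utility(quality, annuity=40_000, base_cost=30_000,
                                horizon=30, v=1 / 1.03, gamma=2.0):
    """Occupancy-probability-weighted CRRA utility of net income; better care costs more."""
    u = lambda c: (c ** (1 - gamma) - 1) / (1 - gamma)
    cost = base_cost * (0.5 + quality)                     # out-of-pocket LTC cost rises with quality
    consumption = [annuity, max(annuity - cost, 1_000)]    # net income when independent / in LTC
    P = transition_matrix(quality)
    occ = np.array([1.0, 0.0, 0.0])                        # start in the independent state
    total = 0.0
    for t in range(horizon):
        total += v ** t * (occ[0] * u(consumption[0]) + occ[1] * u(consumption[1]))
        occ = occ @ P
    return total

qualities = np.linspace(0.0, 1.0, 11)
best = max(qualities, key=expected_discounted_utility)
print("utility-maximizing care quality:", float(best))
```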
Using Machine Learning to Better Model Long-Term Care Insurance Claims
Pub Date: 2022-04-20 | DOI: 10.1080/10920277.2021.2022497
Jared Cummings, Brian Hartman
Long-term care insurance (LTCI) should be an essential part of a family financial plan. It can protect assets from the expensive and relatively common risk of needing disability assistance, yet LTCI purchase rates are lower than expected. Though there are multiple reasons for this trend, it is partially due to the difficulty insurers have in operating profitably as LTCI providers. If LTCI providers were better able to forecast claim rates, they would have less difficulty maintaining profitability. In this article, we develop several models to improve upon those used by insurers to forecast claim rates. We find that standard logistic regression is outperformed by tree-based and neural network models. More modest improvements can be found by using a neighbor-based model. Of all the tested models, the random forest models were the consistent top performers. Additionally, simple sampling techniques influence the performance of each of the models. This is especially true for the deep neural network, which improves drastically under oversampling. The effects of the sampling vary depending on the size of the available data. To better understand this relationship, we thoroughly examine three states with varying amounts of available data as case studies.
{"title":"Using Machine Learning to Better Model Long-Term Care Insurance Claims","authors":"Jared Cummings, Brian Hartman","doi":"10.1080/10920277.2021.2022497","DOIUrl":"https://doi.org/10.1080/10920277.2021.2022497","url":null,"abstract":"Long-term care insurance (LTCI) should be an essential part of a family financial plan. It could protect assets from the expensive and relatively common risk of needing disability assistance, and LTCI purchase rates are lower than expected. Though there are multiple reasons for this trend, it is partially due to the difficultly insurers have in operating profitably as LTCI providers. If LTCI providers were better able to forecast claim rates, they would have less difficulty maintaining profitability. In this article, we develop several models to improve upon those used by insurers to forecast claim rates. We find that standard logistic regression is outperformed by tree-based and neural network models. More modest improvements can be found by using a neighbor-based model. Of all of our tested models, the random forest models were the consistent top performers. Additionally, simple sampling techniques influence the performance of each of the models. This is especially true for the deep neural network, which improves drastically under oversampling. The effects of the sampling vary depending on the size of the available data. To better understand this relationship, we thoroughly examine three states with various amounts of available data as case studies.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48447615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calibrating Distribution Models from PELVE
Pub Date: 2022-04-19 | DOI: 10.1080/10920277.2023.2211648
H. Assa, Liyuan Lin, Ruodu Wang
The Value-at-Risk (VaR) and the Expected Shortfall (ES) are the two most popular risk measures in banking and insurance regulation. To bridge the two regulatory risk measures, the Probability Equivalent Level of VaR-ES (PELVE) was recently proposed to convert a level of VaR to that of ES. It is straightforward to compute the value of PELVE for a given distribution model. In this paper, we study the converse problem of PELVE calibration, that is, finding a distribution model that yields a given PELVE, which may be obtained either from data or from expert opinion. We discuss separately the cases in which one-point, two-point, n-point, and curve constraints are given. In the most complicated case of a curve constraint, we convert the calibration problem into that of an advanced differential equation. We apply the model calibration techniques to estimation and simulation for datasets used in insurance. We further study some technical properties of PELVE by offering a few new results on monotonicity and convergence.
{"title":"Calibrating Distribution Models from PELVE","authors":"H. Assa, Liyuan Lin, Ruodu Wang","doi":"10.1080/10920277.2023.2211648","DOIUrl":"https://doi.org/10.1080/10920277.2023.2211648","url":null,"abstract":"The Value-at-Risk (VaR) and the Expected Shortfall (ES) are the two most popular risk measures in banking and insurance regulation. To bridge between the two regulatory risk measures, the Probability Equivalent Level of VaR-ES (PELVE) was recently proposed to convert a level of VaR to that of ES. It is straightforward to compute the value of PELVE for a given distribution model. In this paper, we study the converse problem of PELVE calibration, that is, to find a distribution model that yields a given PELVE, which may either be obtained from data or from expert opinion. We discuss separately the cases when one-point, two-point, n-point and curve constraints are given. In the most complicated case of a curve constraint, we convert the calibration problem to that of an advanced differential equation. We apply the model calibration techniques to estimation and simulation for datasets used in insurance. We further study some technical properties of PELVE by offering a few new results on monotonicity and convergence.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47792125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Neural Approach to Improve the Lee-Carter Mortality Density Forecasts
Pub Date: 2022-04-13 | DOI: 10.1080/10920277.2022.2050260
Mario Marino, Susanna Levantesi, A. Nigri
Several countries worldwide are experiencing a continuous increase in life expectancy, adding to the challenges that life actuaries and demographers face in forecasting mortality. Although several stochastic mortality models have been proposed in the literature, mortality forecasting remains a crucial research task. Recently, various research works have encouraged the use of deep learning models to extrapolate suitable patterns within mortality data. Such models can achieve accurate point predictions, though uncertainty measures are also necessary to support both the reliability of model estimates and risk evaluation. As a new advance in mortality forecasting, we formalize the integration of deep neural networks within the Lee-Carter framework, as a first bridge between deep learning and mortality density forecasts. We test our proposal in a numerical application considering three representative countries and both genders, scrutinizing two different fitting periods. Judging by the biological reasonableness and plausibility of the forecasts, as well as by performance metrics, our findings confirm the suitability of deep learning models for improving the predictive capacity of the Lee-Carter model, providing more reliable mortality boundaries in the long run.
{"title":"A Neural Approach to Improve the Lee-Carter Mortality Density Forecasts","authors":"Mario Marino, Susanna Levantesi, A. Nigri","doi":"10.1080/10920277.2022.2050260","DOIUrl":"https://doi.org/10.1080/10920277.2022.2050260","url":null,"abstract":"Several countries worldwide are experiencing a continuous increase in life expectancy, extending the challenges of life actuaries and demographers in forecasting mortality. Although several stochastic mortality models have been proposed in the literature, mortality forecasting research remains a crucial task. Recently, various research works have encouraged the use of deep learning models to extrapolate suitable patterns within mortality data. Such learning models allow achieving accurate point predictions, though uncertainty measures are also necessary to support both model estimate reliability and risk evaluation. As a new advance in mortality forecasting, we formalize the deep neural network integration within the Lee-Carter framework, as a first bridge between the deep learning and the mortality density forecasts. We test our model proposal in a numerical application considering three representative countries worldwide and for both genders, scrutinizing two different fitting periods. Exploiting the meaning of both biological reasonableness and plausibility of forecasts, as well as performance metrics, our findings confirm the suitability of deep learning models to improve the predictive capacity of the Lee-Carter model, providing more reliable mortality boundaries in the long run.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46878288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiemployer Defined Benefit Pension Plans: Employer Withdrawals and Financial Vulnerability
Pub Date: 2022-04-11 | DOI: 10.1080/10920277.2022.2041040
Tianxiang Shi, Xuesong You
Multiemployer defined benefit pension plans are facing severe funding challenges. The Pension Protection Act of 2006 requires that a multiemployer pension plan with an actuarial funded percentage of less than 80% take corrective actions to improve its financial health. We use a regression discontinuity design to examine the effect of funding rule requirements on employer withdrawals from multiemployer pension plans. We find that multiemployer pension plans subject to funding rule requirements are about 14 percentage points more likely to experience employer withdrawals within a one-year period than plans not required to take any corrective actions. We also find that plans with ex ante employer withdrawal experience are more vulnerable to financial shocks such as the 2008 financial crisis. Our study provides important policy implications for regulators concerning best practices for building pension plan resilience to insolvency events.
{"title":"Multiemployer Defined Benefit Pension Plans: Employer Withdrawals and Financial Vulnerability","authors":"Tianxiang Shi, Xuesong You","doi":"10.1080/10920277.2022.2041040","DOIUrl":"https://doi.org/10.1080/10920277.2022.2041040","url":null,"abstract":"Multiemployer defined benefit pension plans are facing severe funding challenges. The Pension Protection Act of 2006 requires that a multiemployer pension plan with an actuarial funded percentage of less than 80% must take corrective actions to improve its financial health. We use a regression discontinuity design to examine the effect of funding rule requirements on employer withdrawals from multiemployer pension plans. We find that multiemployer pension plans subject to funding rule requirements are about 14 percentage points more likely to experience employer withdrawals in a 1-year period compared to plans not required to take any corrective actions. We also find that plans with ex ante employer withdrawal experiences are more vulnerable to financial shocks such as the 2008 financial crisis. Our study provides important policy implications for regulators concerning best practices to build pension plan resilience to insolvency events.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43577800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Economic Impact of Extreme Cyber Risk Scenarios
Pub Date: 2022-03-24 | DOI: 10.1080/10920277.2022.2034507
M. Eling, Mauro Elvedi, Gregory Falco
Numerous industry studies discuss the economic effects of potentially extreme cyber incidents, with considerable variation in the applied methodology and estimated costs. We implement a dynamic inoperability input–output model that allows a consistent analysis and comparison of the economic impacts resulting from six widely discussed cyber risk scenarios. Our model incorporates the frequently omitted qualitative context of the scenarios into the economic projection. Overall, our loss estimates remain in an insurable range of US$0.7 billion to $35 billion. To our knowledge, this is the first effort to develop a standardized evaluation framework that allows for a consistent assessment of cyber risk scenarios, thereby enabling comparability.
{"title":"The Economic Impact of Extreme Cyber Risk Scenarios","authors":"M. Eling, Mauro Elvedi, Gregory Falco","doi":"10.1080/10920277.2022.2034507","DOIUrl":"https://doi.org/10.1080/10920277.2022.2034507","url":null,"abstract":"Numerous industry studies discuss the economic effects of potentially extreme cyber incidents, with considerable variation in the applied methodology and estimated costs. We implement a dynamic inoperability input–output model that allows a consistent analysis and comparison of the economic impacts resulting from six widely discussed cyber risk scenarios. Our model accounts for the frequently omitted qualitative context of the scenarios to be considered as part of the economic projection. Overall, our loss estimations remain in an insurable range from US$0.7 to 35 billion. To our knowledge, this is the first effort to develop a standardized evaluation framework that allows for a consistent assessment of cyber risk scenarios, thereby enabling comparability.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42606247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}