Pub Date: 2024-06-12 | DOI: 10.1007/s13385-024-00388-2
Alexej Brauer
There is currently substantial research on neural networks for non-life insurance pricing. The usual goal is to improve the predictive power of actuarial pricing and behavioral models via neural networks while building upon the generalized linear model, the current industry standard. Our paper contributes to this line of research with novel methods that enhance actuarial non-life models using transformer models for tabular data. We build upon the foundations laid by the combined actuarial neural network and the LocalGLMnet, and enhance those models via the feature tokenizer transformer. The manuscript demonstrates the performance of the proposed methods on a real-world claim frequency dataset and compares them with several benchmark models: generalized linear models, feed-forward neural networks, combined actuarial neural networks, the LocalGLMnet, and the pure feature tokenizer transformer. The paper shows that the new methods can achieve better results than the benchmark models while preserving the structure of the underlying actuarial models, thereby retaining their advantages. The paper also discusses the practical implications and challenges of applying transformer models in actuarial settings.
Title: Enhancing actuarial non-life pricing models via transformers (European Actuarial Journal)
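The feature tokenizer transformer named above turns each tabular feature into an embedding token before applying attention. Below is a minimal sketch of that tokenization step in plain Python; the dimension, feature names, and values are all illustrative and not taken from the paper.

```python
import random

random.seed(0)
d = 4  # token (embedding) dimension; illustrative

# Numeric features get a value-scaled embedding plus a bias vector;
# categorical features get a per-level embedding lookup.
num_features = ["driver_age", "vehicle_power"]      # hypothetical columns
cat_features = {"region": ["urban", "rural"]}       # hypothetical column/levels

# Learned parameters, randomly initialised for the sketch.
W = {f: [random.gauss(0, 1) for _ in range(d)] for f in num_features}
b = {f: [random.gauss(0, 1) for _ in range(d)] for f in num_features}
E = {f: {lvl: [random.gauss(0, 1) for _ in range(d)] for lvl in lvls}
     for f, lvls in cat_features.items()}

def tokenize(row):
    """Map one record to a list of d-dimensional feature tokens."""
    tokens = [[row[f] * wj + bj for wj, bj in zip(W[f], b[f])]
              for f in num_features]
    tokens += [E[f][row[f]] for f in cat_features]
    return tokens  # a transformer encoder would then attend over these tokens

tokens = tokenize({"driver_age": 0.5, "vehicle_power": -1.2, "region": "urban"})
```

In a full model these tokens (plus a CLS token) feed a transformer encoder whose output drives the frequency prediction; here only the tokenization idea is shown.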
Pub Date: 2024-05-27 | DOI: 10.1007/s13385-024-00385-5
Mathias Lindholm, Taariq Nazar
The paper discusses duration effects on the consistency of mean parameter and dispersion parameter estimators in exponential dispersion families (EDFs), the standard models used for non-life insurance pricing. The focus is on the standard generalised linear model assumptions where both the mean and variance, conditional on duration, are linear functions of duration. We derive simple convergence results that highlight the consequences when the linear conditional moment assumptions are not satisfied. These results illustrate that: (i) the resulting mean estimators always have a relevant asymptotic interpretation in terms of the duration-adjusted actuarially fair premium, a premium that agrees with the standard actuarial premium at duration one only when the expected value is linear in the duration; (ii) deviance-based estimators of the dispersion parameter in an EDF should be avoided in favour of Pearson estimators; (iii) unless the linear moment assumptions are satisfied, consistency of dispersion and plug-in variance estimators cannot be guaranteed, and spurious over-dispersion may result. The results provide explicit conditions on the underlying data-generating process that lead to spurious over-dispersion and can be used for model checking. This is illustrated on real insurance data, where the linear moment assumptions are found to be violated, resulting in non-negligible spurious over-dispersion.
Title: On duration effects in non-life insurance pricing
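Point (ii) can be made concrete with a small simulation sketch (ours, not the paper's estimators in full detail): under a correctly specified Poisson rate model with duration weights, the Pearson dispersion estimate is close to one. All parameter values are illustrative.

```python
import math
import random

rng = random.Random(1)

def poisson(lam, rng):
    # Knuth's method; adequate for the small means used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Model: E[Y_i | w_i] = w_i * mu and Var[Y_i | w_i] = w_i * mu,
# i.e. both conditional moments are linear in the duration w_i.
n, mu_true = 20_000, 0.3
w = [rng.uniform(0.1, 1.0) for _ in range(n)]   # policy durations
y = [poisson(mu_true * wi, rng) for wi in w]    # claim counts

mu_hat = sum(y) / sum(w)                        # MLE of the claim rate
# Pearson dispersion estimate: close to 1 when the model is well specified;
# under violated moment assumptions it can drift above 1 (spurious
# over-dispersion, as the paper shows).
phi_pearson = sum((yi - wi * mu_hat) ** 2 / (wi * mu_hat)
                  for yi, wi in zip(y, w)) / (n - 1)
```

Re-running the sketch with a variance that is non-linear in duration would show the spurious over-dispersion effect the paper analyses.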
Pub Date: 2024-05-24 | DOI: 10.1007/s13385-024-00380-w
Solveig Flaig, Gero Junike
The attribution of each business year's profit and loss (P&L) to different risk factors (e.g., interest rates, credit spreads, foreign exchange rates) is a regulatory requirement, e.g., under Solvency II. Three decomposition principles are prevalent: one-at-a-time (OAT), sequential updating (SU) and average sequential updating (ASU). In this research, using financial market data from 2003 to 2022, we demonstrate that the OAT decomposition can generate significant unexplained P&L and that the SU decomposition depends significantly on the order, or labeling, of the risk factors. Using the example of an investment in a foreign stock, we further show that the SU decomposition cannot identify all relevant risk factors. This potentially affects the hedging strategy of the portfolio manager. In conclusion, we suggest using the ASU decomposition in practice.
Title: Profit and loss attribution: an empirical study
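The three decomposition principles can be illustrated on a toy two-factor portfolio (our own example, not the paper's empirical data): OAT moves each factor alone and leaves a cross-term unexplained, SU updates factors in sequence and is order-dependent, and ASU averages SU over all orderings.

```python
# Toy portfolio: value = FX rate * stock price, v = x * s.
def value(x, s):
    return x * s

x0, s0 = 1.0, 100.0   # start of year
x1, s1 = 1.2, 110.0   # end of year
total = value(x1, s1) - value(x0, s0)

# One-at-a-time (OAT): move each factor alone from the start point.
oat = {
    "fx":    value(x1, s0) - value(x0, s0),
    "stock": value(x0, s1) - value(x0, s0),
}
unexplained = total - sum(oat.values())  # OAT leaves the cross-term unexplained

# Sequential updating (SU): update factors one after another; order matters.
su_fx_first = {
    "fx":    value(x1, s0) - value(x0, s0),
    "stock": value(x1, s1) - value(x1, s0),
}
su_stock_first = {
    "stock": value(x0, s1) - value(x0, s0),
    "fx":    value(x1, s1) - value(x0, s1),
}

# Average sequential updating (ASU): average SU over all factor orderings.
asu = {
    "fx":    (su_fx_first["fx"] + su_stock_first["fx"]) / 2,
    "stock": (su_fx_first["stock"] + su_stock_first["stock"]) / 2,
}
```

Here OAT explains 20 + 10 = 30 of a total P&L of 32 (unexplained cross-term 2), the two SU orderings attribute 20 vs. 22 to FX, and ASU fully explains the total with order-independent attributions.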
Pub Date: 2024-05-14 | DOI: 10.1007/s13385-024-00384-6
Bruno Deprez, Félix Vandervorst, Wouter Verbeke, Tim Verdonck, Bart Baesens
There has been increasing interest in fraud detection methods, driven by new regulations and by the financial losses linked to fraud. One state-of-the-art approach is network analytics, which leverages the interactions between entities to detect complex patterns indicative of fraud. However, network analytics has only recently been applied to fraud detection in the actuarial literature, and although it shows much potential, many network methods have not yet been applied. This paper extends the literature in two main ways. First, we review and apply multiple methods in the context of insurance fraud and assess their predictive power against each other. Second, we analyse the added value of network features over intrinsic features for detecting fraud. We conclude that (1) complex methods do not necessarily outperform basic network features, and that (2) network analytics helps to detect different fraud patterns compared to models trained on claim-specific features alone.
Title: Network analytics for insurance fraud detection: a critical case study
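As a concrete illustration of what a basic network feature can look like, here is a toy example (our own, with made-up data, not the paper's pipeline): claims are linked through shared parties such as repair shops or brokers, and each claim inherits a fraud signal from its neighbours.

```python
from collections import defaultdict

# Hypothetical data: claim -> parties involved, and known fraud labels.
claim_parties = {
    "c1": {"shop_A", "broker_X"},
    "c2": {"shop_A"},
    "c3": {"shop_B", "broker_X"},
    "c4": {"shop_B"},
}
known_fraud = {"c1": 1, "c2": 0, "c3": 1, "c4": 0}

# Invert to party -> claims to walk the bipartite claim/party network.
party_claims = defaultdict(set)
for claim, parties in claim_parties.items():
    for p in parties:
        party_claims[p].add(claim)

def neighbour_fraud_rate(claim):
    """Fraud rate among claims sharing at least one party with `claim`."""
    neighbours = set()
    for p in claim_parties[claim]:
        neighbours |= party_claims[p]
    neighbours.discard(claim)
    if not neighbours:
        return 0.0
    return sum(known_fraud[c] for c in neighbours) / len(neighbours)

rate_c4 = neighbour_fraud_rate("c4")  # c4's only neighbour is c3 (fraud)
```

Features of this kind are what the paper benchmarks against more complex network methods, with the conclusion that the complex methods do not necessarily win.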
Pub Date: 2024-05-09 | DOI: 10.1007/s13385-024-00379-3
Johan G. Andréasson, Pavel V. Shevchenko
In this paper we develop a model to find optimal retirement decisions with respect to consumption, risky asset allocation, access to annuities, reverse mortgages and the option to scale housing, in the presence of a means-tested public pension. To solve the corresponding high-dimensional optimal stochastic control problem, we use the Least-Squares Monte Carlo simulation method. The model is applied in the context of the Australian retirement system, where few retirees utilise financial products such as annuities or reverse mortgages. Since the government-provided means-tested Age Pension in Australia is an indirect annuity stream that is typically higher than the consumption floor, it can be argued that this is why many Australians do not annuitise. In addition, because assets allocated to the family home are not included in the Age Pension means tests, the incentive to over-allocate wealth into housing assets is high. This raises the question of whether a retiree is really better off over-allocating into the family home and accessing home equity later, either by downsizing or by taking out a reverse mortgage. Our findings confirm that the means-tested pension crowds out voluntary annuitisation in retirement, and that annuitisation is optimal sooner rather than later once retired.
We find that it is never optimal to downsize housing while the means-tested pension and a reverse mortgage are available; downsizing is optimal only when there is no other way to access home equity.
Title: Optimal annuitisation, housing and reverse mortgage in retirement in the presence of a means-tested public pension
Pub Date: 2024-04-03 | DOI: 10.1007/s13385-024-00382-8
Mathias Raschke
Title: Discussion on ‘A resimulation framework for event loss tables based on clustering’ by Benedikt Funke and Harmen Roering (no abstract)
Pub Date: 2024-03-21 | DOI: 10.1007/s13385-024-00378-4
Yiqing Chen, Jiajun Liu
We investigate capital allocation based on the higher moment risk measure at a confidence level q ∈ (0, 1). To reflect the excessive prudence of today’s regulatory frameworks in banking and insurance, we consider the extreme case q ↑ 1 and study the asymptotic behavior of capital allocation for heavy-tailed and asymptotically independent/dependent risks. Some explicit asymptotic formulas are derived, demonstrating that the capital allocated to a specific line is asymptotically proportional to the Value at Risk of the corresponding individual risk. In addition, some numerical studies are conducted to examine their accuracy.
Title: Asymptotic capital allocation based on the higher moment risk measure
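The Value at Risk appearing in these asymptotics has a closed form for Pareto-type tails: for P(X > x) = (x/θ)^(−α), x ≥ θ, one gets VaR_q(X) = θ(1 − q)^(−1/α). A quick Monte Carlo sanity check of that formula (our illustration, with arbitrary parameters, not the paper's numerical study):

```python
import random

rng = random.Random(7)

def pareto_sample(theta, alpha, n, rng):
    # Inverse-transform sampling: X = theta * (1 - U)^(-1/alpha).
    return [theta * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def empirical_var(xs, q):
    # Simple order-statistic estimate of the q-quantile.
    return sorted(xs)[int(q * len(xs))]

theta, alpha, q, n = 1.0, 3.0, 0.99, 200_000
sample = pareto_sample(theta, alpha, n, rng)
theoretical = theta * (1.0 - q) ** (-1.0 / alpha)  # = 100**(1/3), about 4.64
empirical = empirical_var(sample, q)
```

The paper's result says the capital allocated to a line behaves asymptotically like a constant multiple of this individual VaR as q ↑ 1; the sketch only checks the VaR building block.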
Pub Date: 2024-03-06 | DOI: 10.1007/s13385-024-00377-5
Maximilian Euthum, Matthias Scherer, Francesco Ungolo
A Neural Network (NN) approach to modelling mortality rates in a multi-population framework is compared to three classical mortality models. The NN setup contains two types of Recurrent NNs: Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. The stochastic approaches comprise the Li and Lee model, the Common Age Effect model of Kleinow, and the model of Plat. All models are applied and compared in a large case study on decades of data on the Italian population divided into counties. In this case study, a new index of multiple deprivation is introduced and used to classify all Italian counties based on socio-economic indicators sourced from the Italian national statistics office (ISTAT). The aforementioned models are then used to model and predict mortality rates of groups of different socio-economic characteristics, sex, and age.
Title: A neural network approach for the mortality analysis of multiple populations: a case study on data of the Italian population
Participating life insurance contracts are policies that provide dividends (participation bonuses) based on the insurer’s financial performance. While these products are popular, the literature lacks an analysis of these contracts in a stochastic setting. This paper fills this gap by proposing methods to (i) determine performance bonuses, (ii) compute the fair premium of the contract, and (iii) perform risk measurements for participating contracts in a realistic stochastic environment. We consider the specific case of a fixed-premium participating endowment contract, where the annual premium remains constant while benefits increase stochastically. We extend both the variable benefits life insurance approach of Bowers et al. [9] and the compound reversionary bonus mechanism presented in Booth et al. [8] and Bacinello [2] to a framework with a stochastic financial market (including stochastic interest rates) and stochastic mortality.
Monte Carlo simulations provide insight into the sensitivity of premiums to the contract specification and into the evolution over time of both the benefits and the risks faced by the insurer.
Title: Evaluation of participating endowment life insurance policies in a stochastic environment, by Ramin Eghbalzadeh, Patrice Gaillardetz and Frédéric Godin (European Actuarial Journal, published 2024-01-19, DOI: 10.1007/s13385-023-00373-1)
Pub Date: 2023-12-20 | DOI: 10.1007/s13385-023-00372-2
Snorre Jallbjørn, Søren F. Jarner, Niels R. Hansen
Integrating epidemiological information into mortality models has the potential to improve forecasting accuracy and facilitate the assessment of preventive measures that reduce disease risk. While probabilistic models are often used for mortality forecasting, predicting how a system behaves under external manipulation requires a causal model. In this paper, we utilize the potential outcomes framework to explore how population-level mortality forecasts are affected by interventions, and discuss the assumptions and data needed to operationalize such an analysis. A unique challenge arises in population-level mortality models where common forecasting methods treat risk prevalence as an exogenous process. This approach simplifies the forecasting process but overlooks (part of) the interdependency between risk and death, limiting the model’s ability to capture selection-induced effects. Using techniques from causal mediation theory, we quantify the selection effect typically missing in studies on cause-of-death elimination and when analyzing actions that modify risk prevalence. Specifically, we decompose the total effect of an intervention into a part directly attributable to the intervention and a part due to subsequent selection. We illustrate the effects with U.S. data.
Title: Forecasting, interventions and selection: the benefits of a causal mortality model