Pub Date: 2025-11-13 | DOI: 10.1016/j.insmatheco.2025.103183
Jun Cai, Huameng Jia, Ying Wang
In this paper, we introduce a new method for determining the optimal aggregate capital reserve and the corresponding optimal allocation through a one-step approach, allowing for the simultaneous consideration of aggregate and individual risks. In our one-step approach, both the aggregate capital and the allocation scheme are optimized to minimize an expected loss or cost function that accounts for these risks. Our findings provide insights into decision-makers’ attitudes toward commonly used capital requirement criteria and allocation principles, including VaR and CTE capital criteria, as well as VaR-based and CTE-based haircut allocation principles, and the CTE additive allocation principle. We also offer quantitative arguments explaining why the aggregate capital requirement and the corresponding allocation are optimal and specify the conditions under which they achieve optimality. Notably, our one-step optimal capital criteria can yield required reserves that meet the safety and budget requirements discussed in the literature. Additionally, we provide numerical examples to illustrate our new approaches and compare them with standard methods commonly used in practice.
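The VaR capital criterion and VaR-based haircut allocation principle mentioned above (the benchmarks being compared against, not the paper's one-step method) can be sketched numerically; the three lines of business and their loss distributions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical losses for three business lines (lognormal, illustration only)
losses = rng.lognormal(mean=[0.0, 0.5, 1.0], sigma=0.8, size=(100_000, 3))

def var(x, level=0.99):
    """Value-at-Risk: the level-quantile of the loss distribution."""
    return np.quantile(x, level)

# Step 1: aggregate capital from the VaR criterion
K = var(losses.sum(axis=1))
# Step 2: haircut allocation -- scale stand-alone VaRs so they sum to K
stand_alone = np.array([var(losses[:, i]) for i in range(3)])
allocation = K * stand_alone / stand_alone.sum()
```

Because the aggregate VaR is typically below the sum of stand-alone VaRs, each line receives a haircut relative to its own stand-alone requirement; the one-step approach of the paper instead optimizes the aggregate capital and the allocation jointly.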
“A one-step approach for determining the optimal aggregate capital reserve and allocation.” Insurance: Mathematics and Economics, vol. 126, Article 103183.
Pub Date: 2025-11-13 | DOI: 10.1016/j.insmatheco.2025.103179
Corrado De Vecchi, Matthias Scherer
We investigate the relationship between almost first order stochastic dominance (AFSD), the statistical functionals called expectiles, and the corresponding expectile-based monetary risk measure. From a methodological point of view, we show that expectiles provide a ready-to-be-used criterion for the comparison between a deterministic and a random payoff in the sense of AFSD. Furthermore, we obtain a consistency result for expectile-based monetary risk measures with respect to the AFSD ordering. Finally, we discuss applications to robustify some utility-based risk management procedures when there is uncertainty on the utility function to be considered. This includes preference robust portfolio optimization problems and worst-case shortfall risk measures.
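For readers less familiar with expectiles, the defining first-order condition can be solved on a sample by bisection; the data and asymmetry level below are arbitrary:

```python
import numpy as np

def expectile(x, tau=0.9, tol=1e-10):
    """Sample tau-expectile: the unique e solving
    tau * E[(x - e)_+] = (1 - tau) * E[(e - x)_+].
    The gap below is strictly decreasing in e, so bisection applies."""
    lo, hi = float(x.min()), float(x.max())
    while hi - lo > tol:
        e = 0.5 * (lo + hi)
        gap = tau * np.mean(np.maximum(x - e, 0.0)) \
            - (1 - tau) * np.mean(np.maximum(e - x, 0.0))
        lo, hi = (e, hi) if gap > 0 else (lo, e)
    return 0.5 * (lo + hi)

x = np.random.default_rng(1).normal(size=50_000)
e50, e90 = expectile(x, 0.5), expectile(x, 0.9)
```

The 0.5-expectile recovers the mean, and for tau > 0.5 shortfalls are penalized more heavily; the consistency of the induced expectile-based risk measures with the AFSD ordering is the paper's contribution and is not shown here.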
“On expectiles and almost stochastic dominance.” Insurance: Mathematics and Economics, vol. 126, Article 103179.
Pub Date: 2025-11-11 | DOI: 10.1016/j.insmatheco.2025.103173
Zijia Wang, Jingyi Cao, Shu Li
In response to challenges posed by emerging risks such as climate change, practitioners are increasingly aware of the need for a more forward-looking approach to insurance solvency risk management, which requires not only the identification of risks but also timely intervention. However, determining when to implement risk mitigation is often complex, as it involves balancing insolvency prevention against the potential costs and consequences of such actions. In this paper, we provide insights into the timing of risk mitigation before it is too late by studying the last time a Lévy insurance risk process is above a certain threshold before ruin. In the theoretical part, we first derive the joint Laplace transform of the last passage time and the remaining time until ruin. We then study an optimal prediction problem of approximating the last passage time before ruin with a stopping time under the L1 distance, showing that the optimum occurs when the risk process first drops below a certain level. The stopping boundary is independent of the initial surplus level, and we provide an explicit characterization of this boundary. These theoretical results fill a gap in the literature, where last passage times are typically analyzed over an infinite time horizon or an independent exponential time horizon. By focusing on the dynamics of risk processes up to ruin, our findings offer interesting insights into liquidation risk management. These are discussed in the application part, where we develop a framework to endogenously determine financial distress and rehabilitation levels under contemporary regulations. We further analyze the liquidation time under Chapter 7 and Chapter 11 of the U.S. Bankruptcy Code. Numerical examples and an empirical study using real data are presented to illustrate the practical implications of our results.
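As a toy illustration of the object under study, a crude Euler discretization of a Brownian surplus (a stand-in for a general Lévy risk process; the drift, volatility, and threshold below are arbitrary) locates the last time a simulated path is above a level b before ruin:

```python
import numpy as np

def last_passage_before_ruin(u, b, mu=-0.05, sigma=1.0,
                             n=200_000, dt=0.01, seed=2):
    """Simulate X_t = u + mu*t + sigma*W_t on a grid and return
    (last time above level b before ruin, ruin time), or None if no ruin."""
    rng = np.random.default_rng(seed)
    x = u + np.cumsum(mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n))
    below = np.nonzero(x < 0)[0]
    if below.size == 0:
        return None
    ruin = below[0]
    above_b = np.nonzero(x[:ruin] > b)[0]
    t_last = (above_b[-1] + 1) * dt if above_b.size else 0.0
    return t_last, (ruin + 1) * dt

result = last_passage_before_ruin(u=2.0, b=1.0)
```

With negative drift, ruin is certain in the long run, and the gap between the last passage time and the ruin time is exactly the "remaining time until ruin" whose joint Laplace transform the paper derives.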
“The last passage time before ruin: Theory and applications in liquidation risk management.” Insurance: Mathematics and Economics, vol. 126, Article 103173.
Pub Date: 2025-11-10 | DOI: 10.1016/j.insmatheco.2025.103176
Nicolas Baradel
We revisit the well-known Mack model, which gives an estimate of the conditional mean squared error of prediction for chain-ladder claims reserves. We introduce a stochastic differential equation driven by a Brownian motion to model the accumulated total claims amount for the chain-ladder method. Within this continuous-time framework, we propose a bootstrap technique for estimating the distribution of claims reserves. Our approach inherently captures asymmetry and non-negativity, eliminating the need for additional assumptions. We conclude with a case study and a comparative analysis against alternative methodologies based on Mack’s model.
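For context, the classical chain-ladder mechanics whose prediction error Mack's model quantifies can be sketched on a toy cumulative run-off triangle (the figures are invented, and this is the point estimate only, not the paper's continuous-time bootstrap):

```python
import numpy as np

# Toy cumulative claims run-off triangle (rows: accident years; NaN: future)
tri = np.array([
    [100., 150., 170., 175.],
    [110., 168., 190., np.nan],
    [120., 180., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])

n = tri.shape[1]
factors = []
for j in range(n - 1):
    mask = ~np.isnan(tri[:, j + 1])
    # Volume-weighted development factor estimate (as in Mack's model)
    factors.append(tri[mask, j + 1].sum() / tri[mask, j].sum())

# Fill the lower triangle by chaining the factors forward
full = tri.copy()
for j in range(n - 1):
    missing = np.isnan(full[:, j + 1])
    full[missing, j + 1] = full[missing, j] * factors[j]

# Reserve = projected ultimate minus latest observed cumulative amount
reserves = full[:, -1] - np.array([row[~np.isnan(row)][-1] for row in tri])
```

The bootstrap proposed in the paper targets the distribution of these reserves rather than only their mean squared error of prediction.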
“Continuous-time modeling and bootstrap for chain-ladder reserving.” Insurance: Mathematics and Economics, vol. 126, Article 103176.
Pub Date: 2025-11-10 | DOI: 10.1016/j.insmatheco.2025.103172
Runze Li, Rui Zhou, David Pitt
Existing mortality models typically rely on annual data, with forecasts for a given year based on information available up to the end of the previous year. However, technological advances have enabled the collection and production of weekly and monthly death data, offering new opportunities to improve forecasting. Using only annual data overlooks the full range of available information. In this paper, we propose mixed data sampling (MIDAS) models to integrate monthly death counts (high frequency) with annual mortality rates (low frequency), enabling improved short-term prediction of annual mortality. Extending economic applications of MIDAS, which typically forecast a single variable such as GDP growth, our MIDAS framework accounts for the age dependence unique to age-specific mortality modeling. We also evaluate different weighting functions, a core element of MIDAS that determines the relative importance of high-frequency data at different lags, and identify suitable weighting functions for mortality forecasting. Using U.S. mortality data, we demonstrate that our approach significantly improves short-term prediction accuracy compared to models relying solely on annual data. These findings highlight the potential of MIDAS models as a useful tool for accurate and timely mortality forecasts.
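The weighting functions referred to above can be illustrated with the exponential Almon lag polynomial, a standard MIDAS choice; the parameter values here are arbitrary:

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag polynomial, a standard MIDAS weighting scheme:
    w_k proportional to exp(theta1*k + theta2*k**2), normalized to sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

# Twelve monthly lags feeding one annual observation; with theta2 < 0
# the weights decay with the lag, so recent months dominate the predictor.
w = exp_almon_weights(12, theta1=0.05, theta2=-0.02)
```

A MIDAS regression then uses the weighted sum of the twelve monthly observations as a single regressor for the annual target, with the two theta parameters estimated jointly with the slope.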
“Beyond annual data: Mortality forecasting with mixed frequency data.” Insurance: Mathematics and Economics, vol. 126, Article 103172.
Pub Date: 2025-11-01 | DOI: 10.1016/j.insmatheco.2025.103174
Francesco Strati
This paper introduces a novel framework for risk measures defined on Musielak-Orlicz spaces, incorporating state-dependent Young functions to address the limitations of traditional uniform risk models in capturing heterogeneous tail behaviors. Extending the foundational work of Cheridito and Li [2009] on Orlicz hearts, the study establishes a robust representation theorem for convex monetary risk measures, demonstrating their expression as a maximum over penalized expectations adjusted by state-specific penalty functions. This result accommodates unbounded risks with spatially or temporally varying profiles, a critical enhancement for applications in insurance and finance where heterogeneity is prevalent. The framework subsumes coherent measures as a special case and provides a characterization of optimal probability measures, ensuring computational feasibility. Practical implications are explored through connections to insurance mathematics, including links to star-shaped risk measures, variable annuities via the Q⊙P measure, and a state-dependent generalization of the Haezendonck-Goovaerts risk measure. Additionally, an aggregation technique for portfolio risk across diverse states is proposed, accompanied by illustrative examples such as the Transformed Norm and Entropic Risk Measures. By integrating theoretical rigor with practical relevance, this study offers a versatile tool for risk assessment under complex, state-varying conditions.
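Among the illustrative examples named above, the entropic risk measure is the easiest to make concrete; a sample-based sketch (on an arbitrary empirical distribution, unrelated to the paper's state-dependent construction) reads:

```python
import numpy as np

def entropic_risk(x, theta=1.0):
    """Entropic risk measure rho(X) = (1/theta) * log E[exp(theta*X)],
    a convex (but not coherent) monetary risk measure, evaluated here
    on the empirical distribution of the sample x."""
    return float(np.log(np.mean(np.exp(theta * x))) / theta)

x = np.random.default_rng(3).normal(size=100_000)
rho = entropic_risk(x, theta=2.0)
```

It is cash additive, rho(X + c) = rho(X) + c, and its dual representation carries a relative-entropy penalty, the prototype of the penalized-expectation form that the representation theorem above generalizes to state-specific penalty functions.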
“Risk measures on Musielak-Orlicz spaces: A state-dependent perspective for insurance.” Insurance: Mathematics and Economics, vol. 125, Article 103174.
Pub Date: 2025-11-01 | DOI: 10.1016/j.insmatheco.2025.103168
Zhihao Wang, Yanlin Shi, Guangyuan Gao
We consider a specific class of regression models with discrete latent variables, which are commonly used in actuarial science and other fields. When fitting these parametric regression models, regression functions are estimated for both the observed response variable and the latent variable. Feature engineering, variable selection and model selection become challenging due to the involvement of multiple regression functions and the latent variable. To address these challenges, we propose additive tree latent variable models. To calibrate these models, we introduce an iteratively re-weighted gradient boosting (IRGB) algorithm that combines the EM algorithm with gradient boosting. In each iteration, the IRGB algorithm trains only one weak learner in a stagewise manner. Theoretical analysis demonstrates the monotonic behavior of the likelihood in the IRGB algorithm. We further illustrate the advantages of the proposed nonparametric methods through an empirical example of motor insurance claim counts and a case study on French motor third-party liability insurance pure premiums.
“Additive tree latent variable models with applications to insurance loss prediction.” Insurance: Mathematics and Economics, vol. 125, Article 103168.
Pub Date: 2025-11-01 | DOI: 10.1016/j.insmatheco.2025.103170
Giovanna Apicella, Emilia Di Lorenzo, Giulia Magni, Marilena Sibillo
Combining retirement income and long-term disability protection is a well-established concept in the literature. We present a new connection between the lifetime annuity provided by a Reverse Mortgage (RM) and long-term care (LTC) insurance, jointly managed in a unique bundled product. Indeed, we define the RM (considered as a source of lifetime income) and LTC as two different regimes of a jointly managed insurance product, which we call the “Life Care Reverse Mortgage” (LCRM). From an actuarial perspective, we design the underlying structure of the LCRM and the inherent framework for the ex-ante estimation of a time-dependent profit/loss function, which provides a measure of the expected annual net cashflows for a life insurer holding a portfolio of LCRMs. We perform a numerical application to illustrate the regime-switching mechanism on which the proposed LCRM insurance contracts are based and to quantify over time the lender’s return for a pool of LCRM contracts through the designed time-dependent profit/loss function. Furthermore, we analyse the sensitivity of the portfolio profit/loss function to two sources of uncertainty: health patterns over time and house price dynamics.
“Life care reverse mortgages: Monitoring the net cashflows of a new hybrid insurance product.” Insurance: Mathematics and Economics, vol. 125, Article 103170.
Pub Date: 2025-11-01 | DOI: 10.1016/j.insmatheco.2025.103171
Mengyu Wu, Zhibin Liang, Qingqing Zhang
In this paper, we investigate the optimal risk sharing problem for two insurers under the framework of a Stackelberg-Nash differential game. More specifically, the two insurers transfer their businesses to each other to achieve a win-win outcome, with each acting as the leader in pricing and as the follower in choosing its own retention level. Based on the game-theoretic equilibrium setting and the dynamic programming principle, the explicit optimal strategies are derived. We find that insurers will cooperate more eagerly when there is a stronger negative correlation between the businesses of the two parties. To explore the advantages of risk sharing, we also investigate the optimal reinsurance problem in a traditional Stackelberg game framework. Risk sharing is found to be more advantageous than reinsurance in many cases, especially when the businesses differ significantly, such as under a strong negative correlation or a large/small volatility ratio, which means that one of the two businesses is relatively stable while the other fluctuates greatly. Further analysis shows the effects of the model parameters and the economic interpretations behind them. Interestingly, the risk-aversion coefficient plays a key role in this Stackelberg-Nash differential game, and the conclusions confirm an intuitive fact: risk-averse individuals tend to be more hesitant and conservative when making decisions.
“Optimal risk sharing with correlated insurance businesses in a Stackelberg-Nash differential game.” Insurance: Mathematics and Economics, vol. 125, Article 103171.
Pub Date: 2025-10-29 | DOI: 10.1016/j.insmatheco.2025.103167
Matteo Malavasi, Gareth W. Peters, Stefan Trück, Pavel V. Shevchenko, Jiwook Jang, Georgy Sofronov
Cyber risk classifications are widely used in the modeling of cyber event distributions, yet their out-of-sample forecasting performance remains underexplored. In this paper, we analyze the most commonly used classifications and argue for shifting attention from goodness-of-fit and in-sample predictive performance to out-of-sample forecasting performance. We use a rolling-window analysis to compare cyber risk distribution forecasts via threshold-weighted scoring functions. Our results indicate that business-motivated cyber risk classifications appear too restrictive and insufficiently flexible to capture the heterogeneity of cyber risk events. We find that dynamic and impact-based cyber risk classifiers are better suited to forecasting future cyber risk losses than the other classifications considered. These findings suggest that cyber risk types provide limited forecasting ability concerning the cyber event loss severity distribution, and cyber insurance rate-makers should use cyber risk types only when modeling the cyber event frequency distribution. Our study offers valuable insights for decision-makers and policymakers alike, contributing to the advancement of scientific knowledge in the field of cyber risk management.
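As one concrete instance of a threshold-weighted scoring function, a grid approximation of the threshold-weighted CRPS can be sketched as follows (the weight function, grid, and forecast distribution are arbitrary choices, not those of the paper):

```python
import numpy as np

def tw_crps(sample, y, grid, weight):
    """Threshold-weighted CRPS: integral over z of
    weight(z) * (F_hat(z) - 1{y <= z})**2, approximated on a uniform grid.
    `sample` is an ensemble/simulation draw representing the forecast F_hat."""
    F = (sample[None, :] <= grid[:, None]).mean(axis=1)  # empirical CDF on grid
    ind = (y <= grid).astype(float)                      # step function at y
    dz = grid[1] - grid[0]
    return float(np.sum(weight(grid) * (F - ind) ** 2) * dz)

grid = np.linspace(0.0, 50.0, 2001)
tail_weight = lambda z: (z >= 10.0).astype(float)  # emphasize large losses
forecast = np.random.default_rng(4).lognormal(1.0, 0.6, size=20_000)
score = tw_crps(forecast, y=12.0, grid=grid, weight=tail_weight)
```

With an indicator weight, the score only rewards calibration above the chosen threshold, which is how threshold weighting focuses a forecast comparison on extreme losses.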
“Cyber risk taxonomies: statistical analysis of cybersecurity risk classifications.” Insurance: Mathematics and Economics, vol. 126, Article 103167.