Hybrid life insurance valuation based on a new standard deviation premium principle in a stochastic interest rate framework
Pub Date: 2024-09-13 | DOI: 10.1007/s13385-024-00396-2
Oussama Belhouari, Griselda Deelstra, Pierre Devolder
In a complete arbitrage-free financial market, financial products are valued with the risk-neutral measure and these products are completely hedgeable. In life insurance, the approach is different as the valuation is based on an insurance premium principle which includes a safety loading. The insurer reduces the risk by pooling a vast number of independent risks. In our framework, we suggest valuations of a class of products that depend on both mortality and financial risk, namely hybrid life products. The main contribution of this paper is to present a generalized standard deviation premium principle in a stochastic interest rate framework, and to integrate it into different valuation operators suggested in the literature. We illustrate our methods with a classical application, namely a Pure Endowment with profit. Several numerical results are presented, and an extensive sensitivity analysis is included.
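As a rough illustration of the kind of valuation involved, the sketch below computes a standard deviation premium for a pure endowment by Monte Carlo, discounting under a Vasicek short-rate model. The dynamics, parameter values, survival probability and loading are illustrative assumptions only and do not reproduce the paper's generalized principle or its valuation operators.

```python
# Minimal sketch, assuming Vasicek short-rate dynamics and an independent
# survival indicator; all numbers are placeholders, not the paper's setting.
import numpy as np

rng = np.random.default_rng(42)

# Vasicek parameters (assumed): dr = a*(b - r)*dt + sigma*dW
a, b, sigma, r0 = 0.2, 0.03, 0.01, 0.02
T, n_steps, n_sims = 10.0, 120, 100_000
dt = T / n_steps

# simulate short-rate paths and pathwise discount factors exp(-integral of r)
r = np.full(n_sims, r0)
integral_r = np.zeros(n_sims)
for _ in range(n_steps):
    r = r + a * (b - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_sims)
    integral_r += r * dt
discount = np.exp(-integral_r)

# pure endowment: unit benefit paid at T only if the insured survives
p_survival = 0.85                       # assumed T-year survival probability
alive = rng.random(n_sims) < p_survival
payoff = discount * alive               # discounted benefit per policy

# standard deviation premium principle: mean + theta * standard deviation
theta = 0.1                             # assumed safety loading
premium = payoff.mean() + theta * payoff.std(ddof=1)
print(f"single premium per unit benefit: {premium:.4f}")
```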
{"title":"Hybrid life insurance valuation based on a new standard deviation premium principle in a stochastic interest rate framework","authors":"Oussama Belhouari, Griselda Deelstra, Pierre Devolder","doi":"10.1007/s13385-024-00396-2","DOIUrl":"https://doi.org/10.1007/s13385-024-00396-2","url":null,"abstract":"<p>In a complete arbitrage-free financial market, financial products are valued with the risk-neutral measure and these products are completely hedgeable. In life insurance, the approach is different as the valuation is based on an insurance premium principle which includes a safety loading. The insurer reduces the risk by pooling a vast number of independent risks. In our framework, we suggest valuations of a class of products that are dependent on both mortality and financial risk, namely hybrid life products. The main contribution of this paper is to present a generalized standard deviation premium principle in a stochastic interest rate framework, and to integrate it in different valuation operators suggested in the literature. We illustrate our methods with a classical application, namely a Pure Endowment with profit. Several numerical results are presented, and an extensive sensitivity analysis is included.</p>","PeriodicalId":44305,"journal":{"name":"European Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dataset of an actual motor vehicle insurance portfolio
Pub Date: 2024-09-02 | DOI: 10.1007/s13385-024-00398-0
Jorge Segura-Gisbert, Josep Lledó, Jose M. Pavía
Advanced analytics plays a vital role in enhancing various aspects of business operations within the insurance sector by providing valuable insights that drive informed decision-making, primarily through effective database utilization. However, open access databases in the insurance industry are exceedingly rare, as such data form the basis of the business and encapsulate the entire risk structure of the company. This makes it challenging for researchers and practitioners to access comprehensive insurance datasets for analysis and for assessing new approaches. This paper introduces an extensive database specifically tailored for non-life motor insurance, containing 105,555 rows and encompassing a wide array of 30 variables. The dataset comprises important date-related information, such as effective date, date of birth of the insured, and renewal date, essential for policy management and risk assessment. Additionally, it includes relevant economic variables, such as premiums and claim costs, for assessments of products’ financial profitability. Moreover, the database features an array of risk-related variables, such as vehicle size, economic value, power, and weight, which significantly contribute to understanding risk dynamics. By leveraging the statistical analysis of this rich database, researchers could identify novel risk profiles, reveal variables that influence insured claims behaviour, and contribute to the advancement of educational and research initiatives in the dynamic fields of economics and actuarial sciences. The availability of this comprehensive database opens new opportunities for research and teaching and empowers insurance professionals to enhance their risk assessment and decision-making processes.
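A minimal sketch of how such a portfolio might be loaded and summarized with pandas. The file name and the column names used here (date_start, date_birth, premium, cost_claims_year) are hypothetical placeholders; the dataset's own documentation defines the actual 30 variables.

```python
# Hedged sketch with placeholder file and column names.
import pandas as pd

df = pd.read_csv("motor_portfolio.csv", sep=";",
                 parse_dates=["date_start", "date_birth"])

print(df.shape)                                  # expected around (105555, 30)

# derive policyholder age at inception and a simple loss ratio by age band
df["age"] = (df["date_start"] - df["date_birth"]).dt.days // 365
summary = (
    df.assign(age_band=pd.cut(df["age"], bins=[17, 25, 40, 60, 100]))
      .groupby("age_band", observed=True)
      .agg(policies=("premium", "size"),
           premium=("premium", "sum"),
           claims=("cost_claims_year", "sum"))
)
summary["loss_ratio"] = summary["claims"] / summary["premium"]
print(summary)
```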
{"title":"Dataset of an actual motor vehicle insurance portfolio","authors":"Jorge Segura-Gisbert, Josep Lledó, Jose M. Pavía","doi":"10.1007/s13385-024-00398-0","DOIUrl":"https://doi.org/10.1007/s13385-024-00398-0","url":null,"abstract":"<p>Advanced analytics plays a vital role in enhancing various aspects of business operations within the insurance sector by providing valuable insights that drive informed decision-making, primarily through effective database utilization. However, open access databases in the insurance industry are exceedingly rare, as they are the basis of the business, encapsulating all the risk structure of the company. This makes it challenging for researchers and practitioners to access comprehensive insurance datasets for analysis and assessing new approaches. This paper introduces an extensive database specifically tailored for non-life motor insurance, containing 105,555 rows and encompassing a wide array of 30 variables. The dataset comprises important date-related information, such as effective date, date of birth of the insured, and renewal date, essential for policy management and risk assessment. Additionally, it includes relevant economic variables, such as premiums and claim costs, for assessments of products’ financial profitability. Moreover, the database features an array of risk-related variables, such as vehicle size, economic value, power, and weight, which significantly contribute to understanding risk dynamics. By leveraging the statistical analysis of this rich database, researchers could identify novel risk profiles, reveal variables that influence insured claims behaviour, and contribute to the advancement of educational and research initiatives in the dynamic fields of economics and actuarial sciences. The availability of this comprehensive database opens new opportunities for research and teaching and empowers insurance professionals to enhance their risk assessment and decision-making processes.</p>","PeriodicalId":44305,"journal":{"name":"European Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142225535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian credibility model with heavy tail random variables: calibration of the prior and application to natural disasters and cyber insurance
Pub Date: 2024-08-30 | DOI: 10.1007/s13385-024-00394-4
Antoine Heranval, Olivier Lopez, Maud Thomas
The Bayesian credibility approach is a method for evaluating a certain risk of a segment of a portfolio (such as a policyholder or a category of policyholders) by compensating for the lack of historical data through the use of a prior distribution. This prior distribution can be thought of as preliminary expertise that gathers information on the target distribution. This paper describes a particular Bayesian credibility model that is well-suited for situations where collective data are available to compute the prior, and where the distributions of the variables are heavy-tailed. The credibility model we consider aims to obtain a heavy-tailed distribution (namely a Generalized Pareto distribution) at a collective level and provides a closed formula to compute the severity part of the credibility premium at an individual level. Two cases of application are presented: one related to natural disasters and the other to cyber insurance. In the former, a large database on flood events is used as the collective information to define the prior, which is then combined with individual observations at a city level. In the latter, a classical database on data leaks is used to fit a model for the volume of data exposed during a cyber incident, while the historical data of a given firm are taken into account to reflect individual experience.
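The sketch below illustrates the general idea in simplified form: a Generalized Pareto distribution is fitted to collective excess losses and its mean is blended with sparse individual experience through a linear credibility weight. It is not the closed-form severity credibility formula derived in the paper; the threshold, the simulated data and the credibility constant are assumptions.

```python
# Hedged sketch: GPD-based collective model + linear credibility blend.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# collective data: excesses over a high threshold (simulated here for the sketch)
threshold = 100.0
collective_excess = stats.genpareto.rvs(c=0.3, scale=50.0, size=5_000, random_state=rng)

# collective (prior) model: GPD fitted by maximum likelihood, location fixed at 0
shape, _, scale = stats.genpareto.fit(collective_excess, floc=0.0)
collective_mean = threshold + scale / (1.0 - shape)      # GPD mean, valid for shape < 1

# individual experience of one city / firm (few, heavy-tailed observations)
individual_losses = np.array([130.0, 155.0, 410.0, 120.0])
n = individual_losses.size

# linear credibility blend: the weight Z grows with the volume of individual data
k = 20.0                                                  # assumed credibility constant
Z = n / (n + k)
credibility_severity = Z * individual_losses.mean() + (1.0 - Z) * collective_mean
print(f"Z = {Z:.2f}, credibility severity estimate = {credibility_severity:.1f}")
```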
{"title":"Bayesian credibility model with heavy tail random variables: calibration of the prior and application to natural disasters and cyber insurance","authors":"Antoine Heranval, Olivier Lopez, Maud Thomas","doi":"10.1007/s13385-024-00394-4","DOIUrl":"https://doi.org/10.1007/s13385-024-00394-4","url":null,"abstract":"<p>The Bayesian credibility approach is a method for evaluating a certain risk of a segment of a portfolio (such as policyholder or category of policyholders) by compensating for the lack of historical data through the use of a prior distribution. This prior distribution can be thought as a preliminary expertise, that gathers information on the target distribution. This paper describes a particular Bayesian credibility model that is well-suited for situations where collective data are available to compute the prior, and when the distribution of the variables are heavy-tailed. The credibility model we consider aims to obtain a heavy-tailed distribution (namely a Generalized Pareto distribution) at a collective level and provides a closed formula to compute the severity part of the credibility premium at an individual level. Two cases of application are presented: one related to natural disasters and the other to cyber insurance. In the former, a large database on flood events is used as the collective information to define the prior, which is then combined with individual observations at a city level. In the latter, a classical database on data leaks is used to fit a model for the volume of data exposed during a cyber incident, while the historical data on a given firm is taken into account to consider individual experience.</p>","PeriodicalId":44305,"journal":{"name":"European Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142225536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Claim reserving via inverse probability weighting: a micro-level Chain-Ladder method
Pub Date: 2024-08-28 | DOI: 10.1007/s13385-024-00395-3
Sebastián Calcetero Vanegas, Andrei L. Badescu, X. Sheldon Lin
Claim reserving primarily relies on macro-level models, with the Chain-Ladder method being the most widely adopted. These methods were developed heuristically, without solid statistical foundations, relying on oversimplified data assumptions and neglecting policyholder heterogeneity, often resulting in conservative reserve predictions. Micro-level reserving, utilizing stochastic modeling with granular information, can improve predictions, but tends to involve models that are more complex and less attractive to practitioners. This paper aims to strike a practical balance between aggregate and individual models by introducing a methodology that enables the Chain-Ladder method to incorporate individual information. We achieve this by proposing a novel framework and formulating the claim reserving problem within a population sampling context. We introduce a reserve estimator in a frequency- and severity-distribution-free manner that utilizes inverse probability weights (IPW) driven by individual information, akin to propensity scores. We demonstrate that the Chain-Ladder method emerges as a particular case of such an IPW estimator, thereby inheriting a statistically sound foundation based on population sampling theory that enables the use of granular information and other extensions.
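For reference, here is a minimal sketch of the classical Chain-Ladder method on a toy cumulative run-off triangle, i.e., the aggregate benchmark that the paper recasts as a special case of its IPW estimator. The triangle values are made up; NaN marks cells not yet observed.

```python
# Hedged sketch: classical Chain-Ladder on a toy cumulative triangle.
import numpy as np

triangle = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 1950., 2250., np.nan],
    [1200., 2150., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])

n = triangle.shape[0]
dev_factors = []
for j in range(n - 1):
    observed = ~np.isnan(triangle[:, j + 1])
    f_j = triangle[observed, j + 1].sum() / triangle[observed, j].sum()
    dev_factors.append(f_j)

# project the lower-right part of the triangle with the development factors
full = triangle.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(full[i, j + 1]):
            full[i, j + 1] = full[i, j] * dev_factors[j]

reserves = full[:, -1] - np.nanmax(triangle, axis=1)   # ultimate minus latest observed
print("development factors:", np.round(dev_factors, 3))
print("reserves by accident year:", np.round(reserves, 1))
```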
{"title":"Claim reserving via inverse probability weighting: a micro-level Chain-Ladder method","authors":"Sebastián Calcetero Vanegas, Andrei L. Badescu, X. Sheldon Lin","doi":"10.1007/s13385-024-00395-3","DOIUrl":"https://doi.org/10.1007/s13385-024-00395-3","url":null,"abstract":"<p>Claim reserving primarily relies on macro-level models, with the Chain-Ladder method being the most widely adopted. These methods were heuristically developed without minimal statistical foundations, relying on oversimplified data assumptions and neglecting policyholder heterogeneity, often resulting in conservative reserve predictions. Micro-level reserving, utilizing stochastic modeling with granular information, can improve predictions, but tends to involve less attractive and complex models for practitioners. This paper aims to strike a practical balance between aggregate and individual models by introducing a methodology that enables the Chain-Ladder method to incorporate individual information. We achieve this by proposing a novel framework and formulating the claim reserving problem within a population sampling context. We introduce a reserve estimator in a frequency- and severity-distribution-free manner that utilizes inverse probability weights (IPW) driven by individual information, akin to propensity scores. We demonstrate that the Chain-Ladder method emerges as a particular case of such an IPW estimator, thereby inheriting a statistically sound foundation based on population sampling theory that enables the use of granular information and other extensions.</p>","PeriodicalId":44305,"journal":{"name":"European Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142225537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust asymptotic insurance-finance arbitrage
Pub Date: 2024-08-08 | DOI: 10.1007/s13385-024-00389-1
Katharina Oberpriller, Moritz Ritter, Thorsten Schmidt
This paper studies the valuation of insurance contracts linked to financial markets, for example through interest rates or in equity-linked insurance products. We build upon the concept of insurance-finance arbitrage as introduced by Artzner et al. (Math Financ, 2024), extending their work by incorporating model uncertainty. This is achieved by introducing statistical uncertainty in the underlying dynamics to be represented by a set of priors $\mathscr{P}$. Within this framework we propose the notion of robust asymptotic insurance-finance arbitrage (RIFA) and characterize the absence of such strategies in terms of the new concept of $Q\mathscr{P}$-evaluations. This nonlinear two-step evaluation ensures absence of RIFA. Moreover, it dominates all two-step evaluations, as long as we agree on the set of priors $\mathscr{P}$. Our analysis highlights the role of $Q\mathscr{P}$-evaluations in terms of showing that all two-step evaluations are free of RIFA. Furthermore, we introduce a doubly stochastic model to address uncertainty for surrender and survival, utilizing copulas to define conditional dependence. This setting illustrates how the $Q\mathscr{P}$-evaluation can be applied for the pricing of hybrid insurance products, highlighting the flexibility and potential of the proposed approach.
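A toy sketch of the two-step structure under model uncertainty: an inner risk-neutral valuation of a market-linked benefit, followed by an outer worst case over a set of priors for survival. It only illustrates the shape of such an evaluation; the $Q\mathscr{P}$-evaluations of the paper are defined far more generally, and the uncertainty set below is an assumption.

```python
# Hedged sketch of an inner (financial) / outer (actuarial, robust) evaluation.
import numpy as np

rng = np.random.default_rng(1)

# inner step: risk-neutral Monte Carlo value of an equity-linked benefit at T
S0, r, vol, T, n_sims = 100.0, 0.02, 0.2, 10.0, 200_000
Z = rng.standard_normal(n_sims)
S_T = S0 * np.exp((r - 0.5 * vol**2) * T + vol * np.sqrt(T) * Z)
financial_value = np.exp(-r * T) * np.maximum(S_T, S0).mean()   # guaranteed benefit

# outer step: worst case over a set of priors for the T-year survival probability;
# since the benefit is paid only upon survival, the worst case is the largest p
priors_for_survival = np.linspace(0.80, 0.90, 11)               # assumed uncertainty set
robust_value = max(p * financial_value for p in priors_for_survival)
print(f"inner financial value: {financial_value:.2f}, robust value: {robust_value:.2f}")
```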
{"title":"Robust asymptotic insurance-finance arbitrage","authors":"Katharina Oberpriller, Moritz Ritter, Thorsten Schmidt","doi":"10.1007/s13385-024-00389-1","DOIUrl":"https://doi.org/10.1007/s13385-024-00389-1","url":null,"abstract":"<p>This paper studies the valuation of insurance contracts linked to financial markets, for example through interest rates or in equity-linked insurance products. We build upon the concept of insurance-finance arbitrage as introduced by Artzner et al. (Math Financ, 2024), extending their work by incorporating model uncertainty. This is achieved by introducing statistical uncertainty in the underlying dynamics to be represented by a set of priors <span>({{mathscr {P}}})</span>. Within this framework we propose the notion of <i>robust asymptotic insurance-finance arbitrage</i> (RIFA) and characterize the absence of such strategies in terms of the new concept of <span>({Q}{{mathscr {P}}})</span>-evaluations. This nonlinear two-step evaluation ensures absence of RIFA. Moreover, it dominates all two-step evaluations, as long as we agree on the set of priors <span>({{mathscr {P}}})</span>. Our analysis highlights the role of <span>({Q}{{mathscr {P}}})</span>-evaluations in terms of showing that all two-step evaluations are free of RIFA. Furthermore, we introduce a doubly stochastic model to address uncertainty for surrender and survival, utilizing copulas to define conditional dependence. This setting illustrates how the <span>({Q}{{mathscr {P}}})</span>-evaluation can be applied for the pricing of hybrid insurance products, highlighting the flexibility and potential of the proposed approach.</p>","PeriodicalId":44305,"journal":{"name":"European Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141931915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient simulation and valuation of equity-indexed annuities under a two-factor G2++ model
Pub Date: 2024-07-29 | DOI: 10.1007/s13385-024-00392-6
Sascha Günther, Peter Hieber
Equity-indexed annuities (EIAs) with investment guarantees are pension products sensitive to changes in the interest rate environment. A flexible and common choice for modelling this risk factor is a Hull–White model in its G2++ variant. We investigate the valuation of EIAs in this model setting and extend the literature by introducing a more efficient framework for Monte-Carlo simulation. In addition, we build on previous work by adapting an approach based on scenario matrices to a two-factor G2++ model. This method does not rely on simulations or on Fourier transformations. In numerical studies, we demonstrate its fast convergence and the limitations of techniques relying on the independence of annual returns and the central limit theorem.
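As a baseline for comparison, here is a minimal Monte-Carlo sketch of the two-factor G2++ short rate r(t) = x(t) + y(t) + phi(t) and pathwise discounting of a unit terminal benefit (e.g., the guaranteed part of an EIA). All parameters and the constant shift phi are illustrative assumptions; the paper's scenario-matrix method avoids this kind of simulation altogether.

```python
# Hedged sketch: Euler simulation of a G2++ short rate with assumed parameters.
import numpy as np

rng = np.random.default_rng(7)

a, sigma = 0.5, 0.01       # mean reversion / volatility of factor x (assumed)
b, eta = 0.1, 0.008        # mean reversion / volatility of factor y (assumed)
rho = -0.7                 # correlation between the two Brownian motions
phi = 0.02                 # deterministic shift, here taken constant
T, n_steps, n_sims = 10.0, 520, 50_000
dt = T / n_steps

x = np.zeros(n_sims)
y = np.zeros(n_sims)
integral_r = np.zeros(n_sims)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_sims)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_sims)
    x += -a * x * dt + sigma * np.sqrt(dt) * z1
    y += -b * y * dt + eta * np.sqrt(dt) * z2
    integral_r += (x + y + phi) * dt

# value of a unit benefit paid at T under the simulated short-rate paths
price = np.exp(-integral_r).mean()
print(f"Monte Carlo zero-coupon price P(0, {T:.0f}): {price:.4f}")
```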
{"title":"Efficient simulation and valuation of equity-indexed annuities under a two-factor G2++ model","authors":"Sascha Günther, Peter Hieber","doi":"10.1007/s13385-024-00392-6","DOIUrl":"https://doi.org/10.1007/s13385-024-00392-6","url":null,"abstract":"<p>Equity-indexed annuities (EIAs) with investment guarantees are pension products sensitive to changes in the interest rate environment. A flexible and common choice for modelling this risk factor is a Hull–White model in its G2++ variant. We investigate the valuation of EIAs in this model setting and extend the literature by introducing a more efficient framework for Monte-Carlo simulation. In addition, we build on previous work by adapting an approach based on scenario matrices to a two-factor G2++ model. This method does not rely on simulations or on Fourier transformations. In numerical studies, we demonstrate its fast convergence and the limitations of techniques relying on the independence of annual returns and the central limit theorem.</p>","PeriodicalId":44305,"journal":{"name":"European Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141863992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An iterative least-squares Monte Carlo approach for the simulation of cohort based biometric indices
Pub Date: 2024-07-26 | DOI: 10.1007/s13385-024-00393-5
Anna Rita Bacinello, Pietro Millossovich, Fabio Viviano
This paper tackles the problem of approximating the distribution of future biometric indices under a cohort-based perspective. Unlike period-based evaluations, cohort-based schemes require the computation of conditional expectations for which explicit solutions often do not exist. To overcome this issue, we suggest the application of a well-established methodology, i.e., the Least-Squares Monte Carlo approach. The idea is to approximate conditional expectations by combining simulations and regression techniques, thus avoiding a straightforward but computationally demanding nested simulations method. To show the extreme flexibility and generality of the proposal, we provide extensive numerical results concerning two main longevity indices, life expectancy and lifespan disparity, obtained by adopting both single- and multi-population mortality models. Comparisons between period- and cohort-based results are made as well. Finally, the paper shows that the proposed methodology can be used to approximate other biometric indices at future dates for which cohort-based estimations are often replaced by period ones for computational simplicity.
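A stripped-down sketch of the Least-Squares Monte Carlo idea: realized future outcomes are regressed on the state variable observed at the evaluation date, replacing a nested inner simulation. The random-walk mortality index and the toy target index below are assumptions for illustration, not the mortality models or biometric indices studied in the paper.

```python
# Hedged sketch: regression-based approximation of a conditional expectation.
import numpy as np

rng = np.random.default_rng(3)

n_outer, t_eval, horizon = 20_000, 10, 40
drift, vol = -0.015, 0.02

# outer scenarios: mortality index kappa at the evaluation date
kappa_t = drift * t_eval + vol * np.sqrt(t_eval) * rng.standard_normal(n_outer)

# one (noisy) inner realization per outer scenario: a toy index depending on
# the remaining path of kappa after the evaluation date
kappa_T = kappa_t + drift * (horizon - t_eval) \
    + vol * np.sqrt(horizon - t_eval) * rng.standard_normal(n_outer)
realized_index = 20.0 * np.exp(-0.5 * kappa_T)            # stand-in for e.g. life expectancy

# LSMC step: regress realized outcomes on a polynomial basis of the state kappa_t
basis = np.vander(kappa_t, N=4, increasing=True)           # 1, kappa, kappa^2, kappa^3
coef, *_ = np.linalg.lstsq(basis, realized_index, rcond=None)
conditional_estimate = basis @ coef                         # approx E[index | kappa_t]

print("approximate conditional index for 5 scenarios:",
      np.round(conditional_estimate[:5], 2))
```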
{"title":"An iterative least-squares Monte Carlo approach for the simulation of cohort based biometric indices","authors":"Anna Rita Bacinello, Pietro Millossovich, Fabio Viviano","doi":"10.1007/s13385-024-00393-5","DOIUrl":"https://doi.org/10.1007/s13385-024-00393-5","url":null,"abstract":"<p>This paper tackles the problem of approximating the distribution of future biometric indices under a cohort-based perspective. Unlike period-based evaluations, cohort-based schemes require the computation of conditional expectations for which explicit solutions often do not exist. To overcome this issue, we suggest the application of a well-established methodology, i.e., the Least-Squares Monte Carlo approach. The idea is to approximate conditional expectations by combining simulations and regression techniques, thus avoiding a straightforward but computationally demanding nested simulations method. To show the extreme flexibility and generality of the proposal, we provide extensive numerical results concerning two main longevity indices, life expectancy and lifespan disparity, obtained by adopting both single- and multi-population mortality models. Comparisons between period- and cohort-based results are made as well. Finally, the paper shows that the proposed methodology can be used to approximate other biometric indices at future dates for which cohort-based estimations are often replaced by period ones for computational simplicity.</p>","PeriodicalId":44305,"journal":{"name":"European Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141784331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring and mitigating biases in motor insurance pricing
Pub Date: 2024-07-09 | DOI: 10.1007/s13385-024-00390-8
Mulah Moriah, Franck Vermet, Arthur Charpentier
The non-life insurance sector operates within a highly competitive and tightly regulated framework, confronting a pivotal juncture in the formulation of pricing strategies. Insurers are compelled to harness a range of statistical methodologies and available data to construct optimal pricing structures that align with the overarching corporate strategy while accommodating the dynamics of market competition. Given the fundamental societal role played by insurance, premium rates are subject to rigorous scrutiny by regulatory authorities. Consequently, the act of pricing transcends mere statistical calculations and carries the weight of strategic and societal factors. These multifaceted concerns may drive insurers to establish equitable premiums with respect to various variables. For instance, regulations mandate the provision of equitable premiums with respect to factors such as policyholder gender. Likewise, mutualist groups can implement age-based premium fairness in accordance with their respective corporate strategies. In certain insurance domains, serious illnesses and disabilities are emerging as new dimensions for evaluating fairness. Regardless of the motivating factor prompting an insurer to adopt fairer pricing strategies for a specific variable, the insurer must possess the capability to define, measure, and ultimately mitigate any fairness biases inherent in its pricing practices while upholding standards of consistency and performance. This study seeks to provide a comprehensive set of tools for these endeavors and assess their effectiveness through practical application in the context of automobile insurance. Results show that fairness bias can be found in historical data and models, and that fairer outcomes can be obtained by more fairness-aware approaches.
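One simple way to quantify such a bias is sketched below: the demographic-parity gap in mean predicted claim frequency between two groups, computed for a Poisson model fitted with and without the sensitive attribute. The data, features and group labels are synthetic placeholders; the paper covers a much broader set of fairness notions and mitigation methods.

```python
# Hedged sketch: demographic-parity gap for a Poisson frequency model.
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 20_000

group = rng.integers(0, 2, n)                         # sensitive attribute (0/1)
age = rng.uniform(18, 80, n)
power = rng.uniform(40, 200, n) + 10 * group          # proxy correlated with the group
lam = np.exp(-2.0 + 0.004 * power - 0.01 * (age - 40))
claims = rng.poisson(lam)

X_with = np.column_stack([age, power, group])
X_without = np.column_stack([age, power])

model_with = make_pipeline(StandardScaler(),
                           PoissonRegressor(alpha=1e-4, max_iter=1000)).fit(X_with, claims)
model_without = make_pipeline(StandardScaler(),
                              PoissonRegressor(alpha=1e-4, max_iter=1000)).fit(X_without, claims)

def parity_gap(pred):
    """Difference in mean predicted frequency between the two groups."""
    return pred[group == 1].mean() - pred[group == 0].mean()

print("gap with sensitive attribute:   ", round(parity_gap(model_with.predict(X_with)), 4))
print("gap without sensitive attribute:", round(parity_gap(model_without.predict(X_without)), 4))
```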
{"title":"Measuring and mitigating biases in motor insurance pricing","authors":"Mulah Moriah, Franck Vermet, Arthur Charpentier","doi":"10.1007/s13385-024-00390-8","DOIUrl":"https://doi.org/10.1007/s13385-024-00390-8","url":null,"abstract":"<p>The non-life insurance sector operates within a highly competitive and tightly regulated framework, confronting a pivotal juncture in the formulation of pricing strategies. Insurers are compelled to harness a range of statistical methodologies and available data to construct optimal pricing structures that align with the overarching corporate strategy while accommodating the dynamics of market competition. Given the fundamental societal role played by insurance, premium rates are subject to rigorous scrutiny by regulatory authorities. Consequently, the act of pricing transcends mere statistical calculations and carries the weight of strategic and societal factors. These multifaceted concerns may drive insurers to establish equitable premiums, considering various variables. For instance, regulations mandate the provision of equitable premiums, considering factors such as policyholder gender. Or mutualist groups in accordance with respective corporate strategies can implement age-based premium fairness. In certain insurance domains, the presence of serious illnesses or disabilities are emerging as new dimensions for evaluating fairness. Regardless of the motivating factor prompting an insurer to adopt fairer pricing strategies for a specific variable, the insurer must possess the capability to define, measure, and ultimately mitigate any fairness biases inherent in its pricing practices while upholding standards of consistency and performance. This study seeks to provide a comprehensive set of tools for these endeavors and assess their effectiveness through practical application in the context of automobile insurance. Results show that fairness bias can be found in historical data and models, and that fairer outcomes can be obtained by more fairness-aware approaches.</p>","PeriodicalId":44305,"journal":{"name":"European Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141576560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Credibility theory based on winsorizing
Pub Date: 2024-07-06 | DOI: 10.1007/s13385-024-00391-7
Qian Zhao, Chudamani Poudyal
The classical Bühlmann credibility model has been widely applied to premium estimation for group insurance contracts and other insurance types. In this paper, we develop a robust Bühlmann credibility model based on the winsorized mean of the loss data, a robust alternative to the traditional individual mean. This approach assumes that the observed sample comes from a contaminated underlying model, with a small percentage of the observations being contaminated. This framework provides explicit formulas for the structural parameters in credibility estimation for scale-shape distribution families, location-scale distribution families, and their variants, commonly used in insurance risk modeling. Using the theory of $L$-estimators (as opposed to the influence function approach), we derive the asymptotic properties of the proposed method and validate them through a comprehensive simulation study, comparing their performance to credibility based on the trimmed mean. By varying the winsorizing/trimming thresholds in several parametric models, we find that all structural parameters derived from the winsorized approach are less volatile than those from the trimmed approach. Using the winsorized mean as a robust risk measure can reduce the influence of parametric loss assumptions on credibility estimation. Additionally, we discuss non-parametric estimation in credibility. Finally, a numerical illustration from the Wisconsin Local Government Property Insurance Fund indicates that the proposed robust credibility approach mitigates the impact of model mis-specification and captures the risk behavior of loss data from a broader perspective.
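A small nonparametric sketch of the idea: each risk's experience is summarized by a winsorized mean and then blended through the usual Bühlmann credibility factor. The simulated losses, the winsorizing proportions and the nonparametric structural-parameter estimators are illustrative choices; the paper derives the structural parameters analytically for specific distribution families.

```python
# Hedged sketch: nonparametric Bühlmann credibility on winsorized losses.
import numpy as np
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(11)

n_groups, n_years = 5, 20
losses = rng.lognormal(mean=7.0, sigma=1.2, size=(n_groups, n_years))   # toy heavy-tailed data

# winsorize each group's yearly losses (clip the lowest 5% and highest 10%)
w_losses = np.array([np.asarray(winsorize(row, limits=(0.05, 0.10))) for row in losses])

group_means = w_losses.mean(axis=1)
overall_mean = w_losses.mean()
within_var = w_losses.var(axis=1, ddof=1).mean()            # expected process variance
between_var = max(group_means.var(ddof=1) - within_var / n_years, 0.0)

k = within_var / between_var if between_var > 0 else np.inf
Z = n_years / (n_years + k)                                  # Bühlmann credibility factor
credibility_premiums = Z * group_means + (1 - Z) * overall_mean
print(f"credibility factor Z = {Z:.2f}")
print("premiums per group:", np.round(credibility_premiums, 1))
```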
{"title":"Credibility theory based on winsorizing","authors":"Qian Zhao, Chudamani Poudyal","doi":"10.1007/s13385-024-00391-7","DOIUrl":"https://doi.org/10.1007/s13385-024-00391-7","url":null,"abstract":"<p>The classical Bühlmann credibility model has been widely applied to premium estimation for group insurance contracts and other insurance types. In this paper, we develop a robust Bühlmann credibility model using the winsorized version of loss data, also known as the winsorized mean (a robust alternative to the traditional individual mean). This approach assumes that the observed sample data come from a contaminated underlying model with a small percentage of contaminated sample data. This framework provides explicit formulas for the structural parameters in credibility estimation for scale-shape distribution families, location-scale distribution families, and their variants, commonly used in insurance risk modeling. Using the theory of <span>(L)</span>-estimators (different from the influence function approach), we derive the asymptotic properties of the proposed method and validate them through a comprehensive simulation study, comparing their performance to credibility based on the trimmed mean. By varying the winsorizing/trimming thresholds in several parametric models, we find that all structural parameters derived from the winsorized approach are less volatile than those from the trimmed approach. Using the winsorized mean as a robust risk measure can reduce the influence of parametric loss assumptions on credibility estimation. Additionally, we discuss non-parametric estimations in credibility. Finally, a numerical illustration from the Wisconsin Local Government Property Insurance Fund indicates that the proposed robust credibility approach mitigates the impact of model mis-specification and captures the risk behavior of loss data from a broader perspective.</p>","PeriodicalId":44305,"journal":{"name":"European Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141576662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fairness: plurality, causality, and insurability
Pub Date: 2024-06-19 | DOI: 10.1007/s13385-024-00387-3
Matthias Fahrenwaldt, Christian Furrer, Munir Eberhardt Hiabu, Fei Huang, Frederik Hytting Jørgensen, Mathias Lindholm, Joshua Loftus, Mogens Steffensen, Andreas Tsanakas
This article summarizes the main topics, findings, and avenues for future work from the workshop Fairness with a view towards insurance, held in August 2023 in Copenhagen, Denmark.
{"title":"Fairness: plurality, causality, and insurability","authors":"Matthias Fahrenwaldt, Christian Furrer, Munir Eberhardt Hiabu, Fei Huang, Frederik Hytting Jørgensen, Mathias Lindholm, Joshua Loftus, Mogens Steffensen, Andreas Tsanakas","doi":"10.1007/s13385-024-00387-3","DOIUrl":"https://doi.org/10.1007/s13385-024-00387-3","url":null,"abstract":"<p>This article summarizes the main topics, findings, and avenues for future work from the workshop <i>Fairness with a view towards insurance</i> held August 2023 in Copenhagen, Denmark.</p>","PeriodicalId":44305,"journal":{"name":"European Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}