{"title":"Antidiscrimination Insurance Pricing: Regulations, Fairness Criteria, and Models","authors":"Xi Xin, Fei Huang","doi":"10.1080/10920277.2023.2190528","DOIUrl":null,"url":null,"abstract":"On the issue of insurance discrimination, a grey area in regulation has resulted from the growing use of big data analytics by insurance companies: direct discrimination is prohibited, but indirect discrimination using proxies or more complex and opaque algorithms is not clearly specified or assessed. This phenomenon has recently attracted the attention of insurance regulators all over the world. Meanwhile, various fairness criteria have been proposed and flourished in the machine learning literature with the rapid growth of artificial intelligence (AI) in the past decade and have mostly focused on classification decisions. In this article, we introduce some fairness criteria that are potentially applicable to insurance pricing as a regression problem to the actuarial field, match them with different levels of potential and existing antidiscrimination regulations, and implement them into a series of existing and newly proposed antidiscrimination insurance pricing models, using both generalized linear models (GLMs) and Extreme Gradient Boosting (XGBoost). Our empirical analysis compares the outcome of different models via the fairness–accuracy trade-off and shows their impact on adverse selection and solidarity.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4000,"publicationDate":"2023-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"North American Actuarial Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/10920277.2023.2190528","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"BUSINESS, FINANCE","Score":null,"Total":0}
引用次数: 1
Abstract
On the issue of insurance discrimination, the growing use of big data analytics by insurance companies has created a grey area in regulation: direct discrimination is prohibited, but indirect discrimination through proxies or more complex and opaque algorithms is not clearly specified or assessed. This phenomenon has recently attracted the attention of insurance regulators around the world. Meanwhile, with the rapid growth of artificial intelligence (AI) over the past decade, various fairness criteria have been proposed and have flourished in the machine learning literature, mostly focused on classification decisions. In this article, we introduce to the actuarial field fairness criteria that are potentially applicable to insurance pricing viewed as a regression problem, match them with different levels of potential and existing antidiscrimination regulations, and implement them in a series of existing and newly proposed antidiscrimination insurance pricing models, using both generalized linear models (GLMs) and Extreme Gradient Boosting (XGBoost). Our empirical analysis compares the outcomes of the different models via the fairness–accuracy trade-off and shows their impact on adverse selection and solidarity.
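To make the idea of a regression-oriented fairness criterion concrete, the sketch below fits a simple GLM-style frequency model on synthetic data and measures one commonly used group-fairness notion adapted to regression: the gap in mean predicted premium between groups defined by a protected attribute. This is not the authors' implementation; the data, the "unawareness" modelling choice, and the specific criterion are illustrative assumptions only, since the abstract does not detail which fairness criteria or datasets are used.

```python
# Minimal sketch (illustrative assumptions, not the article's implementation):
# fit a Poisson GLM for claim frequency while omitting the protected attribute
# ("fairness through unawareness"), then measure a demographic-parity-style gap
# in mean predictions across groups.
import numpy as np
from sklearn.linear_model import PoissonRegressor  # GLM with log link, common in frequency pricing

rng = np.random.default_rng(0)
n = 5_000

# Synthetic rating factors plus a hypothetical binary protected attribute.
age = rng.uniform(18, 80, n)
vehicle_value = rng.lognormal(mean=10, sigma=0.4, size=n)
protected = rng.integers(0, 2, n)
claims = rng.poisson(lam=np.exp(-3 + 0.01 * age + 0.2 * protected))

# "Unawareness" model: the protected attribute is simply dropped from the design matrix,
# so any residual disparity comes through correlated proxies or chance.
X = np.column_stack([age, np.log(vehicle_value)])
glm = PoissonRegressor(alpha=1e-4).fit(X, claims)
pred = glm.predict(X)

# Demographic-parity gap for regression: difference in mean predicted frequency by group.
gap = abs(pred[protected == 1].mean() - pred[protected == 0].mean())
print(f"Mean predicted frequency gap between groups: {gap:.4f}")
```

The same gap (or another criterion) could be recomputed for an XGBoost regressor and plotted against predictive accuracy to trace the kind of fairness–accuracy trade-off the article compares across models.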