Assessing Fair Machine Learning Strategies Through a Fairness-Utility Trade-off Metric
Luiz Fernando F. P. de Lima, D. R. D. Ricarte, C. Siebra
Anais do XVIII Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2021)
DOI: 10.5753/eniac.2021.18288
Published: 2021-11-29
Citations: 1
Abstract
Due to the increasing use of artificial intelligence for decision making, and the biased decisions observed in many applications, researchers are investigating solutions that attempt to build fairer models that do not reproduce discrimination. Some of the explored strategies are based on adversarial learning, which pursues fairness in machine learning by encoding fairness constraints through an adversarial model. Moreover, each proposal usually assesses its model with its own specific metric, making the comparison of current approaches a complex task. To address this, we defined a utility-fairness trade-off metric. Using this metric, we assessed 15 fair model implementations and a baseline model, providing a systematic comparative ruler for other approaches.
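The abstract does not spell out how the trade-off metric is computed. As a rough illustration only, the sketch below combines a utility term (accuracy) with a fairness term (the complement of the demographic parity difference) into a single score via a harmonic mean; the function name, the choice of both terms, and the aggregation are all assumptions for illustration, not the paper's actual definition.

```python
import numpy as np

def trade_off_score(y_true, y_pred, group):
    """Hypothetical fairness-utility trade-off score in [0, 1].

    Assumes binary labels/predictions and a binary protected attribute;
    this is an illustrative stand-in, not the metric defined in the paper.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    # Utility term: plain accuracy of the predictions.
    utility = np.mean(y_true == y_pred)

    # Fairness term: 1 - |P(y_pred=1 | group=0) - P(y_pred=1 | group=1)|,
    # i.e., the complement of the demographic parity difference.
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    fairness = 1.0 - abs(rate_0 - rate_1)

    # Harmonic mean penalizes models that sacrifice either dimension.
    if utility + fairness == 0:
        return 0.0
    return 2 * utility * fairness / (utility + fairness)

# Toy example: an accurate model that over-predicts positives for group 1.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(trade_off_score(y_true, y_pred, group))
```

A single scalar of this kind is what allows the paper's 15 fair model implementations and the baseline to be ranked on one scale, rather than compared across each proposal's own metric.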