{"title":"A Survey on Optimization and Machine -learning-based Fair Decision Making in Healthcare","authors":"Zequn Chen, Wesley J. Marrero","doi":"10.1101/2024.03.16.24304403","DOIUrl":null,"url":null,"abstract":"Background. Unintended biases introduced by optimization and machine learning (ML) models are of great interest to medical professionals. Bias in healthcare decisions can cause patients from vulnerable populations (e.g., racially minoritized, low-income) to have lower access to resources, exacerbating societal unfairness. Purpose. This review aims to identify, describe, and categorize literature regarding bias types, fairness metrics, and bias mitigation methods in healthcare decision making. Data Sources. Google Scholar database was searched to identify published studies. Study Selection. Eligible studies were required to present 1) types of bias 2) fairness metrics and 3) bias mitigation methods within decision-making in healthcare. Data Extraction. Studies were classified according to the three themes mentioned in the Study Selection. Information was extracted concerning the definitions, examples, applications, and limitations of bias types, fairness metrics, and bias mitigation methods. Data Synthesis. In bias type section, we included studies (n=15) concerning different biases. In the fairness metric section, we included studies (n=6) regarding common fairness metrics. In bias mitigation method section, themes included pre-processing methods (n=5), in-processing methods (n=16), and post-processing methods (n=4). Limitations. Most examples in our survey are from the United States since the majority of studies included in this survey were conducted in the United States. In the meanwhile, we limited the search language to English, so we may not capture some meaningful articles in other languages. Conclusions. Several types of bias, fairness metrics, and bias mitigation methods (especially optimization and machine learning-based methods) were identified in this review, with common themes based on analytical approaches. We also found topics such as explainability, fairness metric selection, and integration of prediction and optimization are promising directions for future studies.","PeriodicalId":501556,"journal":{"name":"medRxiv - Health Systems and Quality Improvement","volume":"26 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Health Systems and Quality Improvement","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.03.16.24304403","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Background. Unintended biases introduced by optimization and machine learning (ML) models are of great interest to medical professionals. Bias in healthcare decisions can cause patients from vulnerable populations (e.g., racially minoritized or low-income patients) to have lower access to resources, exacerbating societal unfairness.

Purpose. This review aims to identify, describe, and categorize the literature on bias types, fairness metrics, and bias mitigation methods in healthcare decision making.

Data Sources. The Google Scholar database was searched to identify published studies.

Study Selection. Eligible studies were required to present 1) types of bias, 2) fairness metrics, and 3) bias mitigation methods within decision making in healthcare.

Data Extraction. Studies were classified according to the three themes described in the Study Selection. Information was extracted on the definitions, examples, applications, and limitations of bias types, fairness metrics, and bias mitigation methods.

Data Synthesis. In the bias type section, we included studies (n=15) concerning different biases. In the fairness metric section, we included studies (n=6) regarding common fairness metrics. In the bias mitigation method section, themes included pre-processing methods (n=5), in-processing methods (n=16), and post-processing methods (n=4).

Limitations. Most examples in our survey are from the United States, since the majority of the studies included in this survey were conducted there. In addition, we limited the search language to English, so we may not have captured some meaningful articles in other languages.

Conclusions. Several types of bias, fairness metrics, and bias mitigation methods (especially optimization- and machine learning-based methods) were identified in this review, with common themes based on analytical approaches. We also found that topics such as explainability, fairness metric selection, and the integration of prediction and optimization are promising directions for future studies.
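For readers unfamiliar with the group-fairness metrics the survey categorizes, a brief illustration may help. The sketch below is our own illustrative code, not drawn from the survey; the function names, variable names, and synthetic data are all assumptions. It computes two metrics commonly discussed in this literature, the demographic parity difference and the equalized-odds gaps, for a binary classifier evaluated across two patient groups.

```python
# Minimal sketch (illustrative, not from the survey): two common
# group-fairness metrics for a binary classifier and a binary group label.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """|P(Yhat=1 | group=0) - P(Yhat=1 | group=1)|."""
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(rate0 - rate1)

def equalized_odds_gaps(y_true, y_pred, group):
    """Absolute cross-group gaps in true-positive and false-positive rates."""
    gaps = []
    for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
        mask = y_true == label
        rate0 = y_pred[mask & (group == 0)].mean()
        rate1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate0 - rate1))
    return tuple(gaps)  # (TPR gap, FPR gap)

# Synthetic example: predictions for 8 patients in two groups.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))  # 0.0 (equal selection rates)
print(equalized_odds_gaps(y_true, y_pred, group))    # (0.333..., 0.333...)
```

Note that the two metrics can disagree, as in the synthetic data above: selection rates are identical across groups, yet the error-rate gaps are not, which is one reason fairness metric selection is highlighted as an open question.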
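Similarly, the pre-/in-/post-processing taxonomy of mitigation methods can be made concrete with a small post-processing example. The following sketch is an illustrative assumption, not a method taken from the survey: it equalizes selection rates across groups by choosing a separate decision threshold per group on held-out risk scores.

```python
# Minimal sketch (an assumption, not the survey's method): post-processing
# mitigation via per-group decision thresholds that match selection rates.
import numpy as np

def group_thresholds(scores, group, target_rate):
    """Choose a threshold per group so each group selects ~target_rate of members."""
    thresholds = {}
    for g in np.unique(group):
        s = np.sort(scores[group == g])[::-1]          # group's scores, descending
        k = max(1, int(round(target_rate * len(s))))   # how many members to select
        thresholds[g] = s[k - 1]                       # k-th highest score
    return thresholds

# Synthetic risk scores for two groups of five patients each.
rng = np.random.default_rng(0)
scores = rng.uniform(size=10)
group = np.array([0] * 5 + [1] * 5)

th = group_thresholds(scores, group, target_rate=0.4)
y_pred = np.array([scores[i] >= th[group[i]] for i in range(len(scores))])
# Each group now selects ~40% of its members, equalizing selection rates
# (i.e., driving the demographic parity difference toward zero).
```

This design choice trades accuracy for parity without retraining the underlying model, which is what distinguishes post-processing methods from the pre-processing (data-level) and in-processing (training-level) methods the survey also covers.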