A Survey on Optimization and Machine Learning-based Fair Decision Making in Healthcare

Zequn Chen, Wesley J. Marrero
DOI: 10.1101/2024.03.16.24304403
Journal: medRxiv - Health Systems and Quality Improvement
Published: 2024-03-18 (Journal Article)
Citations: 0

Abstract

Background. Unintended biases introduced by optimization and machine learning (ML) models are of great interest to medical professionals. Bias in healthcare decisions can cause patients from vulnerable populations (e.g., racially minoritized or low-income patients) to have lower access to resources, exacerbating societal unfairness.

Purpose. This review aims to identify, describe, and categorize literature regarding bias types, fairness metrics, and bias mitigation methods in healthcare decision making.

Data Sources. The Google Scholar database was searched to identify published studies.

Study Selection. Eligible studies were required to present 1) types of bias, 2) fairness metrics, and 3) bias mitigation methods within decision making in healthcare.

Data Extraction. Studies were classified according to the three themes mentioned in the Study Selection. Information was extracted concerning the definitions, examples, applications, and limitations of bias types, fairness metrics, and bias mitigation methods.

Data Synthesis. In the bias type section, we included studies (n=15) concerning different biases. In the fairness metric section, we included studies (n=6) regarding common fairness metrics. In the bias mitigation method section, themes included pre-processing methods (n=5), in-processing methods (n=16), and post-processing methods (n=4).

Limitations. Most examples in our survey are from the United States, since the majority of the included studies were conducted there. In addition, we limited the search language to English, so we may have missed meaningful articles in other languages.

Conclusions. Several types of bias, fairness metrics, and bias mitigation methods (especially optimization- and machine learning-based methods) were identified in this review, with common themes based on analytical approaches. We also found that topics such as explainability, fairness metric selection, and the integration of prediction and optimization are promising directions for future studies.
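To make the "fairness metrics" theme concrete, the sketch below computes two group fairness metrics that commonly appear in this literature: demographic parity difference (the gap in positive-prediction rates between two groups) and equalized odds difference (the larger of the gaps in true-positive and false-positive rates). This is an illustrative implementation of the standard definitions, not code from the surveyed paper; the function names and the two-group restriction are our own simplifying assumptions.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equalized_odds_difference(y_true, y_pred, group):
    """Max of the TPR gap and FPR gap between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
        rates = [y_pred[(group == g) & (y_true == label)].mean()
                 for g in np.unique(group)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy example: group 1 receives positive predictions far more often.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))        # → 0.5
print(equalized_odds_difference(y_true, y_pred, group))    # → 0.5
```

A value of 0 on either metric indicates parity between the two groups; which metric is appropriate depends on the decision context, which is precisely the "fairness metric selection" question the review flags as an open direction.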