Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness

V. Jeanselme, Maria De-Arteaga, Zhe Zhang, J. Barrett, Brian D. M. Tom
Proceedings of Machine Learning Research, vol. 193, pp. 12–34. Published 2022-08-13. DOI: 10.48550/arXiv.2208.06648. Citations: 3.

Abstract

Biases have marked medical history, leading to unequal care affecting marginalised groups. The patterns of missingness in observational data often reflect these group discrepancies, but the algorithmic fairness implications of group-specific missingness are not well understood. Despite its potential impact, imputation is too often an overlooked preprocessing step. When explicitly considered, attention is placed on overall performance, ignoring how this preprocessing can reinforce group-specific inequities. Our work questions this choice by studying how imputation affects downstream algorithmic fairness. First, we provide a structured view of the relationship between clinical presence mechanisms and group-specific missingness patterns. Then, through simulations and real-world experiments, we demonstrate that the imputation choice influences marginalised group performance and that no imputation strategy consistently reduces disparities. Importantly, our results show that current practices may endanger health equity, as imputation strategies that perform similarly at the population level can affect marginalised groups differently. Finally, we propose recommendations for mitigating inequities that may stem from a neglected step of the machine learning pipeline.
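To make the abstract's central point concrete, the following is a minimal simulation sketch (not taken from the paper): when one group has both a different feature distribution and a higher missingness rate, a population-level imputation strategy can look adequate overall while producing much larger imputation error for the marginalised group than a group-conditional alternative. All parameters (group means, missingness rates) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: two groups with different feature means, and a
# higher missingness rate in the marginalised group (group 1).
n = 10_000
group = rng.integers(0, 2, size=n)              # 0 = majority, 1 = marginalised
x_true = rng.normal(loc=np.where(group == 0, 0.0, 2.0), scale=1.0)

# Group-specific missingness: 10% missing for group 0, 40% for group 1.
miss_rate = np.where(group == 0, 0.10, 0.40)
observed = rng.random(n) >= miss_rate
x_obs = np.where(observed, x_true, np.nan)

# Strategy A: population-mean imputation (ignores group structure).
pop_mean = np.nanmean(x_obs)
x_pop = np.where(observed, x_obs, pop_mean)

# Strategy B: group-conditional mean imputation.
x_grp = x_obs.copy()
for g in (0, 1):
    grp_mean = np.nanmean(x_obs[group == g])
    x_grp[(group == g) & ~observed] = grp_mean

# Per-group imputation error: similar aggregate numbers can hide a much
# larger error for the marginalised group under strategy A.
for name, x_imp in [("population mean", x_pop), ("group mean", x_grp)]:
    for g in (0, 1):
        mask = (group == g) & ~observed
        rmse = np.sqrt(np.mean((x_imp[mask] - x_true[mask]) ** 2))
        print(f"{name:15s} group {g}: RMSE = {rmse:.2f}")
```

In this toy setting, strategy A fills the marginalised group's missing values with a mean dominated by the majority group, so its per-group error is systematically larger there; this is one simple mechanism behind the paper's observation that imputation choices can affect groups differently even when population-level performance is comparable.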