FAIM: Fairness-aware interpretable modeling for trustworthy machine learning in healthcare
Mingxuan Liu, Yilin Ning, Yuhe Ke, Yuqing Shang, Bibhas Chakraborty, Marcus Eng Hock Ong, Roger Vaughan, Nan Liu
{"title":"FAIM:面向医疗保健领域可信机器学习的公平感知可解释建模","authors":"Mingxuan Liu, Yilin Ning, Yuhe Ke, Yuqing Shang, Bibhas Chakraborty, Marcus Eng Hock Ong, Roger Vaughan, Nan Liu","doi":"10.1016/j.patter.2024.101059","DOIUrl":null,"url":null,"abstract":"<p>The escalating integration of machine learning in high-stakes fields such as healthcare raises substantial concerns about model fairness. We propose an interpretable framework, fairness-aware interpretable modeling (FAIM), to improve model fairness without compromising performance, featuring an interactive interface to identify a “fairer” model from a set of high-performing models and promoting the integration of data-driven evidence and clinical expertise to enhance contextualized fairness. We demonstrate FAIM’s value in reducing intersectional biases arising from race and sex by predicting hospital admission with two real-world databases, the Medical Information Mart for Intensive Care IV Emergency Department (MIMIC-IV-ED) and the database collected from Singapore General Hospital Emergency Department (SGH-ED). For both datasets, FAIM models not only exhibit satisfactory discriminatory performance but also significantly mitigate biases as measured by well-established fairness metrics, outperforming commonly used bias mitigation methods. Our approach demonstrates the feasibility of improving fairness without sacrificing performance and provides a modeling mode that invites domain experts to engage, fostering a multidisciplinary effort toward tailored AI fairness.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.7000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FAIM: Fairness-aware interpretable modeling for trustworthy machine learning in healthcare\",\"authors\":\"Mingxuan Liu, Yilin Ning, Yuhe Ke, Yuqing Shang, Bibhas Chakraborty, Marcus Eng Hock Ong, Roger Vaughan, Nan Liu\",\"doi\":\"10.1016/j.patter.2024.101059\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The escalating integration of machine learning in high-stakes fields such as healthcare raises substantial concerns about model fairness. We propose an interpretable framework, fairness-aware interpretable modeling (FAIM), to improve model fairness without compromising performance, featuring an interactive interface to identify a “fairer” model from a set of high-performing models and promoting the integration of data-driven evidence and clinical expertise to enhance contextualized fairness. We demonstrate FAIM’s value in reducing intersectional biases arising from race and sex by predicting hospital admission with two real-world databases, the Medical Information Mart for Intensive Care IV Emergency Department (MIMIC-IV-ED) and the database collected from Singapore General Hospital Emergency Department (SGH-ED). For both datasets, FAIM models not only exhibit satisfactory discriminatory performance but also significantly mitigate biases as measured by well-established fairness metrics, outperforming commonly used bias mitigation methods. 
Our approach demonstrates the feasibility of improving fairness without sacrificing performance and provides a modeling mode that invites domain experts to engage, fostering a multidisciplinary effort toward tailored AI fairness.</p>\",\"PeriodicalId\":36242,\"journal\":{\"name\":\"Patterns\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Patterns\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1016/j.patter.2024.101059\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Patterns","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.patter.2024.101059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
The escalating integration of machine learning in high-stakes fields such as healthcare raises substantial concerns about model fairness. We propose an interpretable framework, fairness-aware interpretable modeling (FAIM), to improve model fairness without compromising performance, featuring an interactive interface to identify a “fairer” model from a set of high-performing models and promoting the integration of data-driven evidence and clinical expertise to enhance contextualized fairness. We demonstrate FAIM’s value in reducing intersectional biases arising from race and sex by predicting hospital admission with two real-world databases, the Medical Information Mart for Intensive Care IV Emergency Department (MIMIC-IV-ED) and the database collected from Singapore General Hospital Emergency Department (SGH-ED). For both datasets, FAIM models not only exhibit satisfactory discriminatory performance but also significantly mitigate biases as measured by well-established fairness metrics, outperforming commonly used bias mitigation methods. Our approach demonstrates the feasibility of improving fairness without sacrificing performance and provides a modeling mode that invites domain experts to engage, fostering a multidisciplinary effort toward tailored AI fairness.
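To make the selection idea concrete, below is a minimal, hypothetical sketch of choosing a "fairer" model from a set of high-performing candidates, in the spirit of what the abstract describes. It is not the authors' FAIM implementation: the performance metric (AUROC), the fairness metric (an equal-opportunity gap between two subgroups), the 1% performance tolerance, and the synthetic data are all illustrative assumptions.

```python
# Illustrative sketch (assumed, not the FAIM method): among candidate models whose
# AUROC is within a small tolerance of the best, pick the one with the smallest
# subgroup disparity in true-positive rate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def equal_opportunity_gap(y_true, y_score, group, threshold=0.5):
    """Absolute difference in true-positive rate between two subgroups."""
    y_pred = (y_score >= threshold).astype(int)
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

# Synthetic data; a binary column stands in for a sensitive attribute such as sex.
X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
sensitive = (X[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0
)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf_shallow": RandomForestClassifier(max_depth=3, random_state=0),
    "rf_deep": RandomForestClassifier(random_state=0),
}

results = {}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    results[name] = (
        roc_auc_score(y_te, scores),              # discrimination
        equal_opportunity_gap(y_te, scores, s_te),  # subgroup disparity
    )

# Keep models whose AUROC is within 1% of the best, then pick the smallest gap.
best_auc = max(auc for auc, _ in results.values())
near_optimal = {k: v for k, v in results.items() if v[0] >= best_auc - 0.01}
fairest = min(near_optimal, key=lambda k: near_optimal[k][1])
print(results, "->", fairest)
```

The two-stage filter (performance first, fairness second) mirrors the abstract's framing of selecting a fairer model from a set of high-performing ones without sacrificing discrimination; the specific metrics and tolerance would be chosen with domain experts in practice.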