FairPOT: Balancing AUC Performance and Fairness with Proportional Optimal Transport
Pengxi Liu, Yi Shen, Matthew M Engelhard, Benjamin A Goldstein, Michael J Pencina, Nicoleta J Economou-Zavlanos, Michael M Zavlanos
DOI: 10.1609/aies.v8i2.36660 | Published: 2025-10-01 (Epub: 2025-10-15) | Pages: 1611-1622
Fairness metrics based on the area under the receiver operating characteristic curve (AUC) have gained increasing attention in high-stakes domains such as healthcare, finance, and criminal justice. In these domains, fairness is often evaluated over risk scores rather than binary outcomes, and a common challenge is that enforcing strict fairness can significantly degrade AUC performance. To address this challenge, we propose Fair Proportional Optimal Transport (FairPOT), a novel, model-agnostic post-processing framework that strategically aligns risk score distributions across different groups using optimal transport, but does so selectively by transforming a controllable proportion, i.e., the top-λ quantile, of scores within the disadvantaged group. By varying λ, our method allows a tunable trade-off between reducing AUC disparity and maintaining overall AUC performance. Furthermore, we extend FairPOT to the partial AUC setting, enabling fairness interventions to concentrate on the highest-risk regions. Extensive experiments on synthetic, public, and clinical datasets show that FairPOT consistently outperforms existing post-processing techniques in both global and partial AUC scenarios, often achieving improved fairness with only slight AUC degradation or even positive gains in utility. The computational efficiency and practical adaptability of FairPOT make it a promising solution for real-world deployment.
{"title":"FairPOT: Balancing AUC Performance and Fairness with Proportional Optimal Transport.","authors":"Pengxi Liu, Yi Shen, Matthew M Engelhard, Benjamin A Goldstein, Michael J Pencina, Nicoleta J Economou-Zavlanos, Michael M Zavlanos","doi":"10.1609/aies.v8i2.36660","DOIUrl":"10.1609/aies.v8i2.36660","url":null,"abstract":"<p><p>Fairness metrics utilizing the area under the receiver operator characteristic curve (AUC) have gained increasing attention in high-stakes domains such as healthcare, finance, and criminal justice. In these domains, fairness is often evaluated over risk scores rather than binary outcomes, and a common challenge is that enforcing strict fairness can significantly degrade AUC performance. To address this challenge, we propose Fair Proportional Optimal Transport (FairPOT), a novel, model-agnostic post-processing framework that strategically aligns risk score distributions across different groups using optimal transport, but does so selectively by transforming a controllable proportion, i.e., the top- <math><mi>λ</mi></math> quantile, of scores within the disadvantaged group. By varying <math><mi>λ</mi></math> , our method allows for a tunable trade-off between reducing AUC disparities and maintaining overall AUC performance. Furthermore, we extend FairPOT to the partial AUC setting, enabling fairness interventions to concentrate on the highest-risk regions. Extensive experiments on synthetic, public, and clinical datasets show that FairPOT consistently outperforms existing post-processing techniques in both global and partial AUC scenarios, often achieving improved fairness with slight AUC degradation or even positive gains in utility. The computational efficiency and practical adaptability of FairPOT make it a promising solution for real-world deployment.</p>","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"8 2","pages":"1611-1622"},"PeriodicalIF":0.0,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12671453/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Privacy Preserving Machine Learning Systems","authors":"Soumia Zohra El Mestari","doi":"10.1145/3514094.3539530","DOIUrl":"https://doi.org/10.1145/3514094.3539530","url":null,"abstract":"","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"16 1","pages":"898"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82464017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AIES '22: AAAI/ACM Conference on AI, Ethics, and Society, Oxford, United Kingdom, May 19 - 21, 2021","authors":"","doi":"10.1145/3514094","DOIUrl":"https://doi.org/10.1145/3514094","url":null,"abstract":"","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86389791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bias in Artificial Intelligence Models in Financial Services","authors":"Ángel Pavón Pérez","doi":"10.1145/3514094.3539561","DOIUrl":"https://doi.org/10.1145/3514094.3539561","url":null,"abstract":"","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"191 1","pages":"908"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76933461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To Scale: The Universalist and Imperialist Narrative of Big Tech","authors":"Jessica de Jesus de Pinho Pinhal","doi":"10.1145/3461702.3462474","DOIUrl":"https://doi.org/10.1145/3461702.3462474","url":null,"abstract":"","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"3 1","pages":"267-268"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84278797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AIES '21: AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event, USA, May 19-21, 2021","authors":"","doi":"10.1145/3461702","DOIUrl":"https://doi.org/10.1145/3461702","url":null,"abstract":"","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84537795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward Implementing the Agent-Deed-Consequence Model of Moral Judgment in Autonomous Vehicles
Veljko Dubljević
DOI: 10.1145/3375627.3375853 | Published: 2020-02-07

Autonomous vehicles (AVs) and the accidents they are involved in attest to the urgent need to consider the ethics of AI. The question dominating the discussion has been whether we want AVs to behave in a 'selfish' or a utilitarian manner. Rather than modeling self-driving cars on a single moral system such as utilitarianism, one possible approach to programming for AI is to draw on recent work in neuroethics. The Agent-Deed-Consequence (ADC) model [1-4] provides a promising account while also lending itself well to implementation in AI. The ADC model explains moral judgments by breaking them down into positive or negative intuitive evaluations of the Agent, Deed, and Consequence in any given situation. These intuitive evaluations combine to produce a judgment of moral acceptability, which accounts for the considerable flexibility and stability of human moral judgment that has yet to be replicated in AI. This paper examines the advantages and disadvantages of implementing the ADC model and how the model could inform future work on the ethics of AI in general.
{"title":"Toward Implementing the Agent-Deed-Consequence Model of Moral Judgment in Autonomous Vehicles","authors":"Veljko Dubljević","doi":"10.1145/3375627.3375853","DOIUrl":"https://doi.org/10.1145/3375627.3375853","url":null,"abstract":"Autonomous vehicles (AVs) and accidents they are involved in attest to the urgent need to consider the ethics of AI. The question dominating the discussion has been whether we want AVs to behave in a 'selfish' or utilitarian manner. Rather than considering modeling self-driving cars on a single moral system like utilitarianism, one possible way to approach programming for AI would be to reflect recent work in neuroethics. The Agent-Deed-Consequence (ADC) model [1-4] provides a promising account while also lending itself well to implementation in AI. The ADC model explains moral judgments by breaking them down into positive or negative intuitive evaluations of the Agent, Deed, and Consequence in any given situation. These intuitive evaluations combine to produce a judgment of moral acceptability. This explains the considerable flexibility and stability of human moral judgment that has yet to be replicated in AI. This paper examines the advantages and disadvantages of implementing the ADC model and how the model could inform future work on ethics of AI in general.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"93 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85247049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trade-offs in Fair Redistricting
Zachary Schutzman
DOI: 10.1145/3375627.3375802 | Published: 2020-02-07

What constitutes a 'fair' electoral districting plan is a debate dating back to the founding of the United States and, in light of several recent court cases, mathematical developments, and the approaching 2020 U.S. Census, it remains fiercely contested today. Given the growing desire and ability to use algorithmic tools in drawing these districts, we discuss two prototypical formulations of fairness in this domain: drawing the districts by a neutral procedure, or drawing them to intentionally induce an equitable electoral outcome. We then generate a large sample of districting plans for North Carolina and Pennsylvania and consider empirically how compactness and partisan symmetry, as instantiations of these frameworks, trade off against each other: prioritizing one of these values necessarily comes at a cost in the other.
{"title":"Trade-offs in Fair Redistricting","authors":"Zachary Schutzman","doi":"10.1145/3375627.3375802","DOIUrl":"https://doi.org/10.1145/3375627.3375802","url":null,"abstract":"What constitutes a 'fair' electoral districting plan is a discussion dating back to the founding of the United States and, in light of several recent court cases, mathematical developments, and the approaching 2020 U.S. Census, is still a fiercely debated topic today. In light of the growing desire and ability to use algorithmic tools in drawing these districts, we discuss two prototypical formulations of fairness in this domain: drawing the districts by a neutral procedure or drawing them to intentionally induce an equitable electoral outcome. We then generate a large sample of districting plans for North Carolina and Pennsylvania and consider empirically how compactness and partisan symmetry, as instantiations of these frameworks, trade off with each other -- prioritizing the value of one of these necessarily comes at a cost in the other.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76015668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Fairness-aware Incentive Scheme for Federated Learning
Han Yu, Zelei Liu, Yang Liu, Tianjian Chen, Mingshu Cong, Xi Weng, D. Niyato, Qiang Yang
DOI: 10.1145/3375627.3375840 | Published: 2020-02-07
In federated learning (FL), data owners "share" their local data in a privacy-preserving manner in order to build a federated model, which, in turn, can be used to generate revenue for the participants. However, in FL involving business participants, significant costs may be incurred if several competitors join the same federation. Furthermore, training and commercializing the models takes time, resulting in delays before the federation accumulates enough budget to pay back the participants. These issues of costs and the temporary mismatch between contributions and rewards have not been addressed by existing payoff-sharing schemes. In this paper, we propose the Federated Learning Incentivizer (FLI) payoff-sharing scheme. The scheme dynamically divides a given budget in a context-aware manner among the data owners in a federation by jointly maximizing the collective utility while minimizing inequality among the data owners, both in the payoff they receive and in the time they wait to receive it. Extensive experimental comparisons with five state-of-the-art payoff-sharing schemes show that FLI is the most attractive to high-quality data owners and achieves the highest expected revenue for a data federation.
{"title":"A Fairness-aware Incentive Scheme for Federated Learning","authors":"Han Yu, Zelei Liu, Yang Liu, Tianjian Chen, Mingshu Cong, Xi Weng, D. Niyato, Qiang Yang","doi":"10.1145/3375627.3375840","DOIUrl":"https://doi.org/10.1145/3375627.3375840","url":null,"abstract":"In federated learning (FL), data owners \"share\" their local data in a privacy preserving manner in order to build a federated model, which in turn, can be used to generate revenues for the participants. However, in FL involving business participants, they might incur significant costs if several competitors join the same federation. Furthermore, the training and commercialization of the models will take time, resulting in delays before the federation accumulates enough budget to pay back the participants. The issues of costs and temporary mismatch between contributions and rewards have not been addressed by existing payoff-sharing schemes. In this paper, we propose the Federated Learning Incentivizer (FLI) payoff-sharing scheme. The scheme dynamically divides a given budget in a context-aware manner among data owners in a federation by jointly maximizing the collective utility while minimizing the inequality among the data owners, in terms of the payoff gained by them and the waiting time for receiving payoff. Extensive experimental comparisons with five state-of-the-art payoff-sharing schemes show that FLI is the most attractive to high quality data owners and achieves the highest expected revenue for a data federation.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"50 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86110274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proposal for Type Classification for Building Trust in Medical Artificial Intelligence Systems
Arisa Ema, Katsue Nagakura, Takanori Fujita
DOI: 10.1145/3375627.3375846 | Published: 2020-02-07

This paper proposes the establishment of Medical Artificial Intelligence (AI) Types (MA Types), which classify AI in medicine not only by technical system requirements but also by the implications for healthcare workers' roles and for users/patients. MA Types can be useful for promoting discussion regarding the purpose and application of AI at the clinical site. Although MA Types are based on the current technologies and regulations in Japan, this does not preclude future reform of those technologies and regulations. MA Types aim to facilitate discussion among physicians, healthcare workers, engineers, the public/patients, and policymakers about AI systems in medical practice.
{"title":"Proposal for Type Classification for Building Trust in Medical Artificial Intelligence Systems","authors":"Arisa Ema, Katsue Nagakura, Takanori Fujita","doi":"10.1145/3375627.3375846","DOIUrl":"https://doi.org/10.1145/3375627.3375846","url":null,"abstract":"This paper proposes the establishment of Medical Artificial Intelligence (AI) Types (MA Types)\"that classify AI in medicine not only by technical system requirements but also implications to healthcare workers' roles and users/patients. MA Types can be useful to promote discussion regarding the purpose and application of the clinical site. Although MA Types are based on the current technologies and regulations in Japan, but that does not hinder the potential reform of the technologies and regulations. MA Types aims to facilitate discussions among physicians, healthcare workers, engineers, public/patients and policymakers on AI systems in medical practices.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82747480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}