Federated Fairness Analytics: Quantifying Fairness in Federated Learning
Oscar Dilley, Juan Marcelo Parra-Ullauri, Rasheed Hussain, Dimitra Simeonidou
arXiv:2408.08214 (arXiv - CS - Neural and Evolutionary Computing), published 2024-08-15
Abstract
Federated Learning (FL) is a privacy-enhancing technology for distributed ML. By training models locally and aggregating updates, a federation learns together while bypassing centralised data collection. FL is increasingly popular in healthcare, finance and personal computing. However, it inherits fairness challenges from classical ML and introduces new ones resulting from differences in data quality, client participation, communication constraints, aggregation methods and underlying hardware. Fairness remains an unresolved issue in FL, and the community has identified an absence of succinct definitions and metrics to quantify fairness. To address this, we propose Federated Fairness Analytics, a methodology for measuring fairness. Our definition of fairness comprises four notions with novel, corresponding metrics. They are symptomatically defined and leverage techniques originating from XAI, cooperative game theory and network engineering. We tested a range of experimental settings, varying the FL approach, ML task and data settings. The results show that statistical heterogeneity and client participation affect fairness, and that fairness-conscious approaches such as Ditto and q-FedAvg marginally improve fairness-performance trade-offs. Using our techniques, FL practitioners can uncover previously unobtainable insights into their system's fairness, at differing levels of granularity, in order to address fairness challenges in FL. We have open-sourced our work at: https://github.com/oscardilley/federated-fairness.
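
To make the kinds of measures the abstract alludes to more concrete, the sketch below illustrates two related ideas in Python: Jain's fairness index, a standard fairness measure from network engineering applied here to per-client accuracies, and a loss-reweighted aggregation in the spirit of q-FedAvg. This is a minimal illustration under stated assumptions, not the authors' implementation; the function names and numeric values are hypothetical, and the paper's actual metrics are defined in the article and the linked repository.

```python
import numpy as np

def jains_fairness_index(values):
    """Jain's fairness index: ranges from 1/n (one client gets everything)
    to 1.0 (perfectly even distribution across clients)."""
    values = np.asarray(values, dtype=float)
    return values.sum() ** 2 / (len(values) * np.square(values).sum())

def q_weighted_average(client_updates, client_losses, q=1.0):
    """Toy re-weighting in the spirit of q-FedAvg (not the exact update rule):
    clients with higher local loss receive proportionally more weight,
    nudging the global model towards a more uniform performance profile."""
    losses = np.asarray(client_losses, dtype=float)
    weights = losses ** q
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

# Hypothetical per-client test accuracies after one FL round.
accuracies = [0.91, 0.88, 0.62, 0.85]
print(f"Jain's index over client accuracies: {jains_fairness_index(accuracies):.3f}")

# Stand-in model deltas and losses, only to exercise the weighting.
updates = [np.ones(3) * a for a in accuracies]
losses = [1.0 - a for a in accuracies]
print("q-weighted aggregate:", q_weighted_average(updates, losses, q=2.0))
```

In this toy example the index is close to 1 despite the under-performing third client, which shows why the paper argues for multiple notions of fairness at differing levels of granularity rather than a single scalar score.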