FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning

Xiuhua Lu, Peng Li, Xuefeng Jiang
Journal: arXiv - CS - Machine Learning
DOI: arxiv-2409.12105 (https://doi.org/arxiv-2409.12105)
Published: 2024-09-18 (Journal Article)
Citations: 0

Abstract

Federated learning offers a paradigm for addressing the challenge of preserving privacy in distributed machine learning. However, the datasets held by individual clients in the real world are inevitably heterogeneous, and when they are aggregated globally they tend to follow a long-tailed distribution, which greatly degrades model performance. Traditional federated learning approaches primarily address data heterogeneity among clients, yet they fail to address class-wise bias in globally long-tailed data. As a result, the trained model focuses on the head classes while neglecting the equally important tail classes. It is therefore essential to develop a methodology that considers all classes holistically. To address these problems, we propose a new method, FedLF, which introduces three modifications in the local training phase: adaptive logit adjustment, continuous class-centred optimization, and feature decorrelation. We compare against seven state-of-the-art methods under varying degrees of data heterogeneity and long-tailed distribution. Extensive experiments on the benchmark datasets CIFAR-10-LT and CIFAR-100-LT demonstrate that our approach effectively mitigates the model performance degradation caused by data heterogeneity and long-tailed distribution. Our code is available at https://github.com/18sym/FedLF.
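The abstract names adaptive logit adjustment but does not detail it. In the long-tailed learning literature, classical logit adjustment offsets each class's logit by the log of its class prior, so head classes (with many samples) are penalized relative to tail classes. The sketch below illustrates only that baseline idea, not FedLF's exact adaptive formulation; the `tau` temperature and the per-class counts are illustrative assumptions.

```python
import numpy as np

def adjusted_logits(logits: np.ndarray, class_counts: np.ndarray,
                    tau: float = 1.0) -> np.ndarray:
    """Offset logits by -tau * log(class prior).

    Classes with large counts receive a larger downward offset than
    rare classes, counteracting the bias a long-tailed label
    distribution induces in the classifier.
    """
    priors = class_counts / class_counts.sum()
    return logits - tau * np.log(priors)

# With equal raw logits, the tail class (1 sample) ends up with a
# larger adjusted logit than the head class (100 samples).
raw = np.zeros(2)
adj = adjusted_logits(raw, np.array([100.0, 1.0]))
```

Under this scheme the adjustment depends only on the label distribution, which is why a federated variant must adapt it: each client sees a different, skewed local distribution.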
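The abstract likewise only names feature decorrelation. A common way to realize this idea is to penalize the off-diagonal entries of the batch feature correlation matrix, pushing feature dimensions to carry non-redundant information. The following is a hedged sketch of that generic technique, not necessarily FedLF's exact loss.

```python
import numpy as np

def decorrelation_loss(features: np.ndarray) -> float:
    """Sum of squared off-diagonal entries of the feature correlation matrix.

    features: (batch_size, feature_dim) array. Each dimension is
    standardized first, so the diagonal of `corr` is ~1 and each
    off-diagonal entry measures pairwise redundancy between dimensions.
    """
    f = features - features.mean(axis=0, keepdims=True)
    f = f / (f.std(axis=0, keepdims=True) + 1e-8)
    corr = (f.T @ f) / f.shape[0]
    off_diag = corr - np.diag(np.diag(corr))
    return float((off_diag ** 2).sum())

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 8))       # roughly independent dimensions
dup = np.repeat(x[:, :1], 8, axis=1)     # fully redundant dimensions
```

Fully redundant features yield a large penalty, while roughly independent ones yield a penalty near zero, so minimizing this term during local training discourages collapsed, correlated representations.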