{"title":"FedLF: 联合长尾学习中的自适应对数调整和特征优化","authors":"Xiuhua Lu, Peng Li, Xuefeng Jiang","doi":"arxiv-2409.12105","DOIUrl":null,"url":null,"abstract":"Federated learning offers a paradigm to the challenge of preserving privacy\nin distributed machine learning. However, datasets distributed across each\nclient in the real world are inevitably heterogeneous, and if the datasets can\nbe globally aggregated, they tend to be long-tailed distributed, which greatly\naffects the performance of the model. The traditional approach to federated\nlearning primarily addresses the heterogeneity of data among clients, yet it\nfails to address the phenomenon of class-wise bias in global long-tailed data.\nThis results in the trained model focusing on the head classes while neglecting\nthe equally important tail classes. Consequently, it is essential to develop a\nmethodology that considers classes holistically. To address the above problems,\nwe propose a new method FedLF, which introduces three modifications in the\nlocal training phase: adaptive logit adjustment, continuous class centred\noptimization, and feature decorrelation. We compare seven state-of-the-art\nmethods with varying degrees of data heterogeneity and long-tailed\ndistribution. Extensive experiments on benchmark datasets CIFAR-10-LT and\nCIFAR-100-LT demonstrate that our approach effectively mitigates the problem of\nmodel performance degradation due to data heterogeneity and long-tailed\ndistribution. our code is available at https://github.com/18sym/FedLF.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning\",\"authors\":\"Xiuhua Lu, Peng Li, Xuefeng Jiang\",\"doi\":\"arxiv-2409.12105\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning offers a paradigm to the challenge of preserving privacy\\nin distributed machine learning. However, datasets distributed across each\\nclient in the real world are inevitably heterogeneous, and if the datasets can\\nbe globally aggregated, they tend to be long-tailed distributed, which greatly\\naffects the performance of the model. The traditional approach to federated\\nlearning primarily addresses the heterogeneity of data among clients, yet it\\nfails to address the phenomenon of class-wise bias in global long-tailed data.\\nThis results in the trained model focusing on the head classes while neglecting\\nthe equally important tail classes. Consequently, it is essential to develop a\\nmethodology that considers classes holistically. To address the above problems,\\nwe propose a new method FedLF, which introduces three modifications in the\\nlocal training phase: adaptive logit adjustment, continuous class centred\\noptimization, and feature decorrelation. We compare seven state-of-the-art\\nmethods with varying degrees of data heterogeneity and long-tailed\\ndistribution. Extensive experiments on benchmark datasets CIFAR-10-LT and\\nCIFAR-100-LT demonstrate that our approach effectively mitigates the problem of\\nmodel performance degradation due to data heterogeneity and long-tailed\\ndistribution. 
our code is available at https://github.com/18sym/FedLF.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.12105\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.12105","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning
Federated learning offers a paradigm for addressing the challenge of preserving privacy in distributed machine learning. However, the datasets held by individual clients in the real world are inevitably heterogeneous, and when aggregated globally they tend to follow a long-tailed distribution, which severely degrades model performance. Traditional federated learning methods primarily address data heterogeneity among clients but fail to address the class-wise bias that arises in globally long-tailed data. As a result, the trained model focuses on the head classes while neglecting the equally important tail classes. It is therefore essential to develop a methodology that considers all classes holistically. To address these problems, we propose a new method, FedLF, which introduces three modifications in the local training phase: adaptive logit adjustment, continuous class centred optimization, and feature decorrelation. We compare FedLF with seven state-of-the-art methods under varying degrees of data heterogeneity and long-tailed distribution. Extensive experiments on the benchmark datasets CIFAR-10-LT and CIFAR-100-LT demonstrate that our approach effectively mitigates the degradation of model performance caused by data heterogeneity and long-tailed distribution. Our code is available at https://github.com/18sym/FedLF.
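
The abstract does not spell out how FedLF's adaptive logit adjustment works, so the sketch below only illustrates the generic logit-adjustment idea it builds on: shifting logits by the log class prior so that tail classes are not drowned out by head classes. The function name, the `tau` temperature, and the toy counts are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of generic logit adjustment, NOT FedLF's adaptive variant;
# the class-prior shift and tau temperature here are illustrative assumptions.
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_counts, tau=1.0):
    """Cross-entropy with logits shifted by the log class prior.

    logits:       (batch, num_classes) raw model outputs
    targets:      (batch,) integer class labels
    class_counts: (num_classes,) per-class sample counts in the training set
    tau:          temperature controlling the strength of the adjustment
    """
    prior = class_counts.float() / class_counts.sum()
    # Adding log-priors handicaps head classes and boosts tail classes.
    adjusted = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted, targets)

# Toy example: a 3-class problem with a long-tailed count vector.
counts = torch.tensor([5000, 500, 50])
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
loss = logit_adjusted_loss(logits, targets, counts)
```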
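Similarly, the abstract names feature decorrelation without defining it; a common generic formulation penalizes the off-diagonal entries of the batch feature correlation matrix, as sketched below. This is only an assumed illustration of the idea, not FedLF's exact loss.

```python
# A hedged sketch of a generic feature-decorrelation penalty; the paper's
# exact formulation is not given in the abstract.
import torch

def decorrelation_loss(features, eps=1e-8):
    """Penalize off-diagonal entries of the batch feature correlation matrix.

    features: (batch, dim) representations from the backbone, batch > 1
    """
    z = features - features.mean(dim=0, keepdim=True)
    z = z / (z.std(dim=0, keepdim=True) + eps)     # standardize each dimension
    corr = (z.T @ z) / z.shape[0]                  # (dim, dim) correlation matrix
    off_diag = corr - torch.diag(torch.diagonal(corr))
    return (off_diag ** 2).sum() / corr.shape[0]
```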