An Efficient Multi-Model Training Algorithm for Federated Learning

Congzhou Li, Chunxi Li, Yongxiang Zhao, Baoxian Zhang, Cheng Li
{"title":"一种高效的多模型联邦学习训练算法","authors":"Congzhou Li, Chunxi Li, Yongxiang Zhao, Baoxian Zhang, Cheng Li","doi":"10.1109/GLOBECOM46510.2021.9685230","DOIUrl":null,"url":null,"abstract":"How to effectively organize various heterogeneous clients for effective model training has been a critical issue in federated learning. Existing algorithms in this aspect are all for single model training and are not suitable for parallel multi-model training due to the inefficient utilization of resources at the powerful clients. In this paper, we study the issue of multi-model training in federated learning. The objective is to effectively utilize the heterogeneous resources at clients for parallel multi-model training and therefore maximize the overall training efficiency while ensuring a certain fairness among individual models. For this purpose, we introduce a logarithmic function to characterize the relationship between the model training accuracy and the number of clients involved in the training based on measurement results. We accordingly formulate the multi-model training as an optimization problem to find an assignment to maximize the overall training efficiency while ensuring a log fairness among individual models. We design a Logarithmic Fairness based Multi-model Balancing algorithm (LFMB), which iteratively replaces the already assigned models with a not-assigned model at each client for improving the training efficiency, until no such improvement can be found. Numerical results demonstrate the significantly high performance of LFMB in terms of overall training efficiency and fairness.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"An Efficient Multi-Model Training Algorithm for Federated Learning\",\"authors\":\"Congzhou Li, Chunxi Li, Yongxiang Zhao, Baoxian Zhang, Cheng Li\",\"doi\":\"10.1109/GLOBECOM46510.2021.9685230\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"How to effectively organize various heterogeneous clients for effective model training has been a critical issue in federated learning. Existing algorithms in this aspect are all for single model training and are not suitable for parallel multi-model training due to the inefficient utilization of resources at the powerful clients. In this paper, we study the issue of multi-model training in federated learning. The objective is to effectively utilize the heterogeneous resources at clients for parallel multi-model training and therefore maximize the overall training efficiency while ensuring a certain fairness among individual models. For this purpose, we introduce a logarithmic function to characterize the relationship between the model training accuracy and the number of clients involved in the training based on measurement results. We accordingly formulate the multi-model training as an optimization problem to find an assignment to maximize the overall training efficiency while ensuring a log fairness among individual models. We design a Logarithmic Fairness based Multi-model Balancing algorithm (LFMB), which iteratively replaces the already assigned models with a not-assigned model at each client for improving the training efficiency, until no such improvement can be found. 
Numerical results demonstrate the significantly high performance of LFMB in terms of overall training efficiency and fairness.\",\"PeriodicalId\":200641,\"journal\":{\"name\":\"2021 IEEE Global Communications Conference (GLOBECOM)\",\"volume\":\"72 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Global Communications Conference (GLOBECOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/GLOBECOM46510.2021.9685230\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Global Communications Conference (GLOBECOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOBECOM46510.2021.9685230","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

How to effectively organize heterogeneous clients for model training is a critical issue in federated learning. Existing algorithms in this area all target single-model training and are not suitable for parallel multi-model training because they use the resources of powerful clients inefficiently. In this paper, we study multi-model training in federated learning. The objective is to effectively utilize the heterogeneous resources at clients for parallel multi-model training, thereby maximizing overall training efficiency while ensuring a certain fairness among individual models. For this purpose, based on measurement results, we introduce a logarithmic function to characterize the relationship between model training accuracy and the number of clients involved in the training. We accordingly formulate multi-model training as an optimization problem that seeks an assignment maximizing overall training efficiency while ensuring logarithmic fairness among individual models. We design a Logarithmic Fairness based Multi-model Balancing algorithm (LFMB), which iteratively replaces already assigned models with a not-assigned model at each client whenever this improves training efficiency, until no such improvement can be found. Numerical results demonstrate the high performance of LFMB in terms of both overall training efficiency and fairness.
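
The abstract describes the formulation only verbally; the exact objective and constraints are given in the paper itself. As a rough illustration of what such a formulation could look like, here is a minimal sketch in which every symbol (the fitted accuracy curve a_m, the assignment variables x_{c,m}, the per-model resource demand r_m, and the client budget R_c) is an assumption introduced for illustration, not notation taken from the paper:

```latex
% Hypothetical formalization; all symbols below are illustrative
% assumptions, not notation taken from the paper.
\begin{aligned}
  &a_m(n_m) \approx \alpha_m + \beta_m \log n_m
    &&\text{(fitted accuracy vs.\ number of clients $n_m$)}\\
  &\max_{x}\; \sum_{m \in \mathcal{M}} \log a_m\!\Bigl(\sum_{c \in \mathcal{C}} x_{c,m}\Bigr)
    &&\text{(log-fair overall training efficiency)}\\
  &\;\text{s.t.}\;\; \sum_{m \in \mathcal{M}} r_m\, x_{c,m} \le R_c
    \quad \forall c \in \mathcal{C},
    \qquad x_{c,m} \in \{0,1\}.
\end{aligned}
```

Here x_{c,m} = 1 when client c trains model m, r_m is model m's per-client resource cost, and R_c is client c's resource budget. The outer logarithm captures the log fairness mentioned in the abstract: because of diminishing returns, assigning an extra client to an under-served model raises the objective more than assigning one to an already well-served model.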
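
Likewise, the abstract characterizes LFMB only at a high level: an iterative procedure that, at each client, replaces an already assigned model with a not-assigned one whenever that improves the objective, stopping when no such swap exists. The sketch below is one hypothetical reading of that loop, assuming the simple log-fair objective above and an abstract capacity/demand resource model; the function names, data structures, and initial greedy packing are all assumptions, not the paper's implementation.

```python
import math


def log_fair_objective(assignment, models):
    """Sum of log(1 + #clients training m): a stand-in for the paper's
    log-fair efficiency objective (the exact form is not given in the
    abstract)."""
    counts = {m: 0 for m in models}
    for assigned in assignment.values():
        for m in assigned:
            counts[m] += 1
    return sum(math.log1p(counts[m]) for m in models)


def lfmb_sketch(clients, models, capacity, demand):
    """Hypothetical LFMB-style loop: greedily pack models onto clients,
    then at each client try replacing an already assigned model with a
    not-assigned one, keeping any swap that raises the objective, until
    no swap helps. `capacity`/`demand` are assumed abstractions of
    client resources and per-model training cost."""
    assignment = {c: set() for c in clients}
    load = {c: 0.0 for c in clients}

    # Initial feasible packing: the first client with room takes the model.
    for m in models:
        for c in clients:
            if load[c] + demand[m] <= capacity[c]:
                assignment[c].add(m)
                load[c] += demand[m]
                break

    best = log_fair_objective(assignment, models)
    improved = True
    while improved:
        improved = False
        for c in clients:
            for old in list(assignment[c]):  # snapshot: the set mutates below
                for new in models:
                    if new in assignment[c]:
                        continue
                    if load[c] - demand[old] + demand[new] > capacity[c]:
                        continue
                    # Tentatively swap `old` -> `new` at client c.
                    assignment[c].remove(old)
                    assignment[c].add(new)
                    score = log_fair_objective(assignment, models)
                    if score > best + 1e-12:  # keep strictly improving swaps
                        best = score
                        load[c] += demand[new] - demand[old]
                        improved = True
                        break  # `old` is no longer assigned at this client
                    assignment[c].remove(new)  # revert the swap
                    assignment[c].add(old)
    return assignment


if __name__ == "__main__":
    clients = ["c1", "c2", "c3"]
    models = ["m1", "m2", "m3", "m4"]
    capacity = {"c1": 2.0, "c2": 1.0, "c3": 1.0}
    demand = {m: 1.0 for m in models}
    print(lfmb_sketch(clients, models, capacity, demand))
```

Because every accepted swap strictly increases a bounded objective, this local search terminates, matching the abstract's stopping condition of "until no such improvement can be found".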