Federated Hyperparameter Optimisation with Flower and Optuna

Applied Computing Review · IF 0.4 · Q4 (Computer Science, Information Systems) · Pub Date: 2023-03-27 · DOI: 10.1145/3555776.3577847
J. Parra-Ullauri, Xunzheng Zhang, A. Bravalheri, R. Nejabati, D. Simeonidou
Citations: 0

Abstract

Federated learning (FL) is an emerging distributed machine learning technique in which multiple clients collaborate to learn a model under the management of a central server. An FL system depends on a set of initial conditions (i.e., hyperparameters) that affect its performance, yet choosing good hyperparameters for the central server and the clients is a challenging problem. Hyperparameter tuning in FL often requires manual or automated searches for optimal values, and a notable limitation is the high cost of evaluating candidate configurations on server and client models, which makes tuning computationally expensive and time-consuming. We propose an implementation that integrates the FL framework Flower with the hyperparameter optimisation framework Optuna for automated and efficient hyperparameter optimisation (HPO) in FL. Through this combination, hyperparameters on both the clients and the server can be tuned online, with the aim of finding optimal values at runtime. We introduce the HPO factor, which describes the number of rounds over which HPO takes place, and the HPO rate, which defines how frequently the hyperparameters are updated and can also be used for pruning. HPO is managed by the FL server, which updates the clients' hyperparameters at the HPO rate using state-of-the-art optimisation algorithms provided by Optuna. We tested our approach by updating multiple client models simultaneously on popular image-recognition datasets, producing promising results compared to baselines.
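The abstract describes a concrete control loop: the server asks Optuna for hyperparameter values, pushes them to the clients for a round of training, and feeds the aggregated evaluation loss back so the sampler can propose better values. The paper's source code is not reproduced here; the following is a minimal sketch of how that loop could be wired up with the public APIs of Flower and Optuna, assuming a FedAvg strategy, a single tuned hyperparameter (the clients' learning rate), and illustrative values for the HPO factor and HPO rate. All function names and constants below are hypothetical.

```python
# Minimal sketch (not the authors' implementation): a Flower server that
# uses Optuna's ask-and-tell interface to tune the clients' learning rate
# online. HPO_FACTOR (rounds during which HPO runs) and HPO_RATE (how often
# a new value is suggested) follow the paper's terminology; the values here
# are illustrative assumptions.
import flwr as fl
import optuna

HPO_FACTOR = 20   # HPO takes place during the first 20 rounds (assumption)
HPO_RATE = 2      # ask Optuna for a new value every 2 rounds (assumption)

study = optuna.create_study(direction="minimize")
current_trial = None
current_lr = 0.01  # starting value before the first suggestion

def fit_config(server_round: int):
    """Config dict Flower sends to every client at the start of a round."""
    global current_trial, current_lr
    if server_round <= HPO_FACTOR and (server_round - 1) % HPO_RATE == 0:
        current_trial = study.ask()
        current_lr = current_trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    return {"lr": current_lr}

def aggregate_eval_metrics(results):
    """Weight-average the clients' losses and report them back to Optuna."""
    global current_trial
    total = sum(n for n, _ in results)
    loss = sum(n * m["loss"] for n, m in results) / total
    if current_trial is not None:
        study.tell(current_trial, loss)  # close the trial for this configuration
        current_trial = None
    return {"loss": loss}

strategy = fl.server.strategy.FedAvg(
    on_fit_config_fn=fit_config,  # pushes the suggested lr to clients
    evaluate_metrics_aggregation_fn=aggregate_eval_metrics,
)

fl.server.start_server(
    server_address="0.0.0.0:8080",
    config=fl.server.ServerConfig(num_rounds=50),
    strategy=strategy,
)
```

On the client side, the suggested values arrive in the `config` dictionary that Flower passes to `fit`, so a client only needs to read them before local training. Again a sketch: `get_model_weights`, `set_model_weights`, `train_one_round`, and `evaluate_model` stand in for whatever local training code a real client would have.

```python
# Companion sketch: a Flower NumPyClient that applies the learning rate the
# server suggested for the current round. The helper functions are
# hypothetical stand-ins for the client's actual model and training code.
import flwr as fl

class TunedClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return get_model_weights()                     # hypothetical helper

    def fit(self, parameters, config):
        set_model_weights(parameters)                  # hypothetical helper
        lr = float(config["lr"])                       # value chosen by Optuna on the server
        num_examples = train_one_round(lr)             # hypothetical local training loop
        return get_model_weights(), num_examples, {}

    def evaluate(self, parameters, config):
        set_model_weights(parameters)
        loss, num_examples = evaluate_model()          # hypothetical evaluation
        return loss, num_examples, {"loss": loss}      # "loss" feeds the server's aggregation

fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=TunedClient())
```

Because the server both asks for values (`study.ask`) and reports the aggregated loss back (`study.tell`), Optuna's samplers can steer the search round by round at runtime, which matches the online tuning behaviour the abstract describes.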