A Web-Based Solution for Federated Learning with LLM-Based Automation

Chamith Mawela, Chaouki Ben Issaid, Mehdi Bennis
{"title":"A Web-Based Solution for Federated Learning with LLM-Based Automation","authors":"Chamith Mawela, Chaouki Ben Issaid, Mehdi Bennis","doi":"arxiv-2408.13010","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) offers a promising approach for collaborative machine\nlearning across distributed devices. However, its adoption is hindered by the\ncomplexity of building reliable communication architectures and the need for\nexpertise in both machine learning and network programming. This paper presents\na comprehensive solution that simplifies the orchestration of FL tasks while\nintegrating intent-based automation. We develop a user-friendly web application\nsupporting the federated averaging (FedAvg) algorithm, enabling users to\nconfigure parameters through an intuitive interface. The backend solution\nefficiently manages communication between the parameter server and edge nodes.\nWe also implement model compression and scheduling algorithms to optimize FL\nperformance. Furthermore, we explore intent-based automation in FL using a\nfine-tuned Language Model (LLM) trained on a tailored dataset, allowing users\nto conduct FL tasks using high-level prompts. We observe that the LLM-based\nautomated solution achieves comparable test accuracy to the standard web-based\nsolution while reducing transferred bytes by up to 64% and CPU time by up to\n46% for FL tasks. Also, we leverage the neural architecture search (NAS) and\nhyperparameter optimization (HPO) using LLM to improve the performance. 
We\nobserve that by using this approach test accuracy can be improved by 10-20% for\nthe carried out FL tasks.","PeriodicalId":501172,"journal":{"name":"arXiv - STAT - Applications","volume":"23 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.13010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Federated Learning (FL) offers a promising approach for collaborative machine learning across distributed devices. However, its adoption is hindered by the complexity of building reliable communication architectures and the need for expertise in both machine learning and network programming. This paper presents a comprehensive solution that simplifies the orchestration of FL tasks while integrating intent-based automation. We develop a user-friendly web application supporting the federated averaging (FedAvg) algorithm, enabling users to configure parameters through an intuitive interface. The backend efficiently manages communication between the parameter server and the edge nodes. We also implement model compression and scheduling algorithms to optimize FL performance. Furthermore, we explore intent-based automation in FL using a large language model (LLM) fine-tuned on a tailored dataset, allowing users to launch FL tasks from high-level prompts. We observe that the LLM-based automated solution achieves test accuracy comparable to the standard web-based solution while reducing transferred bytes by up to 64% and CPU time by up to 46% for FL tasks. We also leverage neural architecture search (NAS) and hyperparameter optimization (HPO) via the LLM to further improve performance, observing test-accuracy gains of 10-20% on the FL tasks carried out.
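The FedAvg aggregation step the abstract refers to can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the function name and the flat parameter-vector representation are hypothetical, and each client's update is weighted by its local sample count, as in the original FedAvg formulation.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: sample-size-weighted average of client parameters.

    client_weights: one flat parameter vector (list of floats) per client.
    client_sizes:   number of local training samples per client.
    Returns the aggregated global parameter vector.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]


# Two clients with a 2-parameter model; the second client holds 3x the data,
# so its parameters dominate the weighted average.
global_params = fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
print(global_params)  # [2.5, 3.5]
```

In a full system, the parameter server would run this aggregation each communication round after collecting locally trained updates from the edge nodes.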