iSample: Intelligent Client Sampling in Federated Learning

H. Imani, Jeff Anderson, T. El-Ghazawi
{"title":"iSample: Intelligent Client Sampling in Federated Learning","authors":"H. Imani, Jeff Anderson, T. El-Ghazawi","doi":"10.1109/icfec54809.2022.00015","DOIUrl":null,"url":null,"abstract":"The pervasiveness of AI in society has made machine learning (ML) an invaluable tool for mobile and internet-of-things (IoT) devices. While the aggregate amount of data yielded by those devices is sufficient for training an accurate model, the data available to any one device is limited. Therefore, augmenting the learning at any of the devices with the experience from observations associated with the rest of the devices will be necessary. This, however, can dramatically increase the bandwidth requirements. Prior work has led to the development of Federated Learning (FL), where instead of exchanging data, client devices can only share weights to learn from one another. However, het-erogeneity in device resource availability and network conditions still impose limitations on training performance. In order to improve performance while maintaining good levels of accuracy, we introduce iSample. iSample, an intelligent sampling technique, selects clients by jointly considering known network performance and model quality parameters, allowing the minimization of training time. We compare iSample with other federated learning approaches and show that iSample improves the performance of the global model, especially in the earlier stages of training, while decreasing the training time for both CNN and VGG by 27% and 39%, respectively.","PeriodicalId":423599,"journal":{"name":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icfec54809.2022.00015","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

The pervasiveness of AI in society has made machine learning (ML) an invaluable tool for mobile and internet-of-things (IoT) devices. While the aggregate amount of data yielded by those devices is sufficient for training an accurate model, the data available to any one device is limited. Therefore, the learning at each device must be augmented with the experience gained from observations at the other devices. This, however, can dramatically increase bandwidth requirements. Prior work has led to the development of Federated Learning (FL), where client devices share only model weights, rather than raw data, to learn from one another. However, heterogeneity in device resource availability and network conditions still limits training performance. To improve performance while maintaining good accuracy, we introduce iSample, an intelligent sampling technique that selects clients by jointly considering known network performance and model quality parameters, thereby minimizing training time. We compare iSample with other federated learning approaches and show that iSample improves the performance of the global model, especially in the earlier stages of training, while decreasing the training time for CNN and VGG models by 27% and 39%, respectively.
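The abstract describes iSample's selection rule only at a high level. As a rough sketch of what a joint network-and-model-quality score could look like, the Python fragment below ranks simulated clients by a weighted sum of a throughput term and a local-loss term. The field names, the alpha/beta weights, and the linear combination are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: the paper does not state its exact scoring rule here,
# so the keys and the alpha/beta trade-off below are hypothetical.
import random

def select_clients(clients, num_selected, alpha=0.5, beta=0.5):
    """Rank clients by a joint score of network performance and model quality.

    `clients` is a list of dicts with hypothetical keys:
      - 'throughput': estimated link throughput (higher is better)
      - 'loss': last reported local training loss of the client
    """
    max_tp = max(c['throughput'] for c in clients)
    max_loss = max(c['loss'] for c in clients)

    def score(c):
        net_term = c['throughput'] / max_tp      # favor fast, well-connected clients
        quality_term = c['loss'] / max_loss      # favor clients with high local loss,
                                                 # whose updates may help the global model most
        return alpha * net_term + beta * quality_term

    return sorted(clients, key=score, reverse=True)[:num_selected]

# Example round: pick 3 of 6 simulated clients.
clients = [{'id': i,
            'throughput': random.uniform(1, 10),   # Mbit/s (simulated)
            'loss': random.uniform(0.1, 2.0)}      # local loss (simulated)
           for i in range(6)]
print([c['id'] for c in select_clients(clients, 3)])
```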