Democratizing Machine Learning: Resilient Distributed Learning with Heterogeneous Participants

Karim Boubouh, Amine Boussetta, Nirupam Gupta, Alexandre Maurer, Rafael Pinot
{"title":"民主化机器学习:异质参与者的弹性分布式学习","authors":"Karim Boubouh, Amine Boussetta, Nirupam Gupta, Alexandre Maurer, Rafael Pinot","doi":"10.1109/SRDS55811.2022.00019","DOIUrl":null,"url":null,"abstract":"The increasing prevalence of personal devices motivates the design of algorithms that can leverage their computing power, together with the data they generate, in order to build privacy-preserving and effective machine learning models. However, traditional distributed learning algorithms impose a uniform workload on all participating devices, most often discarding the weakest participants. This not only induces a suboptimal use of available computational resources, but also significantly reduces the quality of the learning process, as data held by the slowest devices is discarded from the procedure. This paper proposes HgO, a distributed learning scheme with parameterizable iteration costs that can be adjusted to the computational capabilities of different devices. HgO encourages the participation of slower devices, thereby improving the accuracy of the model when the participants do not share the same dataset. When combined with a robust aggregation rule, HgO can tolerate some level of Byzantine behavior, depending on the hardware profile of the devices (we prove, for the first time, a trade-off between Byzantine tolerance and hardware heterogeneity). We also demonstrate the convergence of HgO, theoretically and empirically, without assuming any specific partitioning of the data over the devices. We present an exhaustive set of experiments, evaluating the performance of HgO on several classification tasks and highlighting the importance of incorporating slow devices when learning in a Byzantine-prone environment with heterogeneous participants.","PeriodicalId":143115,"journal":{"name":"2022 41st International Symposium on Reliable Distributed Systems (SRDS)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Democratizing Machine Learning: Resilient Distributed Learning with Heterogeneous Participants\",\"authors\":\"Karim Boubouh, Amine Boussetta, Nirupam Gupta, Alexandre Maurer, Rafael Pinot\",\"doi\":\"10.1109/SRDS55811.2022.00019\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The increasing prevalence of personal devices motivates the design of algorithms that can leverage their computing power, together with the data they generate, in order to build privacy-preserving and effective machine learning models. However, traditional distributed learning algorithms impose a uniform workload on all participating devices, most often discarding the weakest participants. This not only induces a suboptimal use of available computational resources, but also significantly reduces the quality of the learning process, as data held by the slowest devices is discarded from the procedure. This paper proposes HgO, a distributed learning scheme with parameterizable iteration costs that can be adjusted to the computational capabilities of different devices. HgO encourages the participation of slower devices, thereby improving the accuracy of the model when the participants do not share the same dataset. 
When combined with a robust aggregation rule, HgO can tolerate some level of Byzantine behavior, depending on the hardware profile of the devices (we prove, for the first time, a trade-off between Byzantine tolerance and hardware heterogeneity). We also demonstrate the convergence of HgO, theoretically and empirically, without assuming any specific partitioning of the data over the devices. We present an exhaustive set of experiments, evaluating the performance of HgO on several classification tasks and highlighting the importance of incorporating slow devices when learning in a Byzantine-prone environment with heterogeneous participants.\",\"PeriodicalId\":143115,\"journal\":{\"name\":\"2022 41st International Symposium on Reliable Distributed Systems (SRDS)\",\"volume\":\"75 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 41st International Symposium on Reliable Distributed Systems (SRDS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SRDS55811.2022.00019\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 41st International Symposium on Reliable Distributed Systems (SRDS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SRDS55811.2022.00019","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The increasing prevalence of personal devices motivates the design of algorithms that can leverage their computing power, together with the data they generate, in order to build privacy-preserving and effective machine learning models. However, traditional distributed learning algorithms impose a uniform workload on all participating devices, most often discarding the weakest participants. This not only induces a suboptimal use of available computational resources, but also significantly reduces the quality of the learning process, as data held by the slowest devices is discarded from the procedure. This paper proposes HgO, a distributed learning scheme with parameterizable iteration costs that can be adjusted to the computational capabilities of different devices. HgO encourages the participation of slower devices, thereby improving the accuracy of the model when the participants do not share the same dataset. When combined with a robust aggregation rule, HgO can tolerate some level of Byzantine behavior, depending on the hardware profile of the devices (we prove, for the first time, a trade-off between Byzantine tolerance and hardware heterogeneity). We also demonstrate the convergence of HgO, theoretically and empirically, without assuming any specific partitioning of the data over the devices. We present an exhaustive set of experiments, evaluating the performance of HgO on several classification tasks and highlighting the importance of incorporating slow devices when learning in a Byzantine-prone environment with heterogeneous participants.
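To make the two ideas in the abstract concrete, the sketch below simulates heterogeneity-aware distributed learning in Python with NumPy. It is not the authors' HgO implementation: the per-device capacity fractions, the partial-coordinate gradients, the coordinate-wise median aggregation, and all names (`partial_gradient`, `coordinate_wise_median`, `capacity`) are illustrative assumptions, chosen only to show how slow devices can contribute a reduced amount of work each round while a robust aggregation rule limits the influence of Byzantine updates.

```python
# Minimal illustrative sketch (not the paper's implementation) of two ideas from the
# abstract: (1) each device does work proportional to its hardware capacity, modeled here
# as computing the gradient on only a subset of coordinates, and (2) the server aggregates
# updates with a robust rule (coordinate-wise median) to limit Byzantine influence.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across devices (no assumption on how the real
# paper partitions data; this only makes the example runnable).
n_devices, n_samples, dim = 10, 200, 20
w_true = rng.normal(size=dim)
X = [rng.normal(size=(n_samples, dim)) for _ in range(n_devices)]
y = [x @ w_true + 0.1 * rng.normal(size=n_samples) for x in X]

# Hypothetical hardware profile: fraction of coordinates each device can update per round.
capacity = rng.uniform(0.2, 1.0, size=n_devices)
byzantine = {0, 1}  # two devices send adversarial (sign-flipped, scaled) updates


def partial_gradient(w, x, yv, coords):
    """Least-squares gradient restricted to the given coordinate block."""
    g = np.zeros_like(w)
    residual = x @ w - yv
    g[coords] = x[:, coords].T @ residual / len(yv)
    return g


def coordinate_wise_median(updates):
    """Robust aggregation: median of each coordinate across device updates."""
    return np.median(np.stack(updates), axis=0)


w = np.zeros(dim)
lr = 0.1
for step in range(300):
    updates = []
    for i in range(n_devices):
        # Each device only computes the gradient on a random coordinate block whose size
        # matches its capacity, so slow devices still contribute every round.
        k = max(1, int(capacity[i] * dim))
        coords = rng.choice(dim, size=k, replace=False)
        g = partial_gradient(w, X[i], y[i], coords)
        if i in byzantine:
            g = -10.0 * g  # adversarial update
        updates.append(g)
    w -= lr * coordinate_wise_median(updates)

print("distance to true model:", np.linalg.norm(w - w_true))
```

With a coordinate-wise median, the server can withstand a strict minority of Byzantine devices; the capacity fraction stands in for the hardware profile mentioned in the abstract, so weaker devices keep participating each round instead of being discarded.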