Time-Efficient Blockchain-Based Federated Learning

IEEE/ACM Transactions on Networking, vol. 32, no. 6, pp. 4885-4900 | Published: 2024-08-14 | DOI: 10.1109/TNET.2024.3436862
Impact factor: 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Hardware & Architecture)
Rongping Lin;Fan Wang;Shan Luo;Xiong Wang;Moshe Zukerman
{"title":"Time-Efficient Blockchain-Based Federated Learning","authors":"Rongping Lin;Fan Wang;Shan Luo;Xiong Wang;Moshe Zukerman","doi":"10.1109/TNET.2024.3436862","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is a distributed machine learning method that ensures the privacy and security of participants’ data by avoiding direct data upload to a central node for training. However, the traditional FL typically applies a star structure with cloud servers as the central aggregator for the model parameters from different terminals, leading to problems such as central failure, malicious tampering and malicious participants, resulting in training errors or system crashes. To address these issues, a permissioned blockchain is used to build a secure and reliable data-sharing platform among participating terminals, replacing the central aggregator in the traditional FL called blockchain-based federated learning. However, the block generation method of the blockchain system may introduce significant latency in the federated learning where distributed model parameters upload randomly, resulting in low efficiency of the federated learning. To overcome this, we propose a block generation strategy that groups terminals and generates a block for each group, which minimizes the latency of a single round of federated learning, and an optimal block generation algorithm that considers data distribution, terminal resources, and network resources is provided. The analysis shows that the proposed algorithm can effectively obtain the optimal solution of block generation to minimize the authentication time, and we conduct extensive experiments that demonstrate the time efficiency of the proposed algorithm.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 6","pages":"4885-4900"},"PeriodicalIF":3.0000,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE/ACM Transactions on Networking","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10637280/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
引用次数: 0

Abstract

Federated Learning (FL) is a distributed machine learning method that preserves the privacy and security of participants’ data by avoiding direct uploads of raw data to a central node for training. However, traditional FL typically uses a star topology in which a cloud server acts as the central aggregator of the model parameters from the participating terminals, exposing the system to single points of failure, malicious tampering, and malicious participants, which can cause training errors or system crashes. To address these issues, a permissioned blockchain can replace the central aggregator and provide a secure, reliable data-sharing platform among the participating terminals; this approach is known as blockchain-based federated learning. However, the blockchain’s block generation method may introduce significant latency when model parameters from distributed terminals arrive at random times, reducing the efficiency of federated learning. To overcome this, we propose a block generation strategy that groups terminals and generates one block per group, minimizing the latency of a single round of federated learning, and we provide an optimal block generation algorithm that accounts for data distribution, terminal resources, and network resources. Our analysis shows that the proposed algorithm obtains the optimal block generation solution that minimizes the authentication time, and extensive experiments demonstrate its time efficiency.
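The abstract's core idea is that grouping terminals and sealing one block per group lets block generation (authentication) for early-arriving groups overlap with the uploads of slower terminals. The paper's actual optimization algorithm is not reproduced here; the sketch below is a simplified illustration under assumed names and a toy cost model (Terminal, group_by_arrival, round_latency, and a per-update authentication cost are all hypothetical, not the authors' formulation).

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Terminal:
    """Hypothetical FL participant with an estimated upload-completion time (seconds)."""
    name: str
    upload_done_at: float


def group_by_arrival(terminals: List[Terminal], group_size: int) -> List[List[Terminal]]:
    """Sort terminals by expected upload completion and cut them into consecutive groups,
    so terminals with similar arrival times share a block."""
    ordered = sorted(terminals, key=lambda t: t.upload_done_at)
    return [ordered[i:i + group_size] for i in range(0, len(ordered), group_size)]


def round_latency(groups: List[List[Terminal]], auth_time_per_update: float) -> float:
    """Toy latency model: blocks are generated sequentially, a group's block cannot be
    sealed before its slowest upload arrives, and authentication time grows with the
    number of updates packed into the block."""
    clock = 0.0
    for group in groups:
        ready = max(t.upload_done_at for t in group)
        clock = max(clock, ready) + auth_time_per_update * len(group)
    return clock


if __name__ == "__main__":
    uploads = [1.0, 1.2, 3.5, 3.8, 7.0, 7.4]
    terminals = [Terminal(f"t{i}", done) for i, done in enumerate(uploads)]

    grouped = group_by_arrival(terminals, group_size=2)
    print("per-group blocks :", round_latency(grouped, auth_time_per_update=0.5))      # 8.4
    print("one block for all:", round_latency([terminals], auth_time_per_update=0.5))  # 10.4
```

In this toy model, grouping the six terminals in pairs finishes the round at 8.4 s versus 10.4 s for a single block holding all updates, because early blocks are authenticated while slower uploads are still in flight; the paper's algorithm additionally accounts for data distribution, terminal resources, and network resources when choosing the groups.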
Source journal: IEEE/ACM Transactions on Networking (Engineering & Technology - Telecommunications)
CiteScore: 8.20
Self-citation rate: 5.40%
Articles published: 246
Review time: 4-8 weeks
Journal description: The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking, covering all sorts of information transport networks over all sorts of physical layer technologies, both wireline (all kinds of guided media: e.g., copper, optical) and wireless (e.g., radio-frequency, acoustic (e.g., underwater), infra-red), or hybrids of these. The journal welcomes applied contributions reporting on novel experiences and experiments with actual systems.
Latest articles in this journal:
- Table of Contents
- IEEE/ACM Transactions on Networking Information for Authors
- IEEE/ACM Transactions on Networking Society Information
- IEEE/ACM Transactions on Networking Publication Information
- FPCA: Parasitic Coding Authentication for UAVs by FM Signals