P-HotStuff: Parallel BFT algorithm with throughput insensitive to propagation delay

Computer Networks, Vol. 262, Article 111183 · IF 4.6 · JCR Q1 (Computer Science, Hardware & Architecture) · CAS Tier 2 (Computer Science) · Pub Date: 2025-05-01 · Epub Date: 2025-03-12 · DOI: 10.1016/j.comnet.2025.111183
Fei Zhu, Lin You, Jixiang Wang, Lei Li
Citations: 0

Abstract

In this work, we present P-HotStuff, a novel variant of the HotStuff consensus algorithm with multiple parallel operations. It addresses a bottleneck of Byzantine Fault Tolerance (BFT) algorithms that employ a leader-based consensus model: their throughput is sensitive to propagation delay, leaving each node's bandwidth frequently idle. The parallel operations consist of three parts. First, the Broadcast layer is decoupled from the Agreement layer and the two run in parallel, where Broadcast prepares the inputs for each consensus instance and Agreement decides on those inputs. Second, all nodes, not only the leader, can prepare inputs in parallel. Last, each node can pipeline its own input preparation: it can begin preparing its next input without waiting for the preceding preparation to complete. We have conducted experiments comparing our P-HotStuff with HotStuff and the recent work Motorway. The experimental results show that P-HotStuff achieves an average throughput about 20 times that of HotStuff and about 50% higher than that of Motorway, under the conditions of about 60 nodes, a 256-byte payload, a batch size of 400, and 100 Mbps bandwidth in a wide-area network spanning multiple states with an average propagation delay of 260 ms.
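The decoupling the abstract describes can be sketched as a producer/consumer pipeline: a Broadcast stage streams prepared batches into a queue while an Agreement stage consumes and commits them, so neither waits on the other. The following is a minimal illustrative toy, not the authors' implementation; all names (`broadcast_layer`, `agreement_layer`, `prepared_inputs`) and the use of an in-process hash as a stand-in for dissemination and of queue consumption as a stand-in for the HotStuff commit phases are our own assumptions.

```python
import queue
import threading

# Thread-safe hand-off between the two layers, so Broadcast never blocks
# on Agreement and vice versa.
prepared_inputs = queue.Queue()

def broadcast_layer(batches):
    # Every node (not only the leader) would run this stage; each batch is
    # handed off immediately, so preparing batch i+1 does not wait for
    # batch i to be agreed upon (pipelined preparation).
    for batch in batches:
        digest = hash(tuple(batch))       # stand-in for payload dissemination
        prepared_inputs.put((digest, batch))
    prepared_inputs.put(None)             # sentinel: no more batches

def agreement_layer(committed):
    # The Agreement stage only orders already-disseminated batches, so its
    # critical path is decoupled from payload propagation.
    while True:
        item = prepared_inputs.get()
        if item is None:
            break
        digest, batch = item
        committed.append(batch)           # stand-in for a 3-phase HotStuff commit

committed = []
t1 = threading.Thread(target=broadcast_layer, args=([[1, 2], [3, 4], [5, 6]],))
t2 = threading.Thread(target=agreement_layer, args=(committed,))
t1.start(); t2.start()
t1.join(); t2.join()
print(committed)  # [[1, 2], [3, 4], [5, 6]]
```

Because the queue preserves FIFO order, the commit order matches the preparation order even though the two stages run concurrently, which is the property that lets throughput track bandwidth rather than propagation delay.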
Source journal
Computer Networks (Engineering & Technology: Telecommunications)
CiteScore: 10.80 · Self-citation rate: 3.60% · Annual articles: 434 · Review time: 8.6 months
Journal description: Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.