
Latest Publications: 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)

Occam: A Secure and Adaptive Scaling Scheme for Permissionless Blockchain
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00065
Jie Xu, Yingying Cheng, Cong Wang, X. Jia
Blockchain scalability is one of the most desired properties of permissionless blockchains. Many recent blockchain protocols have focused on increasing transaction throughput. However, existing protocols cannot dynamically scale their throughput to meet transaction demand. In this paper, we propose Occam, a secure and adaptive scaling scheme. Occam adaptively changes the transaction throughput by expanding and shrinking according to the transaction demand in the network. We introduce a dynamic mining-difficulty adjustment mechanism and a mining-power load-balancing mechanism to resist various attacks. Furthermore, we implement Occam on an Amazon EC2 cluster with 1000 full nodes. Experimental results show that Occam greatly increases blockchain throughput and mining power utilization.
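The abstract does not spell out the adjustment rule, so the following is only a minimal Python sketch of a demand-driven expand/shrink controller in the spirit of Occam; the utilization thresholds, the per-chain capacity, and the difficulty retargeting formula are illustrative assumptions, not values or equations from the paper.

# Hypothetical demand-driven controller: expand or shrink the number of
# parallel chains as mempool demand changes, then retarget per-chain difficulty
# so that a fixed total mining power is not diluted. All constants are assumptions.

def adjust_parallelism(pending_txs, capacity_per_chain, chains,
                       expand_ratio=0.9, shrink_ratio=0.3):
    utilization = pending_txs / (capacity_per_chain * chains)
    if utilization > expand_ratio:                  # demand outgrows capacity -> expand
        return chains + 1
    if utilization < shrink_ratio and chains > 1:   # idle capacity -> shrink
        return chains - 1
    return chains

def retarget_difficulty(old_difficulty, old_chains, new_chains):
    # Keep the expected block interval per chain constant when the same mining
    # power is spread over a different number of chains (assumed policy).
    return old_difficulty * old_chains / new_chains

if __name__ == "__main__":
    chains, difficulty = 4, 1_000_000
    for demand in (500, 5_000, 50_000, 2_000):      # pending transactions per epoch
        new_chains = adjust_parallelism(demand, capacity_per_chain=1_000, chains=chains)
        difficulty = retarget_difficulty(difficulty, chains, new_chains)
        chains = new_chains
        print(f"demand={demand:>6}  chains={chains}  difficulty={difficulty:,.0f}")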
Cited by: 5
Re-architecting Distributed Block Storage System for Improving Random Write Performance
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00019
Myoungwon Oh, Jiwoong Park, S. Park, Adel Choi, Jongyoul Lee, Jin-Hyeok Choi, H. Yeom
In cloud ecosystems, distributed block storage systems are used to provide a persistent block storage service, which is the fundamental building block for operating cloud-native services. However, existing distributed storage systems perform poorly for random write workloads in an all-NVMe storage configuration, becoming CPU-bottlenecked. Our roofline-based performance analysis of a conventional distributed block storage system with NVMe SSDs reveals that the bottleneck does not lie in one specific software module but spans the entire software stack: (1) tightly coupled I/O processing, (2) an inefficient threading architecture, and (3) a local backend data store that causes excessive CPU usage. To this end, we re-architect a modern distributed block storage system to improve random write performance. The key ingredients of our system are (1) decoupled operation processing using non-volatile memory, (2) prioritized thread control, and (3) a CPU-efficient backend data store. Our system emphasizes low CPU overhead and high CPU efficiency to efficiently utilize NVMe SSDs in a distributed storage environment. We implement our system in Ceph. Compared to native Ceph, our prototype system delivers more than a 3x performance improvement for small random write I/Os in terms of both IOPS and latency by efficiently utilizing CPU cores.
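To illustrate what "prioritized thread control" can look like in practice, here is a toy Python sketch of a dispatcher that always serves latency-critical foreground I/O before background work; the two priority classes and the queueing policy are assumptions for illustration, not the authors' actual threading architecture.

# Toy priority-aware dispatcher: foreground (latency-critical) requests are
# always popped before background work such as replication or compaction.
import heapq
import itertools
import threading
import time

FOREGROUND, BACKGROUND = 0, 1          # lower value = higher priority

class PriorityDispatcher:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # preserves FIFO order within a class
        self._cv = threading.Condition()

    def submit(self, priority, task):
        with self._cv:
            heapq.heappush(self._heap, (priority, next(self._seq), task))
            self._cv.notify()

    def worker(self):
        while True:
            with self._cv:
                while not self._heap:
                    self._cv.wait()
                _, _, task = heapq.heappop(self._heap)
            task()                     # run the task outside the lock

if __name__ == "__main__":
    d = PriorityDispatcher()
    d.submit(BACKGROUND, lambda: print("flush replication log"))
    d.submit(FOREGROUND, lambda: print("serve 4 KiB random write"))
    threading.Thread(target=d.worker, daemon=True).start()
    time.sleep(0.1)                    # the foreground write is printed first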
Cited by: 3
Everyone in SDN Contributes: Fault Localization via Well-Designed Rules
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00043
Zhijun Hu, Libing Wu, Jianxin Li, Chao Ma, Xiaochuan Shi
Probing techniques are widely used to identify faulty nodes in networks. Existing probe-based solutions for SDN fault localization follow two approaches: per-rule and per-path. Both promote certain switches to reporters by installing report rules on them. To avoid hindering other test packets, such report rules must vary between tests or be deleted before the next test, thus incurring excessive consumption of either switch TCAM resources or bandwidth reserved for control messages. In this paper, we present Voyager, a hybrid fault localization solution for SDN that fully combines the advantages of per-rule and per-path tests. Voyager significantly reduces the number of report rules and allows them to reside and function in switches persistently. With only one well-designed report rule installed on each switch, Voyager pinpoints faulty switches easily and tightly by sending test packets directly. Tests in Voyager are parallelizable and report rules are non-invasive. Performance evaluation on realistic datasets shows that Voyager is 24.0% to 92.3% faster than existing solutions.
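To make the report-rule idea concrete, the short Python sketch below localizes a faulty switch as the first hop on a failing test path that did not report; this intersection-style logic is an illustrative assumption, not Voyager's actual algorithm.

# Toy path-based localization: every traversed switch is expected to report
# (via its single report rule); the first missing reporter on a path is suspect.

def localize(paths, reports):
    # paths:   {path_id: [switch1, switch2, ...]} hops each test packet should cross
    # reports: {path_id: set of switches that actually reported for that test}
    suspects = set()
    for pid, hops in paths.items():
        seen = reports.get(pid, set())
        for sw in hops:
            if sw not in seen:         # first hop that failed to report
                suspects.add(sw)
                break
    return suspects

if __name__ == "__main__":
    paths = {"p1": ["s1", "s2", "s3"], "p2": ["s4", "s2", "s5"]}
    reports = {"p1": {"s1"}, "p2": {"s4"}}   # both tests vanish at s2
    print(localize(paths, reports))          # {'s2'}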
Cited by: 0
MandiPass: Secure and Usable User Authentication via Earphone IMU
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00070
Jianwei Liu, Wenfan Song, Leming Shen, Jinsong Han, Xian Xu, K. Ren
Biometrics play an important role in user authentication. However, the most widely used biometrics, such as facial features and fingerprints, are easy to capture or record, and thus vulnerable to spoofing attacks. On the contrary, intracorporal biometrics, such as electrocardiography and electroencephalography, are hard to collect and hence more secure for authentication. Unfortunately, adopting them is not user-friendly due to their complicated collection methods and inconvenient constraints on users. In this paper, we propose a novel biometric-based authentication system, namely MandiPass. MandiPass leverages inertial measurement units (IMUs), which have been widely deployed in portable devices, to collect an intracorporal biometric from the vibration of the user's mandible. Authentication merely requires the user to voice a short 'EMM' to generate the vibration. In this way, MandiPass enables secure and user-friendly biometric-based authentication. We theoretically validate the feasibility of MandiPass and develop a two-branch deep neural network for effective biometric extraction. We also utilize a Gaussian matrix to defend against replay attacks. Extensive experiments with 34 volunteers show that MandiPass achieves an equal error rate of 1.28%, even in various harsh environments.
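As a rough picture of how an IMU-based pipeline of this kind can operate, the Python sketch below segments the mandible-vibration window, embeds it, and verifies by cosine similarity; the energy-based segmentation, the random feature map standing in for the two-branch DNN, and the threshold are all assumptions, and the Gaussian-matrix replay defense is not modeled here.

# Illustrative IMU authentication flow (not the paper's exact pipeline).
import numpy as np

def segment(imu, energy_thresh=0.05):
    # Keep samples whose short-term energy exceeds a threshold (assumed heuristic
    # for isolating the 'EMM' vibration from silence).
    energy = np.abs(imu).mean(axis=1)
    return imu[energy > energy_thresh]

def embed(window, dim=64):
    # Stand-in for the trained two-branch DNN: a fixed random feature map.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((window.shape[1], dim))
    return np.tanh(window @ w).mean(axis=0)

def verify(enrolled, probe, thresh=0.9):
    cos = float(enrolled @ probe /
                (np.linalg.norm(enrolled) * np.linalg.norm(probe) + 1e-9))
    return cos >= thresh

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    enroll_imu = rng.standard_normal((400, 6)) * 0.2        # fake 6-axis IMU data
    probe_imu = enroll_imu + rng.standard_normal((400, 6)) * 0.01
    print(verify(embed(segment(enroll_imu)), embed(segment(probe_imu))))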
Cited by: 5
GeoCol: A Geo-distributed Cloud Storage System with Low Cost and Latency using Reinforcement Learning
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00023
Haoyu Wang, Haiying Shen, Zijian Li, Shuhao Tian
More and more web applications are deployed on cloud storage services that store the applications' data objects in geo-distributed datacenters belonging to Cloud Service Providers (CSPs). To provide low request latency to web application users, previous work requires developers to store more data object replicas in a large number of datacenters or to send redundant requests to multiple datacenters (e.g., the closest datacenters), both of which increase monetary cost. In this paper, we conducted request latency measurements from a GENI server (as a client) to AWS S3 datacenters for one month, and our observations lay the foundation for our proposed system, GeoCol, a geo-distributed cloud storage system with low cost and latency using reinforcement learning (RL). To achieve the optimal tradeoff between monetary cost and request latency, GeoCol encompasses a request split method and a storage planning method. The request split method uses the SARIMA machine learning (ML) technique to predict request latency, which is fed into an RL model that determines the number of sub-requests and the datacenter for each sub-request, enabling parallel transmission of a data object. In the storage planning method, each datacenter uses RL to determine whether each data object should be stored and the storage type of each stored data object. Our trace-driven experiments on AWS S3 and the GENI platform show that GeoCol outperforms comparison methods, reducing monetary cost by 32% and data object request latency by 51%.
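As a rough illustration of the request split step, the sketch below picks the datacenters for k parallel sub-requests by scoring predicted latency against egress price; the greedy score and the equal-size split stand in for GeoCol's RL policy and are purely assumptions.

# Toy request-split planner: choose k datacenters for the k sub-requests of one
# object, trading the predicted latency (e.g., a SARIMA forecast) against price.

def plan_split(predicted_latency_ms, price_per_gb, object_gb, k, alpha=10.0):
    # score = latency + alpha * dollar cost of fetching this share (assumed rule)
    score = {dc: predicted_latency_ms[dc] + alpha * price_per_gb[dc] * object_gb / k
             for dc in predicted_latency_ms}
    chosen = sorted(score, key=score.get)[:k]
    return [(dc, object_gb / k) for dc in chosen]     # equal-size sub-requests

if __name__ == "__main__":
    latency = {"us-east-1": 35.0, "us-west-2": 80.0, "eu-west-1": 120.0}   # ms
    price = {"us-east-1": 0.09, "us-west-2": 0.09, "eu-west-1": 0.11}      # $/GB
    print(plan_split(latency, price, object_gb=0.5, k=2))
    # [('us-east-1', 0.25), ('us-west-2', 0.25)]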
Cited by: 7
Poster: Off-path VoIP Interception Attacks
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00117
Tianxiang Dai, Haya Shulman, M. Waidner
The proliferation of Voice-over-IP (VoIP) technologies makes them a lucrative target for attacks. While many attack vectors have been uncovered, one critical vector has not yet received attention: hijacking telephony via DNS cache poisoning. We demonstrate practical VoIP hijack attacks by manipulating DNS responses with a weak off-path attacker. We evaluate our attacks against popular telephony VoIP systems on the Internet and provide a live demo of the attack against the Extensible Messaging and Presence Protocol at https://sit4.me/M4.
Cited by: 0
Polygraph: Accountable Byzantine Agreement
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00046
Pierre Civit, Seth Gilbert, V. Gramoli
In this paper, we introduce Polygraph, the first accountable Byzantine consensus algorithm. If among $n$ users $t < n/3$ are malicious, then it ensures consensus; otherwise (if $t \geq n/3$), it eventually detects malicious users that cause disagreement. Polygraph is appealing for blockchain applications as it allows them to totally order blocks in a chain whenever possible, hence avoiding forks and double spending, and, otherwise, to punish (e.g., via slashing) at least $n/3$ malicious users when a fork occurs. This problem is more difficult than perhaps it first appears. One could try identifying malicious senders by extending classic Byzantine consensus algorithms to piggyback signed messages. We show, however, that to achieve accountability the resulting algorithms would then need to exchange $\Omega(\kappa^{2} \cdot n^{5})$ bits, where $\kappa$ is the security parameter of the signature scheme. By contrast, Polygraph has communication complexity $O(\kappa \cdot n^{4})$. Finally, we implement Polygraph in a blockchain and compare it to the Red Belly Blockchain to show that it commits more than 10,000 Bitcoin-like transactions per second when deployed on 80 geodistributed machines.
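To get a feel for the gap between the two bounds, a back-of-the-envelope calculation with illustrative values $n = 100$ and $\kappa = 256$ (not numbers from the paper) gives
$$\frac{\kappa^{2} \cdot n^{5}}{\kappa \cdot n^{4}} = \kappa \cdot n = 256 \times 100 = 25{,}600,$$
i.e., the piggybacking approach would exchange on the order of $\kappa^{2} n^{5} \approx 6.6 \times 10^{14}$ bits, versus roughly $\kappa\, n^{4} \approx 2.6 \times 10^{10}$ bits for Polygraph.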
Cited by: 42
Enabling Low Latency Edge Intelligence based on Multi-exit DNNs in the Wild
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00075
Zhaowu Huang, Fang Dong, Dian Shen, Junxue Zhang, Huitian Wang, Guangxing Cai, Qiang He
In recent years, deep neural networks (DNNs) have fueled a boom in artificial intelligence Internet of Things applications with stringent demands for both high accuracy and low latency. A widely adopted solution is to process such computation-intensive DNN inference tasks with edge computing. Nevertheless, existing edge-based DNN processing methods still cannot achieve acceptable performance due to intensive data transmission and unnecessary computation. To address the above limitations, we take advantage of multi-exit DNNs (ME-DNNs), which allow tasks to exit early at different depths of the DNN during inference, based on input complexity. However, naively deploying ME-DNNs at the edge still fails to deliver fast and consistent inference in the wild. Specifically, (1) at the model level, unsuitable exit settings increase computational overhead and lead to excessive queuing delay; (2) at the computation level, it is hard to consistently sustain high performance in the dynamic edge computing environment. In this paper, we present a Low Latency Edge Intelligence Scheme based on Multi-Exit DNNs (LEIME) to tackle the aforementioned problems. At the model level, we propose an exit setting algorithm that automatically builds optimal ME-DNNs with lower time complexity; at the computation level, we present a distributed offloading mechanism that fine-tunes task dispatching at runtime to sustain high performance in the dynamic environment, with a close-to-optimal performance guarantee. Finally, we implement a prototype system and extensively evaluate it through testbed and large-scale simulation experiments. Experimental results demonstrate that LEIME significantly improves application performance, achieving 1.1-18.7x speedups in different situations.
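The early-exit mechanism underlying ME-DNNs can be summarized by the confidence-gated loop sketched below in Python; the entropy threshold and the toy classifier heads are assumed stand-ins for whatever exit criterion and models LEIME actually uses.

# Minimal early-exit inference loop: run the backbone block by block and stop
# at the first exit head whose prediction is confident (low-entropy) enough.
import math

def entropy(probs):
    return -sum(p * math.log(p + 1e-12) for p in probs)

def early_exit_infer(blocks, exits, x, max_entropy=0.4):
    # blocks: list of feature extractors; exits: matching classifier heads
    feats = x
    for i, (block, head) in enumerate(zip(blocks, exits)):
        feats = block(feats)
        probs = head(feats)
        if entropy(probs) <= max_entropy:      # confident -> stop early
            return probs, i
    return probs, len(blocks) - 1              # fell through to the final exit

if __name__ == "__main__":
    # Toy two-block "network" over a scalar input, for demonstration only.
    blocks = [lambda v: v * 2, lambda v: v + 1]
    exits = [lambda v: [0.95, 0.05] if abs(v) > 1 else [0.55, 0.45],
             lambda v: [0.99, 0.01]]
    for x in (3.0, 0.1):                       # an "easy" and a "hard" input
        probs, used = early_exit_infer(blocks, exits, x)
        print(f"input={x}: exited at {used}, probs={probs}")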
Cited by: 10
GreenHetero: Adaptive Power Allocation for Heterogeneous Green Datacenters
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00024
Haoran Cai, Q. Cao, Hong Jiang, Qiang Wang
In recent years, the design of green datacenters and their enabling technologies, including renewable power management, has gained a lot of attention in both industry and academia. However, the maintenance and upgrade of the underlying server systems over time (e.g., server replacement due to failures, capacity increases, or migrations), which make datacenters increasingly heterogeneous in their key processing components (e.g., the capacity and variety of processors, memory, and storage devices), present a great challenge to the optimal allocation of renewable power. In other words, current heterogeneity-unaware power allocation policies fail to achieve optimal performance given a limited and time-varying renewable power supply. In this paper, we propose a dynamic power allocation framework called GreenHetero, which enables adaptive power allocation among heterogeneous servers in green datacenters to achieve optimal performance when the renewable power supply varies. Specifically, the GreenHetero scheduler dynamically maintains and updates a performance-power database for each server configuration and workload type through a lightweight profiling method. Based on the database and power prediction, the scheduler leverages a well-designed solver to determine the optimal power allocation ratio among heterogeneous servers at runtime. Finally, a power enforcer implements the power source selections and power allocation decisions. We build an experimental prototype to evaluate GreenHetero. The evaluation shows that our solution improves average performance by 1.2x-2.2x and renewable power utilization by up to 2.7x under tens of representative datacenter workloads compared with a heterogeneity-unaware baseline scheduler.
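One simple way to picture the solver step is a greedy allocation by marginal performance gain over each server group's profiled performance-power curve, as in the Python sketch below; the curves, the watt step size, and the greedy rule are assumptions for illustration, not GreenHetero's actual formulation.

# Greedy power allocation across heterogeneous server groups: repeatedly give
# the next slice of renewable power to the group with the best marginal gain.
import math

def allocate(budget_watts, perf_curves, step=10):
    # perf_curves: {group: f(watts) -> requests/s, from the profiling database}
    alloc = {g: 0 for g in perf_curves}
    remaining = budget_watts
    while remaining >= step:
        gains = {g: f(alloc[g] + step) - f(alloc[g]) for g, f in perf_curves.items()}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        alloc[best] += step
        remaining -= step
    return alloc

if __name__ == "__main__":
    curves = {                                        # made-up profiled curves
        "old_xeon": lambda w: 120 * math.log1p(w / 40),   # saturates early
        "new_epyc": lambda w: 300 * math.log1p(w / 100),  # keeps scaling
    }
    print(allocate(500, curves))                      # watts given to each group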
Cited by: 2
TCP BBR in Cloud Networks: Challenges, Analysis, and Solutions
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00094
Phuong Ha, Minh Vu, T. Le, Lisong Xu
Google introduced BBR, a new model-based TCP congestion control class, in 2016; it improves the throughput and latency of Google's backbone and services and is now the second most popular TCP on the Internet. Since BBR is designed as a general-purpose congestion control to replace currently widespread controls such as Reno and CUBIC, studying its performance in different types of networks is important. In this paper, we study BBR's performance in cloud networks, which have grown rapidly but have not been examined in existing BBR work. For the first time, we show both analytically and experimentally that, due to virtual machine (VM) scheduling in cloud networks, BBR underestimates the pacing rate, delivery rate, and estimated bandwidth, which are three key elements of its control loop. This underestimation can compound iteratively and exponentially over time and can cause BBR's throughput to drop to almost zero. We propose a BBR patch that captures the impact of VM scheduling on BBR's model and improves its throughput in cloud networks. Our evaluation of the modified BBR on our testbed and on EC2 shows a significant improvement in throughput and bandwidth estimation accuracy over the original BBR in cloud networks with heavy VM scheduling.
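The underestimation can be seen directly from how BBR samples the delivery rate (delivered bytes divided by the sampling interval); the Python sketch below reproduces the effect with an assumed VM descheduling gap inflating the interval, and all numbers are illustrative rather than measurements from the paper.

# Illustrative only: a VM scheduling gap inflates the ACK sampling interval,
# so delivered_bytes / interval underestimates the true delivery rate.

def delivery_rate(delivered_bytes, interval_s):
    return delivered_bytes / interval_s              # bytes per second

delivered = 1_250_000        # 1.25 MB ACKed in this sample window
true_interval = 0.010        # 10 ms of actual network time
vm_gap = 0.020               # 20 ms during which the guest vCPU was descheduled

true_rate = delivery_rate(delivered, true_interval)
measured = delivery_rate(delivered, true_interval + vm_gap)
print(f"true rate     ~ {true_rate * 8 / 1e9:.2f} Gbps")   # ~1.00 Gbps
print(f"measured rate ~ {measured * 8 / 1e9:.2f} Gbps")    # ~0.33 Gbps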
Cited by: 2