Latest publications from the 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)

Gengar: An RDMA-based Distributed Hybrid Memory Pool
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00018
Zhuohui Duan, Haikun Liu, Haodi Lu, Xiaofei Liao, Hai Jin, Yu Zhang, Bingsheng He
Byte-addressable Non-volatile Memory (NVM) technologies promise higher density and lower cost than DRAM, and they have been increasingly employed for data center applications. Despite many previous studies on using NVM in a single machine, challenges remain in making the best use of it in a distributed data center environment. This paper presents Gengar, an RDMA-enabled Distributed Shared Hybrid Memory (DSHM) pool with simple programming APIs that expose remote NVM and DRAM as a global memory space. We propose to exploit the semantics of RDMA primitives to identify frequently accessed data in the hybrid memory pool and cache it in distributed DRAM buffers. We redesign RDMA communication protocols to reduce the bottleneck of RDMA write latency by leveraging a proxy mechanism. Gengar also supports memory sharing among multiple users with data consistency guarantees. We evaluate Gengar on a real testbed equipped with Intel Optane DC Persistent Memory DIMMs. Experimental results show that, compared with state-of-the-art DSHM systems, Gengar significantly improves the performance of public benchmarks such as MapReduce and YCSB by up to 70%.
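The hot-data caching idea in the abstract can be sketched as a promote-on-access policy over a two-tier pool. The class and parameter names below (`HybridPool`, `promote_threshold`) are illustrative assumptions, not Gengar's API, and the real RDMA mechanics are omitted:

```python
# Minimal sketch (not Gengar's implementation): count accesses to objects in a
# hybrid NVM/DRAM pool and promote frequently read objects into a DRAM cache.
from collections import Counter

class HybridPool:
    def __init__(self, dram_capacity, promote_threshold=3):
        self.nvm = {}                 # key -> value: slow, large tier
        self.dram = {}                # key -> value: fast cache tier
        self.dram_capacity = dram_capacity
        self.threshold = promote_threshold
        self.hits = Counter()         # per-key access counts (stand-in for
                                      # observing RDMA read semantics)

    def put(self, key, value):
        self.nvm[key] = value         # new data lands in NVM

    def get(self, key):
        self.hits[key] += 1
        if key in self.dram:          # fast path: served from the DRAM cache
            return self.dram[key]
        value = self.nvm[key]
        if self.hits[key] >= self.threshold and len(self.dram) < self.dram_capacity:
            self.dram[key] = value    # hot object: promote into DRAM
        return value

pool = HybridPool(dram_capacity=2)
pool.put("a", 1)
for _ in range(3):
    pool.get("a")
assert "a" in pool.dram               # promoted after repeated access
```

A real system would also need eviction and invalidation on writes; this sketch only shows the promotion decision.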
Citations: 2
When Delta Sync Meets Message-Locked Encryption: a Feature-based Delta Sync Scheme for Encrypted Cloud Storage
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00040
Suzhen Wu, Zhanhong Tu, Zuocheng Wang, Zhirong Shen, Bo Mao
As cloud storage becomes increasingly prevalent, more and more data is stored in the cloud, which brings two major challenges. First, modified files in the cloud should be quickly synchronized (synced) to ensure data consistency; e.g., delta sync achieves efficient cloud sync by synchronizing only the updated parts of a file. Second, the huge volume of data in the cloud needs to be deduplicated and encrypted; e.g., message-locked encryption (MLE) enables deduplication of encrypted content across different users. However, when the two are combined, even a few updates to the content can cause large sync traffic amplification for both keys and ciphertext in MLE-based cloud storage, which significantly degrades cloud sync efficiency. In this paper, we propose a feature-based encrypted sync scheme, FeatureSync, which improves the performance of synchronizing multiple encrypted files by merging several files before synchronizing. Performance evaluations on a lightweight prototype implementation show that FeatureSync reduces cloud sync time by 72.6% and cloud sync traffic by 78.5% on average, compared with state-of-the-art sync schemes.
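The amplification problem the abstract describes follows from how MLE derives keys: because the key is a function of the chunk content, any edit changes both the key and the whole ciphertext chunk. A toy sketch of per-chunk MLE (the XOR "cipher" here is purely illustrative, not the scheme used by FeatureSync or any real MLE construction):

```python
# Toy message-locked encryption per chunk: the key is derived from the chunk
# content, so identical chunks from different users encrypt to identical
# ciphertexts and deduplicate. NOT cryptographically secure; for illustration.
import hashlib

def mle_encrypt(chunk: bytes) -> tuple:
    key = hashlib.sha256(chunk).digest()          # content-derived key
    # illustrative keystream expanded from the key, XORed with the chunk
    blocks = (hashlib.sha256(key + bytes([i])).digest()
              for i in range(len(chunk) // 32 + 1))
    ks = b"".join(blocks)[:len(chunk)]
    ct = bytes(a ^ b for a, b in zip(chunk, ks))
    return key, ct

k1, c1 = mle_encrypt(b"same content")
k2, c2 = mle_encrypt(b"same content")
assert c1 == c2        # identical content -> identical ciphertext: dedup works
k3, c3 = mle_encrypt(b"same content!")   # a one-character edit...
assert c3 != c1        # ...re-keys and re-encrypts the whole chunk: the source
                       # of the key-and-ciphertext sync amplification
```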
Citations: 3
Harmony: A Scheduling Framework Optimized for Multiple Distributed Machine Learning Jobs
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00085
Woo-Yeon Lee, Yunseong Lee, Won Wook Song, Youngseok Yang, Jooyeon Kim, Byung-Gon Chun
We introduce Harmony, a new scheduling framework that executes multiple Parameter-Server ML training jobs together to improve cluster resource utilization. Harmony coordinates a fine-grained execution of co-located jobs with complementary resource usages to avoid contention and to efficiently share resources between the jobs. To resolve the memory pressure due to the increased number of simultaneous jobs, Harmony uses a data spill/reload mechanism optimized for multiple jobs with the iterative execution pattern. Our evaluation shows that Harmony improves cluster resource utilization by up to 1.65×, resulting in a reduction of the mean ML training job time by about 53%, and makespan, the total time to process all given jobs, by about 38%, compared to the traditional approaches that allocate dedicated resources to each job.
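The co-location idea rests on a bin-packing intuition: jobs with complementary resource profiles (e.g., one CPU-heavy, one GPU-heavy) can share a node without contention. A minimal sketch of that intuition, not Harmony's actual scheduler:

```python
# Illustrative first-fit packing: place a job on a node only while neither its
# CPU nor its GPU demand would exceed capacity, so complementary jobs co-locate.
def colocate(jobs, cpu_cap=1.0, gpu_cap=1.0):
    nodes = []  # each node: [cpu_used, gpu_used, list_of_job_names]
    for name, cpu, gpu in jobs:
        for node in nodes:
            if node[0] + cpu <= cpu_cap and node[1] + gpu <= gpu_cap:
                node[0] += cpu
                node[1] += gpu
                node[2].append(name)
                break
        else:                         # no node fits: open a new one
            nodes.append([cpu, gpu, [name]])
    return nodes

jobs = [("cpu-heavy", 0.8, 0.1), ("gpu-heavy", 0.1, 0.8), ("balanced", 0.5, 0.5)]
placement = colocate(jobs)
assert placement[0][2] == ["cpu-heavy", "gpu-heavy"]  # complementary pair shares
assert len(placement) == 2                            # "balanced" gets its own node
```

Harmony's contribution goes well beyond this packing step (fine-grained coordination and spill/reload under memory pressure), which this sketch does not attempt to model.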
Citations: 2
Demo: Automatically Retrainable Self Improving Model for the Automated Classification of Software Incidents into Multiple Classes
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00113
Badal Agrawal, Mohit Mishra
Developers across most organizations face the issue of manually classifying software bug reports. Bug reports often contain text and other useful information that is common to a particular type of bug. This information can be extracted using natural language processing techniques and combined with the manual classification performed by developers so far to create a properly labelled data set for training a supervised learning model that automatically classifies bug reports into their respective categories. Previous studies have focused only on binary classification of software incident reports as bug or non-bug. Our novel approach achieves an accuracy of 76.94% on a 10-class classification problem over the bug repository created by the Microsoft Dynamics 365 team. In addition, we propose a novel method for automatically retraining the model and updating it with developer feedback in case of misclassification, which significantly reduces maintenance cost and effort.
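The extract-features-then-classify pipeline can be sketched with a tiny bag-of-words nearest-centroid model. This stands in for the paper's (unspecified) model, and the training examples are invented for illustration:

```python
# Sketch of the pipeline: turn bug-report text into word-count features, build
# one centroid per class from labelled reports, and classify new reports by
# word overlap with each centroid. A deliberately tiny stand-in model.
from collections import Counter

def features(text):
    return Counter(text.lower().split())

def train(labelled):
    centroids = {}
    for text, label in labelled:
        centroids.setdefault(label, Counter()).update(features(text))
    return centroids

def classify(centroids, text):
    f = features(text)
    def overlap(centroid):                       # shared word mass
        return sum(min(f[w], centroid[w]) for w in f)
    return max(centroids, key=lambda lbl: overlap(centroids[lbl]))

data = [("null pointer crash on save", "crash"),
        ("ui button misaligned on resize", "ui"),
        ("segfault crash in parser", "crash")]
model = train(data)
assert classify(model, "crash when saving file") == "crash"
```

The paper's retraining loop would, on a developer-reported misclassification, append the corrected label to `data` and rebuild the model; this sketch omits that step.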
Citations: 2
Demo: Discover, Provision, and Orchestration of Machine Learning Inference Services in Heterogeneous Edge
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00115
Roberto Morabito, M. Chiang
In recent years, the research community has begun to extensively study how edge computing can enhance the provisioning of a seamless, high-performing machine learning (ML) experience. Boosting the performance of ML inference at the edge has become a driving factor, especially for use cases in which proximity to data sources, near-real-time requirements, and reduced network latency are determining factors. The growing demand for edge-based ML services has also been boosted by the increasing market availability of small-form-factor inference accelerator devices, which, however, feature heterogeneous and not fully interoperable software and hardware characteristics. A key aspect that has not yet been fully investigated is how to discover and efficiently optimize the provisioning of ML inference services in distributed edge systems featuring heterogeneous edge inference accelerators - especially since the devices' limited computation capabilities may require orchestrating inference execution among the system's different devices. The main goal of this demo is to showcase how ML inference services can be agnostically discovered, provisioned, and orchestrated in a cluster of heterogeneous, distributed edge nodes.
Citations: 2
Evidence in Hand: Passive Vibration Response-based Continuous User Authentication
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00101
Hangcheng Cao, Hongbo Jiang, Daibo Liu, Jie Xiong
Continuous user authentication is of great importance for maintaining the security of a mobile system and protecting users' privacy throughout a login session. In this paper, we propose HandPass, a continuous user authentication system that employs the vibration responses of concealed hand biometrics, passively activated by natural user-device interactions on the touchscreen. Hand vibration responses are instantly triggered and embodied in the mechanical vibration of the force-bearing body (i.e., the mobile device and the holding hand); therefore, a built-in accelerometer can effectively capture their intrinsic features. The hand vibration response is determined by the trigger force and the complex hand structure, which is unique to each user and difficult (if not impossible) to counterfeit. HandPass is a passive, hand-vibration-response-based continuous user authentication system hosted on smartphones, with the advantages of non-intrusiveness, high efficiency, and user-friendliness. We prototyped HandPass on Android smartphones and comprehensively evaluated its performance with 43 recruited volunteers. Experimental results show that HandPass achieves 97.3% overall authentication accuracy with only a 1.8% false acceptance rate across diverse scenarios.
Citations: 8
Behind Block Explorers: Public Blockchain Measurement and Security Implication
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00029
Hwanjo Heo, Seungwon Shin
Blockchain data has become a popular subject in studying various aspects of blockchains including the security of underlying mechanisms. However, the main chain block data, usually available from block explorer services, does not serve as a sufficient source of transaction and block dynamics that are only visible from a large-scale event measurement. In this paper, the transaction and block arrival events of the two popular public blockchains, i.e., Bitcoin and Ethereum, are measured to investigate the hidden dynamics of blockchain networks. We share our key findings and security implications including a false universal assumption of previous mining related studies and an invalid transaction propagation problem that can be exploited to launch a Denial-of-Service attack on a network.
Citations: 4
Infinite Balanced Allocation via Finite Capacities
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00096
P. Berenbrink, Tom Friedetzky, Christopher Hahn, L. Hintze, Dominik Kaaser, Peter Kling, Lars Nagel
We analyze the following infinite load balancing process, modeled as a classical balls-into-bins game: there are $n$ bins (servers) with a limited capacity (buffer) of size $c=c(n)\in\mathbb{N}$. Given a fixed arrival rate $\lambda=\lambda(n)\in(0,1)$, in every round $\lambda n$ new balls (requests) are generated. Together with possible leftovers from previous rounds, these balls compete to be allocated to the bins. To this end, every ball samples a bin independently and uniformly at random and tries to allocate itself to that bin. Each bin accepts as many balls as possible until its buffer is full, preferring balls of higher age. At the end of the round, every bin deletes the ball it allocated first. We study how the buffer size $c$ affects the performance of this process. For this, we analyze both the number of balls competing in each round (including the leftovers from previous rounds) and the worst-case waiting time of individual balls. We show that (i) the number of competing balls is at any (even exponentially large) time bounded with high probability by $4 \cdot c^{-1} \cdot \ln(1/(1-\lambda)) \cdot n + \mathrm{O}(c \cdot n)$ and that (ii) the waiting time of a given ball is with high probability at most $(4 \cdot \ln(1/(1-\lambda)))/(c \cdot (1-1/e)) + \log\log n + \mathrm{O}(c)$. These results indicate a sweet spot for the choice of $c$ around $c = \Theta(\sqrt{\log(1/(1-\lambda))})$. Compared to a related process with infinite capacity [Berenbrink et al., PODC'16], for constant $\lambda$ the waiting time is reduced from $\mathrm{O}(\log n)$ to $\mathrm{O}(\log\log n)$. Even for large $\lambda \approx 1 - 1/n$ we reduce the waiting time from $\mathrm{O}(\log n)$ to $\mathrm{O}(\sqrt{\log n})$.
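The process in the abstract is easy to simulate directly, which is a useful sanity check on the flavor of the bounds (the parameter values below are small and arbitrary, and the final assertion is a loose empirical check, not the paper's theorem):

```python
# Simulation of the round-based process: lambda*n new balls arrive per round,
# each samples one bin uniformly; a bin holds at most c balls (preferring
# higher age) and serves the earliest-allocated ball each round.
import random

def simulate(n=100, c=3, lam=0.5, rounds=200, seed=1):
    rng = random.Random(seed)
    bins = [[] for _ in range(n)]      # each bin: FIFO buffer of ball ages
    waiting = []                        # unallocated leftovers, as ages
    peak_competing = 0
    for _ in range(rounds):
        balls = waiting + [0] * int(lam * n)   # leftovers + new arrivals
        peak_competing = max(peak_competing, len(balls))
        waiting = []
        choices = {}
        for age in balls:                       # uniform random bin choice
            choices.setdefault(rng.randrange(n), []).append(age)
        for b, arrivals in choices.items():
            arrivals.sort(reverse=True)         # bins prefer older balls
            space = c - len(bins[b])
            bins[b].extend(arrivals[:space])
            waiting.extend(arrivals[space:])    # rejected: compete next round
        for b in range(n):                      # each bin serves its oldest-
            if bins[b]:                         # allocated ball
                bins[b].pop(0)
        waiting = [a + 1 for a in waiting]      # everyone ages one round
        bins = [[a + 1 for a in buf] for buf in bins]
    return peak_competing

# The number of competing balls stays far below the trivial worst case.
assert 50 <= simulate() < 400
```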
Citations: 0
FreeLauncher: Lossless Failure Recovery of Parameter Servers with Ultralight Replication
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00052
Yangyang Zhang, Jianxin Li, Yiming Zhang, Lijie Wang, Ling Liu
Modern distributed machine learning (ML) systems leverage large-scale computing infrastructures to achieve fast model training. With many servers jointly training a model, failure recovery becomes an important challenge when a training task can be accomplished in minutes rather than days. The state-of-the-art checkpointing mechanism cannot meet the need for efficient recovery in large-scale ML, because its high cost prevents timely checkpointing, and a server failure will likely cause a substantial loss of intermediate results when the checkpointing intervals are comparable to the entire training time. This paper proposes FreeLauncher (FLR), a lossless recovery mechanism for large-scale ML that performs ultralight replication (instead of checkpointing) to guarantee that all intermediate training results (parameters) are replicated in a timely manner. Our key insight is that in the parameter-server (PS) architecture there already exist multiple copies of each intermediate result, not only on the server but also on the workers, most of which are qualified for failure recovery. FLR addresses the challenges of parameter sparsity (e.g., when training LDA) and staleness (e.g., when adopting relaxed consistency) by selectively replicating the latest copies of the sparse/stale parameters to ensure that at least k up-to-date copies exist, which can tolerate any k-1 failures by re-launching the failed servers with parameters recovered from workers. We implement FLR on TensorFlow.
Citations: 0
Deterministic Contention Resolution without Collision Detection: Throughput vs Energy
Pub Date : 2021-07-01 DOI: 10.1109/ICDCS51616.2021.00100
G. D. Marco, D. Kowalski, Grzegorz Stachowiak
This paper studies the contention resolution problem on a shared channel (also known as a multiple access channel). A set of $n$ stations are connected to a common device and are able to communicate by transmitting and listening. Each station may have a message to broadcast. In any round, a transmission is successful if and only if exactly one station transmits in that round. Simultaneous transmissions interfere with one another and, as a result, the respective messages are lost. Contention resolution is the fundamental problem of scheduling the transmissions into rounds in such a way that every station successfully delivers its message on the channel. We consider a general dynamic distributed setting. We assume that stations can join (or be activated on) the channel at arbitrary times (the dynamic scenario). This contrasts with the simplified static scenario, in which all stations are assumed to be activated simultaneously. We also assume that stations are not able to detect whether a collision among simultaneous transmissions occurred (the model without collision detection). Finally, there is no global clock in the system: each station measures time using its own local clock, which starts when the station is activated and is possibly out of sync with the other stations. We study non-adaptive deterministic distributed algorithms for the contention resolution problem and assess their efficiency both in terms of channel utilization (also called throughput) and energy consumption. While this topic has been examined quite extensively for randomized algorithms, this is, to the best of our knowledge, the first paper to discuss the extent to which deterministic contention resolution algorithms can be efficient in terms of both channel utilization and energy consumption. Our results imply an exponential separation between the static and dynamic settings with respect to channel utilization. We also show that knowledge of the number of participating stations k (or an upper bound on it) has a substantial impact on the energy consumption.
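The channel model above is easy to simulate: a round succeeds iff exactly one station transmits, and a non-adaptive schedule fixes each station's transmission rounds in advance. The toy simulation below is only an illustration of the static, globally synchronized case (and assumes, for simplicity, that a station learns of its own success and stops) — it is not one of the paper's algorithms.

```python
def run_round(transmitting):
    """One round on a multiple-access channel without collision detection:
    the round succeeds iff exactly one station transmits."""
    return transmitting[0] if len(transmitting) == 1 else None

def simulate(n, schedule, rounds):
    """Run a non-adaptive schedule: schedule[i] is the fixed set of rounds
    in which station i transmits. A station stops once its message is
    delivered. Returns the set of stations that delivered their message."""
    delivered = set()
    for t in range(rounds):
        active = [i for i in range(n)
                  if i not in delivered and t in schedule[i]]
        winner = run_round(active)
        if winner is not None:
            delivered.add(winner)
    return delivered
```

A round-robin schedule (station i transmits in round i) delivers all n messages in n rounds when everyone is activated together; if two stations pick the same round, both transmissions collide and neither is delivered, showing why scheduling — not just transmitting — is the heart of the problem.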
Citations: 3