
Latest publications from the 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)

A Multi-node Collaborative Storage Strategy via Clustering in Blockchain Network
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00164
Mengya Li, Yang Qin, Bing Liu, X. Chu
Blockchain is essentially a distributed ledger shared by all nodes in the system. All nodes in a blockchain are equal, and each node holds all transactions and blocks in the network. As the network continues to expand, the data grows linearly, and participants will soon face the problem of storage limitation: blockchain is hard to scale. This paper introduces ICIStrategy, a multi-node collaborative storage strategy based on intra-cluster integrity. In ICIStrategy, we divide all participants into several clusters. Each cluster is required to hold all data of the network, whereas an individual node within a cluster does not need to maintain data integrity. The strategy aims to relieve storage pressure by reducing the amount of data each participant needs to store, and to reduce communication overhead by collaboratively storing and verifying blocks through in-cluster nodes. Moreover, ICIStrategy greatly reduces the overhead of bootstrapping. We describe the mode of operation of our strategy, analyze the performance of ICIStrategy, and conduct simulation experiments. The results of several comparative experiments show that our strategy needs only 25% of the storage space required by RapidChain, which indeed solves the problem of storage limitation and improves blockchain performance.
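The intra-cluster division described in the abstract can be sketched as follows. This is a minimal illustration under our own assumptions: the function names and the round-robin block assignment are invented for clarity and are not the paper's exact protocol.

```python
def assign_blocks(nodes, num_clusters, blocks):
    """Partition nodes into clusters; within each cluster, spread blocks
    round-robin so the cluster as a whole holds every block while each
    node stores only a fraction of the chain."""
    clusters = [nodes[i::num_clusters] for i in range(num_clusters)]
    storage = {node: [] for node in nodes}
    for cluster in clusters:
        for i, block in enumerate(blocks):
            storage[cluster[i % len(cluster)]].append(block)
    return clusters, storage

# 8 nodes in 2 clusters of 4: each node keeps ~25% of all blocks,
# yet every cluster can jointly reconstruct the full chain.
clusters, storage = assign_blocks([f"n{i}" for i in range(8)], 2, list(range(100)))
```

With 8 nodes in 2 clusters, each node stores 25 of 100 blocks, matching the intuition behind the reported 25% storage figure.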
Citations: 5
SEIZE User Desired Moments: Runtime Inspection for Parallel Dataflow Systems
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00147
Youfu Li, Matteo Interlandi, Fotis Psallidas, Wei Wang, C. Zaniolo
In Data-Intensive Scalable Computing (DISC) systems, data transformations are concealed behind exposed APIs, and intermediate execution moments are masked under dataflow transitions. Consequently, many crucial features and optimizations (e.g., debugging, data provenance, runtime skew detection) are not well supported. Inspired by our experience implementing features and optimizations over DISC systems, we present SEIZE, a unified framework that enables dataflow inspection, i.e., wiretapping the data path with listening logic, in the MapReduce-style programming model. We generalize our lessons learned by providing a set of primitives defining dataflow inspection, orchestration options for different inspection granularities, and an operator-decomposition and dataflow-punctuation strategy for dataflow intervention. We demonstrate the generality and flexibility of the approach by deploying SEIZE in both Apache Spark and Apache Flink. Our experiments show that the overhead introduced by the inspection logic is negligible most of the time (less than 5% in Spark and 10% in Flink).
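The wiretapping idea can be illustrated with a minimal sketch. The names here are hypothetical; SEIZE's actual primitives plug into Spark and Flink operators rather than plain Python generators.

```python
def wiretap(transform, listener):
    """Wrap a dataflow operator so every record flowing into it is also
    handed to inspection logic, without altering the operator's output."""
    def tapped(records):
        for rec in records:
            listener(rec)  # listening logic observes the data path
            yield rec
    return lambda records: transform(tapped(records))

seen = []
square = wiretap(lambda rs: (r * r for r in rs), seen.append)
result = list(square([1, 2, 3]))  # result == [1, 4, 9], seen == [1, 2, 3]
```

The wrapped operator produces exactly the same output stream; the listener sees each record as it crosses the tapped data path, which is the transparency property the framework relies on.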
Citations: 0
DistStream: An Order-Aware Distributed Framework for Online-Offline Stream Clustering Algorithms
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00075
Lijie Xu, Xingtong Ye, Kai Kang, Tian Guo, Wensheng Dou, Wei Wang, Jun Wei
Stream clustering is an important data mining technique for capturing the evolving patterns in real-time data streams. Today’s data streams, e.g., IoT events and Web clicks, are usually high-speed and contain dynamically changing patterns. Existing stream clustering algorithms usually follow an online-offline paradigm with a one-record-at-a-time update model, which was designed for running on a single machine. With this sequential update model, these stream clustering algorithms cannot be efficiently parallelized and fail to deliver the high throughput required for stream clustering. In this paper, we present DistStream, a distributed framework that can effectively scale out online-offline stream clustering algorithms. To parallelize these algorithms for high throughput, we develop a mini-batch update model with efficient parallelization approaches. To maintain high clustering quality, DistStream’s mini-batch update model preserves the update order in all computation steps during parallel execution, which reflects the recent changes in dynamically changing streaming data. We implement DistStream atop Spark Streaming, along with four representative stream clustering algorithms based on it. Our evaluation on three real-world datasets shows that DistStream-based stream clustering algorithms can achieve sublinear throughput gain and comparable (99%) clustering quality relative to their single-machine counterparts.
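The order-preserving mini-batch idea can be sketched in a few lines. This is an assumption-laden toy (the real framework runs on Spark Streaming and the `prepare`/`merge` names are ours): per-record work is parallelized, but merges happen strictly in arrival order, matching the sequential one-record-at-a-time semantics.

```python
from concurrent.futures import ThreadPoolExecutor

def minibatch_update(state, batch, prepare, merge):
    """Compute per-record updates in parallel, then merge them into the
    clustering state strictly in arrival order, so the result matches
    what the sequential update model would have produced."""
    with ThreadPoolExecutor() as pool:
        # Executor.map returns results in input order even though the
        # prepare calls run concurrently.
        updates = list(pool.map(prepare, batch))
    for upd in updates:
        state = merge(state, upd)
    return state

# Toy example: state is the ordered list of processed records.
state = minibatch_update([], [3, 1, 2],
                         prepare=lambda r: r * 10,
                         merge=lambda s, u: s + [u])
# state == [30, 10, 20]: arrival order preserved despite parallel prepare
```

The key design point mirrored here is that parallelism is confined to the side-effect-free `prepare` phase, so the ordered `merge` phase keeps clustering quality close to the single-machine baseline.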
Citations: 0
SafetyNet: Interference Protection via Transparent PHY Layer Coding
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00034
Zhimeng Yin, Wenchao Jiang, Ruofeng Liu, S. Kim, T. He
Overcrowded wireless devices in unlicensed bands compete for spectrum access, generating excessive cross-technology interference (CTI), which has become a major source of performance degradation, especially for low-power IoT (e.g., ZigBee) networks. This paper presents SafetyNet, a new forward error correction (FEC) mechanism to alleviate CTI. Designed for ZigBee, SafetyNet is inspired by the observation that ZigBee is overly robust against environmental noise but insufficiently protected from high-power CTI. By effectively embedding correction code bits into the PHY layer, SafetyNet significantly enhances CTI robustness without compromising noise resilience. SafetyNet additionally offers a set of unique features, including (i) transparency, making it compatible with millions of already-deployed ZigBee devices, and (ii) zero additional cost in energy and spectrum, as it does not increase the frame length. These features not only differentiate SafetyNet from known FEC techniques (e.g., Hamming and Reed-Solomon) but also make it critically beneficial for today’s crowded wireless environment. Our extensive evaluation on physical testbeds shows that SafetyNet significantly improves ZigBee’s CTI robustness under a wide range of networking settings, correcting 55% of corrupted packets.
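To make the FEC principle concrete, here is a classic Hamming(7,4) encoder/decoder. This is explicitly not SafetyNet's scheme (the abstract distinguishes itself from Hamming); it only illustrates the general mechanism the paper builds on, namely that embedded parity bits let a receiver locate and correct a flipped bit.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword with 3 parity bits
    (positions 1, 2, 4 in classic Hamming numbering)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Recompute parity; a nonzero syndrome is the 1-based position of a
    single flipped bit, which is corrected before extracting the data."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        c[syndrome - 1] ^= 1  # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
# Any single bit of `codeword` can be flipped by interference and the
# decoder still recovers [1, 0, 1, 1].
```

SafetyNet's contribution is doing this kind of correction transparently inside the existing ZigBee PHY encoding, so frame length and deployed hardware are untouched.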
Citations: 2
On the Performance Impact of NUMA on One-sided RDMA Interactions
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00194
J. Nelson, R. Palmieri
One consequence of ultra-fast networks like InfiniBand is that the known implications of Non-Uniform Memory Access (NUMA) locality now constitute a higher percentage of execution time for distributed systems employing Remote Direct Memory Access (RDMA). Our findings quantify the role NUMA plays in RDMA operation performance and uncover unexpected behavior.
Citations: 2
Selective Deletion in a Blockchain
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00160
P. Hillmann, Marcus Knüpfer, Erik Heiland, A. Karcher
The constantly growing size of blockchains becomes a challenge as usage increases. In particular, the storage of unwanted data in a blockchain is an issue, because such data cannot be removed naturally. To counteract this problem, we present the first concept for the selective deletion of single entries in a blockchain. For this purpose, the general consensus algorithm is extended with the functionality of regularly creating summary blocks: previous data of the chain are summarized and stored again in a new block, leaving out unwanted information. With a shifting marker of the Genesis Block, data can be deleted from the beginning of a blockchain. In this way, blockchain technology becomes fully transactional. The concept is independent of a specific block structure, network structure, or consensus algorithm. Moreover, this functionality can be adapted to current blockchains to solve multiple problems related to scalability. This approach enables the transfer of blockchain technology to further fields of application, among others in the areas of Industry 4.0 and product life-cycle management.
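The summary-block mechanism can be sketched as follows. The field names and hashing are illustrative assumptions, not the paper's concrete block format: a prefix of the chain is folded into one new block whose entries exclude the unwanted data, and the chain restarts from that block.

```python
import hashlib
import json

def make_summary_block(blocks, unwanted):
    """Fold a prefix of the chain into one summary block that re-stores
    its entries minus the unwanted ones; the genesis marker can then be
    shifted forward to this block."""
    kept = [e for b in blocks for e in b["entries"] if e not in unwanted]
    body = {"entries": kept, "summarizes": len(blocks)}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

chain = [{"entries": ["tx1", "illegal"]}, {"entries": ["tx2"]}, {"entries": ["tx3"]}]
summary = make_summary_block(chain[:2], unwanted={"illegal"})
new_chain = [summary] + chain[2:]  # "illegal" is gone; tx1 and tx2 survive
```

Because the summary block carries its own hash, later blocks can anchor to it just as they would to any predecessor, which is what keeps the concept independent of a specific consensus algorithm.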
Citations: 8
LVQ: A Lightweight Verifiable Query Approach for Transaction History in Bitcoin
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00096
Xiaohai Dai, Jiang Xiao, Wenhui Yang, Chaofan Wang, Jian Chang, Rui Han, Hai Jin
In the Bitcoin system, the transaction history of an address can be useful in many scenarios, such as balance calculation and behavior analysis. However, it is non-trivial for a common user who runs a light node to fetch historical transactions, since a light node only stores block headers without any transaction details. It usually has to query a full node that stores the complete data. Validation of these query results is critical and mainly involves two aspects: correctness and completeness. The former can easily be implemented via a Merkle branch, while the latter is quite difficult in the Bitcoin protocol. To enable completeness validation, a strawman design is proposed, which simply includes a BF (Bloom filter) in each header. However, since a BF is on the order of kilobytes, light nodes in the strawman design suffer from an incremental storage burden. Worse, an entire block must be transmitted when the BF cannot serve a query, resulting in large network overhead. In this paper, we propose LVQ, the first lightweight verifiable query approach that reduces the storage requirement and network overhead at the same time. Specifically, by storing only the hash of the BF in headers, LVQ keeps the amount of data stored by light nodes small. Besides, LVQ introduces a novel BMT (BF-integrated Merkle Tree) structure for lightweight queries, which eliminates the communication cost of query results by merging multiple successive BFs. Furthermore, when the BF cannot serve a query, a lightweight proof by an SMT (Sorted Merkle Tree) is exploited to further reduce the network overhead. Our security analysis confirms LVQ’s ability to enable both correctness and completeness validation. In addition, the experimental results demonstrate that it is lightweight.
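The "store only the hash of the BF" idea can be sketched with a toy Bloom filter; the parameters (64 bits, 3 hash functions) and function names are invented for illustration and are far smaller than a realistic filter.

```python
import hashlib

M, K = 64, 3  # toy Bloom filter: 64 bits, 3 hash functions

def _positions(item):
    for i in range(K):
        h = int.from_bytes(hashlib.sha256(f"{i}:{item}".encode()).digest()[:8], "big")
        yield h % M

def bf_insert(bits, item):
    for pos in _positions(item):
        bits |= 1 << pos
    return bits

def bf_maybe_contains(bits, item):
    return all((bits >> pos) & 1 for pos in _positions(item))

def header_commitment(bits):
    """A light node keeps only this 32-byte hash instead of the full BF."""
    return hashlib.sha256(bits.to_bytes(M // 8, "big")).hexdigest()

bf = 0
for addr in ["addr1", "addr2"]:
    bf = bf_insert(bf, addr)
commitment = header_commitment(bf)
# On a query, the full node ships the whole BF; the light node re-hashes it
# and compares against the commitment in the header before trusting any
# membership answers derived from it.
```

This captures the storage trade-off: the header carries a constant-size digest, while the kilobyte-scale filter travels over the network only when a query actually needs it.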
Citations: 8
SafePay on Ethereum: A Framework For Detecting Unfair Payments in Smart Contracts
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00116
Yue Li, Han Liu, Zhiqiang Yang, Qian Ren, Lei Wang, Bangdao Chen
Smart contracts on the Ethereum blockchain are notoriously vulnerable to external attacks. Many of their issues have led to considerably large financial losses, as they resulted from broken payments of digital assets, e.g., cryptocurrency. Existing research has focused on specific patterns to find such problems, e.g., reentrancy bugs and nondeterministic recipients, yet may raise false alarms or miss important issues. To mitigate these limitations, we designed the SafePay analysis framework to find unfair payments in Ethereum smart contracts. Compared to existing analyzers, SafePay detects potential blockchain transactions with feasible exploits and thus effectively avoids false reports. Specifically, the detection is driven by a systematic search for violations of fair value exchange (FVE), a new security invariant introduced in SafePay indicating that each party “fairly” pays the others. Our preliminary evaluation validated the efficacy of SafePay by reporting previously unknown issues and decreasing the number of false alarms.
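As a loose illustration of the FVE invariant (our simplification, not SafePay's actual symbolic analysis), one can flag any party whose outgoing value in a transaction trace strictly exceeds its incoming value:

```python
def fve_violators(transfers):
    """Given (sender, receiver, amount) transfers observed in one
    transaction, return the parties that end up paying out strictly more
    than they receive, i.e. for whom the exchange is not value-neutral."""
    balance = {}
    for sender, receiver, amount in transfers:
        balance[sender] = balance.get(sender, 0) - amount
        balance[receiver] = balance.get(receiver, 0) + amount
    return {party for party, net in balance.items() if net < 0}

# One-sided payment: alice pays and receives nothing back.
assert fve_violators([("alice", "bob", 10)]) == {"alice"}
# Matched exchange: value flows both ways, nobody is short-changed.
assert fve_violators([("alice", "bob", 10), ("bob", "alice", 10)]) == set()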
Citations: 3
Kill Two Birds with One Stone: Auto-tuning RocksDB for High Bandwidth and Low Latency
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00113
Yichen Jia, Feng Chen
Log-Structured Merge (LSM) tree based key-value stores are widely deployed in data centers. Due to their complex internal structures, appropriately configuring a modern key-value data store system, which can have more than 50 parameters under various hardware and system settings, is a highly challenging task. Currently, the industry still relies heavily on a traditional, experience-based, hand-tuning approach, and many deployments simply adopt the default settings out of the box with no changes. Auto-tuning, as a self-adaptive solution, is thus highly appealing for achieving optimal or near-optimal performance in real-world deployments. In this paper, we quantitatively study and compare five optimization methods for auto-tuning the performance of LSM-tree based key-value stores. To evaluate the auto-tuning processes, we conducted an exhaustive set of experiments on RocksDB, a representative LSM-tree data store. We collected over 12,000 experimental records in 6 months, covering about 2,000 software configurations of 6 parameters on different hardware setups. We compared five representative algorithms in terms of throughput, the 99th-percentile tail latency, convergence time, real-time system throughput, and the iteration process. We find that multi-objective optimization (MOO) methods achieve a good balance among multiple targets, which satisfies the unique needs of key-value services: the more specific the Quality of Service (QoS) requirements users can provide, the better the performance these algorithms can achieve. We also find that the number of concurrent threads and the write buffer size are the two most impactful parameters determining the throughput and the 99th-percentile tail latency across different hardware and workloads. Finally, we provide system-level explanations for the auto-tuning results and discuss the associated implications for system designers and practitioners.
We hope this work will pave the way towards a practical, high-speed auto-tuning solution for key-value data store systems.
{"title":"Kill Two Birds with One Stone: Auto-tuning RocksDB for High Bandwidth and Low Latency","authors":"Yichen Jia, Feng Chen","doi":"10.1109/ICDCS47774.2020.00113","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00113","url":null,"abstract":"Log-Structured Merge (LSM) tree based key-value stores are widely deployed in data centers. Due to its complex internal structures, appropriately configuring a modern key-value data store system, which can have more than 50 parameters with various hardware and system settings, is a highly challenging task. Currently, the industry still heavily relies on a traditional, experience-based, hand-tuning approach for performance tuning. Many simply adopt the default setting out of the box with no changes. Auto-tuning, as a self-adaptive solution, is thus highly appealing for achieving optimal or near-optimal performance in real-world deployment.In this paper, we quantitatively study and compare five optimization methods for auto-tuning the performance of LSM-tree based key-value stores. In order to evaluate the auto-tuning processes, we have conducted an exhaustive set of experiments over RocksDB, a representative LSM-tree data store. We have collected over 12,000 experimental records in 6 months, with about 2,000 software configurations of 6 parameters on different hardware setups. We have compared five representative algorithms, in terms of throughput, the 99th percentile tail latency, convergence time, real-time system throughput, and the iteration process, etc. We find that multi-objective optimization (MOO) methods can achieve a good balance among multiple targets, which satisfies the unique needs of key-value services. The more specific Quality of Service (QoS) requirements users can provide, the better performance these algorithms can achieve. 
We also find that the number of concurrent threads and the write buffer size are the two most impactful parameters determining the throughput and the 99th percentile tail latency across different hardware and workloads. Finally, we provide system-level explanations for the auto-tuning results and also discuss the associated implications for system designers and practitioners. We hope this work will pave the way towards a practical, high-speed auto-tuning solution for key-value data store systems.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"24 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113941596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 6
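The multi-objective search over RocksDB knobs described in the abstract above can be sketched in miniature. The `benchmark` model below is purely illustrative, a stand-in for running a real workload (e.g. with `db_bench`) at each configuration; it is not based on the paper's measurements. Only the two knobs the study identifies as most impactful, the number of concurrent threads and the write buffer size, are tuned, and a Pareto front balances throughput against 99th-percentile tail latency:

```python
import random

# Hypothetical stand-in for running a real RocksDB benchmark at a given
# configuration -- NOT real measurements. The two knobs are the ones the
# study found most impactful: concurrent threads and write buffer size.
def benchmark(threads, write_buffer_mb):
    # Throughput rises with threads but saturates under contention; a larger
    # write buffer helps throughput yet inflates the 99th-percentile latency.
    throughput = 1000.0 * threads / (1 + 0.05 * threads ** 2) + 2.0 * write_buffer_mb
    p99_latency = 5.0 + 0.4 * threads + 0.02 * write_buffer_mb
    return throughput, p99_latency

def dominates(a, b):
    # a dominates b: at least as good on both objectives (higher throughput,
    # lower tail latency) and strictly better on at least one.
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_front(configs):
    # Keep every configuration not dominated by any other sampled one.
    metrics = {c: benchmark(*c) for c in configs}
    return [c for c in configs
            if not any(dominates(metrics[o], metrics[c]) for o in configs)]

random.seed(0)
# Random search over (threads, write_buffer_mb); a real tuner would use a
# smarter sampler (e.g. Bayesian optimization), but the Pareto logic is the same.
samples = list({(random.randint(1, 32), random.choice([16, 64, 128, 256]))
                for _ in range(50)})
front = pareto_front(samples)
print(sorted(front))
```

A user with concrete QoS targets would then pick from `front` the configuration whose tail latency stays under the required bound, which mirrors the paper's observation that more specific QoS requirements let the tuner do better.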
Flow control in SDN-Edge-Cloud cooperation system with machine learning 基于机器学习的SDN-Edge-Cloud协同系统流量控制
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00169
R. Shinkuma, Yoshinobu Yamada, Takehiro Sato, E. Oki
Real-time prediction of communications (or road) traffic by using cloud computing and sensor data collected by Internet-of-Things (IoT) devices would be a very useful application of big-data analytics. However, upstream data flow from IoT devices to the cloud server could be problematic, even in fifth generation (5G) networks, because networks have mainly been designed for downstream data flows such as video delivery. This paper proposes a framework in which a software defined network (SDN), an edge server, and a cloud server cooperate to control the upstream flow so as to maintain the accuracy of real-time predictions under the condition of limited network bandwidth. The framework consists of a system model, machine-learning methods for prediction and for determining the importance of data, and a mathematical optimization. Our key idea is that the SDN controller optimizes data flows in the SDN on the basis of feature importance scores, which indicate the importance of the data in terms of prediction accuracy. The feature importance scores are extracted from the prediction model by a machine-learning feature selection method that has traditionally been used to suppress the effects of noise or irrelevant input variables. Our framework is examined in a simulation study using a real dataset consisting of mobile traffic logs. The results validate the framework; it maintains prediction accuracy under the constraint of limited available network bandwidth. Potential applications are also discussed.
{"title":"Flow control in SDN-Edge-Cloud cooperation system with machine learning","authors":"R. Shinkuma, Yoshinobu Yamada, Takehiro Sato, E. Oki","doi":"10.1109/ICDCS47774.2020.00169","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00169","url":null,"abstract":"Real-time prediction of communications (or road) traffic by using cloud computing and sensor data collected by Internet-of-Things (IoT) devices would be very useful application of big-data analytics. However, upstream data flow from IoT devices to the cloud server could be problematic, even in fifth generation (5G) networks, because networks have mainly been designed for downstream data flows like for video delivery. This paper proposes a framework in which a software defined network (SDN), edge server, and cloud server cooperate with each other to control the upstream flow to maintain the accuracy of the real-time predictions under the condition of a limited network bandwidth. The framework consists of a system model, methods of prediction and determining the importance of data using machine learning, and a mathematical optimization. Our key idea is that the SDN controller optimizes data flows in the SDN on the basis of feature importance scores, which indicate the importance of the data in terms of the prediction accuracy. The feature importance scores are extracted from the prediction model by a machine-learning feature selection method that has traditionally been used to suppress effects of noise or irrelevant input variables. Our framework is examined in a simulation study using a real dataset consisting of mobile traffic logs. The results validate the framework; it maintains prediction accuracy under the constraint of limited available network bandwidth. 
Potential applications are also discussed.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132076851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 3
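The importance-driven upstream flow control described in the abstract above can be illustrated with a minimal sketch. The per-flow importance scores and bandwidth demands below are hypothetical (in the paper's framework they would come from a feature-selection step on the cloud's prediction model); the SDN controller admits flows greedily in descending importance under a link-capacity constraint:

```python
# Hypothetical IoT data flows: feature-importance score (how much each
# source contributes to prediction accuracy) and upstream demand in Mbit/s.
# Names and numbers are illustrative, not from the paper.
flows = {
    "camera_17":  {"importance": 0.42, "demand": 8.0},
    "loop_det_3": {"importance": 0.31, "demand": 2.0},
    "gps_probe":  {"importance": 0.18, "demand": 1.5},
    "weather_1":  {"importance": 0.06, "demand": 0.5},
    "aux_log":    {"importance": 0.03, "demand": 4.0},
}

def admit_flows(flows, capacity_mbps):
    """Greedily admit flows in descending importance until the upstream
    link capacity is exhausted; the SDN controller would defer or drop
    the rest so the most prediction-relevant data reaches the cloud."""
    admitted, used = [], 0.0
    for name, f in sorted(flows.items(),
                          key=lambda kv: kv[1]["importance"], reverse=True):
        if used + f["demand"] <= capacity_mbps:
            admitted.append(name)
            used += f["demand"]
    return admitted, used

admitted, used = admit_flows(flows, capacity_mbps=12.0)
print(admitted, used)
```

Greedy admission is only one policy; the paper formulates the allocation as a mathematical optimization, but the sketch shows the core idea that importance scores, not arrival order, decide which upstream data survives a bandwidth cut.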
Journal
2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)