
2012 IEEE 32nd International Conference on Distributed Computing Systems: Latest Publications

A Kautz-based Real-Time and Energy-Efficient Wireless Sensor and Actuator Network
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.43
Ze Li, Haiying Shen
Wireless Sensor and Actuator Networks (WSANs) are composed of sensors and actuators that perform distributed sensing and actuating tasks. Most WSAN applications (e.g., fire detection) demand that actuators rapidly respond to events under observation. Therefore, real-time and fault-tolerant transmission is a critical requirement in WSANs, so that sensed data reaches actuators reliably and quickly. Due to limited power resources, energy efficiency is another crucial requirement. These requirements become formidably challenging in large-scale WSANs, yet existing WSANs fall short of meeting them. To this end, we first theoretically study the Kautz graph for its applicability in WSANs. We then propose a Kautz-based Real-time, Fault-tolerant and Energy-efficient WSAN (REFER). REFER has a protocol that embeds Kautz graphs into the physical topology of a WSAN for real-time communication and connects the graphs using a Distributed Hash Table (DHT) for high scalability. We also theoretically study routing paths in the Kautz graph, based on which we develop an efficient fault-tolerant routing protocol. Upon a routing failure, it enables a relay node to quickly and efficiently identify the next shortest path from itself to the destination based only on node IDs. REFER is advantageous over previous Kautz-graph-based works in that it does not need an energy-consuming protocol to find the next shortest path, and it can maintain consistency between the overlay and the physical topology. Experimental results demonstrate the superior performance of REFER in comparison with existing systems in terms of real-time communication, energy efficiency, fault tolerance and scalability.
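The property the abstract relies on, that a next hop in a Kautz graph can be computed from node IDs alone, can be illustrated with a minimal sketch. This is generic Kautz-graph routing (longest suffix-prefix overlap), not REFER's actual protocol; the function names are illustrative:

```python
from itertools import product

def kautz_nodes(d, k):
    """All node IDs of the Kautz graph K(d, k): strings of length k+1
    over an alphabet of d+1 symbols with no two adjacent symbols equal."""
    return [s for s in product(range(d + 1), repeat=k + 1)
            if all(a != b for a, b in zip(s, s[1:]))]

def kautz_route(src, dst):
    """Shortest path from src to dst derived purely from the IDs:
    find the longest suffix of src that is a prefix of dst, then
    shift in the remaining symbols of dst one hop at a time."""
    L = len(src)
    overlap = 0
    for n in range(L, 0, -1):
        if src[L - n:] == dst[:n]:
            overlap = n
            break
    path, cur = [src], src
    for sym in dst[overlap:]:
        cur = cur[1:] + (sym,)   # each hop is a valid Kautz edge
        path.append(cur)
    return path
```

Because every hop is determined by simple string shifting, a relay can recompute an alternative route locally after a failure, which is the intuition behind routing "only based on node IDs".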
Pages: 62-71
Citations: 6
The Power of Lights: Synchronizing Asynchronous Robots Using Visible Bits
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.71
S. Das, P. Flocchini, G. Prencipe, N. Santoro, M. Yamashita
In this paper we study the power of using lights, i.e., visible external memory, for distributed computation by autonomous robots moving in Look-Compute-Move (LCM) cycles. With respect to the LCM cycles, the most common models studied in the literature are the fully-synchronous (FSYNC), the semi-synchronous (SSYNC), and the asynchronous (ASYNC). In this paper we introduce, into the ASYNC model (the weakest of the three), the availability of visible external memory: each robot is equipped with a light bulb that is visible to all other robots and that can display a constant number of different colors; the colors are persistent, that is, they are not automatically reset at the end of each cycle. We first study the relationship between ASYNC with visible bits and SSYNC. We prove that asynchronous robots, when equipped with a constant number of colors, are strictly more powerful than traditional semi-synchronous robots. We also show that, when enhanced with visible lights, the difference between asynchrony and semi-synchrony disappears; this result must be contrasted with the strict dominance ASYNC < SSYNC between the models without lights. We then study the relationship between ASYNC with visible bits and FSYNC. We prove that asynchronous robots with a constant number of visible bits, if they can remember a single snapshot, are strictly more powerful than fully-synchronous robots. This is to be contrasted with the fact that, without lights, ASYNC robots are not even as powerful as SSYNC robots, even if they remember an unlimited number of previous snapshots. These results demonstrate the power of using visible external memory for distributed computation with autonomous robots. In particular, asynchrony can be overcome with the power of lights.
Pages: 506-515
Citations: 47
Scaling Down Off-the-Shelf Data Compression: Backwards-Compatible Fine-Grain Mixing
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.21
Michael Gray, P. Peterson, P. Reiher
Pu and Singaravelu presented Fine-Grain Mixing, an adaptive compression system that aims to maximize CPU and network utilization simultaneously by splitting a network stream into a mixture of compressed and uncompressed blocks. Blocks are compressed opportunistically in a send buffer: as many blocks are compressed as possible without compression becoming a bottleneck. The system successfully utilized all available CPU and network bandwidth even on high-speed connections, and achieved much greater throughput than previous adaptive compression systems. Here, we take a different view of FG-Mixing than Pu and Singaravelu did and give another explanation for its high performance: fine-grain mixing of compressed and uncompressed blocks enables off-the-shelf compressors to scale down their degree of compression linearly with decreasing CPU usage. Exploring this scaling behavior in depth allows us to make a variety of improvements to fine-grain mixed compression: better compression ratios for a given level of CPU consumption, a wider range of data-reduction and CPU-cost options, and parallelized compression to take advantage of multi-core CPUs. We make full compatibility with the ubiquitous deflate decompressor (as used directly in many network protocols, or as the back-end of the gzip and Zip formats) a primary goal, rather than using a special, incompatible protocol as in the original implementation of FG-Mixing. Moreover, we show that the benefits of fine-grain mixing are retained by our compatible version.
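The mixing idea itself is easy to sketch. The toy below uses its own per-block framing (a flag byte plus length prefix), not the deflate-compatible stream the paper targets; the block size and budget parameter are illustrative assumptions:

```python
import zlib

BLOCK = 4096  # illustrative block size

def mix_compress(data, budget_ratio=0.5):
    """Split data into blocks and compress only a fraction of them
    (the CPU 'budget'); emit a 1-byte flag per block: 1 = deflated,
    0 = stored raw. A stand-in for the adaptive send-buffer decision."""
    out = bytearray()
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    quota = int(len(blocks) * budget_ratio)
    for i, b in enumerate(blocks):
        if i < quota:
            payload = zlib.compress(b)
            out += b"\x01" + len(payload).to_bytes(4, "big") + payload
        else:
            out += b"\x00" + len(b).to_bytes(4, "big") + b
    return bytes(out)

def mix_decompress(stream):
    """Inverse of mix_compress: branch on the flag byte per block."""
    out, i = bytearray(), 0
    while i < len(stream):
        flag = stream[i]
        n = int.from_bytes(stream[i + 1:i + 5], "big")
        chunk = stream[i + 5:i + 5 + n]
        out += zlib.decompress(chunk) if flag else chunk
        i += 5 + n
    return bytes(out)
```

Sweeping `budget_ratio` from 0 to 1 is what lets the degree of compression scale roughly linearly with CPU spent, which is the scaling behavior the paper exploits.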
Pages: 112-121
Citations: 8
Optimal Distributed Data Collection for Asynchronous Cognitive Radio Networks
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.29
Zhipeng Cai, S. Ji, Jing He, A. Bourgeois
As a promising communication paradigm, Cognitive Radio Networks (CRNs) have paved a road for Secondary Users (SUs) to opportunistically exploit unused licensed spectrum without causing unacceptable interference to Primary Users (PUs). In this paper, we study the distributed data collection problem for asynchronous CRNs, which has not been addressed before. First, we study the Proper Carrier-sensing Range (PCR) for SUs. By working within this PCR, an SU can successfully conduct data transmission without disturbing the activities of PUs and other SUs. Subsequently, based on the PCR, we propose an Asynchronous Distributed Data Collection (ADDC) algorithm with fairness consideration for CRNs. ADDC collects the data of a snapshot to the base station in a distributed manner without any time-synchronization requirement; the algorithm is scalable and more practical than centralized and synchronized algorithms. Through comprehensive theoretical analysis, we show that ADDC is order-optimal in terms of delay and capacity, as long as an SU has a positive probability of accessing the spectrum. Finally, extensive simulation results indicate that ADDC can effectively complete a data collection task and significantly reduce data collection delay.
Pages: 245-254
Citations: 67
MOVE: A Large Scale Keyword-Based Content Filtering and Dissemination System
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.32
Weixiong Rao, Lei Chen, P. Hui, S. Tarkoma
The Web 2.0 era is characterized by the emergence of a very large amount of live content. A real-time and fine-grained content filtering approach can precisely keep users up to date with the information they are interested in. The key to this approach is a scalable match algorithm. One might treat content matching as a special kind of content search and resort to the classic algorithm of [5]. However, due to its blind flooding, [5] cannot simply be adapted for scalable content matching. To increase the throughput of scalable matching, we propose an adaptive approach to allocate (i.e., replicate and partition) filters. The allocation is based on our observation of real datasets: most users prefer short queries, consisting of around 2-3 terms per query, while web content typically contains tens or even thousands of terms per article. Thus, by reducing the number of processed documents, we can reduce the latency of matching large articles against filters and have the chance to achieve higher throughput. We implement our approach on an open-source project, Apache Cassandra. Experiments with real datasets show that our approach can achieve severalfold better throughput than two state-of-the-art counterpart solutions.
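The core matching step, evaluating many short keyword filters against a long document, is commonly built on an inverted index from terms to filter ids. A minimal sketch of that generic technique (not MOVE's allocation scheme; names are illustrative):

```python
from collections import defaultdict

class FilterIndex:
    """Minimal conjunctive keyword matcher: each filter is a small set
    of terms, and a document matches a filter when it contains all of
    them. The inverted index (term -> filter ids) ensures we touch
    only filters sharing at least one term with the document."""
    def __init__(self):
        self.filters = {}
        self.index = defaultdict(set)

    def subscribe(self, fid, terms):
        self.filters[fid] = frozenset(terms)
        for t in terms:
            self.index[t].add(fid)

    def match(self, doc_terms):
        doc = set(doc_terms)
        hits = defaultdict(int)          # filter id -> matched terms
        for t in doc:
            for fid in self.index.get(t, ()):
                hits[fid] += 1
        return {fid for fid, c in hits.items()
                if c == len(self.filters[fid])}
```

Because filters are short (2-3 terms) while documents are long, the per-document work is dominated by index lookups, which is why replicating and partitioning the filters, rather than the documents, pays off.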
Pages: 445-454
Citations: 19
Attributed-Based Access Control for Multi-authority Systems in Cloud Storage
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.42
Kan Yang, X. Jia
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is regarded as one of the most suitable technologies for data access control in cloud storage. Almost all existing CP-ABE schemes assume that there is only one authority in the system responsible for issuing attributes to the users. However, in many applications, multiple authorities co-exist in a system and each authority is able to issue attributes independently. In this paper, we design an access control framework for multi-authority systems and propose an efficient and secure multi-authority access control scheme for cloud storage. We first design an efficient multi-authority CP-ABE scheme that does not require a global authority and can support any LSSS access structure. Then, we prove its security in the random oracle model. We also propose a new technique to solve the attribute revocation problem in multi-authority CP-ABE systems. The analysis and simulation results show that our multi-authority access control scheme is scalable and efficient.
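The access structures mentioned here are monotone policies over attributes. Setting the cryptography aside, the policy-satisfaction check that decryption hinges on can be sketched as a threshold tree evaluation (LSSS access structures can express exactly such monotone threshold policies); the representation below is an illustrative assumption, not the paper's encoding:

```python
def satisfies(policy, attrs):
    """Check whether an attribute set satisfies a threshold policy
    tree. A policy is either an attribute string, or a tuple
    (k, [children]) meaning 'at least k children satisfied':
    AND over n children is (n, [...]), OR is (1, [...])."""
    if isinstance(policy, str):
        return policy in attrs
    k, children = policy
    return sum(satisfies(c, attrs) for c in children) >= k
```

For example, the policy "doctor AND (hospitalA OR hospitalB)" is `(2, ["doctor", (1, ["hospitalA", "hospitalB"])])`; in a multi-authority setting, different authorities would independently issue the attributes appearing as leaves.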
Pages: 536-545
Citations: 161
Joint Optimization of Computing and Cooling Energy: Analytic Model and a Machine Room Case Study
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.64
Shen Li, H. Le, N. Pham, Jin Heo, T. Abdelzaher
Total energy minimization in data centers (including both computing and cooling energy) requires modeling the interactions between computing decisions (such as load distribution) and heat transfer in the room, since load acts as a set of heat sources whose distribution in space affects cooling energy. This paper presents the first closed-form analytic optimal solution for load distribution in a machine rack that minimizes the sum of computing and cooling energy. We show that by considering actuation knobs on both the computing and cooling sides, it is possible to reduce energy cost compared to state-of-the-art solutions that do not offer holistic energy optimization. The above can be achieved while meeting both throughput requirements and maximum CPU temperature constraints. Using a thorough evaluation on a real test bed of 20 machines, we demonstrate that our simple model adequately captures the thermal behavior and energy consumption of the system. We further show that our approach saves more energy than the state of the art in the field.
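The coupling between load placement and cooling cost can be illustrated with a toy model, assuming linear per-unit compute power and a per-machine cooling factor: a unit of load on machine i then costs `watts_i * (1 + cooling_i)`, and a greedy fill minimizes the total. This is a back-of-envelope stand-in for the paper's closed-form rack-level solution, and the parameters are illustrative:

```python
def distribute_load(total, machines):
    """Greedy joint compute+cooling placement. Each machine is a tuple
    (capacity, watts_per_unit_load, cooling_factor); fill machines in
    increasing order of effective cost per unit of load."""
    order = sorted(range(len(machines)),
                   key=lambda i: machines[i][1] * (1 + machines[i][2]))
    alloc = [0.0] * len(machines)
    left = total
    for i in order:
        take = min(machines[i][0], left)  # respect capacity
        alloc[i] = take
        left -= take
        if left <= 0:
            break
    return alloc
```

In this simplification, a well-cooled but power-hungry machine can still lose to an efficient machine in a hot spot, which is why computing and cooling decisions must be optimized jointly rather than separately.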
Pages: 396-405
Citations: 40
Combining Partial Redundancy and Checkpointing for HPC
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.56
James Elliott, Kishor Kharbas, David Fiala, F. Mueller, Kurt B. Ferreira, C. Engelmann
Today's largest High Performance Computing (HPC) systems exceed one petaflop (10^15 floating point operations per second), and exascale systems are projected within seven years. But reliability is becoming one of the major challenges faced by exascale computing. With billion-core parallelism, the mean time to failure is projected to be in the range of minutes or hours instead of days; failures are becoming the norm rather than the exception during execution of HPC applications. Current fault tolerance techniques in HPC focus on reactive ways to mitigate faults, namely via checkpoint and restart (C/R). Apart from storage overheads, C/R-based fault recovery comes at an additional cost in application performance because normal execution is disrupted when checkpoints are taken. Studies have shown that applications running at large scale spend more than 50% of their total time saving checkpoints, restarting and redoing lost work. Redundancy is another fault tolerance technique, which employs redundant processes performing the same task: if a process fails, a replica of it can take over its execution. Thus, redundant copies can decrease the overall failure rate. The downside of redundancy is that extra resources are required and there is additional overhead on communication and synchronization. This work contributes a model and analyzes the benefit of C/R in coordination with redundancy at different degrees to minimize the total wallclock time and resource utilization of HPC applications. We further conduct experiments with an implementation of redundancy within the MPI layer on a cluster. Our experimental results confirm the benefit of dual and triple redundancy, but not of partial redundancy, and show a close fit to the model. At ≈80,000 processes, dual redundancy requires twice the number of processing resources for an application but allows two jobs of 128 hours wallclock time to finish within the time of just one job without redundancy. For narrow ranges of processor counts, partial redundancy results in the lowest time. Once the count exceeds ≈770,000, triple redundancy has the lowest overall cost. Thus, redundancy allows one to trade off additional resource requirements against wallclock time, which provides a tuning knob for users to adapt to resource availabilities.
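The interplay the abstract describes can be sketched with standard back-of-envelope formulas, not the paper's own model: Young's first-order approximation for the optimal checkpoint interval, plus a crude estimate of how r-way replication stretches system MTBF (an r-replica group fails only when all replicas have failed; for exponential lifetimes the expected time for that is the node MTBF times the r-th harmonic number):

```python
import math

def young_interval(ckpt_cost, mtbf):
    """Young's approximation of the optimal checkpoint interval:
    tau = sqrt(2 * checkpoint_cost * system_MTBF)."""
    return math.sqrt(2 * ckpt_cost * mtbf)

def system_mtbf(node_mtbf, nodes, redundancy=1):
    """Rough MTBF scaling for comparison only. Without redundancy,
    system MTBF shrinks as 1/nodes; with r-way replication, each of
    the nodes/r virtual nodes survives until all r replicas fail,
    i.e. node_mtbf * H_r in expectation (a simplification, not the
    paper's full model)."""
    virtual = nodes / redundancy
    eff = node_mtbf * sum(1 / k for k in range(1, redundancy + 1))
    return eff / virtual
```

Plugging larger redundancy degrees into `system_mtbf` and feeding the result to `young_interval` shows the qualitative trade-off: more replicas cost more resources per job but lengthen the effective MTBF, allowing rarer checkpoints and less redone work.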
Combining Partial Redundancy and Checkpointing for HPC
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.56
James Elliott, Kishor Kharbas, David Fiala, F. Mueller, Kurt B. Ferreira, C. Engelmann
Pages: 615-626
Citations: 142
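The C/R overheads this abstract cites can be illustrated with Young's classic first-order approximation for the optimal checkpoint interval. This is a back-of-envelope sketch under assumed per-node MTBF and checkpoint-cost numbers, not the authors' model:

```python
import math

def young_interval(ckpt_cost_s, mtbf_s):
    """Young's first-order approximation of the optimal checkpoint
    interval: tau = sqrt(2 * checkpoint_cost * MTBF)."""
    return math.sqrt(2.0 * ckpt_cost_s * mtbf_s)

def cr_overhead(ckpt_cost_s, mtbf_s):
    """Fraction of wallclock lost to writing checkpoints plus expected
    rework after failures, at the Young-optimal interval."""
    tau = young_interval(ckpt_cost_s, mtbf_s)
    return ckpt_cost_s / tau + tau / (2.0 * mtbf_s)

# Assumed numbers: 5-year per-node MTBF, 10-minute checkpoint cost;
# system MTBF shrinks linearly with node count.
node_mtbf_s = 5 * 365 * 24 * 3600
for n in (1_000, 10_000, 100_000):
    print(n, round(cr_overhead(600.0, node_mtbf_s / n), 3))
# The lost fraction grows from under 10% at 1,000 nodes toward the
# "more than 50%" regime the abstract cites as systems scale out.
```

Redundancy attacks the same problem from the other side: it stretches the system MTBF so that the checkpoint interval can grow, at the price of extra processing resources.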
Total Order in Content-Based Publish/Subscribe Systems
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.17
Kaiwen Zhang, Vinod Muthusamy, H. Jacobsen
Total ordering is a messaging guarantee increasingly required of content-based pub/sub systems, which are traditionally focused on performance. The main challenge is the uniform ordering of streams of publications from multiple publishers within an overlay broker network to be delivered to multiple subscribers. Our solution integrates total ordering into the pub/sub logic instead of offloading it to an external service. We show that our solution is fully distributed and relies only on local broker knowledge and overlay links. We can identify and isolate the specific publications and subscribers for which synchronization is required: the overhead is therefore limited to the affected subscribers. Our solution remains safe in the presence of failures, under which we show that total order is impossible to maintain. Our experiments demonstrate that our solution scales with the number of subscriptions and has limited overhead in the non-conflicting cases. A holistic comparison with group communication systems is offered to evaluate their relative scalability.
Pages: 335-344
Citations: 40
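For contrast with the paper's fully distributed approach, the classic baseline it avoids, a centralized sequencer that stamps every publication with a global sequence number while each subscriber buffers early arrivals and delivers strictly in stamp order, can be sketched as follows (an illustrative toy, not the paper's algorithm):

```python
import heapq

class Sequencer:
    """Centralized sequencer: stamps every publication with a global
    sequence number, giving a trivial total order."""
    def __init__(self):
        self._next = 0

    def stamp(self, publication):
        seq = self._next
        self._next += 1
        return (seq, publication)

class Subscriber:
    """Delivers publications strictly in sequence order, buffering
    anything that arrives early (e.g. via a shorter overlay path)."""
    def __init__(self):
        self._expected = 0
        self._pending = []   # min-heap keyed by sequence number
        self.delivered = []

    def receive(self, stamped):
        heapq.heappush(self._pending, stamped)
        while self._pending and self._pending[0][0] == self._expected:
            _, pub = heapq.heappop(self._pending)
            self.delivered.append(pub)
            self._expected += 1

seq = Sequencer()
msgs = [seq.stamp(p) for p in ("a", "b", "c", "d")]
sub = Subscriber()
for m in (msgs[2], msgs[0], msgs[1], msgs[3]):   # arrive out of order
    sub.receive(m)
print(sub.delivered)   # ['a', 'b', 'c', 'd']
```

The sequencer is a scalability and availability bottleneck, which is exactly why the paper pushes ordering into the brokers themselves and synchronizes only the conflicting publications and subscribers.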
Provably-Efficient Job Scheduling for Energy and Fairness in Geographically Distributed Data Centers
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.77
Shaolei Ren, Yuxiong He, Fei Xu
Decreasing the soaring energy cost is imperative in large data centers. Meanwhile, limited computational resources need to be fairly allocated among different organizations. Latency is another major concern for resource management. Nevertheless, energy cost, resource allocation fairness, and latency are important but often contradicting metrics on scheduling data center workloads. In this paper, we explore the benefit of electricity price variations across time and locations. We study the problem of scheduling batch jobs, which originate from multiple organizations/users and are scheduled to multiple geographically-distributed data centers. We propose a provably-efficient online scheduling algorithm -- GreFar -- which optimizes the energy cost and fairness among different organizations subject to queueing delay constraints. GreFar does not require any statistical information of workload arrivals or electricity prices. We prove that it can minimize the cost (in terms of an affine combination of energy cost and weighted fairness) arbitrarily close to that of the optimal offline algorithm with future information. Moreover, by appropriately setting the control parameters, GreFar achieves a desirable tradeoff among energy cost, fairness and latency.
Pages: 22-31
Citations: 125
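The flavor of such price- and backlog-aware online dispatch can be sketched with a drift-plus-penalty-style rule in which a control knob weighs electricity cost against queue backlog. This is a simplified illustration under made-up prices; the `dispatch` function, the knob `v`, and all numbers are hypothetical, not the GreFar algorithm itself:

```python
def dispatch(job_energy_kwh, prices, queues, v=1.0):
    """Pick the data center minimizing a weighted sum of electricity
    cost and current queue backlog; larger v favors energy savings,
    smaller v favors low queueing delay."""
    best = min(range(len(prices)),
               key=lambda d: v * prices[d] * job_energy_kwh + queues[d])
    queues[best] += 1
    return best

prices = [0.12, 0.07, 0.10]   # $/kWh at three sites (illustrative)
queues = [0, 0, 0]
choices = [dispatch(50.0, prices, queues, v=0.5) for _ in range(5)]
print(choices, queues)
# The cheap site (index 1) is favored until its backlog builds up,
# after which jobs spill over to the pricier sites.
```

Tuning `v` mirrors the tradeoff the abstract describes: as the weight on price grows, jobs chase cheap electricity at the cost of longer queues, and vice versa.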
2012 IEEE 32nd International Conference on Distributed Computing Systems