
2010 IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems: Latest Publications

Distributed Routing for Vehicular Ad Hoc Networks: Throughput-Delay Tradeoff
A. Abedi, Majid Ghaderi, C. Williamson
In this paper, we address the problem of low-latency routing in a vehicular highway network. To cover long highways while minimizing the number of required roadside access points, we utilize vehicle-to-vehicle communication to propagate data in the network. Vehicular networks are highly dynamic, and hence routing algorithms that require global network state information or centralized coordination are not suitable for such networks. Instead, we develop a novel distributed routing algorithm that requires minimal coordination among vehicles, while achieving a highly efficient throughput-delay tradeoff. Specifically, we show that the proposed algorithm achieves a throughput that is within a factor of 1/e of the throughput of an algorithm that centrally coordinates vehicle transmissions in a highly dense network, and yet its end-to-end delay is approximately half of that of a widely studied ALOHA-based randomized routing algorithm. We evaluate our algorithm analytically and through simulations and compare its throughput-delay performance against the ALOHA-based randomized routing.
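The 1/e factor is the classic gap between uncoordinated random access and centralized scheduling. As a quick numerical illustration (not taken from the paper), the optimal success probability of slotted ALOHA tends to 1/e as the number of contending nodes grows:

```python
import math

# Illustrative check, not from the paper: in slotted ALOHA with n contending
# nodes, each transmitting in a slot with probability p, a slot succeeds
# when exactly one node transmits.
def slot_success(n: int, p: float) -> float:
    """Probability that exactly one of n nodes transmits in a slot."""
    return n * p * (1 - p) ** (n - 1)

# The optimum is p = 1/n, and the resulting per-slot throughput tends to 1/e
# in a dense network: the same 1/e factor that separates uncoordinated random
# access from centrally coordinated transmission scheduling.
dense = slot_success(1000, 1 / 1000)
print(dense, 1 / math.e)  # both approximately 0.368
```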
DOI: 10.1109/MASCOTS.2010.14
Citations: 11
Frequency Based Chunking for Data De-Duplication
Guanlin Lu, Yu Jin, D. Du
A predominant portion of Internet services, such as content delivery networks, news broadcasting, blog sharing, and social networks, is data centric. A significant amount of new data is generated by these services each day. Efficiently storing and maintaining backups for such data is a challenging task for current data storage systems. Chunking-based deduplication (dedup) methods are widely used to eliminate redundant data and hence reduce the required total storage space. In this paper, we propose a novel Frequency Based Chunking (FBC) algorithm. Unlike the popular Content-Defined Chunking (CDC) algorithm, which divides the data stream randomly according to its content, FBC explicitly utilizes chunk frequency information in the data stream to enhance the deduplication gain, especially when metadata overhead is taken into consideration. The FBC algorithm consists of two components: a statistical chunk frequency estimation algorithm for identifying globally frequent chunks, and a two-stage chunking algorithm that uses these chunk frequencies to obtain a better chunking result. To evaluate the effectiveness of the proposed FBC algorithm, we conducted extensive experiments on heterogeneous datasets. In all experiments, the FBC algorithm consistently outperforms the CDC algorithm, either achieving a better dedup gain or producing far fewer chunks. In particular, our experiments show that FBC produces 2.5 to 4 times fewer chunks than a baseline CDC algorithm achieving the same Duplicate Elimination Ratio (DER). Another benefit of FBC over CDC is that, with an average chunk size greater than or equal to that of CDC, FBC achieves up to 50% higher DER than a CDC algorithm.
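The two-stage idea can be sketched roughly as follows (a toy sketch with an illustrative boundary condition and merge rule, not the authors' FBC implementation): chunk by content first, count chunk frequencies, then merge infrequent neighbours so per-chunk metadata is spent only on chunks likely to repeat.

```python
from collections import Counter

def cdc_chunks(data: bytes, mask: int = 0x3F) -> list:
    """Toy content-defined chunking: cut after any byte whose low six bits
    are all set (a stand-in for a rolling-hash boundary test)."""
    chunks, start = [], 0
    for i, b in enumerate(data):
        if b & mask == mask:  # boundary condition, roughly 1/64 of bytes
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def fbc_chunks(data: bytes, min_freq: int = 2) -> list:
    """Two-stage sketch: keep frequent base chunks intact as dedup units and
    merge rare neighbours, so metadata is not wasted on chunks that never
    repeat. (Here frequencies are exact counts; the paper uses a statistical
    estimator over the whole dataset.)"""
    base = cdc_chunks(data)
    freq = Counter(base)      # stage 1: chunk frequency estimation
    merged, buf = [], b""
    for c in base:            # stage 2: frequency-aware re-chunking
        if freq[c] >= min_freq:
            if buf:
                merged.append(buf)
                buf = b""
            merged.append(c)
        else:
            buf += c
    if buf:
        merged.append(buf)
    return merged
```

With the toy boundary byte `?`, `b"x?y?common?common?"` splits into four base chunks, and FBC merges the two non-repeating ones into a single chunk while keeping the repeated `b"common?"` chunks as dedup units.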
DOI: 10.1109/MASCOTS.2010.37
Citations: 80
Examining Energy Use in Heterogeneous Archival Storage Systems
I. Adams, E. L. Miller, M. Storer
Controlling energy usage in data centers, and in storage in particular, continues to rise in importance. Many systems and models have examined energy efficiency through intelligent spin-down of disks and novel data layouts, yet little work has been done to examine how power usage over the course of months to years is impacted by the characteristics of the storage devices chosen for use. Long-term power usage is particularly important for archival storage systems, since it is a large contributor to overall system cost. In this work, we begin exploring the impact that broad policies (e.g., utilize high-bandwidth devices first) have upon the power efficiency of a disk-based archival storage system of heterogeneous devices over the course of a year. Using a discrete event simulator, we found that even simple heuristic policies for allocating space can have a significant impact on the power usage of a system. We show that our system growth policies can cause power usage to vary from 10% higher to 18% lower than a naive random data allocation scheme. We also found that under low read rates, power is dominated by that used in standby modes. Most interestingly, we found cases where concentrating data on fewer devices yielded increased power usage.
DOI: 10.1109/MASCOTS.2010.38
Citations: 5
Modeling and Evaluation of Control Flow Vulnerability in the Embedded System
M. Rouf, Soontae Kim
Faults in control flow-changing instructions are critical for correct execution because such faults can make a program behave very differently from what is expected. Conventional techniques for dealing with control flow vulnerability typically add extra instructions to detect control flow-related faults, which increases both static and dynamic instruction counts and, consequently, execution time and energy consumption. In contrast, we construct our own control flow vulnerability model to evaluate the effects of different compiler optimizations. We find that different programs show very different degrees of control flow vulnerability and that some compiler optimizations correlate strongly with it. The results observed in this work can be used to generate code that is more resilient against control flow-related faults.
DOI: 10.1109/MASCOTS.2010.71
Citations: 7
Barra: A Parallel Functional Simulator for GPGPU
Caroline Collange, M. Daumas, D. Defour, David Parello
We present Barra, a simulator of Graphics Processing Units (GPU) tuned for general purpose processing (GPGPU). It is based on the UNISIM framework and it simulates the native instruction set of the Tesla architecture at the functional level. The inputs are CUDA executables produced by NVIDIA tools. No alterations are needed to perform simulations. As it uses parallelism, Barra generates detailed statistics on executions in about the time needed by CUDA to operate in emulation mode. We use it to understand and explore the micro-architecture design spaces of GPUs.
DOI: 10.1109/MASCOTS.2010.43
Citations: 119
Efficient Discovery of Loop Nests in Execution Traces
Qiang Xu, J. Subhlok, Nathaniel Hammen
Execution and communication traces are central to performance modeling and analysis. Since traces can be very long, meaningful compression and extraction of representative behavior is important. Commonly used compression procedures identify repeating patterns in sections of the input string and replace each instance with a representative symbol. This can prevent the identification of long repeating sequences corresponding to outer loops in a trace. This paper introduces and analyzes a framework for identifying the maximal loop nest from a trace. The discovery of loop nests makes construction of compressed representative traces straightforward. The paper also introduces a greedy algorithm for fast "near-optimal" loop nest discovery with well-defined bounds. Results of compressing MPI communication traces of the NAS parallel benchmarks show that both algorithms identified the basic loop structures correctly. The greedy algorithm was also very efficient, with an average processing time of 16.5 seconds for an average trace length of 71,695 MPI events.
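The greedy strategy can be sketched as follows (an illustrative reconstruction, not the paper's algorithm): at each trace position, find the shortest event sequence that immediately repeats and consume its longest run.

```python
def find_loops(trace):
    """Greedy single-pass sketch: at each position, find the shortest body
    that immediately repeats and consume its longest run, emitting
    (body, iteration_count) pairs.  Running the pass again over the emitted
    bodies would uncover nested (outer) loops."""
    out, i, n = [], 0, len(trace)
    while i < n:
        body, count = trace[i:i + 1], 1
        for period in range(1, (n - i) // 2 + 1):
            candidate = trace[i:i + period]
            reps = 1
            while trace[i + reps * period:i + (reps + 1) * period] == candidate:
                reps += 1
            if reps > 1:  # shortest repeating body wins
                body, count = candidate, reps
                break
        out.append((body, count))
        i += len(body) * count
    return out
```

Note that a purely local compressor folds `ABABAB` into one loop, which is exactly the behavior that can hide an outer loop; the paper's framework addresses this by searching for the maximal loop nest over the whole trace.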
DOI: 10.1109/MASCOTS.2010.28
Citations: 8
Expanding the Event Horizon in Parallelized Network Simulations
G. Kunz, O. Landsiedel, S. Götz, Klaus Wehrle, J. Gross, Farshad Naghibi
The simulation models of wireless networks rapidly increase in complexity to accurately model wireless channel characteristics and the properties of advanced transmission technologies. Such detailed models typically lead to a high computational load per simulation event that accumulates to extensive simulation runtimes. Reducing runtimes through parallelization is challenging since it depends on detecting causally independent events that can execute concurrently. Most existing approaches base this detection on lookaheads derived from channel propagation latency or protocol characteristics. In wireless networks, these lookaheads are typically short, causing the potential for parallelization and the achievable speedup to remain small. This paper presents Horizon, which unlocks a substantial portion of a simulation model's workload for parallelization by going beyond the traditional lookahead. We show how to augment discrete events with durations to identify a much larger horizon of independent simulation events and efficiently schedule them on multi-core systems. Our evaluation shows that this approach can significantly cut down the runtime of simulations, in particular for complex and accurate models of wireless networks.
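The duration-based idea can be sketched roughly like this (assumed data layout, not Horizon's API): the earliest pending event occupies a time interval and cannot influence the simulation before its completion, so every event starting inside that interval can be dispatched in parallel with it.

```python
import heapq

def next_horizon(event_queue):
    """Pop a batch of causally independent events from a min-heap of
    (start_time, duration, payload) tuples.  The earliest event occupies
    [start, start + duration) and cannot schedule anything before it
    completes, so every event starting before that completion time can be
    processed in parallel with it on worker cores."""
    if not event_queue:
        return []
    start, duration, _ = event_queue[0]
    horizon = start + duration
    batch = []
    while event_queue and event_queue[0][0] < horizon:
        batch.append(heapq.heappop(event_queue))
    return batch
```

For example, transmissions starting at t=0.0 and t=1.0, each lasting 2.0 time units, fall inside one horizon and form a single parallel batch, whereas an event at t=3.0 waits for the next round.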
DOI: 10.1109/MASCOTS.2010.26
Citations: 22
New Algorithms for File System Cooperative Caching
Eric Anderson, Christopher Hoover, Xiaozhou Li
We present two new cooperative caching algorithms that allow a cluster of file system clients to cache chunks of files instead of directly accessing them from origin file servers. The first algorithm, called C-LRU (Cooperative-LRU), is based on the simple D-LRU (Distributed-LRU) algorithm, but moves a chunk's position closer to the tail of its local LRU list when the number of copies of the chunk increases. The second algorithm, called RobinHood, is based on the N-Chance algorithm, but targets chunks cached at many clients for replacement when forwarding a singlet to a peer. We evaluate these algorithms on a variety of workloads, including several publicly available traces, and find that the new algorithms significantly outperform their predecessors.
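A minimal sketch of a C-LRU-style local list (the halfway demotion rule below is an assumption for illustration; the paper only specifies that a chunk moves closer to the tail of its local LRU list as its replica count grows):

```python
class CLRUCache:
    """Sketch of a C-LRU-style local cache: plain LRU, plus a demote() hook
    invoked when a peer reports another copy of a chunk.  Widely replicated
    chunks drift toward eviction, so the cluster keeps more distinct data."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.order = []  # index 0 = MRU head, last index = eviction tail

    def access(self, chunk):
        """Standard LRU behaviour: promote to the head, evict from the tail."""
        if chunk in self.order:
            self.order.remove(chunk)
        elif len(self.order) >= self.capacity:
            self.order.pop()
        self.order.insert(0, chunk)

    def demote(self, chunk):
        """A peer now also caches `chunk`: move it halfway toward the tail
        (illustrative rule, not the paper's exact placement)."""
        if chunk not in self.order:
            return
        i = self.order.index(chunk)
        new_i = (i + len(self.order)) // 2
        self.order.remove(chunk)
        self.order.insert(new_i, chunk)
```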
DOI: 10.1109/MASCOTS.2010.59
Citations: 7
Effective Quality of Service Differentiation for Real-world Storage Systems
Rui Zhang, D. Chambliss, P. Pandey, William Shearman, J. Ruiz, Yan Xu, Joseph Hyde
Data storage is an integral part of IT infrastructures, where Quality of Service (QoS) differentiation amongst customers and their applications is essential for many. Achieving this objective in a production environment is nontrivial, because these environments are complex and dynamic. Numerous practical and engineering constraints render the task even more challenging. This paper presents SLED-2, a QoS differentiation solution that meets these challenges in offering effective protection to the performance of important workloads at the expense of less important workloads when needed. SLED-2 uses a customized feedback heuristic that rate-limits selected I/O streams. This approach is unique in that it accounts for a number of important practical considerations, including fine-grained controls, errors in storage systems models, and inexpensive and safe QoS management. SLED-2 has been implemented for the IBM DS8000 series storage servers and shown to be highly effective in a set of hostile and practical scenarios using test facilities for IBM storage products.
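The rate-limiting side can be illustrated with a standard token bucket plus a toy feedback rule (illustrative assumptions, not SLED-2's actual heuristic): when the important workload misses its latency target, the limiter on less important streams is tightened, and relaxed again once there is slack.

```python
class TokenBucket:
    """Minimal token-bucket throttle of the kind a QoS layer could apply
    per I/O stream (an illustration, not SLED-2's implementation)."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst   # tokens/sec, bucket depth
        self.tokens, self.last = burst, 0.0

    def allow(self, now: float, cost: float = 1.0) -> bool:
        """Admit an I/O of `cost` tokens at time `now`, else defer it."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

def feedback_step(limiter, observed_latency, target_latency):
    """Toy feedback rule: halve the unimportant stream's rate while the
    important workload misses its latency target, otherwise relax by 10%."""
    if observed_latency > target_latency:
        limiter.rate = max(1.0, limiter.rate * 0.5)
    else:
        limiter.rate = limiter.rate * 1.1
```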
DOI: 10.1109/MASCOTS.2010.63
Citations: 4
Clasas: A Key-Store for the Cloud
T. Schwarz, D. Long
We propose Clasas (from the Castilian “claves seguras” for “secure keys”), a key-store for distributed storage such as in the Cloud. The security of Clasas derives from breaking keys into K shares and storing the key shares at many different sites. This provides both a probabilistic and a deterministic guarantee against an adversary trying to obtain keys. The probabilistic guarantee is based on a combinatorial explosion, which forces an adversary to subvert a very large portion of the storage sites for even a minute chance of obtaining a key. The deterministic guarantee stems from the use of LH* distributed linear hashing. Our use of the LH* addressing rules ensures that no two key shares belonging to the same key are ever, even in transit, stored at the same site. Consequently, an adversary has to subvert at least K sites. In addition, even an insider with extensive administrative privileges over many of the sites used for key storage is prevented from obtaining access to any key. Our key-store uses LH* or its scalable-availability derivative, LH*RS, to distribute key shares among a varying number of storage sites in a manner transparent to its users. While an adversary faces very high obstacles in obtaining a key, clients or authorized entities acting on their behalf can access keys with a very small number of messages, even if they do not know all sites where key shares are stored. This allows easy sharing of keys, rekeying, and key revocation.
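The K-share splitting can be illustrated with XOR-based K-out-of-K secret sharing (the general technique only; Clasas additionally places shares via LH* addressing, which this sketch does not model):

```python
import secrets

def split_key(key: bytes, k: int) -> list:
    """Split `key` into k XOR shares: k-1 random strings plus one share that
    XORs the key with all of them.  Any k-1 shares are uniformly random and
    reveal nothing; only all k together reconstruct the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(k - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares: list) -> bytes:
    """XOR all shares back together to recover the key."""
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

An adversary must therefore collect every share, which is exactly what storing each share at a different site (and never co-locating two shares of the same key) is designed to prevent.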
doi: 10.1109/MASCOTS.2010.35
Cited: 4
Journal
2010 IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems