
11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003. - Latest Publications

MQNA - Markovian queueing networks analyser
L. Brenner, Paulo Fernandes, Afonso Sales
This paper describes MQNA - Markovian queueing networks analyser, a software tool to model and obtain the stationary solution of a large class of queueing networks. MQNA can directly solve open and closed product-form queueing networks using classical algorithms. For finite-capacity queueing models, MQNA generates Markovian descriptions in the stochastic automata networks (SAN) and stochastic Petri nets (SPN) formalisms. These descriptions can be exported to the PEPS (performance evaluation of parallel systems) and SMART (stochastic model checking analyzer for reliability and timing) software tools, which can solve SAN and SPN models respectively.
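The classical product-form algorithms MQNA relies on are not detailed in the abstract. As a rough illustration of the kind of computation such a tool automates, the sketch below solves a small open Jackson network; the arrival rates, service rates, and routing matrix are made-up values, and the code is not part of MQNA.

```python
import numpy as np

# Minimal sketch (not MQNA itself): stationary metrics for an open Jackson
# (product-form) queueing network via the classical traffic equations.
# gamma: external arrival rates, mu: service rates, P: routing matrix --
# all values are illustrative assumptions.

gamma = np.array([1.0, 0.5, 0.0])        # external arrivals per queue
mu    = np.array([4.0, 3.0, 2.0])        # service rates per queue
P = np.array([[0.0, 0.6, 0.3],           # P[i][j]: prob. of routing from queue i to j
              [0.0, 0.0, 0.5],
              [0.2, 0.0, 0.0]])

# Traffic equations: lambda = gamma + lambda P  =>  (I - P)^T lambda = gamma
lam = np.linalg.solve((np.eye(len(gamma)) - P).T, gamma)

rho = lam / mu                            # per-queue utilization (must be < 1)
mean_jobs = rho / (1.0 - rho)             # M/M/1 mean number in each queue

for i, (r, n) in enumerate(zip(rho, mean_jobs)):
    print(f"queue {i}: utilization {r:.3f}, mean jobs {n:.3f}")
```

For closed networks, a tool of this kind would typically rely on mean value analysis or the convolution algorithm instead.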
{"title":"MQNA - Markovian queueing networks analyser","authors":"L. Brenner, Paulo Fernandes, Afonso Sales","doi":"10.1109/MASCOT.2003.1240657","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240657","url":null,"abstract":"This paper describes the MQNA - Markovian queueing networks analyser, a software tool to model and obtain the stationary solution of a large class of queueing networks. MQNA can directly solve open and closed product-form queueing networks using classical algorithms. For finite capacity queueing models, MQNA generates Markovian description in the stochastic automata networks (SAN) and stochastic petri nets (SPN) formalisms. Such descriptions can be exported to the PEPS - performance evaluation of parallel systems and SMART - stochastic model checking analyzer for reliability and timing software tools that can solve SAN and SPN models respectively.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116378046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Synthesizing representative I/O workloads using iterative distillation
Zachary Kurmas, K. Keeton, K. Mackenzie
Storage system designers are still searching for better methods of obtaining representative I/O workloads to drive studies of I/O systems. Traces of production workloads are very accurate, but inflexible and difficult to obtain. The use of synthetic workloads addresses these limitations; however, synthetic workloads are accurate only if they share certain key properties with the production workload on which they are based (e.g., mean request size, read percentage). Unfortunately, we do not know which properties are "key" for a given workload and storage system. We have developed a tool, the Distiller, which automatically identifies the key properties ("attribute-values") of the workload. The Distiller then uses these attribute-values to generate a synthetic workload representative of the production workload. This paper presents the design and evaluation of the Distiller. We demonstrate how the Distiller finds representative synthetic workloads for simple artificial workloads and three production workload traces.
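The abstract does not spell out the distillation loop itself; the toy sketch below only illustrates the underlying idea of matching selected attribute-values. The attribute names, the randomly generated "production" trace, and the synthesis rule are all hypothetical assumptions, not the authors' Distiller.

```python
import random

# Hedged sketch: generate a synthetic I/O trace that matches two candidate
# attribute-values of a "production" trace (mean request size and read
# percentage), then verify the match. Everything here is illustrative.

production = [{"size": random.choice([4096, 8192, 65536]),
               "is_read": random.random() < 0.7} for _ in range(10_000)]

def attribute_values(trace):
    sizes = [r["size"] for r in trace]
    reads = [r["is_read"] for r in trace]
    return {"mean_request_size": sum(sizes) / len(sizes),
            "read_percentage": sum(reads) / len(reads)}

target = attribute_values(production)

def synthesize(target, n=10_000):
    # Reproduce only the chosen attributes; everything else is random.
    return [{"size": random.expovariate(1.0 / target["mean_request_size"]),
             "is_read": random.random() < target["read_percentage"]}
            for _ in range(n)]

synthetic = synthesize(target)
print("production:", target)
print("synthetic :", attribute_values(synthetic))
```

A real distiller would iterate: add attributes until the synthetic workload's measured performance (not just its attribute-values) is close enough to that of the trace.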
{"title":"Synthesizing representative I/O workloads using iterative distillation","authors":"Zachary Kurmas, K. Keeton, K. Mackenzie","doi":"10.1109/MASCOT.2003.1240637","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240637","url":null,"abstract":"Storage systems designers are still searching for better methods of obtaining representative I/O workloads to drive studies of I/O systems. Traces of production workloads are very accurate, but inflexible and difficult to obtain. The use of synthetic workloads addresses these limitations; however, synthetic workloads are accurate only if they share certain key properties with the production workload on which they are based (e.g., mean request size, read percentage). Unfortunately, we do not know which properties are \"key \" for a given workload and storage system. We have developed a tool, the Distiller, that automatically identifies the key properties (\"attribute-values\") of the workload. The Distiller then uses these attribute-values to generate a synthetic workload representative of the production workload. This paper presents the design and evaluation of the Distiller. We demonstrate how the Distiller finds representative synthetic workloads for simple artificial workloads and three production workload traces.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122787943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
A light-weight, temporary file system for large-scale Web servers
Jun Wang, Dong Li
Several recent studies have pointed out that file I/Os can be a major performance bottleneck for some large Web servers. Large I/O buffer caches often do not work effectively for large servers. This paper presents a novel, lightweight, temporary file system called TFS that can effectively improve I/O performance for large servers. TFS is a more cost-effective scheme than the full caching policy for large servers. It is a user-level application that manages files on a raw disk or raw disk partition and works in conjunction with a file system as an I/O accelerator. Since the entire system works in user space, it is easy and inexpensive to implement and maintain. It also has good portability. TFS uses a novel disk storage subsystem called cluster-structured storage system (CSS) to manage files. CSS uses only large disk reads and writes and does not have garbage collection problems. Comprehensive trace-driven simulation experiments show that, in large Web servers, TFS achieves up to 160% better system throughput and up to 77% lower I/O latency per URL operation than a traditional Unix fast file system.
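As a loose illustration of the cluster-structured idea (batching many small temporary files into large sequential writes), here is a toy sketch. The cluster size, file sizes, and all names are assumptions; this is not TFS or CSS code.

```python
import os, tempfile

# Hedged sketch: stage small temporary files in memory and flush them to a
# raw backing file as one large sequential write per cluster.

CLUSTER_SIZE = 1 << 20                      # flush in 1 MB clusters (assumption)
backing = open(os.path.join(tempfile.gettempdir(), "css_demo.dat"), "wb")

buffer, buffered, offset = [], 0, 0
index = {}                                  # logical file name -> (offset, length)

def flush():
    global buffered, offset
    if buffer:
        backing.write(b"".join(buffer))     # single large sequential write
        offset += buffered
        buffer.clear()
        buffered = 0

def css_write(name, data):
    """Stage a small file; issue one large write when a cluster fills up."""
    global buffered
    index[name] = (offset + buffered, len(data))
    buffer.append(data)
    buffered += len(data)
    if buffered >= CLUSTER_SIZE:
        flush()

for i in range(5000):
    css_write(f"tmp-{i}", os.urandom(512))  # many small logical files
flush()
backing.close()
print(f"{len(index)} files packed into {offset} bytes of sequential writes")
```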
{"title":"A light-weight, temporary file system for large-scale Web servers","authors":"Jun Wang, Dong Li","doi":"10.1109/MASCOT.2003.1240647","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240647","url":null,"abstract":"Several recent studies have pointed out that file I/Os can be a major performance bottleneck for some large Web servers. Large I/O buffer caches often do not work effectively for large servers. This paper presents a novel, lightweight, temporary file system called TFS that can effectively improve I/O performance for large servers. TFS is a more cost-effective scheme compared to the full caching policy for large servers. It is a user-level application that manages files on a raw disk or raw disk partition and works in conjunction with a file system as an I/O accelerator. Since the entire system works in the user space, it is easy and inexpensive to implement and maintain. It also has good portability. TFS uses a novel disk storage subsystem called cluster-structured storage system (CSS) to manage files. CSS uses only large disk reads and writes and does no have garbage collection problems. Comprehensive trace-driven simulation experiments show that, TFS achieves up to 160% better system throughput and reduces up to 77% I/O latency per URL operation than that in a traditional Unix fast file system in large Web servers.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122256551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Disk built-in caches: evaluation on system performance
Yingwu Zhu, Yimin Hu
Disk drive manufacturers are putting increasingly larger built-in caches into disk drives. Today, 2 MB buffers are common on low-end retail IDE/ATA drives, and some SCSI drives are now available with 16 MB. However, few published data are available to demonstrate that such large built-in caches can noticeably improve overall system performance. In this paper, we investigated the impact of the disk built-in cache on file system response time when the file system buffer cache becomes larger. Via detailed file system and disk system simulation, we arrive at three main conclusions: (1) With a reasonably-sized file system buffer cache (16 MB or more), there is very little performance benefit of using a built-in cache larger than 512 KB. (2) As a readahead buffer, the disk built-in cache provides noticeable performance improvements for workloads with read sequentiality, but has little positive effect on performance if there are more concurrent sequential workloads than cache segments. (3) As a writing cache, it also has some positive effects on some workloads, at the cost of reducing reliability. The disk drive industry is very cost-sensitive. Our research indicates that the current trend of using large built-in caches is unnecessary and a waste of money and power for most users. Disk manufacturers could use much smaller built-in caches to reduce the cost as well as power-consumption, without affecting performance.
{"title":"Disk built-in caches: evaluation on system performance","authors":"Yingwu Zhu, Yimin Hu","doi":"10.1109/MASCOT.2003.1240675","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240675","url":null,"abstract":"Disk drive manufacturers are putting increasingly larger built-in caches into disk drives. Today, 2 MB buffers are common on low-end retail IDE/ATA drives, and some SCSI drives are now available with 16 MB. However, few published data are available to demonstrate that such large built-in caches can noticeably improve overall system performance. In this paper, we investigated the impact of the disk built-in cache on file system response time when the file system buffer cache becomes larger. Via detailed file system and disk system simulation, we arrive at three main conclusions: (1) With a reasonably-sized file system buffer cache (16 MB or more), there is very little performance benefit of using a built-in cache larger than 512 KB. (2) As a readahead buffer, the disk built-in cache provides noticeable performance improvements for workloads with read sequentiality, but has little positive effect on performance if there are more concurrent sequential workloads than cache segments. (3) As a writing cache, it also has some positive effects on some workloads, at the cost of reducing reliability. The disk drive industry is very cost-sensitive. Our research indicates that the current trend of using large built-in caches is unnecessary and a waste of money and power for most users. Disk manufacturers could use much smaller built-in caches to reduce the cost as well as power-consumption, without affecting performance.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134286300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
System-level simulation modeling with MLDesigner
G. Schorcht, I. Troxel, K. Farhangian, P. Unger, Daniel Zinn, C. Mick, A. George, H. Salzwedel
System-level design presents special simulation modeling challenges. System-level models address the architectural and functional performance of complex systems. Systems are decomposed into a series of interacting sub-systems. Architectures define subsystems, the interconnections between subsystems and contention for shared resources. Functions define the input and output behavior of subsystems. Mission-level studies explore system performance in the context of mission-level scenarios. This paper demonstrates a variety of complex system simulation models ranging from a mission-level, satellite-based air traffic management system to a RISC processor built with MLDesigner, a system-level design tool. All of the case studies demonstrate system-level design techniques using discrete event simulation.
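The abstract names discrete event simulation as the underlying technique. The sketch below shows a minimal discrete-event loop for a single server fed by a Poisson source, purely to illustrate the mechanism; it is not MLDesigner's engine, and the rates are arbitrary assumptions.

```python
import heapq, random

# Minimal discrete-event simulation loop of the kind a system-level tool
# automates: one server (a single subsystem) serving Poisson arrivals FIFO.

random.seed(1)
ARRIVAL_RATE, SERVICE_RATE, N_JOBS = 0.8, 1.0, 100_000   # assumed parameters

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]  # (time, kind) heap
waiting, server_busy = [], False                           # FIFO queue of arrival times
arrived = completed = 0
total_response = 0.0

while completed < N_JOBS:
    clock, kind = heapq.heappop(events)
    if kind == "arrival":
        arrived += 1
        if arrived < N_JOBS:                               # schedule next arrival
            heapq.heappush(events, (clock + random.expovariate(ARRIVAL_RATE), "arrival"))
        waiting.append(clock)
        if not server_busy:                                # start service immediately
            server_busy = True
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "departure"))
    else:                                                  # departure of the head job
        total_response += clock - waiting.pop(0)
        completed += 1
        if waiting:                                        # begin serving the next job
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "departure"))
        else:
            server_busy = False

print(f"mean response time over {completed} jobs: {total_response / completed:.2f}")
```

With these rates the measured mean response time should approach the M/M/1 value 1/(mu - lambda) = 5, which is a handy sanity check for any event-driven model.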
{"title":"System-level simulation modeling with MLDesigner","authors":"G. Schorcht, I. Troxel, K. Farhangian, P. Unger, Daniel Zinn, C. Mick, A. George, H. Salzwedel","doi":"10.1109/MASCOT.2003.1240659","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240659","url":null,"abstract":"System-level design presents special simulation modeling challenges. System-level models address the architectural and functional performance of complex systems. Systems are decomposed into a series of interacting sub-systems. Architectures define subsystems, the interconnections between subsystems and contention for shared resources. Functions define the input and output behavior of subsystems. Mission-level studies explore system performance in the context of mission-level scenarios. This paper demonstrates a variety of complex system simulation models ranging from a mission-level, satellite-based air traffic management system to a RISC processor built with MLDesigner, a system-level design tool. All of the case studies demonstrate system-level design techniques using discrete event simulation.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131564144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
A packet-level simulation study of optimal Web proxy cache placement
Gwen Houtzager, C. Williamson
The Web proxy cache placement problem is a classical optimization problem: place N proxies within an internetwork so as to minimize the average user response time for retrieving Web objects. In this paper, we tackle this problem using packet-level ns2 network simulations. There are three main conclusions from our study. First, network-level effects (e.g., TCP dynamics, network congestion) can have a significant impact on user-level Web performance, and must not be overlooked when optimizing Web proxy cache placement. Second, cache filter effects can have a pronounced impact on the overall optimal caching solution. Third, small perturbations to the Web workload can produce quite different solutions for optimal proxy cache placement. This implies that robust, approximate solutions are more important than "perfect" optimal solutions. The paper provides several general heuristics for cache placement based on our packet-level simulations.
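For context, the placement problem can be stated very compactly. The toy sketch below enumerates placements of N proxies on a made-up five-node topology and picks the one minimizing a latency-weighted average; it represents exactly the kind of simplified, network-agnostic model that the paper argues must be checked against packet-level simulation. Topology, latencies, and demand figures are assumptions.

```python
import itertools

# Hedged sketch: brute-force proxy placement minimizing a crude estimate of
# average response time (each client site uses its nearest proxy).

nodes = ["A", "B", "C", "D", "E"]
latency = {                      # symmetric site-to-site latency estimates (ms)
    ("A", "A"): 0, ("A", "B"): 10, ("A", "C"): 25, ("A", "D"): 40, ("A", "E"): 30,
    ("B", "B"): 0, ("B", "C"): 15, ("B", "D"): 30, ("B", "E"): 20,
    ("C", "C"): 0, ("C", "D"): 20, ("C", "E"): 35,
    ("D", "D"): 0, ("D", "E"): 15,
    ("E", "E"): 0,
}
def dist(a, b):
    return latency.get((a, b), latency.get((b, a)))

demand = {"A": 100, "B": 60, "C": 40, "D": 80, "E": 20}   # requests/s per site
N = 2                                                     # number of proxies to place

def avg_response(placement):
    total = sum(demand.values())
    return sum(demand[c] * min(dist(c, p) for p in placement) for c in nodes) / total

best = min(itertools.combinations(nodes, N), key=avg_response)
print("best placement:", best, "estimated avg latency:",
      round(avg_response(best), 1), "ms")
```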
{"title":"A packet-level simulation study of optimal Web proxy cache placement","authors":"Gwen Houtzager, C. Williamson","doi":"10.1109/MASCOT.2003.1240677","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240677","url":null,"abstract":"The Web proxy cache placement problem is a classical optimization problem: place N proxies within an internetwork so as to minimize the average user response time for retrieving Web objects. In this paper, we tackle this problem using packet-level ns2 network simulations. There are three main conclusions from our study. First, network-level effects (e.g., TCP dynamics, network congestion) can have a significant impact on user-level Web performance, and must not be overlooked when optimizing Web proxy cache placement. Second, cache filter effects can have a pronounced impact on the overall optimal caching solution. Third, small perturbations to the Web workload can produce quite different solutions for optimal proxy cache placement. This implies that robust, approximate solutions are more important than \"perfect\" optimal solutions. The paper provides several general heuristics for cache placement based on our packet-level simulations.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134633973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
An open tool to compute stochastic bounds on steady-state distributions and rewards
J. Fourneau, M. Coz, N. Pekergin, F. Quessette
We present X-Bounds, a new tool that implements a methodology based on stochastic ordering, algorithmic derivation of simpler Markov chains, and numerical analysis of these chains. The performance indices defined by reward functions are stochastically bounded by reward functions computed on much simpler or smaller Markov chains obtained after aggregation or simplification. This leads to an important reduction in numerical complexity. Typically, the chains are ten times smaller and the accuracy may be good enough.
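The final numerical step such a tool performs, evaluating a reward function on a (bounding) Markov chain, takes only a few lines of linear algebra. The chain and reward values below are assumptions, and the code does not implement X-Bounds' stochastic-ordering or aggregation algorithms.

```python
import numpy as np

# Hedged sketch: stationary distribution of a small discrete-time Markov
# chain and the expected reward under it. In a bounding methodology, this
# chain would be the simplified/aggregated one.

P = np.array([[0.5, 0.4, 0.1],       # transition probability matrix (assumed)
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
reward = np.array([0.0, 1.0, 3.0])   # e.g. a per-state reward (assumed)

# Stationary distribution: pi = pi P with sum(pi) = 1, solved by least squares.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("stationary distribution:", np.round(pi, 4))
print("expected reward:", round(float(pi @ reward), 4))
```

The point of the methodology is that, when the smaller chain is constructed to dominate the original in the stochastic-ordering sense, the reward computed here bounds the reward of the original, much larger chain.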
{"title":"An open tool to compute stochastic bounds on steady-state distributions and rewards","authors":"J. Fourneau, M. Coz, N. Pekergin, F. Quessette","doi":"10.1109/MASCOT.2003.1240661","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240661","url":null,"abstract":"We present X-Bounds, a new tool to implement a methodology based on stochastic ordering, algorithmic derivation of simpler Markov chains and numerical analysis of these chains. The performance indices defined by reward functions are stochastically bounded by reward functions computed on much simpler or smaller Markov chains obtained after aggregation or simplification. This leads to an important reduction on numerical complexity. Typically, chains are ten times smaller and the accuracy may be good enough.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114380286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
Mapping peer behavior to packet-level details: a framework for packet-level simulation of peer-to-peer systems
Qi He, M. Ammar, G. Riley, Himanshu Raj, R. Fujimoto
The growing interest in peer-to-peer systems (such as Gnutella) has inspired numerous research activities in this area. Although many demonstrations have been performed that show that the performance of a peer-to-peer system is highly dependent on the underlying network characteristics, much of the evaluation of peer-to-peer proposals has used simplified models that fail to include a detailed model of the underlying network. This can be largely attributed to the complexity in experimenting with a scalable peer-to-peer system simulator built on top of a scalable network simulator with packet-level details. In this work we design and develop a framework for an extensible and scalable peer-to-peer simulation environment that can be built on top of existing packet-level network simulators. The simulation environment is portable to different network simulators, which enables us to simulate a realistic large scale peer-to-peer system using existing parallelization techniques. We demonstrate the use of the simulator for some simple experiments that show how Gnutella system performance can be impacted by the network characteristics.
{"title":"Mapping peer behavior to packet-level details: a framework for packet-level simulation of peer-to-peer systems","authors":"Qi He, M. Ammar, G. Riley, Himanshu Raj, R. Fujimoto","doi":"10.1109/MASCOT.2003.1240644","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240644","url":null,"abstract":"The growing interest in peer-to-peer systems (such as Gnutella) has inspired numerous research activities in this area. Although many demonstrations have been performed that show that the performance of a peer-to-peer system is highly dependent on the underlying network characteristics, much of the evaluation of peer-to-peer proposals has used simplified models that fail to include a detailed model of the underlying network. This can be largely attributed to the complexity in experimenting with a scalable peer-to-peer system simulator built on top of a scalable network simulator with packet-level details. In this work we design and develop a framework for an extensible and scalable peer-to-peer simulation environment that can be built on top of existing packet-level network simulators. The simulation environment is portable to different network simulators, which enables us to simulate a realistic large scale peer-to-peer system using existing parallelization techniques. We demonstrate the use of the simulator for some simple experiments that show how Gnutella system performance can be impacted by the network characteristics.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"163 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125924806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 92
Bottleneck estimation for load control gateways
K. Pandit, J. Schmitt, M. Karsten, R. Steinmetz
Providing quality of service (QoS) to inelastic data transmissions in a cost-efficient, highly scalable, and realistic fashion in IP networks remains a challenging research issue. In M. Karsten, J. Schmitt (2002-03), a new approach for a basic, domain-oriented, reactive QoS system based on so-called load control gateways has been proposed and experimentally evaluated. These load control gateways base their load/admission control decisions on observations of simple, binary marking algorithms executed at internal nodes, which allows the gateways to infer knowledge about the load on each path to peer load control gateways. The original load control system proposal utilizes rather simple, conservative admission control decision criteria. In this paper, we focus on methods to improve the admission control decision by using probability-theoretical insights in order to better estimate the load situation of a bottleneck on a given path. This is achieved by making assumptions on the probability distribution of the load state of the nodes and analyzing the effect on the path marking probability. We show that even with benevolent assumptions the exact calculation is mathematically intractable for a larger number of internal nodes, and we develop a heuristic in the form of a Monte Carlo based algorithm. To illustrate the overall benefit of our approach, we give a number of numerical examples that provide a quantitative sense of how the admission control decision can be improved. Overall, we believe the result of this paper to be an important enhancement of the admission control part of the original load control system, which makes better use of resources while statistically controlling the guarantees provided to inelastic transmissions.
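As a rough illustration of the Monte Carlo flavor of such a heuristic (not the paper's algorithm), the sketch below draws per-node load states from an assumed distribution and estimates the probability that a packet is marked somewhere along a path of binary-marking nodes. All distributions, thresholds, and counts are assumptions.

```python
import random

# Hedged Monte Carlo sketch: estimate the path marking probability when each
# internal node's load state is random and heavily loaded nodes mark packets.

random.seed(7)
N_NODES = 20
LOAD_STATES = [0.2, 0.5, 0.8, 0.95]     # possible per-node utilizations (assumed)
STATE_PROBS = [0.4, 0.3, 0.2, 0.1]      # assumed distribution over load states
MARK_THRESHOLD = 0.9                    # a node marks only above this utilization

def path_marked_once():
    for _ in range(N_NODES):
        load = random.choices(LOAD_STATES, weights=STATE_PROBS)[0]
        if load > MARK_THRESHOLD and random.random() < load:
            return True                 # binary marking: one mark is enough
    return False

TRIALS = 100_000
marked = sum(path_marked_once() for _ in range(TRIALS))
print(f"estimated path marking probability: {marked / TRIALS:.4f}")
```

The exact computation would have to sum over all joint load-state combinations of the nodes on the path, which grows exponentially with the path length; sampling sidesteps that.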
{"title":"Bottleneck estimation for load control gateways","authors":"K. Pandit, J. Schmitt, M. Karsten, R. Steinmetz","doi":"10.1109/MASCOT.2003.1240672","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240672","url":null,"abstract":"Providing quality of service (QoS) to inelastic data transmissions in a cost-efficient, highly scalable, and realistic fashion in IP networks remains a challenging research issue. In M. Karsten, J. Schmitt (2002-03), a new approach for a basic, domain-oriented, reactive QoS system based on so-called load control gateways has been proposed and experimentally evaluated. These load control gateways base their load/admission control decisions on observations of simple, binary marking algorithms executed at internal nodes, which allows the gateways to infer knowledge about the load on each path to peer load control gateways. The original load control system proposal utilizes rather simple, conservative admission control decision criteria. In this paper, we focus on methods to improve the admission control decision by using probability theoretical insights in order to better estimate the load situation of a bottleneck on a given path. This is achieved by making assumptions on the probability distribution of the load state of the nodes and analyzing the effect on the path marking probability. We show that even with benevolent assumptions the exact calculation is mathematically intractable for a larger number of internal nodes and develop a heuristic in the form of a Monte Carlo based algorithm. To illustrate the overall benefit of our approach we give a number of numerical examples which provide a quantitative feeling on how the admission control decision can be improved. Overall, we believe the result of this paper to be an important enhancement of the admission control part of the original load control system which allows to make better usage of resources while at the same time controlling statistically the guarantees provided to inelastic transmissions.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129022684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
It's not fair - evaluating efficient disk scheduling
Alma Riska, E. Riedel
Storage system designers prefer to limit the maximum queue length at individual disks to only a few outstanding requests, to avoid possible request starvation. In this paper, we evaluate the benefits and performance implications of allowing disks to queue more requests. We show that the average response time in the storage subsystem is reduced when queuing more requests and optimizing (based on seek and/or position time) request scheduling at the disk. We argue that the disk, as the only service center in a storage subsystem, is able to best utilize its resources via scheduling when it has the most complete view of the load it is about to process. The benefits of longer queues at the disks are even more obvious when the system operates under transient overload conditions.
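A toy experiment makes the abstract's point concrete: with a deeper queue visible to the scheduler, a greedy shortest-seek policy saves substantially more positioning work than FIFO. The linear seek-cost model, queue depths, and request distribution below are assumptions, not the paper's simulation setup.

```python
import random

# Hedged sketch: compare FIFO against shortest-seek-first (SSTF) scheduling
# as the scheduler is allowed to see deeper queues of outstanding requests.

random.seed(3)
requests = [random.randint(0, 10_000) for _ in range(64)]    # target cylinders

def total_seek(order, start=5000):
    head, total = start, 0
    for cyl in order:
        total += abs(cyl - head)        # linear seek-cost model (assumption)
        head = cyl
    return total

def sstf(pending, start=5000):
    head, order, pending = start, [], list(pending)
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))       # greedy: closest first
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

for depth in (4, 16, 64):               # queue depth exposed to the scheduler
    fifo_cost = sstf_cost = 0
    for i in range(0, len(requests), depth):
        window = requests[i:i + depth]
        fifo_cost += total_seek(window)
        sstf_cost += total_seek(sstf(window))
    print(f"queue depth {depth:2d}: FIFO seek {fifo_cost}, SSTF seek {sstf_cost}")
```

The gap between the two policies widens as the visible queue grows, which is the intuition behind letting disks queue more requests despite the fairness concerns the paper discusses.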
{"title":"It's not fair - evaluating efficient disk scheduling","authors":"Alma Riska, E. Riedel","doi":"10.1109/MASCOT.2003.1240673","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240673","url":null,"abstract":"Storage system designers prefer to limit the maximum queue length at individual disks to only a few outstanding requests, to avoid possible request starvation. In this paper, we evaluate the benefits and performance implications of allowing disks to queue more requests. We show that the average response time in the storage subsystem is reduced when queuing more requests and optimizing (based on seek and/or position time) request scheduling at the disk. We argue that the disk, as the only service center in a storage subsystem, is able to best utilize its resources via scheduling when it has the most complete view of the load it is about to process. The benefits of longer queues at the disks are even more obvious when the system operates under transient overload conditions.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130725303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16