MQNA - Markovian queueing networks analyser
Pub Date: 2003-10-27 | DOI: 10.1109/MASCOT.2003.1240657
L. Brenner, Paulo Fernandes, Afonso Sales
This paper describes MQNA, the Markovian Queueing Networks Analyser, a software tool to model and obtain the stationary solution of a large class of queueing networks. MQNA can directly solve open and closed product-form queueing networks using classical algorithms. For finite-capacity queueing models, MQNA generates Markovian descriptions in the stochastic automata networks (SAN) and stochastic Petri nets (SPN) formalisms. Such descriptions can be exported to the PEPS (Performance Evaluation of Parallel Systems) and SMART (Stochastic Model checking Analyzer for Reliability and Timing) software tools, which solve SAN and SPN models, respectively.
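MQNA's own implementation is not shown here; as an illustration of the kind of classical algorithm it applies to closed product-form networks, the following is a minimal exact Mean Value Analysis (MVA) sketch, with the service demands and population chosen purely for illustration:

```python
# Exact Mean Value Analysis (MVA) for a single-class closed product-form
# queueing network -- one of the classical algorithms a tool like MQNA could
# apply. The service demands and population below are illustrative only.

def mva(service_demands, population):
    """Exact MVA for a closed single-class network of queueing stations."""
    queues = [0.0] * len(service_demands)   # mean customers at each station
    throughput = 0.0
    for n in range(1, population + 1):
        # Mean response time per station with n customers in the network
        resp = [d * (1.0 + q) for d, q in zip(service_demands, queues)]
        throughput = n / sum(resp)               # network throughput (Little's law)
        queues = [throughput * r for r in resp]  # updated mean queue lengths
    return throughput, queues

if __name__ == "__main__":
    demands = [0.05, 0.08, 0.02]   # hypothetical per-visit service demands (seconds)
    x, q = mva(demands, population=10)
    print("throughput:", x, "mean queue lengths:", q)
```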
{"title":"MQNA - Markovian queueing networks analyser","authors":"L. Brenner, Paulo Fernandes, Afonso Sales","doi":"10.1109/MASCOT.2003.1240657","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240657","url":null,"abstract":"This paper describes the MQNA - Markovian queueing networks analyser, a software tool to model and obtain the stationary solution of a large class of queueing networks. MQNA can directly solve open and closed product-form queueing networks using classical algorithms. For finite capacity queueing models, MQNA generates Markovian description in the stochastic automata networks (SAN) and stochastic petri nets (SPN) formalisms. Such descriptions can be exported to the PEPS - performance evaluation of parallel systems and SMART - stochastic model checking analyzer for reliability and timing software tools that can solve SAN and SPN models respectively.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116378046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthesizing representative I/O workloads using iterative distillation
Pub Date: 2003-10-27 | DOI: 10.1109/MASCOT.2003.1240637
Zachary Kurmas, K. Keeton, K. Mackenzie
Storage system designers are still searching for better methods of obtaining representative I/O workloads to drive studies of I/O systems. Traces of production workloads are very accurate, but inflexible and difficult to obtain. The use of synthetic workloads addresses these limitations; however, synthetic workloads are accurate only if they share certain key properties with the production workload on which they are based (e.g., mean request size, read percentage). Unfortunately, we do not know which properties are "key" for a given workload and storage system. We have developed a tool, the Distiller, that automatically identifies the key properties ("attribute-values") of the workload. The Distiller then uses these attribute-values to generate a synthetic workload representative of the production workload. This paper presents the design and evaluation of the Distiller. We demonstrate how the Distiller finds representative synthetic workloads for simple artificial workloads and three production workload traces.
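The Distiller's actual attribute library and iteration loop are not reproduced here; the sketch below only illustrates the underlying idea of measuring candidate attribute-values on a trace and generating a synthetic workload that matches them. The trace format, the two attributes shown, and all function names are assumptions for illustration:

```python
# Sketch of the distillation idea: measure simple attribute-values of an I/O
# trace (mean request size, read percentage) and generate a synthetic trace
# that matches them. The (offset, size, is_read) tuple format and the attribute
# set are illustrative assumptions, not the Distiller's actual library.
import random

def measure(trace):
    sizes = [size for _, size, _ in trace]
    n_reads = sum(1 for _, _, is_read in trace if is_read)
    return {"mean_size": sum(sizes) / len(sizes),
            "read_pct": n_reads / len(trace)}

def synthesize(attrs, n_requests, max_offset=1 << 30):
    synthetic = []
    for _ in range(n_requests):
        is_read = random.random() < attrs["read_pct"]
        size = max(512, int(random.expovariate(1.0 / attrs["mean_size"])))
        synthetic.append((random.randrange(0, max_offset, 512), size, is_read))
    return synthetic

if __name__ == "__main__":
    production = [(i * 4096, 4096, i % 3 != 0) for i in range(1000)]  # toy trace
    attrs = measure(production)
    candidate = synthesize(attrs, len(production))
    print("target:", attrs)
    print("synthetic:", measure(candidate))
```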
{"title":"Synthesizing representative I/O workloads using iterative distillation","authors":"Zachary Kurmas, K. Keeton, K. Mackenzie","doi":"10.1109/MASCOT.2003.1240637","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240637","url":null,"abstract":"Storage systems designers are still searching for better methods of obtaining representative I/O workloads to drive studies of I/O systems. Traces of production workloads are very accurate, but inflexible and difficult to obtain. The use of synthetic workloads addresses these limitations; however, synthetic workloads are accurate only if they share certain key properties with the production workload on which they are based (e.g., mean request size, read percentage). Unfortunately, we do not know which properties are \"key \" for a given workload and storage system. We have developed a tool, the Distiller, that automatically identifies the key properties (\"attribute-values\") of the workload. The Distiller then uses these attribute-values to generate a synthetic workload representative of the production workload. This paper presents the design and evaluation of the Distiller. We demonstrate how the Distiller finds representative synthetic workloads for simple artificial workloads and three production workload traces.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122787943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A light-weight, temporary file system for large-scale Web servers
Pub Date: 2003-10-27 | DOI: 10.1109/MASCOT.2003.1240647
Jun Wang, Dong Li
Several recent studies have pointed out that file I/O can be a major performance bottleneck for some large Web servers. Large I/O buffer caches often do not work effectively for large servers. This paper presents a novel, lightweight, temporary file system called TFS that can effectively improve I/O performance for large servers. TFS is a more cost-effective scheme than full caching for large servers. It is a user-level application that manages files on a raw disk or raw disk partition and works in conjunction with a file system as an I/O accelerator. Since the entire system runs in user space, it is easy and inexpensive to implement and maintain, and it has good portability. TFS uses a novel disk storage subsystem called the cluster-structured storage system (CSS) to manage files. CSS uses only large disk reads and writes and does not have garbage collection problems. Comprehensive trace-driven simulation experiments show that TFS achieves up to 160% better system throughput and reduces I/O latency per URL operation by up to 77% compared with a traditional Unix fast file system in large Web servers.
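TFS's on-disk format is not detailed in the abstract; the following sketch merely illustrates the general idea of a cluster-structured store that packs small objects into large fixed-size clusters so the device only sees large reads and writes. The 1 MB cluster size, the in-memory index, and the class interface are assumptions:

```python
# Illustrative sketch of a cluster-structured store: small objects are packed
# into a large in-memory cluster buffer and written to a raw file in one big
# sequential write, so the underlying device only ever sees large I/Os.
# The 1 MB cluster size and dict-based index are assumptions made for clarity.
import os

CLUSTER_SIZE = 1 << 20   # 1 MB clusters (assumed)

class ClusterStore:
    def __init__(self, path):
        self.f = open(path, "wb+")
        self.buf = bytearray()
        self.next_cluster = 0
        self.index = {}                  # name -> (cluster, offset, length)

    def put(self, name, data):
        if len(self.buf) + len(data) > CLUSTER_SIZE:
            self.flush()
        self.index[name] = (self.next_cluster, len(self.buf), len(data))
        self.buf += data

    def flush(self):
        if not self.buf:
            return
        self.f.seek(self.next_cluster * CLUSTER_SIZE)
        self.f.write(self.buf.ljust(CLUSTER_SIZE, b"\0"))   # one large write
        self.next_cluster += 1
        self.buf = bytearray()

    def get(self, name):
        cluster, off, length = self.index[name]
        if cluster == self.next_cluster:                     # still buffered
            return bytes(self.buf[off:off + length])
        self.f.seek(cluster * CLUSTER_SIZE)                  # one large read
        return self.f.read(CLUSTER_SIZE)[off:off + length]

if __name__ == "__main__":
    store = ClusterStore("tfs_demo.dat")
    store.put("a.html", b"<html>hello</html>")
    store.flush()
    print(store.get("a.html"))
    os.remove("tfs_demo.dat")
```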
{"title":"A light-weight, temporary file system for large-scale Web servers","authors":"Jun Wang, Dong Li","doi":"10.1109/MASCOT.2003.1240647","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240647","url":null,"abstract":"Several recent studies have pointed out that file I/Os can be a major performance bottleneck for some large Web servers. Large I/O buffer caches often do not work effectively for large servers. This paper presents a novel, lightweight, temporary file system called TFS that can effectively improve I/O performance for large servers. TFS is a more cost-effective scheme compared to the full caching policy for large servers. It is a user-level application that manages files on a raw disk or raw disk partition and works in conjunction with a file system as an I/O accelerator. Since the entire system works in the user space, it is easy and inexpensive to implement and maintain. It also has good portability. TFS uses a novel disk storage subsystem called cluster-structured storage system (CSS) to manage files. CSS uses only large disk reads and writes and does no have garbage collection problems. Comprehensive trace-driven simulation experiments show that, TFS achieves up to 160% better system throughput and reduces up to 77% I/O latency per URL operation than that in a traditional Unix fast file system in large Web servers.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122256551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Disk built-in caches: evaluation on system performance
Pub Date: 2003-10-27 | DOI: 10.1109/MASCOT.2003.1240675
Yingwu Zhu, Yimin Hu
Disk drive manufacturers are putting increasingly large built-in caches into disk drives. Today, 2 MB buffers are common on low-end retail IDE/ATA drives, and some SCSI drives are now available with 16 MB. However, few published data are available to demonstrate that such large built-in caches noticeably improve overall system performance. In this paper, we investigate the impact of the disk built-in cache on file system response time as the file system buffer cache grows larger. Via detailed file system and disk system simulation, we arrive at three main conclusions: (1) With a reasonably sized file system buffer cache (16 MB or more), there is very little performance benefit in using a built-in cache larger than 512 KB. (2) As a readahead buffer, the disk built-in cache provides noticeable performance improvements for workloads with read sequentiality, but has little positive effect on performance if there are more concurrent sequential workloads than cache segments. (3) As a write cache, it also has some positive effects on some workloads, at the cost of reduced reliability. The disk drive industry is very cost-sensitive. Our research indicates that the current trend of using large built-in caches is unnecessary and a waste of money and power for most users. Disk manufacturers could use much smaller built-in caches to reduce cost as well as power consumption without affecting performance.
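A toy model can make conclusion (2) concrete: a segmented readahead cache keeps serving concurrent sequential streams well only while the number of streams does not exceed the number of segments. The segment count, segment size, and synthetic workload below are assumptions for illustration, not the simulator used in the paper:

```python
# Toy model of a segmented disk readahead cache: each segment holds a readahead
# window for one sequential stream. When concurrent streams outnumber segments,
# segments are recycled (LRU) before a stream returns, and hits collapse.
# Segment count/size and the synthetic workload are illustrative assumptions.
from collections import OrderedDict

def run(streams, segments, segment_blocks=64, requests_per_stream=1000):
    cache = OrderedDict()                    # stream id -> prefetched block range
    hits = total = 0
    next_block = [0] * streams
    for i in range(requests_per_stream * streams):
        s = i % streams                      # interleave the sequential streams
        block = next_block[s]
        next_block[s] += 1
        total += 1
        lo, hi = cache.get(s, (0, -1))
        if lo <= block <= hi:                # readahead hit
            hits += 1
            cache.move_to_end(s)
        else:                                # miss: prefetch a new window
            cache[s] = (block, block + segment_blocks - 1)
            cache.move_to_end(s)
            if len(cache) > segments:
                cache.popitem(last=False)    # evict least recently used segment
    return hits / total

if __name__ == "__main__":
    for n in (2, 4, 8, 16):
        print(f"{n:2d} streams, 8 segments -> hit rate {run(n, 8):.2f}")
```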
{"title":"Disk built-in caches: evaluation on system performance","authors":"Yingwu Zhu, Yimin Hu","doi":"10.1109/MASCOT.2003.1240675","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240675","url":null,"abstract":"Disk drive manufacturers are putting increasingly larger built-in caches into disk drives. Today, 2 MB buffers are common on low-end retail IDE/ATA drives, and some SCSI drives are now available with 16 MB. However, few published data are available to demonstrate that such large built-in caches can noticeably improve overall system performance. In this paper, we investigated the impact of the disk built-in cache on file system response time when the file system buffer cache becomes larger. Via detailed file system and disk system simulation, we arrive at three main conclusions: (1) With a reasonably-sized file system buffer cache (16 MB or more), there is very little performance benefit of using a built-in cache larger than 512 KB. (2) As a readahead buffer, the disk built-in cache provides noticeable performance improvements for workloads with read sequentiality, but has little positive effect on performance if there are more concurrent sequential workloads than cache segments. (3) As a writing cache, it also has some positive effects on some workloads, at the cost of reducing reliability. The disk drive industry is very cost-sensitive. Our research indicates that the current trend of using large built-in caches is unnecessary and a waste of money and power for most users. Disk manufacturers could use much smaller built-in caches to reduce the cost as well as power-consumption, without affecting performance.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134286300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
System-level simulation modeling with MLDesigner
Pub Date: 2003-10-27 | DOI: 10.1109/MASCOT.2003.1240659
G. Schorcht, I. Troxel, K. Farhangian, P. Unger, Daniel Zinn, C. Mick, A. George, H. Salzwedel
System-level design presents special simulation modeling challenges. System-level models address the architectural and functional performance of complex systems. Systems are decomposed into a series of interacting subsystems. Architectures define subsystems, the interconnections between subsystems, and contention for shared resources. Functions define the input and output behavior of subsystems. Mission-level studies explore system performance in the context of mission-level scenarios. This paper demonstrates a variety of complex system simulation models, ranging from a mission-level, satellite-based air traffic management system to a RISC processor, built with MLDesigner, a system-level design tool. All of the case studies demonstrate system-level design techniques using discrete event simulation.
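MLDesigner itself is a graphical design tool and its models cannot be reproduced in a few lines; as a text-only illustration of the discrete event simulation technique the case studies rely on, here is a minimal event-queue loop driving a single-server system (the M/M/1-style workload and its parameters are assumptions):

```python
# Minimal discrete-event simulation core: a time-ordered event queue driving a
# simple system -- the same technique (not the same tool) used by the
# MLDesigner case studies. The single-server M/M/1 workload is illustrative.
import heapq
import random

def simulate(arrival_rate=0.8, service_rate=1.0, horizon=10_000.0, seed=1):
    random.seed(seed)
    events = [(random.expovariate(arrival_rate), "arrival")]
    queue_len, completed, total_delay = 0, 0, 0.0
    waiting = []                                  # arrival times of jobs in system (FCFS)
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            waiting.append(t)
            queue_len += 1
            heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
            if queue_len == 1:                    # server was idle: start service now
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
        else:                                     # departure
            queue_len -= 1
            completed += 1
            total_delay += t - waiting.pop(0)
            if queue_len > 0:                     # start serving the next job
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
    return total_delay / completed if completed else float("nan")

if __name__ == "__main__":
    # M/M/1 theory predicts a mean sojourn time of 1/(mu - lambda) = 5.0
    print("mean sojourn time:", simulate())
```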
{"title":"System-level simulation modeling with MLDesigner","authors":"G. Schorcht, I. Troxel, K. Farhangian, P. Unger, Daniel Zinn, C. Mick, A. George, H. Salzwedel","doi":"10.1109/MASCOT.2003.1240659","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240659","url":null,"abstract":"System-level design presents special simulation modeling challenges. System-level models address the architectural and functional performance of complex systems. Systems are decomposed into a series of interacting sub-systems. Architectures define subsystems, the interconnections between subsystems and contention for shared resources. Functions define the input and output behavior of subsystems. Mission-level studies explore system performance in the context of mission-level scenarios. This paper demonstrates a variety of complex system simulation models ranging from a mission-level, satellite-based air traffic management system to a RISC processor built with MLDesigner, a system-level design tool. All of the case studies demonstrate system-level design techniques using discrete event simulation.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131564144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A packet-level simulation study of optimal Web proxy cache placement
Pub Date: 2003-10-27 | DOI: 10.1109/MASCOT.2003.1240677
Gwen Houtzager, C. Williamson
The Web proxy cache placement problem is a classical optimization problem: place N proxies within an internetwork so as to minimize the average user response time for retrieving Web objects. In this paper, we tackle this problem using packet-level ns2 network simulations. There are three main conclusions from our study. First, network-level effects (e.g., TCP dynamics, network congestion) can have a significant impact on user-level Web performance, and must not be overlooked when optimizing Web proxy cache placement. Second, cache filter effects can have a pronounced impact on the overall optimal caching solution. Third, small perturbations to the Web workload can produce quite different solutions for optimal proxy cache placement. This implies that robust, approximate solutions are more important than "perfect" optimal solutions. The paper provides several general heuristics for cache placement based on our packet-level simulations.
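The paper's heuristics come from packet-level ns2 simulation; as a simpler point of reference for the underlying optimization problem, a classic greedy placement heuristic can be sketched as follows (the toy latency matrix and client set are assumptions, not the paper's simulated topology):

```python
# Greedy heuristic for the proxy placement optimization problem: repeatedly add
# the candidate node that most reduces the clients' average latency to their
# nearest proxy. The toy latency matrix is an illustrative assumption, not the
# ns2 setup evaluated in the paper.

def greedy_placement(latency, clients, candidates, n_proxies):
    """latency[c][v]: client c -> node v latency; returns (proxy nodes, avg latency)."""
    chosen, best_cost = [], float("inf")
    for _ in range(n_proxies):
        best_node = None
        for v in candidates:
            if v in chosen:
                continue
            trial = chosen + [v]
            cost = sum(min(latency[c][p] for p in trial) for c in clients) / len(clients)
            if cost < best_cost:
                best_node, best_cost = v, cost
        chosen.append(best_node)
    return chosen, best_cost

if __name__ == "__main__":
    # 4 clients x 5 candidate nodes, latencies in ms (toy numbers)
    latency = [[10, 40, 35, 80, 60],
               [50, 15, 45, 70, 30],
               [60, 55, 20, 25, 40],
               [90, 65, 30, 10, 50]]
    print(greedy_placement(latency, clients=range(4), candidates=range(5), n_proxies=2))
```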
{"title":"A packet-level simulation study of optimal Web proxy cache placement","authors":"Gwen Houtzager, C. Williamson","doi":"10.1109/MASCOT.2003.1240677","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240677","url":null,"abstract":"The Web proxy cache placement problem is a classical optimization problem: place N proxies within an internetwork so as to minimize the average user response time for retrieving Web objects. In this paper, we tackle this problem using packet-level ns2 network simulations. There are three main conclusions from our study. First, network-level effects (e.g., TCP dynamics, network congestion) can have a significant impact on user-level Web performance, and must not be overlooked when optimizing Web proxy cache placement. Second, cache filter effects can have a pronounced impact on the overall optimal caching solution. Third, small perturbations to the Web workload can produce quite different solutions for optimal proxy cache placement. This implies that robust, approximate solutions are more important than \"perfect\" optimal solutions. The paper provides several general heuristics for cache placement based on our packet-level simulations.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134633973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An open tool to compute stochastic bounds on steady-state distributions and rewards
Pub Date: 2003-10-27 | DOI: 10.1109/MASCOT.2003.1240661
J. Fourneau, M. Coz, N. Pekergin, F. Quessette
We present X-Bounds, a new tool implementing a methodology based on stochastic ordering, algorithmic derivation of simpler Markov chains, and numerical analysis of these chains. The performance indices defined by reward functions are stochastically bounded by reward functions computed on much simpler or smaller Markov chains obtained after aggregation or simplification. This leads to an important reduction in numerical complexity. Typically, the chains are ten times smaller, while the accuracy may still be good enough.
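Deriving an st-monotone bounding chain is beyond a short sketch, but the numerical building block such a tool rests on, solving a small (aggregated) Markov chain for its steady-state distribution and evaluating a reward on it, can be illustrated as follows (the transition matrix and reward vector are assumptions):

```python
# Building block used once a simpler bounding chain has been derived: compute
# the steady-state distribution of a small discrete-time Markov chain (power
# iteration) and the expected reward under it. The transition matrix and the
# reward vector below are illustrative assumptions, not output of X-Bounds.

def steady_state(P, iters=10_000, tol=1e-12):
    n = len(P)
    pi = [1.0 / n] * n                      # start from the uniform distribution
    for _ in range(iters):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

def expected_reward(pi, reward):
    return sum(p * r for p, r in zip(pi, reward))

if __name__ == "__main__":
    # Small aggregated chain (rows sum to 1) and a reward, e.g. "queue length".
    P = [[0.6, 0.4, 0.0],
         [0.3, 0.4, 0.3],
         [0.0, 0.5, 0.5]]
    reward = [0, 1, 2]
    pi = steady_state(P)
    print("steady state:", pi, "expected reward:", expected_reward(pi, reward))
```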
{"title":"An open tool to compute stochastic bounds on steady-state distributions and rewards","authors":"J. Fourneau, M. Coz, N. Pekergin, F. Quessette","doi":"10.1109/MASCOT.2003.1240661","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240661","url":null,"abstract":"We present X-Bounds, a new tool to implement a methodology based on stochastic ordering, algorithmic derivation of simpler Markov chains and numerical analysis of these chains. The performance indices defined by reward functions are stochastically bounded by reward functions computed on much simpler or smaller Markov chains obtained after aggregation or simplification. This leads to an important reduction on numerical complexity. Typically, chains are ten times smaller and the accuracy may be good enough.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114380286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mapping peer behavior to packet-level details: a framework for packet-level simulation of peer-to-peer systems
Pub Date: 2003-10-27 | DOI: 10.1109/MASCOT.2003.1240644
Qi He, M. Ammar, G. Riley, Himanshu Raj, R. Fujimoto
The growing interest in peer-to-peer systems (such as Gnutella) has inspired numerous research activities in this area. Although it has been shown repeatedly that the performance of a peer-to-peer system depends strongly on the underlying network characteristics, much of the evaluation of peer-to-peer proposals has used simplified models that fail to include a detailed model of the underlying network. This can be largely attributed to the complexity of experimenting with a scalable peer-to-peer system simulator built on top of a scalable network simulator with packet-level details. In this work we design and develop a framework for an extensible and scalable peer-to-peer simulation environment that can be built on top of existing packet-level network simulators. The simulation environment is portable to different network simulators, which enables us to simulate a realistic large-scale peer-to-peer system using existing parallelization techniques. We demonstrate the use of the simulator in some simple experiments that show how Gnutella system performance can be affected by network characteristics.
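The framework's real API is not given in the abstract; the sketch below is only meant to illustrate the layering it argues for, with peer-level protocol logic written against a thin transport interface that a packet-level simulator backend would implement. All class and method names are hypothetical:

```python
# Illustrative-only sketch of the layering argued for: peer behaviour (here, a
# Gnutella-style flooding query) is written against an abstract transport that
# a packet-level simulator backend would implement. All names are hypothetical;
# this is not the framework's real API.
from abc import ABC, abstractmethod

class PacketLevelTransport(ABC):
    """Binding point for a packet-level network simulator backend."""
    @abstractmethod
    def send(self, src, dst, message, on_delivery):
        ...

class InstantTransport(PacketLevelTransport):
    """Stand-in backend that delivers immediately (no network detail)."""
    def send(self, src, dst, message, on_delivery):
        on_delivery(dst, message)

class Peer:
    def __init__(self, pid, transport):
        self.pid, self.transport = pid, transport
        self.neighbors, self.seen = [], set()

    def query(self, keyword, ttl=3):
        self.receive(self.pid, ("QUERY", keyword, ttl))

    def receive(self, sender, message):
        kind, keyword, ttl = message
        if (keyword, sender) in self.seen or ttl == 0:
            return
        self.seen.add((keyword, sender))
        for n in self.neighbors:                 # flood the query to neighbours
            self.transport.send(self.pid, n, (kind, keyword, ttl - 1),
                                lambda dst, msg: dst.receive(self.pid, msg))

if __name__ == "__main__":
    t = InstantTransport()
    peers = [Peer(i, t) for i in range(4)]
    peers[0].neighbors = [peers[1], peers[2]]
    peers[1].neighbors = [peers[3]]
    peers[0].query("mascots2003")
    print("peers that saw the query:", [p.pid for p in peers if p.seen])
```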
{"title":"Mapping peer behavior to packet-level details: a framework for packet-level simulation of peer-to-peer systems","authors":"Qi He, M. Ammar, G. Riley, Himanshu Raj, R. Fujimoto","doi":"10.1109/MASCOT.2003.1240644","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240644","url":null,"abstract":"The growing interest in peer-to-peer systems (such as Gnutella) has inspired numerous research activities in this area. Although many demonstrations have been performed that show that the performance of a peer-to-peer system is highly dependent on the underlying network characteristics, much of the evaluation of peer-to-peer proposals has used simplified models that fail to include a detailed model of the underlying network. This can be largely attributed to the complexity in experimenting with a scalable peer-to-peer system simulator built on top of a scalable network simulator with packet-level details. In this work we design and develop a framework for an extensible and scalable peer-to-peer simulation environment that can be built on top of existing packet-level network simulators. The simulation environment is portable to different network simulators, which enables us to simulate a realistic large scale peer-to-peer system using existing parallelization techniques. We demonstrate the use of the simulator for some simple experiments that show how Gnutella system performance can be impacted by the network characteristics.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"163 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125924806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bottleneck estimation for load control gateways
Pub Date: 2003-10-27 | DOI: 10.1109/MASCOT.2003.1240672
K. Pandit, J. Schmitt, M. Karsten, R. Steinmetz
Providing quality of service (QoS) to inelastic data transmissions in a cost-efficient, highly scalable, and realistic fashion in IP networks remains a challenging research issue. In M. Karsten and J. Schmitt (2002-03), a new approach for a basic, domain-oriented, reactive QoS system based on so-called load control gateways was proposed and experimentally evaluated. These load control gateways base their load/admission control decisions on observations of simple, binary marking algorithms executed at internal nodes, which allows the gateways to infer knowledge about the load on each path to peer load control gateways. The original load control system proposal uses rather simple, conservative admission control decision criteria. In this paper, we focus on methods to improve the admission control decision by using probability-theoretic insights in order to better estimate the load situation of a bottleneck on a given path. This is achieved by making assumptions about the probability distribution of the load state of the nodes and analyzing the effect on the path marking probability. We show that, even under benevolent assumptions, the exact calculation is mathematically intractable for a larger number of internal nodes, and we develop a heuristic in the form of a Monte Carlo based algorithm. To illustrate the overall benefit of our approach, we give a number of numerical examples which provide a quantitative sense of how much the admission control decision can be improved. Overall, we believe the result of this paper is an important enhancement of the admission control part of the original load control system, which allows better use of resources while statistically controlling the guarantees provided to inelastic transmissions.
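A minimal sketch of the Monte Carlo idea in this setting: assume a distribution on each internal node's load, sample node states, apply a binary marking rule per node, and estimate the path marking probability a gateway would observe. The load distribution, threshold, and parameters are illustrative assumptions, not the paper's model:

```python
# Monte Carlo sketch of path-marking estimation: assume a probability
# distribution on each internal node's load, sample node loads, mark when a
# node exceeds its threshold, and estimate the path marking probability that a
# load control gateway would observe. Distributions/thresholds are assumptions.
import random

def estimate_path_marking_prob(n_nodes, load_mean=0.6, threshold=0.9,
                               samples=100_000, seed=7):
    rng = random.Random(seed)
    marked = 0
    for _ in range(samples):
        # Sample each node's utilization; a packet is marked if ANY node on the
        # path is above its marking threshold (binary marking).
        loads = [rng.expovariate(1.0 / load_mean) for _ in range(n_nodes)]
        if any(load > threshold for load in loads):
            marked += 1
    return marked / samples

if __name__ == "__main__":
    for hops in (1, 2, 5, 10):
        p = estimate_path_marking_prob(hops)
        print(f"{hops:2d} hops -> estimated path marking probability {p:.3f}")
```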
{"title":"Bottleneck estimation for load control gateways","authors":"K. Pandit, J. Schmitt, M. Karsten, R. Steinmetz","doi":"10.1109/MASCOT.2003.1240672","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240672","url":null,"abstract":"Providing quality of service (QoS) to inelastic data transmissions in a cost-efficient, highly scalable, and realistic fashion in IP networks remains a challenging research issue. In M. Karsten, J. Schmitt (2002-03), a new approach for a basic, domain-oriented, reactive QoS system based on so-called load control gateways has been proposed and experimentally evaluated. These load control gateways base their load/admission control decisions on observations of simple, binary marking algorithms executed at internal nodes, which allows the gateways to infer knowledge about the load on each path to peer load control gateways. The original load control system proposal utilizes rather simple, conservative admission control decision criteria. In this paper, we focus on methods to improve the admission control decision by using probability theoretical insights in order to better estimate the load situation of a bottleneck on a given path. This is achieved by making assumptions on the probability distribution of the load state of the nodes and analyzing the effect on the path marking probability. We show that even with benevolent assumptions the exact calculation is mathematically intractable for a larger number of internal nodes and develop a heuristic in the form of a Monte Carlo based algorithm. To illustrate the overall benefit of our approach we give a number of numerical examples which provide a quantitative feeling on how the admission control decision can be improved. Overall, we believe the result of this paper to be an important enhancement of the admission control part of the original load control system which allows to make better usage of resources while at the same time controlling statistically the guarantees provided to inelastic transmissions.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129022684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It's not fair - evaluating efficient disk scheduling
Pub Date: 2003-10-27 | DOI: 10.1109/MASCOT.2003.1240673
Alma Riska, E. Riedel
Storage system designers prefer to limit the maximum queue length at individual disks to only a few outstanding requests, to avoid possible request starvation. In this paper, we evaluate the benefits and performance implications of allowing disks to queue more requests. We show that the average response time in the storage subsystem is reduced when more requests are queued and request scheduling at the disk is optimized based on seek and/or positioning time. We argue that the disk, as the only service center in a storage subsystem, is best able to utilize its resources via scheduling when it has the most complete view of the load it is about to process. The benefits of longer queues at the disks are even more obvious when the system operates under transient overload conditions.
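To make the queue-depth argument concrete, here is a toy comparison of FCFS against a greedy shortest-seek-first scheduler at several queue depths (the linear seek-cost model and random workload are assumptions for illustration, not the authors' disk model):

```python
# Toy illustration of why deeper disk queues help position-aware scheduling:
# compare FCFS against greedy shortest-seek-first (SSTF) at several queue
# depths. The linear seek-cost model and random workload are assumptions made
# for illustration, not the disk model used in the paper.
import random

def total_seek(requests, queue_depth, policy):
    head, cost, pending, i = 0, 0, [], 0
    while pending or i < len(requests):
        while len(pending) < queue_depth and i < len(requests):
            pending.append(requests[i])              # admit up to queue_depth requests
            i += 1
        if policy == "fcfs":
            nxt = pending.pop(0)
        else:                                        # sstf: pick the request closest to the head
            nxt = min(pending, key=lambda r: abs(r - head))
            pending.remove(nxt)
        cost += abs(nxt - head)                      # seek distance as the cost metric
        head = nxt
    return cost

if __name__ == "__main__":
    random.seed(3)
    workload = [random.randrange(0, 1_000_000) for _ in range(2_000)]
    base = total_seek(workload, 1, "fcfs")
    for depth in (1, 4, 16, 64):
        sstf = total_seek(workload, depth, "sstf")
        print(f"queue depth {depth:3d}: SSTF seek cost = {sstf / base:.2f} x FCFS")
```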
{"title":"It's not fair - evaluating efficient disk scheduling","authors":"Alma Riska, E. Riedel","doi":"10.1109/MASCOT.2003.1240673","DOIUrl":"https://doi.org/10.1109/MASCOT.2003.1240673","url":null,"abstract":"Storage system designers prefer to limit the maximum queue length at individual disks to only a few outstanding requests, to avoid possible request starvation. In this paper, we evaluate the benefits and performance implications of allowing disks to queue more requests. We show that the average response time in the storage subsystem is reduced when queuing more requests and optimizing (based on seek and/or position time) request scheduling at the disk. We argue that the disk, as the only service center in a storage subsystem, is able to best utilize its resources via scheduling when it has the most complete view of the load it is about to process. The benefits of longer queues at the disks are even more obvious when the system operates under transient overload conditions.","PeriodicalId":344411,"journal":{"name":"11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, 2003. MASCOTS 2003.","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130725303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}