Pub Date: 2009-12-28  DOI: 10.1109/MASCOT.2009.5366158
Efficient tracing and performance analysis for large distributed systems
Eric Anderson, Christopher Hoover, Xiaozhou Li, Joseph A. Tucek
Distributed systems are notoriously difficult to implement and debug. One important tool for understanding the behavior of distributed systems is tracing. Unfortunately, effective tracing for modern distributed systems faces several challenges. First, many interesting behaviors in distributed systems occur only rarely, or only at full production scale. Hence we need tracing mechanisms that impose minimal overhead, in order to allow always-on tracing of production instances. Second, for high-speed systems, messages can be delivered in significantly less time than the error of traditional time synchronization techniques such as the Network Time Protocol (NTP), necessitating time adjustment techniques with much higher precision. Third, distributed systems today may generate millions of events per second system-wide, resulting in traces consisting of billions of events. Such large traces can overwhelm existing trace analysis tools. We present techniques that address these three challenges. Our contributions include 1) a low-overhead tracing mechanism, which allows tracing of large systems without impacting their behavior or performance (0.14 μs/event), 2) a post hoc technique for producing highly accurate time synchronization across hosts (within 10 μs, compared to between 100 μs and 2 ms for NTP), and 3) incremental data processing techniques that facilitate analyzing traces containing billions of trace points on desktop systems. We have successfully applied these techniques to two distributed systems, a cooperative caching system and a distributed storage system, and from our experience, we believe our techniques are applicable to other distributed systems.
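The post hoc synchronization technique itself is not described in this abstract; one common building block for such techniques is estimating each host pair's clock offset from bidirectional message timestamps, assuming the minimum one-way network delay is symmetric. A minimal sketch (function name and assumptions ours, not the paper's):

```python
def estimate_offset(a_to_b, b_to_a):
    """Estimate the clock offset of host B relative to host A.

    a_to_b: list of (send_ts_on_A, recv_ts_on_B) pairs
    b_to_a: list of (send_ts_on_B, recv_ts_on_A) pairs
    Assumes the minimum one-way delay is the same in both directions.
    """
    # A -> B: recv - send = offset + delay, so the minimum bounds offset + d_min
    fwd = min(rb - sa for sa, rb in a_to_b)
    # B -> A: recv - send = -offset + delay, minimum bounds -offset + d_min
    bwd = min(ra - sb for sb, ra in b_to_a)
    # Subtracting cancels d_min under the symmetry assumption
    return (fwd - bwd) / 2.0
```

With many message pairs, the minimum-delay samples tighten the estimate, which is why a post hoc pass over a full trace can beat online NTP.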
Pub Date: 2009-12-28  DOI: 10.1109/MASCOT.2009.5366785
Power and performance modeling of virtualized desktop systems
Andrzej Kochut
Desktop virtualization is a new delivery method in which desktop operating systems execute in a data center and users access their applications using stateless “thin-client” devices. This paradigm promises significant benefits in terms of data security, flexibility, and reduction of the total cost of ownership. However, in order to further optimize this approach while maintaining good user experience, efficient resource management algorithms are required. This paper formulates an analytical model allowing for detailed investigation of how the power consumption of a virtualized server farm depends on the properties of the workload, the adaptiveness of the virtualization infrastructure, and the average density of virtual machines per physical server. The assumptions needed to develop the model are confirmed using statistical analysis of desktop workload traces, and the model itself is validated using simulations. We apply the model to compare the power consumption of static and dynamic virtual machine allocation strategies. The results of the study can be used to develop online virtual machine migration algorithms. Even though this paper focuses on virtualized systems running desktop workloads, the modeling approach is general and can be applied in other contexts.
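The static-versus-dynamic comparison can be illustrated with a toy linear power model (idle power plus a utilization-proportional term); all constants and function names here are illustrative, not taken from the paper:

```python
import math

P_IDLE, P_PEAK = 100.0, 200.0  # watts per server (illustrative values)

def server_power(util):
    """Linear server power model: idle cost plus utilization-proportional term."""
    return 0.0 if util == 0 else P_IDLE + (P_PEAK - P_IDLE) * util

def static_farm_power(vm_loads, vms_per_server):
    """All servers stay on; VMs are spread evenly across a fixed farm."""
    n = math.ceil(len(vm_loads) / vms_per_server)
    util = sum(vm_loads) / n
    return n * server_power(util)

def dynamic_farm_power(vm_loads):
    """Idealized consolidation: total load packed onto the fewest servers,
    with the remaining servers powered off."""
    total = sum(vm_loads)
    if total == 0:
        return 0.0
    n = math.ceil(total)           # servers needed at capacity 1.0
    full, rest = n - 1, total - (n - 1)
    return full * server_power(1.0) + server_power(rest)
```

The gap between the two strategies grows with the idle-power fraction, which is the intuition behind migration-based consolidation.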
Pub Date: 2009-12-28  DOI: 10.1109/MASCOT.2009.5366777
Service differentiation in multi-rate HSDPA systems
Hongxia Sun, C. Williamson
In multi-rate cellular transmission systems, users with different Quality of Service (QoS) requirements share the same wireless channel. In this paper, we investigate the problem of efficient resource allocation for scheduling with differentiated QoS support in a multi-rate system. We propose Dynamic Global Proportional Fairness (DGPF) scheduling on the downlink. We investigate the performance of the scheduling algorithm and model the proposed scheme in a High Speed Downlink Packet Access (HSDPA) simulation environment. The simulation results show that our approach can achieve suitable QoS for different classes of users without compromising aggregate network throughput. The results also show that TCP dynamics affect overall system performance.
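The abstract does not detail DGPF itself; as a point of reference, the classic proportional fair scheduler that such schemes build on picks, each transmission interval, the user with the best ratio of instantaneous rate to average throughput. A sketch (the time constant and update rule are standard textbook choices, not the paper's):

```python
def pf_schedule(inst_rates, avg_rates, tc=100.0):
    """One step of classic proportional fair (PF) downlink scheduling.

    inst_rates: achievable rate per user in this interval.
    avg_rates: exponentially averaged throughput per user (updated in place).
    Returns the index of the scheduled user.
    """
    # Schedule the user with the highest rate-to-average-throughput ratio
    chosen = max(range(len(inst_rates)),
                 key=lambda i: inst_rates[i] / max(avg_rates[i], 1e-9))
    # EWMA update: only the chosen user is served this interval
    for i in range(len(avg_rates)):
        served = inst_rates[i] if i == chosen else 0.0
        avg_rates[i] += (served - avg_rates[i]) / tc
    return chosen
```

Users with low past throughput see their metric rise until they are served, which is the fairness mechanism DGPF extends with per-class differentiation.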
Pub Date: 2009-12-28  DOI: 10.1109/MASCOT.2009.5366277
EXtensible animator for mobile simulations: EXAMS
Livathinos S. Nikolaos
One of the most widely used simulation environments for mobile wireless networks is the Network Simulator 2 (NS-2). However, NS-2 stores its output in a text file, so there is a need for a visualization tool to animate the simulation of the wireless network. The purpose of this tool is to help the researcher examine in detail how the wireless protocol works, both on a network and on a per-node basis. Much of this information is protocol dependent and cannot be depicted properly by a general-purpose animation process. Existing animation tools do not provide this level of information, nor do they permit the specific protocol to control the animation at all. EXAMS is an NS-2 visualization tool for mobile simulations that makes it possible to portray NS-2's internal information, such as transmission properties and nodes' data structures. This is possible mainly due to EXAMS's extensible architecture, which separates the animation process into a general part and a protocol-specific part. The latter can be developed independently by the protocol designer and loaded on demand. These and other useful characteristics of the EXAMS tool can be invaluable to a researcher investigating and debugging a mobile networking protocol.
Pub Date: 2009-12-28  DOI: 10.1109/MASCOT.2009.5363142
Balancing soft error coverage with lifetime reliability in redundantly multithreaded processors
Taniya Siddiqua, S. Gurumurthi
Silicon reliability is a key challenge facing the microprocessor industry. Processors need to be designed such that they are resilient against both soft errors and lifetime reliability phenomena. However, techniques developed to address one class of reliability problems may impact other aspects of silicon reliability. In this paper, we show that Redundant Multi-Threading (RMT), which provides soft error protection, degrades lifetime reliability. We then explore two different architectural approaches to tackle this problem, namely, Dynamic Voltage Scaling (DVS) and partial RMT. We show that each approach has certain strengths and weaknesses with respect to performance, soft error coverage, and lifetime reliability. We then propose and evaluate a hybrid approach that combines DVS and partial RMT. We show that this approach provides better improvement in lifetime reliability than DVS or partial RMT alone, buys back a significant amount of the performance that is lost due to DVS, and provides nearly complete soft error coverage.
Pub Date: 2009-12-28  DOI: 10.1109/MASCOT.2009.5366744
A general algorithm to compute the steady-state solution of product-form cooperating Markov chains
A. Marin, S. R. Bulò
In the last few years, several new results about product-form solutions of stochastic models have been formulated. In particular, the Reversed Compound Agent Theorem (RCAT) and its extensions play a pivotal role in the characterization of cooperating stochastic models in product-form. Although these results have been used to prove several well-known theorems (e.g., Jackson queueing network and G-network solutions) as well as novel ones, to the best of our knowledge, an automatic tool to derive the product-form solution (if one exists) of a generic cooperation among a set of stochastic processes has not yet been developed. In this paper we address the problem of solving the non-linear system of equations that arises from the application of RCAT. We present an iterative algorithm that is the basis of a software tool currently under development. We illustrate the algorithm, discuss its convergence and complexity, and compare it with previous algorithms defined for the analysis of Jackson networks and G-networks. Several tests have been conducted involving the product-form solution, via RCAT, of an arbitrarily large number of cooperating processes.
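RCAT's non-linear system is not shown in this abstract, but its flavor can be seen on the special case the abstract mentions: the Jackson traffic equations, which a fixed-point iteration solves directly. This is a generic sketch of that special case, not the authors' algorithm:

```python
def traffic_rates(gamma, P, tol=1e-12, max_iter=10000):
    """Fixed-point iteration for the Jackson traffic equations
    lambda_i = gamma_i + sum_j lambda_j * P[j][i],
    the linear special case that RCAT-style algorithms generalize.

    gamma: external arrival rate at each queue.
    P: routing probabilities, P[j][i] = probability of going j -> i.
    """
    n = len(gamma)
    lam = list(gamma)  # start from the external rates
    for _ in range(max_iter):
        new = [gamma[i] + sum(lam[j] * P[j][i] for j in range(n))
               for i in range(n)]
        if max(abs(new[i] - lam[i]) for i in range(n)) < tol:
            return new
        lam = new
    return lam
```

For an open network the routing matrix is substochastic, so the iteration is a contraction and converges geometrically; the RCAT case replaces the linear update with a non-linear one, which is where convergence analysis becomes interesting.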
Pub Date: 2009-12-28  DOI: 10.1109/MASCOT.2009.5366697
Simulation environment for design and verification of Network-on-Chip and multi-core systems
G. Khan, V. Dumitriu
The Network-on-Chip (NoC) concept presents system designers with a new approach to the design of on-chip interconnection structures. However, such networks present designers with a large array of design parameters and decisions, many of which are critical to the efficient operation of NoC systems. To aid the design process of complex systems-on-chip, this paper presents a NoC simulation environment developed and implemented using SystemC, a transaction-level modeling language. The simulation environment consists of on-chip components as well as traffic generators, which can generate various types of traffic patterns. A set of simulation results demonstrates the types of parameters that can affect the performance of on-chip systems, including topology, network latency, and achievable throughput. The results also verify the modeling capabilities of the proposed environment.
Pub Date: 2009-12-28  DOI: 10.1109/MASCOT.2009.5366721
Workload modeling using pseudo2D-HMM
Alessandro Moro, E. Mumolo, M. Nolich
In this paper, we present a novel approach for accurate modeling of computer workloads. In this approach, the sequences of features generated by a program during its execution are treated as time series and processed with signal processing techniques for both feature extraction and statistical pattern matching. In the feature extraction phase, we used spectral analysis to describe the sequence and retain the important information. In the pattern matching phase, we used a simplified form of two-dimensional Hidden Markov Model, called pseudo2D-HMM, as the statistical machine-learning algorithm. Several executions of the same workload are necessary to obtain a pseudo2D-HMM model of it. The models are obtained in an initial training phase; we developed techniques for online workload classification of a running process and for synthetic trace generation. The proposed algorithm is evaluated via trace-driven simulations using the SPEC 2000 workloads. We show that pseudo2D-HMMs accurately describe memory reference sequences; the classification accuracy is about 92% with six different workloads.
Pub Date: 2009-12-28  DOI: 10.1109/MASCOT.2009.5366631
Globally fair radio resource allocation for wireless mesh networks
Ashish Raniwala, Pradipta De, Srikant Sharma, Rupa Krishnan, T. Chiueh
Network flows running on a wireless mesh network (WMN) may suffer from partial failures in the form of serious throughput degradation, sometimes to the extent of starvation, because of weaknesses in the underlying MAC protocol, dissimilar physical transmission rates, or different degrees of local congestion. Most existing WMN transport protocols fail to take these factors into account. This paper describes the design, implementation, and evaluation of a coordinated congestion control (C3L) algorithm that guarantees fair resource allocation under adverse scenarios and thus provides end-to-end max-min fairness among competing flows. The C3L algorithm features an advanced topology discovery mechanism that detects the inhibition of wireless communication links, and a general collision domain capacity re-estimation mechanism that effectively addresses such inhibition. A comprehensive ns-2-based simulation study as well as empirical measurements taken from an IEEE 802.11a-based multi-hop wireless testbed demonstrate that the C3L algorithm greatly improves inter-flow fairness, eliminates the starvation problem, and at the same time maintains high radio resource utilization efficiency.
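Max-min fairness, the objective C3L targets, can be computed for a single shared resource by progressive filling: repeatedly satisfy the smallest unmet demand, then split what remains equally. This sketch illustrates the fairness criterion only, not the paper's distributed algorithm:

```python
def max_min_allocation(capacity, demands):
    """Progressive filling for max-min fairness on one shared resource.

    Flows whose demand fits within an equal share get exactly their
    demand; the leftover capacity is split equally among the rest.
    """
    alloc = [0.0] * len(demands)
    active = sorted(range(len(demands)), key=lambda i: demands[i])
    remaining = capacity
    while active:
        share = remaining / len(active)
        i = active[0]                      # smallest unmet demand first
        if demands[i] <= share:
            alloc[i] = demands[i]          # fully satisfied, drops out
            remaining -= demands[i]
            active.pop(0)
        else:
            for j in active:               # everyone left gets an equal share
                alloc[j] = share
            break
    return alloc
```

In a mesh network the "resource" is per-collision-domain capacity, which is why C3L's capacity re-estimation step matters: the fairness computation is only as good as the capacity estimate it runs on.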
Pub Date: 2009-12-28  DOI: 10.1109/MASCOT.2009.5366710
Horizon — Exploiting timing information for parallel network simulation
G. Kunz, O. Landsiedel, Klaus Wehrle
Network simulation faces an increasing demand for highly detailed simulation models, which in turn require efficient handling of their inherent computational complexity. This demand for detailed models includes both accurate estimations of processing time and in-depth modeling of wireless technologies. For instance, one might want to investigate whether a particular device can incorporate a computationally complex radio transmission technology while meeting the deadlines of a multimedia streaming application such as VoIP.