A comparative simulation study of link quality estimators in wireless sensor networks
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366798
Nouha Baccour, A. Koubâa, M. B. Jamaa, H. Youssef, Marco Zúñiga, M. Alves
Link quality estimation (LQE) in wireless sensor networks (WSNs) is a fundamental building block for an efficient and cross-layer design of higher-layer network protocols. Several link quality estimators have been reported in the literature; however, none has been thoroughly evaluated. There is thus a need for a comparative study of these estimators as well as an assessment of their impact on higher-layer protocols. In this paper, we perform an extensive comparative simulation study of some well-known link quality estimators using TOSSIM. We first analyze the statistical properties of the link quality estimators independently of higher-layer protocols, and then we investigate their impact on the Collection Tree Routing Protocol (CTP). This work is a fundamental step toward understanding the statistical behavior of LQE techniques, helping system designers choose the most appropriate estimator for their network protocol architectures.
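As an illustration of the class of estimators typically compared in such studies, the sketch below implements a WMEWMA-style smoothed packet reception ratio (PRR); the window size and smoothing factor are assumed values for illustration, not parameters taken from the paper.

```python
import random

# Illustrative WMEWMA-style link quality estimator (not the paper's exact code):
# the link's packet reception ratio (PRR) is computed over a fixed window of
# received/lost packets and smoothed with an exponentially weighted moving average.

class WMEWMAEstimator:
    def __init__(self, window=30, alpha=0.6):
        self.window = window     # packets per estimation window (assumed value)
        self.alpha = alpha       # EWMA smoothing factor (assumed value)
        self.received = 0
        self.expected = 0
        self.estimate = None     # smoothed PRR in [0, 1]

    def packet_event(self, received: bool):
        self.expected += 1
        if received:
            self.received += 1
        if self.expected == self.window:
            prr = self.received / self.expected
            if self.estimate is None:
                self.estimate = prr
            else:
                self.estimate = self.alpha * self.estimate + (1 - self.alpha) * prr
            self.received = self.expected = 0
        return self.estimate

# Example: a link that delivers roughly 80% of its packets.
est = WMEWMAEstimator()
for _ in range(300):
    est.packet_event(random.random() < 0.8)
print(est.estimate)
```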
{"title":"A comparative simulation study of link quality estimators in wireless sensor networks","authors":"Nouha Baccour, A. Koubâa, M. B. Jamaa, H. Youssef, Marco Zúñiga, M. Alves","doi":"10.1109/MASCOT.2009.5366798","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366798","url":null,"abstract":"Link quality estimation (LQE) in wireless sensor networks (WSNs) is a fundamental building block for an efficient and cross-layer design of higher layer network protocols. Several link quality estimators have been reported in the literature; however, none has been thoroughly evaluated. There is thus a need for a comparative study of these estimators as well as the assessment of their impact on higher layer protocols. In this paper, we perform an extensive comparative simulation study of some well-known link quality estimators using TOSSIM. We first analyze the statistical properties of the link quality estimators independently of higher-layer protocols, then we investigate their impact on the Collection Tree Routing Protocol (CTP). This work is a fundamental step to understand the statistical behavior of LQE techniques, helping system designers choose the most appropriate for their network protocol architectures.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"23 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120869996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Timing penalties associated with cache sharing
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366821
V. Babka, P. Libic, P. Tůma
Although important from a software performance perspective, the behavior of memory caches is not captured by common approaches to modeling software performance, which tend to treat operation durations as constants despite the fact that the operations compete for memory caches. Incorporating memory cache models into software performance models is hindered by the fact that existing cache models do not provide information about timings and penalties, but only about hits and misses. The paper outlines the relationship between cache events and cache timings on a real computer architecture, indicating that the existing practice of modeling cache miss penalties as constants is not sufficient to model software performance faithfully.
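A minimal numeric sketch of the argument, with hypothetical timing numbers: once cache sharing inflates the real miss penalty, a model that keeps the penalty constant misestimates the operation duration.

```python
# Hypothetical numbers only: shows how a constant miss-penalty assumption can
# misestimate operation duration once cache sharing inflates the real penalty.

accesses = 1_000_000
hit_time_ns = 1.0
misses = 50_000

constant_penalty_ns = 100.0                   # what a simple model assumes
predicted = accesses * hit_time_ns + misses * constant_penalty_ns

# Under cache sharing the penalty is better described by a distribution;
# here we simply assume it averages 180 ns instead of 100 ns.
shared_penalty_ns = 180.0
observed = accesses * hit_time_ns + misses * shared_penalty_ns

print(f"predicted {predicted/1e6:.1f} ms, observed {observed/1e6:.1f} ms, "
      f"error {100*(observed-predicted)/observed:.0f}%")
```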
{"title":"Timing penalties associated with cache sharing","authors":"V. Babka, P. Libic, P. Tůma","doi":"10.1109/MASCOT.2009.5366821","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366821","url":null,"abstract":"Although important from software performance perspective, the behavior of memory caches is not captured by the common approaches to modeling of software performance, where the software performance models tend to treat operation durations as constants despite the fact that the operations compete for memory caches. Incorporating memory cache models into software performance models is hindered by the fact that existing cache models do not provide information about timings and penalties, but only about hits and misses. The paper outlines the relationship of cache events and cache timings on a real computer architecture, indicating that the existing practice of modeling cache miss penalties as constants is not sufficient to model software performance faithfully.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"1962 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129757967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality of service issues and nonconvex Network Utility Maximization for inelastic services in the Internet
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366162
G. Abbas, A. Nagar, H. Tawfik, J. Goulermas
Network Utility Maximization (NUM) provides an important perspective on rate allocation, where optimal performance, in terms of maximal aggregate bandwidth utility, is generally achieved by having each source adaptively adjust its transmission rate. Behind most of the recent literature on NUM lie the common assumptions that traffic flows are elastic and that their utility functions are strictly concave. This provides design simplicity but, in practice, limits the applicability of the resulting protocols, in that severe QoS problems may be encountered when bandwidth is shared by inelastic flows. This paper investigates the problem of distributively allocating data transmission rates to multiclass services, both elastic and inelastic, and overcomes these restrictive and often unrealistic assumptions. The proposed method is based on Lagrangian relaxation and a dual formulation that decomposes the higher-dimensional NUM into a number of subproblems. We use a novel surrogate-subgradient-based stochastic method to solve the dual problem. Unlike ordinary subgradient methods, the surrogate subgradient can compute optimal prices without the need to solve all the subproblems. For the lower-dimensional, nonlinear, and nonconvex subproblems we use a hybrid Particle Swarm Optimization (PSO) and Sequential Quadratic Programming (SQP) method, where the objective is to achieve fast convergence as well as accuracy. We demonstrate the efficiency of the proposed rate allocation algorithm in terms of maintaining QoS for multiclass services, and validate its scalability and accuracy for large-scale flows.
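The dual-decomposition idea underlying such methods can be sketched with the classical subgradient price update for a single bottleneck link and concave (elastic) log utilities; this is not the paper's surrogate-subgradient/PSO-SQP hybrid, and the capacity and weights below are illustrative.

```python
import numpy as np

# Classical dual decomposition for NUM on one shared link of capacity C:
# each source maximizes U_i(x) - p*x at the current link price p, and the link
# updates p by a subgradient step on the dual. With log utilities the source
# problem has the closed form x_i = w_i / p. (Elastic/concave case only; the
# paper's surrogate-subgradient method for nonconvex utilities is different.)

C = 10.0                        # link capacity (illustrative)
w = np.array([1.0, 2.0, 3.0])   # utility weights, U_i(x) = w_i * log(x)
p = 1.0                         # initial link price
step = 0.05

for _ in range(500):
    x = w / p                                  # per-source optimal rates at price p
    p = max(1e-6, p + step * (x.sum() - C))    # subgradient price update

print("rates:", x, "total:", x.sum())          # converges to w_i / sum(w) * C
```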
{"title":"Quality of service issues and nonconvex Network Utility Maximization for inelastic services in the Internet","authors":"G. Abbas, A. Nagar, H. Tawfik, J. Goulermas","doi":"10.1109/MASCOT.2009.5366162","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366162","url":null,"abstract":"Network Utility maximization (NUM) provides an important perspective to conduct rate allocation where optimal performance, in terms of maximal aggregate bandwidth utility, is generally achieved such that each source adaptively adjusts its transmission rate. Behind most of the recent literature on NUM, common assumptions are that traffic flows are elastic and that their utility functions are strictly concave. This provides design simplicity but, in practice, limits the applicability of resulting protocols, in that severe QoS problems may be encountered when bandwidth is shared by inelastic flows. This paper investigates the problem of distributively allocating data transmission rates to multiclass services, both elastic and inelastic, and overcomes the restrictive and often unrealistic assumptions. The proposed method is based on the Lagrangian Relaxation for a dual formulation that decomposes the higher dimension NUM into a number of subproblems. We use a novel Surrogate Subgradient based stochastic method to solve the dual problem. Unlike the ordinary subgradient methods, Surrogate Subgradient can compute optimal prices without the need to solve all the subproblems. For the lower dimension, nonlinear and nonconvex subproblems we use a hybrid Particle Swarm Optimization (PSO) and Sequential Quadratic Programming (SQP) method, where the objective is to achieve fast convergence as well as accuracy. We demonstrate the efficiency of the proposed rate allocation algorithm, in terms maintaining QoS for multiclass services, and validate its scalability and accuracy for large scale flows.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115008096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability modeling of RAID storage systems with latent errors
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366195
I. Iliadis
The reliability of disk storage systems is adversely affected by the presence of latent sector errors. Disk scrubbing and intradisk redundancy are two schemes proposed to cope with unrecoverable or latent media errors and to enhance the reliability of RAID storage systems. Two recent studies have investigated the effectiveness of these schemes, but they have reached opposing conclusions. These studies were conducted using two different modeling approaches. We present a detailed investigation which reveals that this discrepancy originates from the difference in the approach adopted and the level of detail incorporated by the two models. We show that, as a consequence, these models provide reliability results that may differ by orders of magnitude, therefore leading to contradictory conclusions. We develop a common analytical framework within which we investigate the details, merits, weaknesses, and applicability of each model. We resolve this discrepancy by deriving enhanced models that incorporate inherent characteristics of the latent-error process and provide realistic reliability results that are in good agreement. We subsequently reassess the reliability results and conclusions presented in previous studies regarding the disk scrubbing and intradisk redundancy schemes.
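For context, a back-of-envelope RAID-5 reliability estimate that includes latent errors is sketched below; it is a textbook-style approximation with illustrative parameters, not the paper's analytical framework.

```python
import math

# Back-of-envelope RAID-5 reliability with latent (unrecoverable) sector errors.
# Standard approximation, not the paper's detailed model; all numbers illustrative.

n_disks = 8
mttf_h = 1.0e6            # mean time to failure of one disk, hours
mttr_h = 24.0             # rebuild time, hours
capacity_bits = 1e12 * 8  # 1 TB disk, in bits
uber = 1e-15              # unrecoverable bit error rate (errors per bit read)

# Probability that at least one latent error is hit while reading the
# surviving n-1 disks during rebuild:
p_ue_rebuild = 1.0 - math.exp(-uber * (n_disks - 1) * capacity_bits)

# Data loss during rebuild can come from a second disk failure or from an
# unrecoverable read error on a surviving disk.
p_second_failure = (n_disks - 1) * mttr_h / mttf_h
p_loss_given_failure = p_second_failure + p_ue_rebuild

mttdl_h = mttf_h / (n_disks * p_loss_given_failure)
print(f"P(UE during rebuild) = {p_ue_rebuild:.3f}, MTTDL ~ {mttdl_h/8760:.0f} years")
```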
{"title":"Reliability modeling of RAID storage systems with latent errors","authors":"I. Iliadis","doi":"10.1109/MASCOT.2009.5366195","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366195","url":null,"abstract":"The reliability of disk storage systems is adversely affected by the presence of latent sector errors. Disk scrubbing and intradisk redundancy are two schemes proposed to cope with unrecoverable or latent media errors and enhance the reliability of RAID storage systems. Two recent studies have investigated the effectiveness of these schemes, but they have reached opposing conclusions. These studies were conducted using two different modeling approaches. We present a detailed investigation which reveals that this discrepancy originates from the difference in the approach adopted, and the level of detail incorporated by the two models. We show that, as a consequence, these models provide reliability results which may differ by orders of magnitude therefore leading to contradicting conclusions. We develop a common analytical framework within which we investigate the details, merits, weaknesses, and applicability of each model. We resolve this discrepancy by deriving enhanced models that incorporate inherent characteristics of the latent-error process and provide realistic reliability results that are in good agreement. We subsequently reassess the reliability results and conclusions presented in previous studies regarding the disk scrubbing and the intradisk redundancy scheme.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114511677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-organized sink placement in large-scale wireless sensor networks
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366741
Wint Yi Poe, J. Schmitt
The deficient energy supplies of wireless sensor networks (WSNs) drive network designers to optimize energy consumption in various ways. With regard both to energy and to system performance, we design a local search technique for sink placement in WSNs that simultaneously tries to minimize the maximum worst-case delay and to extend the lifetime of a WSN. Since it is not feasible for a sink to use global information, especially in large-scale WSNs, we introduce a self-organized sink placement (SOSP) strategy that combines the advantages of our previous works [1] and [2]. The goal of this research is to provide a better sink placement strategy with a lower communication overhead. Avoiding the costly design of using nodes' location information, each sink sets up its own group by communicating with its n-hop neighbors. While preserving locally optimal placement, SOSP yields solutions whose communication overhead and computational effort are better than those of previous approaches. To model and consequently control the worst-case delay of a given WSN, we build upon the so-called sensor network calculus (a recent methodology first introduced in [3]).
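A minimal local-search sketch of the placement idea is given below; for brevity it uses global node coordinates and a hypothetical set of candidate moves, whereas SOSP itself is self-organized and relies only on n-hop neighborhood information.

```python
import random

# Minimal local-search sketch for sink placement: sinks repeatedly move to the
# candidate position that reduces the maximum node-to-nearest-sink distance.
# Global coordinates are used only for brevity; SOSP avoids location information.

random.seed(1)
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(200)]
sinks = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(3)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def max_distance(placement):
    return max(min(dist(n, s) for s in placement) for n in nodes)

step = 5.0
for _ in range(200):
    i = random.randrange(len(sinks))
    best, best_cost = sinks[i], max_distance(sinks)
    for dx, dy in [(step, 0), (-step, 0), (0, step), (0, -step)]:
        candidate = (sinks[i][0] + dx, sinks[i][1] + dy)
        trial = sinks[:i] + [candidate] + sinks[i + 1:]
        cost = max_distance(trial)
        if cost < best_cost:
            best, best_cost = candidate, cost
    sinks[i] = best

print("max node-to-sink distance:", round(max_distance(sinks), 1))
```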
{"title":"Self-organized sink placement in large-scale wireless sensor networks","authors":"Wint Yi Poe, J. Schmitt","doi":"10.1109/MASCOT.2009.5366741","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366741","url":null,"abstract":"The deficient energy supplies of wireless sensor networks (WSNs) drives network designers to optimize energy consumption in various ways. Not only with regard to the energy issue but also with respect to system performance, we design a local search technique for sink placement in WSNs that tries to minimize the maximum worst-case delay and extend the lifetime of a WSN, simultaneously. Since it is not feasible for a sink to use global information, which especially applies to large-scale WSNs, we introduce a self-organized sink placement (SOSP) strategy that combines the advantages of our previous works [1] and [2]. The goal of this research is to provide a better sink placement strategy with a lower communication overhead. Avoiding the costly design of using nodes' location information, each sink sets up its own group by communicating to its n-hop distance neighbors. While keeping the locally optimal placement, SOSP exhibits a quality of the solutions with respect to communication overhead as well as computational effort that are better than previous solutions. To model and consequently control the worst-case delay of a given WSN we build upon the so-called sensor network calculus (a recent methodology first introduced in [3]).","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133804080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Runtime state change detector of computer system resources under non stationary conditions
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366607
S. Casolari, M. Colajanni, F. L. Presti
All runtime management decisions in computer and information systems require immediate detection of relevant changes in the state of their resources. This is accomplished by continuously monitoring the performance/utilization of key system resources and by using appropriate statistical tests to detect the occurrence of significant state changes. Unfortunately, the complexity of today's systems and applications and the unpredictability of user request patterns result in highly variable and non-stationary time series which are difficult to analyze. As a consequence, present solutions for detecting state changes at runtime suffer from excessive time delays or false positives. We propose a novel “agile” runtime detector that solves the delay vs. false positive tradeoff: it is able to detect the relevant state changes as fast as the best reactive models, with the lowest percentages of false positives. All evaluations carried out for a large set of scenarios confirm the efficacy and robustness of the proposed model.
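For comparison, a plain two-sided CUSUM detector over a utilization series is sketched below; it is not the paper's “agile” detector, but its drift and threshold parameters make the delay vs. false-positive tradeoff explicit.

```python
import random

# Simple two-sided CUSUM change detector on a resource-utilization series.
# Not the paper's detector; the drift k and threshold h are the knobs that
# trade detection delay against false positives.

def cusum(series, target, k=0.5, h=5.0):
    g_pos = g_neg = 0.0
    for t, x in enumerate(series):
        g_pos = max(0.0, g_pos + (x - target) - k)
        g_neg = max(0.0, g_neg - (x - target) - k)
        if g_pos > h or g_neg > h:
            return t            # index at which a state change is flagged
    return None

random.seed(0)
series = [50 + random.gauss(0, 2) for _ in range(100)]   # stable around 50% utilization
series += [65 + random.gauss(0, 2) for _ in range(100)]  # state change at t = 100
print("change detected at t =", cusum(series, target=50))
```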
{"title":"Runtime state change detector of computer system resources under non stationary conditions","authors":"S. Casolari, M. Colajanni, F. L. Presti","doi":"10.1109/MASCOT.2009.5366607","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366607","url":null,"abstract":"All runtime management decisions in computer and information systems require immediate detection of relevant changes in the state of their resources. This is accomplished by continuously monitoring the performance/utilization of key system resources and by using appropriate statistical tests to detect the occurance of significant state changes. Unfortunately, the complexity of today systems and applications and the unpredictability of user request patterns result in highly variable and non stationary time series which are difficult to analyze. As a consequence, present solutions for detecting state changes at runtime are affected by excessive time delays or false positives. We propose a novel “agile” runtime detector that solves the delay vs. false positive tradeoff: it is able to detect the relevant state changes as fast as the best reactive models with the lowest percentages of false positives. All evaluations carried out for a large set of scenarios confirm the efficacy and robustness of the proposed model.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132928004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time performance modeling for adaptive software systems with multi-class workload
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366166
Dinesh Kumar, A. Tantawi, Li Zhang
Modern adaptive software systems must often adjust or reconfigure their architecture in order to respond to continuous changes in their execution environment. Efficient autonomic control in such systems depends strongly on the accuracy of their representative performance model. In this paper, we are concerned with real-time estimation of a performance model for adaptive software systems that process multiple classes of transactional workload. Based on an open queueing network model and an Extended Kalman Filter (EKF), experiments in this work show that: 1) the model parameter estimates converge to the actual values very slowly when the variation in the incoming workload is very low, and 2) the estimates fail to converge quickly to the new values when there is a step change caused by adaptive reconfiguration of the actual software parameters. We therefore propose a modified EKF design in which the measurement model is augmented with a set of constraints based on past measurement values. Experiments demonstrate the effectiveness of our approach, which leads to significant improvements in convergence in both cases.
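A stripped-down version of the filtering idea, assuming a single service demand and an M/M/1-like measurement equation; the paper's model is multi-class and augments the measurement model with constraints, so this only illustrates the basic EKF mechanics.

```python
import random

# Minimal EKF sketch: estimate a single service demand s from noisy response-time
# measurements R = s / (1 - lam*s) of an M/M/1-like station with known arrival
# rate lam. Illustrative only; not the paper's multi-class, constraint-augmented model.

random.seed(0)
true_s, lam = 0.05, 10.0      # seconds, requests/second (assumed values)
Q, Rn = 1e-7, 1e-4            # process and measurement noise variances

s_est, P = 0.02, 1e-3         # initial estimate and covariance
for _ in range(200):
    # noisy measurement of mean response time
    z = true_s / (1 - lam * true_s) + random.gauss(0, Rn ** 0.5)

    # predict (random-walk state model)
    P = P + Q

    # update: h(s) = s / (1 - lam*s), Jacobian H = 1 / (1 - lam*s)^2
    h = s_est / (1 - lam * s_est)
    H = 1.0 / (1 - lam * s_est) ** 2
    K = P * H / (H * P * H + Rn)
    s_est = s_est + K * (z - h)
    P = (1 - K * H) * P

print(f"estimated service demand: {s_est:.4f} s (true {true_s} s)")
```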
{"title":"Real-time performance modeling for adaptive software systems with multi-class workload","authors":"Dinesh Kumar, A. Tantawi, Li Zhang","doi":"10.1109/MASCOT.2009.5366166","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366166","url":null,"abstract":"Modern, adaptive software systems must often adjust or reconfigure their architecture in order to respond to continuous changes in their execution environment. Efficient autonomic control in such systems is highly dependent on the accuracy of their representative performance model. In this paper, we are concerned with real-time estimation of a performance model for adaptive software systems that process multiple classes of transactional workload. Based on an open queueing network model and an Extended Kalman Filter (EKF), experiments in this work show that: 1) the model parameter estimates converge to the actual value very slowly when the variation in incoming workload is very low, 2) the estimates fail to converge quickly to the new value when there is a step-change caused by adaptive reconfiguration of the actual software parameters. We therefore propose a modified EKF design in which the measurement model is augmented with a set of constraints based on past measurement values. Experiments demonstrate the effectiveness of our approach that leads to significant improvement in convergence in the two cases.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122029971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autocorrelation-driven load control in distributed systems
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366831
N. Mi, G. Casale, Qi Zhang, Alma Riska, E. Smirni
In this paper, we propose a new approach to the development of load control policies in autonomic multitier systems. We control system load in a completely new way compared to existing policies: we leverage the autocorrelation of service times and show that autocorrelation can be used to forecast the future service requirements of requests and to adaptively control system load. To the best of our knowledge, this is the first direct application of the autocorrelation of service times to autonomic load control. We propose ALoC and D_ALoC, two autocorrelation-driven policies that drop a percentage of the load in order to meet pre-defined quality-of-service levels in a distributed system. Both policies are easy to implement and rely on minimal assumptions. In particular, D_ALoC is a fully no-knowledge, measurement-based policy that self-adjusts its load control parameters based only on policy targets and on statistical information about requests served in the past. We illustrate the effectiveness of these new policies in a distributed multi-server setting via detailed trace-driven simulations. We show that if these policies are employed at the server with a temporally dependent service process, then the end-to-end response time across all servers is reduced by up to 80% while dropping at most 13% of the incoming requests. Using real traces, we also show that, in the constrained case of being able to drop only from a portion of the incoming workload, our policy still improves request response time by up to 30%.
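A toy version of the underlying idea, assuming an AR(1)-style forecast of the next service time and an illustrative drop threshold rather than the actual ALoC/D_ALoC rules:

```python
import random

# Toy autocorrelation-driven admission control: predict the next service time
# from the lag-1 autocorrelation of recent service times and drop the request
# if the prediction exceeds a target. The real policies tie the drop decision
# to end-to-end QoS targets; the threshold here is purely illustrative.

def lag1_autocorr(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var if var > 0 else 0.0

def should_drop(history, target=1.5):
    mean = sum(history) / len(history)
    rho = lag1_autocorr(history)
    predicted_next = mean + rho * (history[-1] - mean)   # AR(1)-style forecast
    return predicted_next > target

# Autocorrelated (AR(1)) service times: long requests tend to cluster.
random.seed(2)
s, history, dropped = 1.0, [], 0
for _ in range(1000):
    s = 0.8 * s + 0.2 * random.expovariate(1.0)
    history.append(s)
    if len(history) > 50 and should_drop(history[-50:]):
        dropped += 1
print(f"dropped {dropped / 10:.1f}% of requests")
```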
{"title":"Autocorrelation-driven load control in distributed systems","authors":"N. Mi, G. Casale, Qi Zhang, Alma Riska, E. Smirni","doi":"10.1109/MASCOT.2009.5366831","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366831","url":null,"abstract":"In this paper, we propose a new approach for the development of load control policies in autonomic multitier systems. We control system load in a completely new way compared to existing policies: we leverage on the autocorrelation of service times and show that autocorrelation can be used to forecast future service requirements of requests and adaptively control system load. To the best of our knowledge, this is the first direct application of autocorrelation of service times to autonomic load control. We propose ALoC and D ALoC, two autocorrelation-driven policies that drop a percentage of the load in order to meet pre-defined quality-of-service levels in a distributed system. Both policies are easy to implement and rely on minimal assumptions. In particular, D ALoC is a fully no-knowledge measurement-based policy that self-adjusts its load control parameters based only on policy targets and on statistical information of requests served in the past. We illustrate the effectiveness of these new policies in a distributed multi-server setting via detailed trace driven simulations. We show that if these policies are employed in the server with a temporal dependent service process, then end-to-end response time, across all servers, reduces up to 80% by only dropping at most 13% of the incoming requests. Using real traces, we also show that, in the constrained case of being able to drop only from a portion of the incoming workload, our policy still improves request response time by up to 30%.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"32-33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132068225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid system simulation of computer control applications over communication networks
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5367057
Guosong Tian, C. Fidge, Yu-Chu Tian
Discrete event-driven simulations of digital communication networks have been used widely. However, it is difficult to use a network simulator to simulate a hybrid system in which some objects are not discrete event-driven but continuous time-driven. A networked control system (NCS) is such an application, in which the physical process dynamics are continuous by nature. We have designed and implemented a hybrid simulation environment which effectively integrates models of continuous-time plant processes and discrete-event communication networks by extending the open-source network simulator NS-2. To do this, a synchronisation mechanism was developed to connect a continuous plant simulation with a discrete network simulation. Furthermore, to allow co-design approaches to be evaluated in an NCS environment, a piggybacking method was adopted so that the control period can be adjusted during simulations. The effectiveness of the technique is demonstrated through case studies which simulate a networked control scenario in which the communication and control system properties are defined explicitly.
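The synchronisation idea can be sketched as a loop that integrates the continuous plant up to the timestamp of the next discrete network event; the plant dynamics and event list below are placeholders, not NS-2 code.

```python
import heapq

# Sketch of the synchronisation idea: the continuous plant is integrated up to
# the timestamp of the next discrete network event, then the event is applied.
# Plant dynamics and the event list are placeholders, not NS-2 internals.

def integrate_plant(x, t_from, t_to, u, dt=0.001):
    # simple first-order plant dx/dt = -x + u, integrated with forward Euler
    t = t_from
    while t < t_to:
        x += dt * (-x + u)
        t += dt
    return x

events = [(0.10, "sensor_sample"), (0.35, "actuation"), (0.60, "sensor_sample")]
heapq.heapify(events)

t, x, u = 0.0, 1.0, 0.0
while events:
    t_next, kind = heapq.heappop(events)
    x = integrate_plant(x, t, t_next, u)   # advance continuous time to the event
    if kind == "actuation":
        u = 0.5                            # network-delivered control update
    t = t_next
print(f"plant state at t={t}: {x:.3f}")
```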
{"title":"Hybrid system simulation of computer control applications over communication networks","authors":"Guosong Tian, C. Fidge, Yu-Chu Tian","doi":"10.1109/MASCOT.2009.5367057","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5367057","url":null,"abstract":"Discrete event-driven simulations of digital communication networks have been used widely. However, it is difficult to use a network simulator to simulate a hybrid system in which some objects are not discrete event-driven but are continuous time-driven. A networked control system (NCS) is such an application, in which physical process dynamics are continuous by nature. We have designed and implemented a hybrid simulation environment which effectively integrates models of continuous-time plant processes and discrete-event communication networks by extending the open source network simulator NS-2. To do this a synchronisation mechanism was developed to connect a continuous plant simulation with a discrete network simulation. Furthermore, for evaluating co-design approaches in an NCS environment, a piggybacking method was adopted to allow the control period to be adjusted during simulations. The effectiveness of the technique is demonstrated through case studies which simulate a networked control scenario in which the communication and control system properties are defined explicitly.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128174793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Revisiting coexistence of poissonity and self-similarity in Internet traffic
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366239
H. Gupta, A. Mahanti, V. Ribeiro
The immense popularity of new-age “Web 2.0” applications such as YouTube, Flickr, and Facebook, and of non-Web applications such as Peer-to-Peer (P2P) file sharing, Voice over IP, online games, and media streaming, has significantly altered the composition of Internet traffic relative to what it was a few years ago. In light of these changes, this paper revisits Internet traffic characteristics and models that were proposed when “traditional” Web traffic was the largest contributor to Internet traffic. Specifically, we study whether or not the following characteristics still hold: (1) traffic is self-similar and long-range dependent, and (2) traffic can be approximated as Poisson at smaller time scales. Our experiments on recent traces show that these traffic characteristics continue to hold. We further argue that current Internet traffic can be viewed as having two key constituents, namely Web+ and P2P+: Web+ traffic consists of traffic from both Web 1.0 and Web 2.0 applications; P2P+ traffic consists largely of traffic from P2P applications and other non-Web applications, excluding applications on well-known ports such as FTP and SMTP. We then show that both the Web+ and P2P+ components exhibit self-similar behavior and can be approximated as Poisson at smaller time scales.
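One standard check used in such studies is the aggregated-variance estimate of the Hurst parameter H; the sketch below applies it to a synthetic Poisson trace standing in for a packet-count series, with illustrative block sizes.

```python
import numpy as np

# Aggregated-variance estimate of the Hurst parameter H, a standard check for
# self-similarity: for a self-similar series, the variance of the m-aggregated
# series scales as m^(2H - 2), so H is read off the slope of a log-log fit.
# The synthetic input is only a stand-in for a real packet/byte count trace.

def hurst_aggregated_variance(x, block_sizes):
    logs_m, logs_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        agg = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(agg.var()))
    slope, _ = np.polyfit(logs_m, logs_v, 1)
    return 1 + slope / 2          # slope = 2H - 2

rng = np.random.default_rng(0)
trace = rng.poisson(100, size=100_000).astype(float)   # short-range dependent stand-in
H = hurst_aggregated_variance(trace, block_sizes=[10, 20, 50, 100, 200, 500])
print(f"estimated H ~ {H:.2f}   (about 0.5 for Poisson; > 0.5 indicates LRD)")
```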
{"title":"Revisiting coexistence of poissonity and self-similarity in Internet traffic","authors":"H. Gupta, A. Mahanti, V. Ribeiro","doi":"10.1109/MASCOT.2009.5366239","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366239","url":null,"abstract":"The immense popularity of new-age “Web 2.0” applications such as YouTube, Flickr, and Facebook, and non-Web applications such as Peer-to-Peer (P2P) file sharing, Voice over IP, online games, and media streaming have significantly altered the composition of Internet traffic with respect to what it was a few years ago. In light of these changes, this paper revisits Internet traffic characteristics and models that were proposed when “traditional” Web traffic was the largest contributor to Internet traffic. Specifically, we study whether or not the following characteristics, namely: (1) traffic is self-similar and long-range dependent, and (2) traffic can be approximated by Poisson at smaller time scales, are still valid. Our experiments on recent traces show that these traffic characteristics continue to hold. We further argue that current Internet traffic can be viewed to have two key constituents, namely Web+ and P2P+; Web+ traffic consists of traffic from both Web 1.0 and Web 2.0 applications; P2P+ traffic consists largely of traffic from P2P applications and other non-Web applications excluding applications on well-known ports such as FTP and SMTP. We then show that both Web+ and P2P+ components exhibit self-similar behavior and can be approximated by Poisson at smaller time scales.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130694602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}