The performance of anomaly detection algorithms is usually measured using the total residual error. This error metric is calculated by comparing the labels assigned by the detection algorithm against a reference ground truth. Obtaining a highly expressive ground truth is itself a challenging, if not infeasible, task. Often, a dataset is manually labeled by domain experts. However, manual labeling is error-prone. In real-world sensor network deployments, labeling a sensor dataset becomes even more difficult due to the large number of samples, the complexity of visualizing the data, and the uncertainty about whether anomalies are present at all. This paper proposes an automated labeling technique that uses highly representative anomaly models. We demonstrate the effectiveness of this technique by evaluating a classification algorithm that uses our designed anomaly models as ground truth, and we show that the resulting classification accuracy is similar to that obtained with manually labeled real-world data points.
{"title":"Modeling Anomalies Prevalent in Sensor Network Deployments: A Representative Ground Truth","authors":"Giovani Rimon Abuaitah, Bin Wang","doi":"10.1109/MASCOTS.2013.57","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.57","url":null,"abstract":"The performance of anomaly detection algorithms is usually measured using the total residual error. This error metric is calculated by comparing the labels assigned by the detection algorithm against a reference ground truth. Obtaining a highly expressive ground truth is by itself a challenging task, if not infeasible. Often, a dataset is manually labeled by domain experts. However, manual labeling is error prone. In real-world sensor network deployments, it becomes even more difficult to label a sensor dataset due to the large amount of samples, the complexity of visualizing the data, and the uncertainty in the existence of anomalies. This paper proposes an automated technique which uses highly representative anomaly models for labeling. We demonstrate the effectiveness of this technique through evaluating a classification algorithm using our designed anomaly models as ground truth. We show that the classification accuracy is similar to that when using manually labeled real-world data points.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126767145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile devices such as smartphones and tablets are increasingly becoming the most important channel for delivering end-user Internet traffic, especially multimedia content. One of the most popular uses of these terminals is video streaming, in which video decoding is the most compute- and energy-intensive part. Dedicated processing units, such as Digital Signal Processors (DSPs), are added to these devices to optimize performance and energy consumption. In this context, the objective of this paper is to provide a comprehensive comparative study of the performance and energy consumption of video decoding on embedded heterogeneous platforms containing a GPP and a DSP. To achieve this goal, a performance and energy characterization methodology for H.264/AVC video decoding is proposed. The methodology considers a large set of video coding parameters and operating clock frequencies to reflect execution scenarios ranging from low-quality video decoding on low-end mobile phones to high-quality video decoding on tablets. The results reveal that the best performance-energy trade-off depends strongly on the required video bit-rate and resolution. For instance, the GPP can be the best choice in many cases because of a significant overhead in DSP decoding, which may represent 30% of the total decoding energy in some cases. We explain the observed performance and overheads, and propose guidelines on which processing element to choose according to video properties.
{"title":"GPP vs DSP: A Performance/Energy Characterization and Evaluation of Video Decoding","authors":"Yahia Benmoussa, Jalil Boukhobza, E. Senn, D. Benazzouz","doi":"10.1109/MASCOTS.2013.35","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.35","url":null,"abstract":"Mobile devices such as smart-phones and tablets are increasingly becoming the most important channel for delivering end-user Internet traffic especially multimedia content. One of the most popular use of these terminals is video streaming. In this type of application, video decoding is considered as the most compute and energy intensive part. Some specific processing units, such as dedicated Digital Signal Processors (DSPs), are added to those devices in order to optimize the performance and energy consumption. In this context, the objective of this paper is to give a comprehensive and comparative study of the performance and energy consumption of video decoding application on embedded heterogeneous platforms containing a GPP and a DSP. To achieve this goal, a performance and energy characterization methodology for H.264/AVC video decoding is proposed. This methodology considers a large set of video coding parameters and operating clock frequencies to reflect different execution scenarios ranging from low-quality video decoding on low-end mobile phones to high-quality video decoding on tablets. The obtained results revealed that the best performance-energy trade-off highly depends on the required video bit-rate and resolution. For instance, the GPP can be the best choice in many cases due to a significant overhead in DSP decoding which may represent 30% of the total decoding energy in some cases. Some explanations about the obtained performance and overheads are given. Finally, guidelines on which processing element to choose according to video properties are also proposed.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"46 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115939398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mathematical models play a pivotal role in understanding and designing advanced low-power wireless systems. However, the distributed and uncoordinated operation of traditional multi-hop low-power wireless protocols greatly complicates their accurate modeling, mainly because these protocols build and maintain substantial network state to cope with the dynamics of low-power wireless links. Recent protocols depart from this design by leveraging synchronous transmissions (ST), whereby multiple nodes simultaneously transmit towards the same receiver, as opposed to pairwise link-based transmissions (LT). ST improve the one-hop packet reliability to the extent that efficient multi-hop protocols with little network state become feasible. This paper studies whether ST also enable simple yet accurate modeling of these protocols. Our contribution to this end is twofold. First, we show, through experiments on a 139-node testbed, that characterizing packet receptions and losses as a sequence of independent and identically distributed (i.i.d.) Bernoulli trials, a common assumption in protocol modeling that is often illegitimate for LT, is largely valid for ST. We then show how this finding simplifies the modeling of a recent ST-based protocol by deriving (i) sufficient conditions for probabilistic guarantees on the end-to-end packet reliability, and (ii) a Markovian model to estimate the long-term energy consumption. Validation against testbed experiments confirms that these simple models are also highly accurate; for example, the model error in energy against real measurements is 0.25%, a figure never reported before in the related literature.
{"title":"On Modeling Low-Power Wireless Protocols Based on Synchronous Packet Transmissions","authors":"Marco Zimmerling, F. Ferrari, L. Mottola, L. Thiele","doi":"10.1109/MASCOTS.2013.76","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.76","url":null,"abstract":"Mathematical models play a pivotal role in understanding and designing advanced low-power wireless systems. However, the distributed and uncoordinated operation of traditional multi-hop low-power wireless protocols greatly complicates their accurate modeling. This is mainly because these protocols build and maintain substantial network state to cope with the dynamics of low-power wireless links. Recent protocols depart from this design by leveraging synchronous transmissions (ST), whereby multiple nodes simultaneously transmit towards the same receiver, as opposed to pair wise link-based transmissions (LT). ST improve the one-hop packet reliability to an extent that efficient multi-hop protocols with little network state are feasible. This paper studies whether ST also enable simple yet accurate modeling of these protocols. Our contribution to this end is two-fold. First, we show, through experiments on a 139-node test bed, that characterizing packet receptions and losses as a sequence of independent and identically distributed (i.i.d.) Bernoulli trials-a common assumption in protocol modeling but often illegitimate for LT-is largely valid for ST. We then show how this finding simplifies the modeling of a recent ST-based protocol, by deriving (i) sufficient conditions for probabilistic guarantees on the end-to-end packet reliability, and (ii) a Markovian model to estimate the long-term energy consumption. Validation using test bed experiments confirms that our simple models are also highly accurate, for example, the model error in energy against real measurements is 0.25%, a figure never reported before in the related literature.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114448865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data center carbon emission has recently become an emerging concern for cloud service providers. Previous works have largely been limited to cutting down the power consumption of data centers to address this concern. In this paper, we show how the spatial and temporal variability of the electricity carbon footprint can be fully exploited to further green a cloud running on top of geographically distributed data centers. We jointly consider the electricity cost, the service level agreement (SLA) requirement, and the emission reduction budget. To navigate this three-way tradeoff, we leverage Lyapunov optimization techniques to design and analyze a carbon-aware control framework that makes online decisions on geographical load balancing, capacity right-sizing, and server speed scaling. Results from rigorous mathematical analysis and real-world trace-driven empirical evaluation demonstrate its effectiveness in both minimizing electricity cost and reducing carbon emission.
{"title":"Carbon-Aware Load Balancing for Geo-distributed Cloud Services","authors":"Zhi Zhou, Fangming Liu, Yong Xu, Ruolan Zou, Hong Xu, John C.S. Lui, Hai Jin","doi":"10.1109/MASCOTS.2013.31","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.31","url":null,"abstract":"Recently, data center carbon emission has become an emerging concern for the cloud service providers. Previous works are limited on cutting down the power consumption of the data centers to defuse such a concern. In this paper, we show how the spatial and temporal variabilities of the electricity carbon footprint can be fully exploited to further green the cloud running on top of geographically distributed data centers. We jointly consider the electricity cost, service level agreement (SLA) requirement, and emission reduction budget. To navigate such a three-way tradeoff, we take advantage of Lyapunov optimization techniques to design and analyze a carbon-aware control framework, which makes online decisions on geographical load balancing, capacity right-sizing, and server speed scaling. Results from rigorous mathematical analyses and real-world trace-driven empirical evaluation demonstrate its effectiveness in both minimizing electricity cost and reducing carbon emission.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"52 25","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133655855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Zipf distribution is widely used to model Web site popularity, video popularity, and file referencing behavior. In recently published work, we proposed and evaluated a Zipf-based policy for probabilistic piece selection in Peer-to-Peer (P2P) media streaming. In this paper, we revisit this Zipf model in more detail and identify two fundamentally different modeling approaches, namely regenerative versus degenerative Zipf models. We illustrate the differences between the two models, provide refined analytical models for each, and validate the models with simulations in the context of P2P media streaming. The results show that the regenerative model is more appropriate for P2P streaming because of its stronger sequential progress.
{"title":"On Zipf Models for Probabilistic Piece Selection in P2P Stored Media Streaming","authors":"C. Williamson, Niklas Carlsson","doi":"10.1109/MASCOTS.2013.24","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.24","url":null,"abstract":"The Zipf distribution is widely used to model Web site popularity, video popularity, and file referencing behavior. In recent published work, we proposed and evaluated a Zipf-based policy for probabilistic piece selection in Peer-to-Peer (P2P) media streaming. In this current paper, we revisit this Zipf model in more detail, and identify two fundamentally different modeling approaches, namely regenerative versus degenerative Zipf models. We illustrate the differences between the two models, provide refined analytical models for each, and validate the models with simulations in the context of P2P media streaming. The results show that the regenerative model is more appropriate for P2P streaming, because of its stronger sequential progress.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133593283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The computation of the steady-state distribution of Continuous Time Markov Chains (CTMCs) may be a computationally hard problem when the number of states is very large. Several solutions have been proposed in the literature to overcome this problem, such as reducing the state-space cardinality by lumping, factorization based on product-form analysis, and applying the notion of reversibility. In this paper we address this problem by introducing the notion of autoreversibility, defined as a symmetric coinductive relation that induces an equivalence relation among the chain's states. We show that all states belonging to the same equivalence class share the same stationary probabilities, and hence the computation of the steady-state distribution can be made more efficient. The definition of autoreversibility takes inspiration from Kolmogorov's criterion for reversible processes and hence requires testing a property on all the minimal cycles of the chain. We show that autoreversibility is different from reversibility and does not correspond to other state aggregation techniques such as lumping. Finally, we discuss the applicability of our results to models defined in terms of a Markovian process algebra such as the Performance Evaluation Process Algebra.
{"title":"Autoreversibility: Exploiting Symmetries in Markov Chains","authors":"A. Marin, S. Rossi","doi":"10.1109/MASCOTS.2013.23","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.23","url":null,"abstract":"The computation of the steady-state distribution of Continuous Time Markov Chains (CTMCs) may be a computationally hard problem when the number of states is very large. In order to overcome this problem, in the literature, several solutions have been proposed such as the reduction of the state space cardinality by lumping, the factorization based on product-form analysis and the application of the notion of reversibility. In this paper we address this problem by introducing the notion of auto reversibility which is defined as a symmetric co inductive relation which induces an equivalence relation among the chain's states. We show that all the states belonging to the same equivalence class share the same stationary probabilities and hence the computation of the steady-state distribution can be computationally more efficient. The definition of auto reversibility takes inspiration by the Kolmogorov's criteria for reversible processes and hence requires to test a property on all the minimal cycles of the chain. We show that the notion of auto reversibility is different from that of reversible processes and does not correspond to other state aggregation techniques such as lumping. Finally, we discuss the applicability of our results in the case of models defined in terms of a Markovian process Algebra such as the Performance Evaluation Process Algebra.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121672690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance management and performance prediction of services deployed in virtualized environments are challenging tasks. On the one hand, the virtualization layer makes the estimation of performance model parameters difficult and inaccurate. On the other hand, it is difficult to model the hypervisor scheduler in a representative and practically feasible manner. In this paper, we describe how to obtain relevant parameters, such as the virtualization overhead, depending on the amount and type of available monitoring data. We adapt classical queueing-theory-based modeling techniques to make them usable for different configurations of virtualized environments. We show how to include the virtualization overhead in queueing network models and how to take the contention between different VMs into account. Finally, we evaluate our approach in representative scenarios based on the SPECjEnterprise2010 standard benchmark and XenServer 5.5, showing significant improvements in prediction accuracy and discussing open issues for performance prediction in virtualized environments.
{"title":"Evaluating Approaches for Performance Prediction in Virtualized Environments","authors":"Fabian Brosig, F. Gorsler, Nikolaus Huber, Samuel Kounev","doi":"10.1109/MASCOTS.2013.61","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.61","url":null,"abstract":"Performance management and performance prediction of services deployed in virtualized environments is a challenging task. On the one hand, the virtualization layer makes the estimation of performance model parameters difficult and inaccurate. On the other hand, it is difficult to model the hyper visor scheduler in a representative and practically feasible manner. In this paper, we describe how to obtain relevant parameters, such as the virtualization overhead, depending on the amount and type of available monitoring data. We adapt classical queueing-theory-based modeling techniques to make them usable for different configurations of virtualized environments. We provide answers how to include the virtualization overhead into queueing network models, and how to take the contention between different VMs into account. Finally, we evaluate our approach in representative scenarios based on the SPECjEnterprise2010 standard benchmark and XenServer 5.5, showing significant improvements in the prediction accuracy and discussing further open issues for performance prediction in virtualized environments.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122615681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On mobile devices such as smartphones and tablets, client-side JavaScript is a significant contributor to power consumption, and thus to battery lifetime. We claim that this is partially due to JavaScript interpretation running faster than is necessary to maintain a satisfactory user experience, and we propose that JavaScript implementations include a user-configurable throttle. To evaluate our claim we developed a web proxy system, named JSSlow, that reduces power consumption by transcoding client-side JavaScript and injecting "sleep" invocations. This can be done safely, even given JavaScript's single-threaded nature, through the use of continuation passing, and the proxy model requires neither server- nor client-side changes. Using JSSlow we studied the 120 most popular sites and found that the technique can reduce power consumption by an average of 5% on Android phones. We also considered buggy code (52% reduction) and advertising (10% reduction). To evaluate the system's impact on the user experience, we conducted a user study consisting of interactive tasks that users carried out. The perceived performance impact varies by user and site, with the variation being highest on the most interactive sites, such as games. This argues for making the throttle user-configurable in some cases.
{"title":"Making JavaScript Better by Making It Even Slower","authors":"Maciej Swiech, P. Dinda","doi":"10.1109/MASCOTS.2013.15","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.15","url":null,"abstract":"On mobile devices, such as smart phones and tablets, client-side JavaScript is a significant contributor to power consumption, and thus battery lifetime. We claim that this is partially due to JavaScript interpretation running faster than is necessary to maintain a satisfactory user experience, and we propose that JavaScript implementations include a user-configurable throttle. To evaluate our claim we developed a web proxy system, named JSSlow, that reduces power consumption by transcoding client-side JavaScript and injecting \"sleep\" invocations. This can be done safely, even given JavaScript's single-threaded nature, through the use of continuation passing, and the proxy model requires neither server nor client-side changes. Using JSSlow we studied the 120 most popular sites and found that the technique could reduce power consumption by an average of 5% on Android phones. We also considered buggy code (52% reduction) and advertising (10% reduction). To evaluate the system's impact on the user experience, we conducted a user study consisting of interactive tasks the user carried out on. The perceived performance impact varies by user and site, with the variation being highest on the most interactive sites, such as games. This argues for making the throttle user-configurable in some cases.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115298620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Memory-related software defects manifest after a long incubation time and are usually discovered in a production scenario. As a consequence, this frequently encountered class of so-called software aging problems incurs severe follow-up costs, including performance and reliability degradation, the need for workarounds (usually controlled restarts), and the effort of localizing the causes. While many excellent tools for identifying memory leaks exist, they are inappropriate for automated leak detection or isolation, as they require developer involvement or slow down execution considerably. In this work we propose a lightweight approach that allows for automated leak detection during standardized unit or integration tests. The core idea is to compare, at the bytecode level, the memory allocation behavior of related development versions of the same software. We evaluate our approach by injecting memory leaks into the YARN component of the popular Hadoop framework and comparing the accuracy of detection and isolation in various scenarios. The results show that the approach can detect and isolate such defects with high precision, even if multiple leaks are injected at once.
{"title":"Detection and Root Cause Analysis of Memory-Related Software Aging Defects by Automated Tests","authors":"Felix Langner, A. Andrzejak","doi":"10.1109/MASCOTS.2013.53","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.53","url":null,"abstract":"Memory-related software defects manifest after a long incubation time and are usually discovered in a production scenario. As a consequence, this frequently encountered class of so-called software aging problems incur severe follow-up costs, including performance and reliability degradation, need for workarounds (usually controlled restarts) and effort for localizing the causes. While many excellent tools for identifying memory leaks exist, they are inappropriate for automated leak detection or isolation as they require developer involvement or slow down execution considerably. In this work we propose a lightweight approach which allows for automated leak detection during the standardized unit or integration tests. The core idea is to compare at the byte-code level the memory allocation behavior of related development versions of the same software. We evaluate our approach by injecting memory leaks into the YARN component of the popular Hadoop framework and comparing the accuracy of detection and isolation in various scenarios. The results show that the approach can detect and isolate such defects with high precision, even if multiple leaks are injected at once.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115351166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HTTP-based Adaptive Streaming (HAS) has become a widely used video delivery technology. The use of HTTP enables relatively easy firewall/NAT traversal and content caching. While caching is an important aspect of HAS, there is not much public research on the performance impact that proxies and their policies have on HAS. In this paper we build an experimental framework using open-source Squid proxies and the most recent Open Source Media Framework (OSMF). A range of content-aware policies can be implemented in the proxies and tested, while the player software can be instrumented to measure performance as seen at the client. Using this framework, the paper makes three main contributions. First, we present a scenario-based performance evaluation of the latest version of the OSMF player. Second, we quantify the benefits of different proxy-assisted solutions, including basic best-effort policies and more advanced content-quality-aware prefetching policies. Finally, we present and evaluate a cooperative framework in which clients and proxies share information to improve performance. In general, the bottleneck location and network conditions play central roles in determining which policy choices are most advantageous, as they significantly impact the relative performance differences between policy classes. We conclude that careful design and policy selection are important when trying to enhance HAS performance using proxy assistance.
{"title":"Helping Hand or Hidden Hurdle: Proxy-Assisted HTTP-Based Adaptive Streaming Performance","authors":"Vengatanathan Krishnamoorthi, Niklas Carlsson, D. Eager, A. Mahanti, N. Shahmehri","doi":"10.1109/MASCOTS.2013.26","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.26","url":null,"abstract":"HTTP-based Adaptive Streaming (HAS) has become a widely-used video delivery technology. Use of HTTP enables relatively easy firewall/NAT traversal and content caching. While caching is an important aspect of HAS, there is not much public research on the performance impact proxies and their policies have on HAS. In this paper we build an experimental framework using open source Squid proxies and the most recent Open Source Media Framework (OSMF). A range of content-aware policies can be implemented in the proxies and tested, while the player software can be instrumented to measure performance as seen at the client. Using this framework, the paper makes three main contributions. First, we present a scenario-based performance evaluation of the latest version of the OSMF player. Second, we quantify the benefits using different proxy-assisted solutions, including basic best effort policies and more advanced content quality aware prefetching policies. Finally, we present and evaluate a cooperative framework in which clients and proxies share information to improve performance. In general, the bottleneck location and network conditions play central roles in which policy choices are most advantageous, as they significantly impact the relative performance differences between policy classes. We conclude that careful design and policy selection is important when trying to enhance HAS performance using proxy assistance.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128663114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}