Towards a framework for source code instrumentation measurement validation
Haleh Najafzadeh, S. Chaiken. DOI: 10.1145/1071021.1071033

Ignoring monitoring overhead and failing to validate measurements are two common mistakes in benchmarking. We extend, apply, and evaluate our methodologies for overhead compensation and overall validation on a practical FFT library. The overhead and error sources we address include the source instrumentation itself and separate activities unrelated to the phenomena under study. We quantify the differences between simpler and more portable probe technologies and relate them to compiler optimization effects. Finally, we formulate our framework for validating performance measurements using software-accessible counters within an existing framework for general software measurements.

Modeling the performance of a NAT/firewall network service for the IXP2400
Tom Verdickt, Wim Van de Meerssche, K. Vlaeminck. DOI: 10.1145/1071021.1071035

The evolution towards IP-aware access networks creates the possibility (and, indeed, the desirability) of additional network services, such as firewalling or NAT, integrated into the network devices. These new services, however, require the network components to be both flexible (to cope with changing protocols and applications) and powerful. Network processors seem to fit the bill as a platform on which to implement such services. System performance should be assured by incorporating performance analysis into the design of the system, by means of performance modeling at the architectural design stage. This paper describes the use of Software Performance Engineering during the design of a firewall/NAT router on the Intel IXP2400 network processor. Several design options were first modeled and analysed, and based on those simulations a final design was chosen and implemented.

Applying SPE techniques for modeling a grid-enabled JAVA platform
Mariela Curiel, M. Pérez, Ricardo González. DOI: 10.1145/1071021.1071036

Advances in the Internet and the availability of powerful computers and high-speed networks have enabled the rise of Grids. Scheduling applications in Grids is complex due to factors such as the heterogeneity of resources and changes in their availability. Models provide a way of performing repeatable and controllable experiments for evaluating scheduling algorithms under different scenarios. This article describes the development of a performance model for a Java-based distributed platform using an SPE methodology, in which Layered Queuing Network (LQN) models are derived from Use Case Maps (UCM).

Modeling continuous changes of the user's dynamic behavior in the WWW
R. Peña-Ortiz, J. Sahuquillo, A. Pont, J. A. Gil. DOI: 10.1145/1071021.1071040

Understanding the characteristics of users' workloads is an important aspect of designing and providing web services. Most current workload characterization techniques are limited in how they represent the dynamism of client behavior and the continuous changes in the client's role. As a consequence, most existing workload generators model these characteristics simplistically and inaccurately. This paper focuses on the dynamism of the WWW in general and on new techniques for characterizing user behavior. Our work aims to develop a dynamic workload generator that captures this new behavior of web clients and the continuous changes in their role.

From UML models to software performance results: an SPE process based on XML interchange formats
C. U. Smith, Catalina M. Lladó, V. Cortellessa, A. Marco, L. Williams. DOI: 10.1145/1071021.1071030

The SPE process uses multiple performance assessment tools depending on the state of the software and the amount of performance data available. This paper describes two XML-based interchange formats that facilitate using a variety of performance tools in a plug-and-play manner, thus enabling the use of the tool best suited to the analysis. The Software Performance Model Interchange Format (S-PMIF) is a common representation used to exchange information between (UML-based) software design tools and software performance engineering tools. The Performance Model Interchange Format (PMIF 2.0), on the other hand, is a common representation for system performance model data that can be used to move models among system performance modeling tools that use a queueing network model paradigm. This paper first defines an XML-based S-PMIF based on an updated SPE meta-model. It then demonstrates the feasibility of using both the S-PMIF and the PMIF 2.0 to automatically translate an architecture description in UML into both a software performance model and a system performance model to study the performance characteristics of the architecture. This required extensions to the XPRIT software to export UML models into the S-PMIF and a new function in the SPEED software to import S-PMIF models, both of which are also described. The SPE process and an experimental proof of concept are presented.

From design to analysis models: a kernel language for performance and reliability analysis of component-based systems
V. Grassi, R. Mirandola, A. Sabetta. DOI: 10.1145/1071021.1071024

To facilitate the use of non-functional analysis results in the selection and assembly of components for component-based systems, automatic prediction tools should be devised that predict some overall quality attribute of the application without requiring extensive knowledge of analysis methodologies from the application designer. To achieve this goal, a key idea is to define a model transformation that takes as input a "design-oriented" model of the component assembly and produces as a result an "analysis-oriented" model that lends itself to the application of some analysis methodology. To actually devise such a transformation, however, we must cope both with the heterogeneous design-level notations for component-based systems and with the variety of non-functional attributes and related analysis methodologies of interest. In this perspective, we define a kernel language whose aim is to capture the information relevant for the analysis of non-functional attributes of component-based systems, with a focus on performance and reliability. Using this kernel language as a bridge between design-oriented and analysis-oriented notations, we reduce the burden of defining a variety of direct transformations from the former to the latter to the less complex problem of defining transformations to and from the kernel language. The proposed kernel language is defined within the MOF (Meta-Object Facility) framework, to allow the exploitation of MOF-based model transformation facilities.

Ensuring stable performance for systems that degrade
Alberto Avritzer, A. Bondi, E. Weyuker. DOI: 10.1145/1071021.1071026

We propose a new approach for identifying and eliminating the performance degradation that occurs in aging software. A customer-affecting metric is used to initiate the restoration of such a system to full capacity. In a case study based on a simulation of an industrial software system, we show that by monitoring a customer-affecting metric and frequently comparing its degradation against the performance objective, we can ensure system stability at very low cost.

An overview of Model Driven Architecture® (MDA®): invited talk abstract
M. Rosen. DOI: 10.1145/1071021.1071052

This talk provides an overview of Model Driven Architecture (MDA). It outlines the economic drivers of MDA, gives a broad outline of its scope, and projects its future.

A web service for solving queuing network models using PMIF
Jerònia Rosselló, Catalina M. Lladó, R. Puigjaner, C. U. Smith. DOI: 10.1145/1071021.1071042

A performance model interchange format (PMIF) is a common representation for queuing network model data that can be used to move models among modeling tools. This paper demonstrates how Web services can facilitate the use of modeling tools that interface with the PMIF. The paper describes the design and implementation of a PMIF Web service for the modeling tool Qnap and presents experimental results that demonstrate the viability of such a Web service.

Software performance testing using covering arrays: efficient screening designs with categorical factors
Dean S. Hoskins, C. Colbourn, D. Montgomery. DOI: 10.1145/1071021.1071034

Classical Design of Experiments (DOE) techniques have been used for many years to aid in the performance testing of systems. In particular, fractional factorial designs have been used in cases with many numerical factors to reduce the number of experimental runs needed. For experiments involving categorical factors this is not the case; experimenters regularly resort to exhaustive (full factorial) experiments. Recently, D-optimal designs have been used to reduce the number of tests for experiments involving categorical factors because of their flexibility, but not necessarily because they closely approximate full factorial results. In commonly used statistical packages, the only generic alternative for reduced experiments involving categorical factors is afforded by optimal designs. The extent to which D-optimal designs succeed in estimating exhaustive results has not been evaluated, and it is natural to determine this. An alternative design based on covering arrays may offer a better approximation of full factorial data. Covering arrays are used in software testing to guarantee coverage of interactions, while D-optimal and factorial designs measure the amount of interaction. Initial work compared exhaustively generated covering arrays and D-optimal designs in approximating full factorial designs; in that setting, covering arrays provided better approximations of full factorial analysis while ensuring coverage of all small interactions. Here we examine commercially viable covering array and D-optimal design generators. Commercial covering array generators, while not as good as exhaustively generated designs, remain competitive with D-optimal design generators.
