Automatic generation of layered queuing software performance models from commonly available traces
Tauseef A. Israr, D. Lau, G. Franks, M. Woodside. DOI: 10.1145/1071021.1071037

Performance models of software designs can give early warnings of problems such as resource saturation or excessive delays. However, models are seldom used because of the considerable effort needed to construct them. Software Architecture and Model Extraction (SAME) is a lightweight model-building technique that extracts communication patterns from executable designs or prototypes that use message passing and develops a Layered Queuing Network model in an automated fashion. It is a formal, traceable model-building process. The transformation follows a series of well-defined steps from the input domain (an executable software design or the software implementation itself) to the output domain, a Layered Queuing Network (LQN) performance model. The SAME technique is appropriate for message-passing distributed systems in which tasks interact by point-to-point communication. With SAME, the performance analyst can focus on the principles of software performance analysis rather than on model building.
Performance assessment on ambient intelligent applications through ontologies
I. Lera, C. Juiz, R. Puigjaner, Christian Kurz, G. Haring, Joachim Zottl. DOI: 10.1145/1071021.1071045

This paper brings together the performance assessment of ambient intelligence systems with ontology engineering. First, appropriate description methods for distributed intelligent applications are summarized. Derived from the system characterization, typical software performance engineering techniques are based on augmenting the model description with performance annotations. However, these annotations relate only to the syntactic view of the software architecture. In the next generation of performance assessment tools for ambient intelligent systems, the description of the system should support reasoning and knowledge acquisition about performance. Given an appropriate architectural description that includes performance aspects, the possible design options for intelligent distributed applications can be evaluated according to their performance impact. We therefore propose the use of an ontology carrying performance-related information, not only to evaluate the architecture through the usual off-line procedure but also as a first step towards building a broker that assesses the performance of the system during its execution.
From UML to LQN by XML algebra-based model transformations
G. Gu, D. Petriu. DOI: 10.1145/1071021.1071031

The change of focus from code to models promoted by OMG's Model Driven Development raises the need for verification of non-functional characteristics of UML models, such as performance, reliability, scalability, and security. Many modeling formalisms, techniques, and tools have been developed over the years for the analysis of different non-functional characteristics. The challenge is not to reinvent analysis methods for UML models, but to bridge the gap between UML-based software development tools and the many existing analysis tools. Traditionally, analysis models were built by hand; a new trend, however, is the automatic transformation of UML models (annotated with extra information) into various kinds of analysis models. This paper proposes a method for transforming an annotated UML model into a performance model. The mapping between the input and output models is defined at a higher level of abstraction based on graph transformation concepts, whereas the implementation of the transformation rules and algorithm uses lower-level XML tree manipulation techniques, such as XML algebra. The target performance model used as an example in this paper is the Layered Queueing Network (LQN); however, the transformation approach can be easily tailored to other performance modelling formalisms.
EPYFQ: a novel scheduling algorithm for performance virtualization in shared storage environment
Yong Feng, Yan-yuan Zhang, Rui-yong Jia. DOI: 10.1145/1071021.1071051

This paper introduces EPYFQ, a novel scheduling algorithm that forms the core technology for providing performance virtualization in a shared storage environment. EPYFQ schedules requests from virtual disks according to their shares and satisfies the three requirements demanded by performance virtualization: performance isolation, fairness, and throughput. We implement EPYFQ in DM, a Linux kernel module providing logical volume management, and evaluate it with respect to these three aspects. Our results show that EPYFQ provides good performance isolation and fairness. In addition, the tradeoff between tight resource control and storage system throughput can be adjusted through the parameter ε.
Extending TPC-W to allow for fine grained workload specification
Christian Kurz, Carlos Guerrero, G. Haring. DOI: 10.1145/1071021.1071039

This paper presents a method to characterize workload from a web server logfile from a user perspective. The data obtained in this process is used to create workload for the TPC-W benchmark.
Safety critical software development for space application: invited talk abstract
M. Turin. DOI: 10.1145/1071021.1071054

This talk provides an overview of methods and tools used for space software development and verification by EADS Space Transportation.
From StoCharts to MoDeST: a comparative reliability analysis of train radio communications
H. Hermanns, D. Jansen, Y. Usenko. DOI: 10.1145/1071021.1071023

StoCharts have been proposed as a UML statechart extension for performance and dependability evaluation, and have been applied in the context of train radio reliability assessment to show the principal tractability of realistic cases with this approach. In this paper, we extend this bare feasibility result in two important directions. First, we sketch the cornerstones of a mechanizable translation of StoCharts to MoDeST, a process algebra-based formalism supported by the MOTOR/MÖBIUS tool tandem. Second, we exploit this translation for a detailed analysis of the train radio case study.
Automatic performance evaluation and feedback for MASCOT designs
Pere P. Sancho, C. Juiz, R. Puigjaner. DOI: 10.1145/1071021.1071043

This paper is concerned with simulation output analysis. The analysis should serve to diagnose the simulated system and give hints about possible improvements. The article builds on previous work by the same authors [6] and goes further: diagnosis tools are presented that take the results of our simulator as input and analyze them to make statements about the behavior and performance of the simulated system. Moreover, the diagnosis tools can give advice on how to improve the system's performance.
Application of redundant computation in software performance analysis
Zakarya A. Alzamil, B. Korel. DOI: 10.1145/1071021.1071032

Redundant computation is an execution of a program statement that does not contribute to the program output. The same statement may exhibit redundant computation on one execution and contribute to the program output on another. A redundant (dead) statement always exhibits redundant computation, i.e., its execution is always redundant; however, a statement that exhibits redundant computation is not necessarily a redundant statement. Redundant computation thus represents a partial redundancy of a statement. A high degree of redundant computation in a program may indicate a performance deficiency, so eliminating (or reducing) redundant computation may improve a program's performance. In this paper we present an approach for the automated detection of redundant computation in programs and show its application in performance analysis. We developed a tool that automatically detects redundant computations in C programs and identifies potential performance deficiencies related to redundant computation. An experimental study showed that redundant computation is a common phenomenon in programs and is frequently a source of performance deficiency.
Performance by unified model analysis (PUMA)
M. Woodside, D. Petriu, D. Petriu, Hui Shen, Toqeer Israr, J. Merseguer. DOI: 10.1145/1071021.1071022

Evaluation of non-functional properties of a design (such as performance, dependability, or security) can be enabled by design annotations specific to the property to be evaluated. Performance properties, for instance, can be annotated on UML designs by using the UML Profile for Schedulability, Performance and Time (SPT). However, the communication between the design description in UML and the tools used to evaluate non-functional properties requires support, particularly for performance, where many alternative analysis tools might be applied. This paper describes a tool architecture called PUMA, which provides a unified interface between different kinds of design information and different kinds of performance models, for example Markov models, stochastic Petri nets, process algebras, queues, and layered queues. The paper concentrates on the creation of performance models. The unified interface of PUMA is centered on an intermediate model called the Core Scenario Model (CSM), which is extracted from the annotated design model. Experience shows that the CSM is also necessary for cleaning and auditing the design information, and for providing default interpretations where it is incomplete, before creating a performance model.