Model interchange approaches support the analysis of software architecture and design by enabling a variety of tools to automatically exchange performance models using a common schema. This paper builds on one of those interchange formats, the Software Performance Model Interchange Format (S-PMIF), and extends it to support the performance analysis of real-time systems. Specifically, it addresses real-time system designs expressed in the Construction and Composition Language (CCL) and their transformation into the S-PMIF for additional performance analyses. This paper defines extensions and changes to the S-PMIF meta-model and schema required for real-time systems. It describes transformations for both simple, best-case models and more detailed models of concurrency and synchronization. A case study demonstrates the techniques and compares performance results from several analyses.
{"title":"Performance analysis of real-time component architectures: a model interchange approach","authors":"Gabriel A. Moreno, C. U. Smith, L. Williams","doi":"10.1145/1383559.1383574","DOIUrl":"https://doi.org/10.1145/1383559.1383574","url":null,"abstract":"Model interchange approaches support the analysis of software architecture and design by enabling a variety of tools to automatically exchange performance models using a common schema. This paper builds on one of those interchange formats, the Software Performance Model Interchange Format (S-PMIF), and extends it to support the performance analysis of real-time systems. Specifically, it addresses real-time system designs expressed in the Construction and Composition Language (CCL) and their transformation into the S-PMIF for additional performance analyses. This paper defines extensions and changes to the S-PMIF meta-model and schema required for real-time systems. It describes transformations for both simple, best-case models and more detailed models of concurrency and synchronization. A case study demonstrates the techniques and compares performance results from several analyses.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132441945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The collaboration industry has grown enormously since basic e-mail clients were introduced decades ago. Products are now loaded with powerful features and face fierce competition in the market. To win in the marketplace, offerings have to meet user expectations and demonstrate a high level of performance. Collaboration products aim at personal productivity, group networking, and basic communication flow in the enterprise. They form the nervous system of an organization, with company processes built around them; ensuring performance-centric development of such products is therefore key to their success. In this paper, we discuss performance-oriented design, development, and testing considerations that apply to large-scale, multi-tiered collaboration products. We present a methodology termed "Performance in Each Tier" (PET), which addresses performance throughout the entire development process. PET considers transactions both individually and holistically. The paper describes strategies for dealing with performance issues at early as well as later stages of product development. The approach is applicable to products that have evolved over time with growing customer needs and changing business realities. Finding the root causes of performance issues in a large product base and regression-testing functions after performance fixes is more expensive than prioritizing performance considerations alongside feature development. This affirms the Performance Engineering principle that performance should receive the same priority as functional features from the commencement of product development.
{"title":"Architecting, developing and testing for performance of tiered collaboration products","authors":"Shweta Gupta, Jaitirth V. Shirole","doi":"10.1145/1383559.1383563","DOIUrl":"https://doi.org/10.1145/1383559.1383563","url":null,"abstract":"The collaboration industry has seen an unimaginable explosion since the basic e-mail clients were introduced decades back. The products are now loaded with powerful features and have fierce competition in the market. To win in the marketplace, the offerings have to meet user expectations and demonstrate a high level of performance. The collaboration products aim at personal productivity, group networking and basic communication flow in the enterprise. They form the nerve-system of an organization, with processes of companies constructed around them, therefore ensuring performance-centric development of such products would be a key to their success. In this paper, we discuss the performance-oriented design, development and testing considerations that can find its application in large-scale multi-tiered collaboration products. We present a methodology termed as \"Performance in Each Tier\" (PET), which encompasses performance throughout the entire development process. PET concentrates on individual and holistic transactions. The paper describes strategies for dealing with performance issues at early as well as later stages of product development. The approach is applicable to products that have evolved over time with growing customer needs and changing business realities. Finding root causes of performance issues in a large product base and regressing functions for performance fixes, would prove more expensive as compared to prioritizing performance considerations along with feature development. 
This affirms the Performance Engineering concept that performance shall have priority as the functional features do from the commencement of the product development.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126340584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Service Oriented Architectures (SOA) enable a multitude of service providers (SP) to provide loosely coupled and interoperable services at different Quality of Service (QoS) and cost levels. This paper considers business processes composed of activities that are supported by service providers. The structure of a business process may be expressed by languages such as BPEL and allows for constructs such as sequence, switch, while, flow, and pick. This paper considers the problem of finding the set of service providers that minimizes the total execution time of the business process subject to cost and execution time constraints. The problem is clearly NP-hard. However, the paper presents an optimized algorithm that finds the optimal solution without having to explore the entire solution space. This algorithm can be used to find the optimal solution in problems of moderate size. A heuristic solution is also presented and experimental studies that compare the optimal and heuristic solution show that the average execution time obtained with a heuristic allocation of providers to activities does not exceed 6% of that of the optimal solution.
{"title":"A heuristic approach to optimal service selection in service oriented architectures","authors":"D. Menascé, E. Casalicchio, V. Dubey","doi":"10.1145/1383559.1383562","DOIUrl":"https://doi.org/10.1145/1383559.1383562","url":null,"abstract":"Service Oriented Architectures (SOA) enable a multitude of service providers (SP) to provide loosely coupled and interoperable services at different Quality of Service (QoS) and cost levels. This paper considers business processes composed of activities that are supported by service providers. The structure of a business process may be expressed by languages such as BPEL and allows for constructs such as sequence, switch, while, flow, and pick. This paper considers the problem of finding the set of service providers that minimizes the total execution time of the business process subject to cost and execution time constraints. The problem is clearly NP-hard. However, the paper presents an optimized algorithm that finds the optimal solution without having to explore the entire solution space. This algorithm can be used to find the optimal solution in problems of moderate size. A heuristic solution is also presented and experimental studies that compare the optimal and heuristic solution show that the average execution time obtained with a heuristic allocation of providers to activities does not exceed 6% of that of the optimal solution.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128706485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a tool, PerfCenter, which can be used for performance-oriented deployment and configuration of an application in a hosting center, or "data center". While there are a number of tools that aid in performance analysis during the software development cycle, few tools are geared towards aiding a data center architect in making appropriate decisions during the deployment of an application. PerfCenter facilitates this process by allowing specification in terms that are natural to a data center architect. Thus, PerfCenter takes as input the number and "specs" of hosts available in a data center, the network architecture of geographically diverse data centers, the deployment of software on hosts and of hosts on data centers, and the usage information of the application (scenarios, resource consumption), and provides various performance measures such as scenario response times and resource utilizations. We describe the PerfCenter specification and its performance analysis utilities in detail, and illustrate its use in the deployment and configuration of a Webmail application.
{"title":"PerfCenter: a performance modeling tool for application hosting centers","authors":"Akhila Deshpande, V. Apte, S. Marathe","doi":"10.1145/1383559.1383570","DOIUrl":"https://doi.org/10.1145/1383559.1383570","url":null,"abstract":"We present a tool, PerfCenter, which can be used for performance oriented deployment and configuration of an application in a hosting center, or a \"data center\". While there are a numb er of tools which aid in the process of performance analysis during the software development cycle, few tools are geared towards aiding a data center architect in making appropriate decisions during the deployment of an application. PerfCenter facilitates this process by allowing specification in terms that are natural to a data center architect. Thus, PerfCenter takes, as input, the number and \"specs\" of hosts available in a data center, the network architecture of geographically diverse data centers, the deployment of software on hosts, hosts on data centers, and the usage information of the application (scenarios, resource consumption), and provides various performance measures such as scenario response times, and resource utilizations. We describe the PerfCenter specification, and its performance analysis utilities in detail, and illustrate its use in the deployment and configuration of a Webmail application.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126144660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Melia, Catalina M. Lladó, C. U. Smith, R. Puigjaner
Performance model interchange formats are common representations for data that can be used to move models among modeling tools. The Experiment Schema Extension (Ex-SE) provides a means of specifying performance studies (model runs and output). It is independent of a given tool paradigm; e.g., it works with PMIF, S-PMIF, LQN, Petri nets, and other paradigms. Petri nets differ from the other paradigms, however, because they offer representation and analysis capabilities beyond performance analysis, such as constraints on tokens in places, invariant analysis, and reachability analysis. To capitalize on these additional capabilities, this paper presents a specific, extended instantiation of the Ex-SE for Petri nets (PN-Ex). The viability of the approach is demonstrated with a case study carried out using PIPE2 (Platform Independent Petri net Editor 2) with an experimental framework.
{"title":"Experimentation and output interchange for petri net models","authors":"M. Melia, Catalina M. Lladó, C. U. Smith, R. Puigjaner","doi":"10.1145/1383559.1383576","DOIUrl":"https://doi.org/10.1145/1383559.1383576","url":null,"abstract":"Performance model interchange formats are common representations for data that can be used to move models among modeling tools. The Experiment Schema Extension (Ex-SE) provides a means of specifying performance studies (model runs and output). It is independent of a given tool paradigm, e.g., it works with PMIF, S-PMIF, LQN, Petri nets and other paradigms. Petri nets are different from the other paradigms, however, because they provide additional representation and analysis capabilities in addition to performance analysis. Examples include constraints on tokens in places, invariant analysis, reachability analysis, and so on. So to capitalize on these additional capabilities, this paper presents a specific, extended instantiation of the Ex-SE for Petri nets (PN-Ex). The viability of the approach is demonstrated with a case study carried out using PIPE2 (Platform Independent Petri net Editor 2) with an experimental framework.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121533502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dharmesh Thakkar, A. Hassan, Gilbert Hamann, P. Flora
Techniques for performance modeling are broadly classified into measurement-, analytical-, and simulation-based techniques. Measurement-based performance modeling is commonly adopted in practice. It requires the execution of a large number of performance tests to build accurate performance models, and these tests must be repeated for every release or build of an application: a time-consuming and error-prone manual process. In this paper, we present a framework for the systematic and automated building of measurement-based performance models. The framework is based on our experience in performance modeling of two large applications: the DVD Store application by Dell and another, larger enterprise application. We use the Dell DVD Store application as a running example to demonstrate the various steps in our framework. We present the benefits and shortcomings of our framework, and discuss the expected reduction in effort from adopting it.
{"title":"A framework for measurement based performance modeling","authors":"Dharmesh Thakkar, A. Hassan, Gilbert Hamann, P. Flora","doi":"10.1145/1383559.1383567","DOIUrl":"https://doi.org/10.1145/1383559.1383567","url":null,"abstract":"Techniques for performance modeling are broadly classified into measurement, analytical and simulation based techniques. Measurement based performance modeling is commonly adopted in practice. Measurement based modeling requires the execution of a large number of performance tests to build accurate performance models. These performance tests must be repeated for every release or build of an application. This is a time consuming and error-prone manual process.\u0000 In this paper, we present a framework for the systematic and automated building of measurement based performance models. The framework is based on our experience in performance modeling of two large applications: the DVD Store application by Dell and another larger enterprise application. We use the Dell DVD Store application as a running example to demonstrate the various steps in our framework. We present the benefits and shortcomings of our framework. We discuss the expected reduction in effort due to adopting our framework.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122240624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
F. P. Duarte, W. Hasling, W. Sherman, D. Paulish, R. M. Leão, E. D. S. E. Silva, V. Cortellessa
We introduce a new methodology built around an architecture framework that automatically generates simulation models from the UML diagrams created by requirements engineers and software system architects. The framework takes advantage of a library of node models already specified by expert performance engineers. We envision that requirements engineers and architects will be able to generate optimized performance models by annotating UML deployment and sequence diagrams with performance requirements, and to generate optimized simulation models by composing existing simulation nodes. We report on our experience using the methodology to analyze the performance of a large e-commerce application under two different load-balancing algorithms for its application server. Generating the simulation model with our approach was very efficient: requirements engineers and architects did not have to worry about the details of the simulation node implementations, which were developed by performance engineers, and could therefore focus their work on the UML diagram models related to their own domain of expertise.
{"title":"Extending model transformations in the performance domain with a node modeling library","authors":"F. P. Duarte, W. Hasling, W. Sherman, D. Paulish, R. M. Leão, E. D. S. E. Silva, V. Cortellessa","doi":"10.1145/1383559.1383580","DOIUrl":"https://doi.org/10.1145/1383559.1383580","url":null,"abstract":"We introduce a new methodology that employs an architecture framework that can be used to automatically generate simulation models based on the UML model diagrams created by requirements engineers and software system architects. The framework takes advantage of a library of node models already specified by expert performance engineers. We envision that requirements engineers and architects will be able to generate optimized performance models using this approach by annotating UML deployment diagrams and sequence diagram models with performance requirements. In addition, they would be able to generate optimized simulation models by putting together existing simulation nodes.\u0000 We report on our experience using our methodology to do a performance analysis of a large e-commerce application employing two different load balancing algorithms for the e-commerce application server. We have found that generating the simulation model using our approach was very efficient because requirements engineers and architects did not have to worry about the details of the simulation nodes implementation, which were developed by performance engineers. 
Therefore, they could focus their work on the UML diagram models that were related to their own domain of expertise.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125904614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transformations of software models (such as UML diagrams) into non-functional models (such as Queueing Networks) have brought a real breakthrough to the entire field of non-functional software validation, because they make it possible to automate the generation of a non-functional model from software artifacts. To date, however, almost all existing approaches are based on general-purpose programming languages such as Java. With the rapid evolution of model transformation languages, it is interesting to study how transformations in the software performance engineering domain may benefit from the constructs and tools of these languages. In this paper we present the results of our implementation, in the ATLAS Transformation Language (ATL), of a transformation from UML models to Queueing Network models and, building on a previous implementation of the same transformation in Java, we discuss the differences between the two approaches.
{"title":"Using ATL for transformations in software performance engineering: a step ahead of java-based transformations?","authors":"V. Cortellessa, S. D. Gregorio, A. Marco","doi":"10.1145/1383559.1383575","DOIUrl":"https://doi.org/10.1145/1383559.1383575","url":null,"abstract":"Transformations of software models (such as UML diagrams) into non-functional models (such as Queueing Networks) have brought a real breakthrough to the entire field of non-functional software validation, because they allow to introduce automatism in the generation of a non-functional model from software artifacts. However, up today almost all the existing approaches are based on general purpose programming languages, such as Java. With the rapid evolution of model transformation languages, it is interesting to study how transformations in the software performance engineering domain may benefit from using constructs and tools of these languages. In this paper we present the results of our implementation, in ATLAS Transformation Language (ATL), of a transformation approach from UML models to Queueing Network models and, laying on a previous implementation of the same transformation in Java, we discuss the differences between these two approaches.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"316 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121586626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhonglei Wang, W. Haberl, Stefan Kugele, Michael Tautschnig
In this paper we present an approach for generating executable SystemC models from software designs captured in a new component-based modeling language, COLA, which follows the paradigm of synchronous dataflow. COLA has rigorous semantics and specification mechanisms. Owing to its well-founded semantics, it is possible to establish an integrated development process whose artifacts can be formally reasoned about and processed by automated tools such as model checkers and code generators. However, the resulting models remain abstract and cannot be executed directly. SystemC, by contrast, offers executable models of a component-based flavor. Establishing an automated translation from COLA to SystemC thus allows for design validation and performance analysis during early design phases. We have validated our approach on a case study taken from the automotive domain.
{"title":"Automatic generation of systemc models from component-based designs for early design validation and performance analysis","authors":"Zhonglei Wang, W. Haberl, Stefan Kugele, Michael Tautschnig","doi":"10.1145/1383559.1383577","DOIUrl":"https://doi.org/10.1145/1383559.1383577","url":null,"abstract":"In this paper we present an approach of generating SystemC executable models from software designs captured in a new component-based modeling language, COLA, which follows the paradigm of synchronous dataflow. COLA has rigorous semantics and specification mechanisms. Due to its well-founded semantics, it is possible to establish an integrated development process, the artifacts of which can be formally reasoned about and are dealt with in automated tools such as model checkers and code generators. However, the resulting models remain abstract and cannot be executed immediately. Therefor SystemC offers executable models of a component-based flavor. Establishing an automated translation procedure from COLA to SystemC thus allows for design validation and performance analysis during early design phases. We have validated our approach on a case study taken from the automotive domain.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127922438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent trends in software engineering lean towards model-centric development methodologies, a context in which the UML plays a crucial role. To provide modellers with quantitative insights into their artifacts, the UML benefits from a framework for software performance evaluation provided by MARTE, the UML profile for model-driven development of Real Time and Embedded Systems. MARTE offers rich semantics general enough to allow different quantitative analysis techniques to act as underlying performance engines. In this paper we explore the use of the stochastic process algebra PEPA as one such engine, providing a procedure to systematically map activity diagrams onto PEPA models. Independent activity flows are translated into sequential automata which co-ordinate at the synchronisation points expressed by the fork and join nodes of the activity. The PEPA performance model is interpreted against a Markovian semantics which allows the calculation of performance indices such as throughput and utilisation. We also discuss the implementation of a new software tool, built on the popular Eclipse platform, which performs the fully automatic translation from MARTE-annotated UML activity diagrams to PEPA models.
{"title":"Automatic extraction of PEPA performance models from UML activity diagrams annotated with the MARTE profile","authors":"M. Tribastone, S. Gilmore","doi":"10.1145/1383559.1383569","DOIUrl":"https://doi.org/10.1145/1383559.1383569","url":null,"abstract":"Recent trends in software engineering lean towards modelcentric development methodologies, a context in which the UML plays a crucial role. To provide modellers with quantitative insights into their artifacts, the UML benefits from a framework for software performance evaluation provided by MARTE, the UML profile for model-driven development of Real Time and Embedded Systems. MARTE offers a rich semantics which is general enough to allow different quantitative analysis techniques to act as underlying performance engines. In the present paper we explore the use of the stochastic process algebra PEPA as one such engine, providing a procedure to systematically map activity diagrams onto PEPA models. Independent activity flows are translated into sequential automata which co-ordinate at the synchronisation points expressed by fork and join nodes of the activity. The PEPA performance model is interpreted against a Markovian semantics which allows the calculation of performance indices such as throughput and utilisation. 
We also discuss the implementation of a new software tool powered by the popular Eclipse platform which implements the fully automatic translation from MARTE-annotated UML activity diagrams to PEPA models.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131215127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
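The Markovian semantics mentioned in the abstract above derives indices such as throughput and utilisation from the steady-state distribution of a continuous-time Markov chain. A minimal sketch follows, using a hypothetical two-state component rather than PEPA's actual state space or the tool's solver: it solves the global balance equations πQ = 0 with the normalization Σπ = 1.

```python
def steady_state(Q):
    """Steady-state distribution of a CTMC with generator matrix Q:
    solve pi Q = 0 together with sum(pi) = 1 by Gaussian elimination,
    replacing the last balance equation with the normalization row."""
    n = len(Q)
    A = [[Q[j][i] for j in range(n)] for i in range(n)]  # balance equations
    A[-1] = [1.0] * n                                    # normalization row
    b = [0.0] * (n - 1) + [1.0]
    for col in range(n):                                 # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):                       # back substitution
        s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
        pi[r] = (b[r] - s) / A[r][r]
    return pi

# Hypothetical two-state component: Idle --(work, rate 2.0)--> Busy,
# Busy --(serve, rate 3.0)--> Idle.
Q = [[-2.0, 2.0],
     [3.0, -3.0]]
pi = steady_state(Q)
utilisation = pi[1]        # probability of being in the Busy state
throughput = 3.0 * pi[1]   # completion rate of the 'serve' action
```

For this chain the balance equation 2·π(Idle) = 3·π(Busy) gives π = (0.6, 0.4), so utilisation is 0.4 and the serve throughput is 1.2 per time unit; real PEPA models simply scale this calculation to much larger state spaces.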