We present a generic method for the efficient constraint re-resolution of a component-based software architecture after changes such as the addition, removal and modification of components. Given a formal description of an evolving system as a constraint-specification problem, our method identifies and executes the re-resolution steps required to verify the system's compliance with its constraints after each change. At each step, satisfiability modulo theories (SMT) techniques determine the satisfiability of component constraints expressed as logical formulae over suitably chosen theories of arithmetic, reusing results obtained in previous steps. We illustrate the application of the approach on a constraint-satisfaction problem arising from cloud-deployed software services. The incremental method is shown to re-resolve system constraints in a fraction of the time taken by standard SMT resolution.
"Efficient re-resolution of SMT specifications for evolving software architectures". Kenneth Johnson, R. Calinescu. International ACM SIGSOFT Conference on Quality of Software Architectures, 2014-06-27. DOI: 10.1145/2602576.2602578.
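The incremental idea in the abstract above can be illustrated with a toy sketch (this is not the paper's SMT-based method: a real implementation would hand each constraint set to an SMT solver over arithmetic theories rather than evaluate Python predicates). Constraints are grouped per component, satisfiability results are cached, and only components affected by a change are re-checked:

```python
# Toy sketch of incremental constraint re-resolution. Component and
# configuration names are invented for illustration.

def check(constraints, config):
    """Evaluate a list of constraint predicates against a configuration."""
    return all(c(config) for c in constraints)

class IncrementalResolver:
    def __init__(self, config):
        self.config = config
        self.components = {}  # component name -> list of constraint predicates
        self.cache = {}       # component name -> last satisfiability result

    def update(self, name, constraints):
        """Add or modify a component; only its cached result is invalidated.
        (Changing self.config itself would require clearing the whole cache.)"""
        self.components[name] = constraints
        self.cache.pop(name, None)

    def remove(self, name):
        self.components.pop(name, None)
        self.cache.pop(name, None)

    def resolve(self):
        """Re-check only components whose cached result was invalidated."""
        for name, constraints in self.components.items():
            if name not in self.cache:
                self.cache[name] = check(constraints, self.config)
        return all(self.cache.values())
```

In the paper's setting, the cached entries would be solver state or previously established results rather than booleans; the caching-and-invalidation structure, not the evaluation, is the point of the sketch.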
J. Chavarriaga, Carlos Noguera, R. Casallas, V. Jonckers
When developing and deploying applications in the cloud, architects face the challenge of reconciling architectural decisions with the options and restrictions imposed by the chosen cloud provider. An architectural decision can be seen as a two-step process: selecting architectural tactics to promote quality attributes, and choosing design alternatives to implement those tactics. The available design alternatives are limited by what the cloud provider offers. When configuring the cloud platform and its services as directed by the chosen tactics, the architect must be mindful of conflicts among the available alternatives. These trade-offs among the desired quality attributes can be difficult to detect, understand and, ultimately, resolve. In this paper, we consider the case of Jelastic, a particular cloud platform provider, to illustrate: 1) the modeling of architectural tactics and their corresponding design alternatives using cloud configuration options, and 2) a process that exploits these models to determine which options to use in order to implement a combination of tactics. Furthermore, we present an analysis for this cloud provider that explains which combinations of tactics and configurations lead to trade-offs.
"Architectural tactics support in cloud computing providers: the Jelastic case". J. Chavarriaga, Carlos Noguera, R. Casallas, V. Jonckers. International ACM SIGSOFT Conference on Quality of Software Architectures, 2014-06-27. DOI: 10.1145/2602576.2602580.
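A minimal sketch of how such a tactic-to-option model can support conflict detection (the tactic names, options, and conflict pairs below are invented for illustration and are not Jelastic's actual configuration model):

```python
# Hypothetical mapping from architectural tactics to the provider options
# that implement them; all names are invented.
TACTIC_OPTIONS = {
    "horizontal_scaling": {"auto_scale_nodes"},
    "session_failover": {"sticky_sessions_off", "shared_session_storage"},
    "low_latency": {"sticky_sessions_on"},
}

# Pairs of options the (hypothetical) provider cannot enable together.
CONFLICTS = {frozenset({"sticky_sessions_on", "sticky_sessions_off"})}

def plan(tactics):
    """Union the options required by the chosen tactics and report any
    conflicting pairs, i.e. quality-attribute trade-offs made visible."""
    options = set()
    for tactic in tactics:
        options |= TACTIC_OPTIONS[tactic]
    trade_offs = [set(c) for c in CONFLICTS if c <= options]
    return options, trade_offs
```

Choosing both "session_failover" and "low_latency" would surface the sticky-sessions conflict, which is exactly the kind of trade-off the paper's analysis aims to explain.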
Modern software operates in highly dynamic and often unpredictable environments that can degrade its quality of service. It is therefore increasingly important to have systems that can adapt their behavior to the environment in which they execute at any given moment. Nevertheless, software with self-adaptive capabilities is difficult to develop. To ease its development, various architectural frameworks have been proposed in recent years. A characteristic shared by most of these frameworks is that the applications they define make internal use of models, which are analyzed to discover the configurations that best fit the changing environment. In this context, this tutorial presents current research advances in architectural frameworks for building self-adaptive software that meets its quality-of-service (QoS) requirements. We discuss architectures that use self-adaptation to improve QoS and whose adaptations are planned as a result of the analysis of formal models. We also describe a set of current research challenges that still prevent the fully automatic control of dependable self-adaptive software.
"Software QoS enhancement through self-adaptation and formal models". R. Mirandola, Diego Perez-Palacin. International ACM SIGSOFT Conference on Quality of Software Architectures, 2014-06-27. DOI: 10.1145/2602576.2611459.
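The model-analysis-driven adaptation the tutorial describes can be sketched, very roughly, as one planning step of a monitor-analyze-plan-execute loop (the configurations and the predicted-response-time formula below are invented placeholders, not any particular framework's model):

```python
# Hypothetical configuration space; names and costs are invented.
CONFIGS = {
    "one_replica":   {"replicas": 1, "cost": 1},
    "two_replicas":  {"replicas": 2, "cost": 2},
    "four_replicas": {"replicas": 4, "cost": 4},
}

def predicted_response_time(cfg, arrival_rate):
    """Placeholder analysis model: an M/M/1-style bound assuming a service
    rate of 10 req/s per replica (an invented figure)."""
    capacity = 10 * cfg["replicas"]
    if arrival_rate >= capacity:
        return float("inf")  # saturated: the bound does not hold
    return 1.0 / (capacity - arrival_rate)

def plan_adaptation(arrival_rate, response_time_bound):
    """Planning step: pick the cheapest configuration whose predicted QoS
    satisfies the bound, or None if no configuration is feasible."""
    feasible = [
        (cfg["cost"], name)
        for name, cfg in CONFIGS.items()
        if predicted_response_time(cfg, arrival_rate) <= response_time_bound
    ]
    return min(feasible)[1] if feasible else None
```

In the frameworks the tutorial surveys, the analysis step would be a formal model (e.g. a queueing network or a Markov model) rather than a closed-form placeholder, but the monitor-then-plan structure is the same.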
Alexander Wert, M. Oehler, Christoph Heger, Roozbeh Farahbod
Performance problems such as high response times in software applications have a significant effect on customer satisfaction. In enterprise applications, performance problems frequently manifest as inefficient or unnecessary communication patterns between software components, originating from poor architectural design or implementation. Because of the high manual effort involved, thorough performance analysis is often neglected in practice. To overcome this problem, automated engineering approaches are required for the detection of performance problems. In this paper, we introduce several heuristics for the measurement-based detection of well-known performance anti-patterns in inter-component communications. The detection heuristics comprise load and instrumentation descriptions for performance tests as well as corresponding detection rules. We integrate these heuristics with Dynamic Spotter, a framework for the automatic detection of performance problems. We evaluate our heuristics on four evaluation scenarios based on an e-commerce benchmark (TPC-W), where the heuristics detect the expected communication performance anti-patterns and pinpoint their root causes.
"Automatic detection of performance anti-patterns in inter-component communications". Alexander Wert, M. Oehler, Christoph Heger, Roozbeh Farahbod. International ACM SIGSOFT Conference on Quality of Software Architectures, 2014-06-27. DOI: 10.1145/2602576.2602579.
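A detection rule of the kind the abstract describes might look, in spirit, like the following sketch (the trace format and threshold are assumptions for illustration, not the authors' actual heuristics): flag component pairs whose per-request call count suggests "chatty" communication:

```python
from collections import Counter

# Assumed tuning parameter: how many calls between the same pair within a
# single request we consider suspicious.
CHATTY_THRESHOLD = 10

def detect_chatty(trace):
    """trace: list of (request_id, caller, callee) call records, as might be
    collected by instrumenting inter-component calls. Returns the set of
    (caller, callee) pairs flagged as a chatty-communication anti-pattern."""
    calls = Counter((req, caller, callee) for req, caller, callee in trace)
    return {
        (caller, callee)
        for (req, caller, callee), n in calls.items()
        if n >= CHATTY_THRESHOLD
    }
```

In the paper's approach, such a rule would be paired with a load description and an instrumentation description telling the framework which measurements to collect before the rule is evaluated.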
Software architecture evaluation is an essential part of architecture management and a means to uncover problems and increase confidence in the capability of the software architecture to fulfill the most critical requirements. Architecture evaluation is typically carried out at an early stage of software development. However, development efforts are often concerned with the further development of existing software. We present a case study of the software architecture board (SWAB) initiative carried out at a company called NSN. SWAB employed a lightweight architecture evaluation and management approach to exchange architectural experiences across related products and assess their ability to fulfill future requirements. SWAB operated for two years but ultimately came to an end because the desired objectives were not achieved. The case study provides lessons for the evaluation of architecture in mature products and for using a lightweight evaluation approach. Evaluation in mature products seems to be not about finding problems and risks or making trade-offs, but about architecture management: better communication, raising awareness of the architecture, and increasing confidence in the architecture throughout the organization. A lightweight architecture evaluation seems to be a good approach, especially for mature products. However, motivating and justifying the architectural evaluation of mature products remains challenging, as their architecture is already in place and has evolved over years toward good candidate solutions, although the need for inter-product communication and the alignment of architectural issues can be argued for.
"Architecture management and evaluation in mature products: experiences from a lightweight approach". M. Raatikainen, J. Savolainen, T. Männistö. International ACM SIGSOFT Conference on Quality of Software Architectures, 2014-06-27. DOI: 10.1145/2602576.2602583.
Software quality should be built in from the start: a priori. Software quality can only be guaranteed through verification: a posteriori. It is easy to find arguments for either of these views. Is quality an a priori or an a posteriori attribute? Saying "both" does not answer the question, only turns it into a new one: how should we combine the two approaches? Building on both my experience with the Eiffel method and the verification work at ETH, I will try to define what exact doses of, respectively, "correctness by construction" and modern verification techniques can, at a realistic cost, yield the best possible quality. The ETH work is based on the idea of "Verification As a Matter Of Course": making verification available to all developments, not just the most critical applications. Integrated in the Eiffel Verification Environment (EVE), the approach combines many different forms of verification, some static (proofs, based on Boogie), some dynamic (tests, based on the AutoTest automatic test framework). The talk will draw on results from the EVE effort to discuss future trends in the production of reliable architectures.
"Trust or verify?". B. Meyer. International ACM SIGSOFT Conference on Quality of Software Architectures, 2014-06-27. DOI: 10.1145/2602576.2611460.
Rania Mzid, C. Mraidha, Jean-Philippe Babau, M. Abid
Model-based approaches for the development of software-intensive real-time embedded systems allow the early verification of timing properties at the design phase. At this phase, the Real-Time Operating System (RTOS) may not yet have been chosen, so assumptions about the software platform are made to enable timing verification such as the schedulability analysis of the tasks describing the application. Among these assumptions is the synchronization protocol used to manage concurrent access to the resources shared between tasks. A classical solution is to adopt the Priority Ceiling Protocol (PCP) to avoid deadlocks. However, when this protocol is not provided by the target RTOS on which the application will be deployed, the concurrency model becomes unimplementable and a new synchronization protocol must be considered. In this paper, we propose the Shared Resource Merge Pattern (SRMP), which aims to prevent deadlocks when the PCP is not supported by the target RTOS. The application of this pattern to the concurrency model must guarantee that the timing properties of the real-time application are still met.
"SRMP: a software pattern for deadlocks prevention in real-time concurrency models". Rania Mzid, C. Mraidha, Jean-Philippe Babau, M. Abid. International ACM SIGSOFT Conference on Quality of Software Architectures, 2014-06-27. DOI: 10.1145/2602576.2602591.
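A rough sketch of the merging idea suggested by the pattern's name (this is our reading of "merge", not the authors' definition of SRMP): resources that some task uses together, and that could therefore be involved in nested locking, are merged into a single lock group, so no task ever holds one shared lock while requesting another, removing the circular-wait condition for deadlock:

```python
def merge_shared_resources(tasks):
    """tasks: dict mapping a task name to the set of shared resources it uses.
    Returns disjoint resource groups, each to be guarded by a single lock:
    any two resources used together by some task end up in the same group."""
    groups = []  # disjoint sets of resources
    for used in tasks.values():
        # Merge this task's resources with every group it overlaps.
        overlapping = [g for g in groups if g & used]
        merged = set(used).union(*overlapping)
        groups = [g for g in groups if not (g & used)] + [merged]
    return groups
```

The trade-off, which the abstract's final sentence alludes to, is that coarser lock groups increase blocking, so the schedulability analysis must be redone to confirm the timing properties still hold.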
Design decisions made early in software development have a great impact on software product quality. Design-time reliability prediction is one of the techniques that support software engineers in early design decisions, based on evaluating the reliability impact of the individual design alternatives. The accuracy of reliability prediction depends critically on the accuracy of the reliability prediction models, which in turn rely on uncertain failure parameters (such as the failure probability of component-internal actions). Although the effectiveness of failure-parameter estimation critically influences the usability of the prediction techniques, parameter estimation often relies on expert knowledge and does not receive systematic attention. This paper surveys existing techniques for the estimation and collection of failure parameters in architecture-based reliability prediction models, and presents the findings that can be learned from their detailed analysis.
"Failure data collection for reliability prediction models: a survey". Barbora Buhnova, Stanislav Chren, Lucie Fabriková. International ACM SIGSOFT Conference on Quality of Software Architectures, 2014-06-27. DOI: 10.1145/2602576.2602586.
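The kind of model the surveyed failure parameters feed can be sketched with a simplified, Cheung-style architecture-based reliability formula (a deliberately reduced illustration, not the survey's own model; the numbers in the example are invented):

```python
def system_reliability(components):
    """components: list of (failure_probability, expected_visits) pairs, one
    per architectural component, where expected_visits is the expected number
    of times the component is executed in a run (derivable from a usage
    profile). Returns the probability that a run completes failure-free,
    assuming independent failures."""
    reliability = 1.0
    for p_fail, visits in components:
        reliability *= (1.0 - p_fail) ** visits
    return reliability
```

The formula makes the survey's point concrete: the prediction is only as good as the per-component failure probabilities fed into it, which is exactly the data whose estimation and collection the paper examines.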
When developing software systems, it is crucial to consider non-functional properties at an early development stage to guarantee that the system will satisfy its non-functional requirements. Following the model-based engineering paradigm facilitates an early analysis of the non-functional properties of the system being developed, based on the elaborated design models. Although UML is widely used in model-based engineering, its lack of formal semantics makes it unsuitable for direct model-based analysis. Thus, current model-based analysis approaches transform UML models into formal languages dedicated to analysis, which may introduce the accidental complexity of implementing the required model transformations. The recently introduced fUML standard provides a formal semantics for a subset of UML, enabling the execution of UML models. In this paper, we show how fUML can be utilized to analyze UML models directly, without having to transform them. We present a reusable framework for performing model-based analyses that leverages execution traces of UML models and integrates UML profiles heretofore unsupported by fUML. A case study in the performance analysis domain illustrates the benefits of our framework.
"Combining fUML and profiles for non-functional analysis based on model execution traces". L. Berardinelli, Philip Langer, Tanja Mayerhofer. International ACM SIGSOFT Conference on Quality of Software Architectures, 2013-06-17. DOI: 10.1145/2465478.2465493.
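A sketch of the kind of analysis such trace-based frameworks enable (the event format below is an invented stand-in for fUML execution-trace events): deriving the time spent per activity from start/end events of a model execution:

```python
def activity_durations(trace):
    """trace: chronological list of (timestamp, activity, event) records with
    event in {"start", "end"}, as a model execution engine might emit.
    Returns the total time attributed to each activity."""
    started, totals = {}, {}
    for ts, activity, event in trace:
        if event == "start":
            started[activity] = ts
        else:  # "end": close the matching open interval
            totals[activity] = totals.get(activity, 0) + ts - started.pop(activity)
    return totals
```

In the paper's setting, the timestamps would come from performance annotations attached via UML profiles, and the analysis would be one pluggable client of the framework rather than a standalone function.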
In general, a software architecture is documented using architecture views that address the concerns of the different stakeholders. The current trend recognizes that the set of viewpoints should not be fixed; instead, multiple viewpoints may be introduced to design and document the software architecture. To ensure the quality of the software architecture, various architecture evaluation approaches have been introduced. In addition, several documentation guidelines have been provided to ensure the quality of the software architecture document. Unfortunately, the evaluation of the adopted viewpoints used to design and document the software architecture has not been considered explicitly. If the architectural viewpoints are not well defined, this will implicitly affect the quality of the design and the documentation of the software architecture. We present an evaluation framework for assessing existing or newly defined software architecture viewpoint languages. The approach is based on software language engineering techniques and considers each viewpoint as a metamodel. It does not assume a particular architecture framework and can be applied to existing or newly defined viewpoint languages. We illustrate our approach by modeling and reviewing the first and second editions of the viewpoint languages of the Views and Beyond approach.
"Evaluation framework for software architecture viewpoint languages". B. Tekinerdogan, Elif Demirli. International ACM SIGSOFT Conference on Quality of Software Architectures, 2013-06-17. DOI: 10.1145/2465478.2465483.