Performance-based selection of software and hardware features under parameter uncertainty
L. Elorza, Catia Trubiani, V. Cortellessa, Goiuria Sagardui Mendieta
DOI: 10.1145/2602576.2602585
Configurable software systems allow stakeholders to derive variants by selecting software and/or hardware features. Performance analysis of feature-based systems has attracted considerable interest in recent years; however, a major research challenge remains: conducting such analysis before full knowledge of the system is available, that is, under a certain degree of uncertainty. In this paper we present an approach to analyze the correlation between the selection of features embedding uncertain parameters and system performance. In particular, we provide best- and worst-case performance bounds on the basis of the selected features and, where the gap between these bounds is wide, we carry out a sensitivity analysis aimed at taming the uncertainty of the parameters. The application of our approach to a case study in the e-health domain demonstrates how it supports stakeholders in identifying system variants that meet performance requirements.
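To make the bounding idea concrete, here is a minimal sketch (not the paper's tooling): each selected feature contributes a service demand known only as an interval, and the system is modelled as a single M/M/1 resource, so response time is monotone in the total demand and the bounds sit at the interval endpoints. The feature names, demand intervals, and arrival rate are all invented for illustration.

```python
# Illustrative sketch, not the paper's tooling: feature -> service-demand
# interval in seconds (hypothetical values).
FEATURES = {
    "encryption":  (0.004, 0.009),
    "compression": (0.002, 0.003),
    "logging":     (0.001, 0.005),
}

def response_time(demand: float, arrival_rate: float) -> float:
    """M/M/1 response time; diverges as utilisation approaches 1."""
    utilisation = arrival_rate * demand
    if utilisation >= 1.0:
        return float("inf")  # saturated: the worst-case bound is unbounded
    return demand / (1.0 - utilisation)

def performance_bounds(selected, arrival_rate):
    """Best- and worst-case response time for a feature selection."""
    d_min = sum(FEATURES[f][0] for f in selected)
    d_max = sum(FEATURES[f][1] for f in selected)
    return response_time(d_min, arrival_rate), response_time(d_max, arrival_rate)

best, worst = performance_bounds({"encryption", "logging"}, arrival_rate=50.0)
print(f"best {best * 1000:.1f} ms, worst {worst * 1000:.1f} ms")
# A wide gap between the two bounds is what triggers the sensitivity
# analysis step: narrow the interval of the most influential parameter first.
```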
{"title":"Performance-based selection of software and hardware features under parameter uncertainty","authors":"L. Elorza, Catia Trubiani, V. Cortellessa, Goiuria Sagardui Mendieta","doi":"10.1145/2602576.2602585","DOIUrl":"https://doi.org/10.1145/2602576.2602585","url":null,"abstract":"Configurable software systems allow stakeholders to derive variants by selecting software and/or hardware features. Performance analysis of feature-based systems has been of large interest in the last few years, however a major research challenge is still to conduct such analysis before achieving full knowledge of the system, namely under a certain degree of uncertainty. In this paper we present an approach to analyze the correlation between selection of features embedding uncertain parameters and system performance. In particular, we provide best and worst case performance bounds on the basis of selected features and, in cases of wide gaps among these bounds, we carry on a sensitivity analysis process aimed at taming the uncertainty of parameters. The application of our approach to a case study in the e-health domain demonstrates how to support stakeholders in the identification of system variants that meet performance requirements.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125914696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using architecture-level performance models as resource profiles for enterprise applications
Andreas Brunnert, Kilian Wischer, H. Krcmar
DOI: 10.1145/2602576.2602587

Rising energy and hardware demands are a growing concern in enterprise data centers. It is therefore desirable to limit the hardware resources that need to be added for new enterprise applications (EAs). Detailed capacity planning is required to achieve this goal; otherwise, performance requirements (e.g., response time, throughput, resource utilization) might not be met. This paper introduces resource profiles to support capacity planning. These profiles can be created by EA vendors and allow the energy consumption and performance of EAs to be evaluated for different workloads and hardware environments. Resource profiles are based on architecture-level performance models, which represent the performance-relevant aspects of an EA architecture separately from the hardware environment and workload. The target hardware environment and the expected workload can only be specified by EA hosts and users, respectively. To account for these distinct responsibilities, an approach is introduced to adapt resource profiles created by EA vendors to different hardware environments. A case study validates this concept by creating a resource profile for the SPECjEnterprise2010 benchmark application. Predictions using this profile for two hardware environments match energy consumption and performance measurements with an error mostly below 15%.
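As a rough illustration of how a vendor-supplied profile might be adapted and evaluated by a host, the sketch below assumes a single CPU demand per transaction, linear speed scaling of that demand, and a linear power model between idle and full load; these simplifications are common in capacity planning but are not the paper's exact adaptation approach, and all names and values are hypothetical.

```python
# Minimal sketch of the resource-profile idea; values are invented and the
# linear speed/power models are simplifying assumptions, not the paper's.
from dataclasses import dataclass

@dataclass
class ResourceProfile:       # created once by the EA vendor
    cpu_demand_s: float      # CPU seconds per transaction on a reference core

@dataclass
class HardwareEnvironment:   # specified by the EA host
    cores: int
    speed_ratio: float       # target core speed relative to the reference core
    idle_power_w: float      # power draw at 0% utilisation
    max_power_w: float       # power draw at 100% utilisation

def predict(profile: ResourceProfile, env: HardwareEnvironment, tx_per_s: float):
    """Predict CPU utilisation and power for a workload on a target host."""
    demand = profile.cpu_demand_s / env.speed_ratio   # adapt demand to target core
    utilisation = tx_per_s * demand / env.cores       # utilisation law
    # Linear interpolation between idle and full-load power draw.
    power = env.idle_power_w + utilisation * (env.max_power_w - env.idle_power_w)
    return utilisation, power

profile = ResourceProfile(cpu_demand_s=0.020)
host = HardwareEnvironment(cores=8, speed_ratio=1.25, idle_power_w=120, max_power_w=260)
u, p = predict(profile, host, tx_per_s=200)           # workload from the EA users
print(f"predicted CPU utilisation {u:.0%}, power {p:.0f} W")
```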
{"title":"Using architecture-level performance models as resource profiles for enterprise applications","authors":"Andreas Brunnert, Kilian Wischer, H. Krcmar","doi":"10.1145/2602576.2602587","DOIUrl":"https://doi.org/10.1145/2602576.2602587","url":null,"abstract":"The rising energy and hardware demand is a growing concern in enterprise data centers. It is therefore desirable to limit the hardware resources that need to be added for new enterprise applications (EA). Detailed capacity planning is required to achieve this goal. Otherwise, performance requirements (i.e. response time, throughput, resource utilization) might not be met. This paper introduces resource profiles to support capacity planning. These profiles can be created by EA vendors and allow evaluating energy consumption and performance of EAs for different workloads and hardware environments. Resource profiles are based on architecture-level performance models. These models allow to represent performance-relevant aspects of an EA architecture separately from the hardware environment and workload. The target hardware environment and the expected workload can only be specified by EA hosts and users respectively. To account for these distinct responsibilities, an approach is introduced to adapt resource profiles created by EA vendors to different hardware environments. A case study validates this concept by creating a resource profile for the SPECjEnterprise2010 benchmark application. Predictions using this profile for two hardware environments match energy consumption and performance measurements with an error of mostly below 15%.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123919319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of a static architectural conformance checking method in a line of computer games
Tobias Olsson, Daniel Toll, Anna Wingkvist, Morgan Ericsson
DOI: 10.1145/2602576.2602590
We present an evaluation of a simple method to find architectural problems in a product line of computer games. The method uses dependencies (direct, indirect, or none) to automatically classify types in the implementation to high-level components in the product line architecture. We use a commercially available tool to analyse dependencies in the source code. The automatic classification of types is compared to a manual classification by the developer, and all mismatches are reported. To evaluate the method, we inspect the source code and look for a pre-defined set of architectural problems in all types. We compare the set of types that contain problems to the set of types where the manual and automatic classifications disagree to determine precision and recall. We also investigate what changes are needed to correct the mismatches found, by either designing and implementing changes in the source code or refining the automatic classification. Our evaluation shows that this simple method is effective at detecting architectural problems in a product line of four games. The method is lightweight, customisable, and easy to apply early in the development cycle.
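The core of the evaluation can be shown with a toy example. The type names, component mappings, and problem set below are invented, and the real method derives the automatic mapping from dependency analysis rather than a hard-coded table; only the mismatch reporting and the precision/recall computation are sketched here.

```python
# Toy version of the conformance check with invented names.
manual    = {"Player": "core", "Renderer": "gfx", "SaveGame": "io", "HUD": "gfx"}
automatic = {"Player": "core", "Renderer": "gfx", "SaveGame": "gfx", "HUD": "io"}

# Types found to contain architectural problems during manual inspection.
problem_types = {"SaveGame", "Physics"}

# Report every type where the two classifications disagree.
mismatches = {t for t in manual if automatic.get(t) != manual[t]}
print("mismatching types:", sorted(mismatches))

# Mismatches that point at real problems count as true positives.
true_positives = mismatches & problem_types
precision = len(true_positives) / len(mismatches) if mismatches else 1.0
recall = len(true_positives) / len(problem_types) if problem_types else 1.0
print(f"precision {precision:.2f}, recall {recall:.2f}")
```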
{"title":"Evaluation of a static architectural conformance checking method in a line of computer games","authors":"Tobias Olsson, Daniel Toll, Anna Wingkvist, Morgan Ericsson","doi":"10.1145/2602576.2602590","DOIUrl":"https://doi.org/10.1145/2602576.2602590","url":null,"abstract":"We present an evaluation of a simple method to find architectural problems in a product line of computer games. The method uses dependencies (direct, indirect, or no) to automatically classify types in the implementation to high-level components in the product line architecture. We use a commercially available tool to analyse dependencies in the source code. The automatic classification of types is compared to a manual classification by the developer, and all mismatches are reported. To evaluate the method, we inspect the source code and look for a pre-defined set of architectural problems in all types. We compare the set of types that contained problems to the set of types where the manual and automatic classification disagreed to determine precision and recall. We also investigate what changes are needed to correct the found mismatches by either designing and implementing changes in the source code or refining the automatic classification. Our evaluation shows that the simple method is effective at detecting architectural problems in a product line of four games. The method is lightweight, customisable and easy to implement early in the development cycle.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"772 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115755213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regression verification of AADL models through slicing of system dependence graphs
Andreas Johnsen, K. Lundqvist, P. Pettersson, Kaj Hänninen
DOI: 10.1145/2602576.2602589
Design artifacts of embedded systems are subjected to a number of modifications during the development process. Verified artifacts that are subsequently modified must be re-verified to ensure that no faults have been introduced by the modification. We collectively refer to this type of verification as regression verification. In this paper, we contribute a technique for selective regression verification of embedded systems modeled in the Architecture Analysis and Design Language (AADL). The technique can be used with any AADL-based verification technique to perform regression verification efficiently, by selecting for re-execution only those verification sequences that cover parts affected by the modification. This avoids unnecessary re-verification, and thereby unnecessary cost. The selection is based on the concept of specification slicing through system dependence graphs (SDGs), such that the effect of a modification can be identified.
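A minimal sketch of the selection step follows, assuming the SDG is available as a plain directed graph and each verification sequence is annotated with the nodes it covers; the graph, node names, and sequences are invented for illustration.

```python
# Slice forward from the modified nodes, then re-run only the sequences
# that cover an affected node. Graph and names are illustrative.
from collections import deque

# Edge a -> b means "b depends on a", so effects propagate along edges.
SDG = {
    "sensor": ["filter"],
    "filter": ["controller"],
    "controller": ["actuator", "logger"],
    "actuator": [],
    "logger": [],
}

def forward_slice(graph, modified):
    """All nodes reachable from the modified nodes: the affected part."""
    affected, frontier = set(modified), deque(modified)
    while frontier:
        for successor in graph[frontier.popleft()]:
            if successor not in affected:
                affected.add(successor)
                frontier.append(successor)
    return affected

# Verification sequences and the SDG nodes each one covers.
sequences = {
    "seq_sensing": {"sensor", "filter"},
    "seq_control": {"filter", "controller", "actuator"},
    "seq_logging": {"logger"},
}

affected = forward_slice(SDG, modified={"controller"})
selected = [name for name, nodes in sequences.items() if nodes & affected]
print("re-verify:", selected)  # seq_sensing is unaffected and safely skipped
```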
{"title":"Regression verification of AADL models through slicing of system dependence graphs","authors":"Andreas Johnsen, K. Lundqvist, P. Pettersson, Kaj Hänninen","doi":"10.1145/2602576.2602589","DOIUrl":"https://doi.org/10.1145/2602576.2602589","url":null,"abstract":"Design artifacts of embedded systems are subjected to a number of modifications during the development process. Verified artifacts that subsequently are modified must necessarily be re-verified to ensure that no faults have been introduced in response to the modification. We collectively call this type of verification as regression verification. In this paper, we contribute with a technique for selective regression verification of embedded systems modeled in the Architecture Analysis and Design Language (AADL). The technique can be used with any AADL-based verification technique to efficiently perform regression verification by only selecting verification sequences that cover parts that are affected by the modification for re-execution. This allows for the avoidance of unnecessary re-verification, and thereby unnecessary costs. The selection is based on the concept of specification slicing through system dependence graphs (SDGs) such that the effect of a modification can be identified.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134461668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An empirical investigation of modularity metrics for indicating architectural technical debt
Zengyang Li, Peng Liang, P. Avgeriou, N. Guelfi, Apostolos Ampatzoglou
DOI: 10.1145/2602576.2602581
Architectural technical debt (ATD) is incurred by design decisions that consciously or unconsciously compromise system-wide quality attributes, particularly maintainability and evolvability. ATD needs to be identified and measured so that it can be monitored and eventually repaid when appropriate. In practice, ATD is difficult to identify and measure, since it does not yield behaviors observable by end users. One indicator of ATD is the average number of modified components per commit (ANMCC): a higher ANMCC indicates more ATD in a software system. However, it is difficult and sometimes impossible to calculate ANMCC, because the required data (i.e., the commit log) are not always available. In this work, we propose to use software modularity metrics, which can be calculated directly from source code, as a substitute for ANMCC in indicating ATD. We validate the correlation between ANMCC and modularity metrics through a holistic multiple case study on thirteen open source software projects. The results suggest that two modularity metrics, namely Index of Package Changing Impact (IPCI) and Index of Package Goal Focus (IPGF), correlate significantly with ANMCC and can therefore be used as alternative ATD indicators.
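For illustration, ANMCC can be approximated from a git commit log as sketched below; modified files per commit stand in for components here, and this is not the paper's measurement procedure, which additionally computes IPCI and IPGF from the source code.

```python
# Rough ANMCC approximation: average modified files per non-empty commit.
import subprocess

def anmcc(repo_path: str) -> float:
    """Average number of modified files per non-empty commit."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=%H", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts, current = [], 0
    for line in log.splitlines():
        if len(line) == 40 and all(c in "0123456789abcdef" for c in line):
            counts.append(current)  # a commit hash starts the next commit
            current = 0
        elif line.strip():
            current += 1            # one modified file path
    counts.append(current)
    counts = [c for c in counts if c > 0]  # skip empty/merge commits
    return sum(counts) / len(counts) if counts else 0.0

print(f"ANMCC = {anmcc('.'):.2f}")
```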
{"title":"An empirical investigation of modularity metrics for indicating architectural technical debt","authors":"Zengyang Li, Peng Liang, P. Avgeriou, N. Guelfi, Apostolos Ampatzoglou","doi":"10.1145/2602576.2602581","DOIUrl":"https://doi.org/10.1145/2602576.2602581","url":null,"abstract":"Architectural technical debt (ATD) is incurred by design decisions that consciously or unconsciously compromise system-wide quality attributes, particularly maintainability and evolvability. ATD needs to be identified and measured, so that it can be monitored and eventually repaid, when appropriate. In practice, ATD is difficult to identify and measure, since ATD does not yield observable behaviors to end users. One indicator of ATD, is the average number of modified components per commit (ANMCC): a higher ANMCC indicates more ATD in a software system. However, it is difficult and sometimes impossible to calculate ANMCC, because the data (i.e., the log of commits) are not always available. In this work, we propose to use software modularity metrics, which can be directly calculated based on source code, as a substitute of ANMCC to indicate ATD. We validate the correlation between ANMCC and modularity metrics through a holistic multiple case study on thirteen open source software projects. The results of this study suggest that two modularity metrics, namely Index of Package Changing Impact (IPCI) and Index of Package Goal Focus (IPGF), have significant correlation with ANMCC, and therefore can be used as alternative ATD indicators.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114550850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experiences with modeling memory contention for multi-core industrial real-time systems
Thijmen de Gooijer, K. Eric Harper
DOI: 10.1145/2602576.2602584

The wide availability of multicore CPUs makes concurrency a critical design factor for the software architecture and execution models of industrial controllers, especially when messages pass between tasks running on different cores. To improve performance, we refactored a standardized shared-memory IPC mechanism implemented with traditional kernel locks to use lock-free algorithms. Prototyping the changes made it possible to determine the speed-up when the locks were removed, but we could neither easily confirm whether the IPC performance would suffice for the communication patterns in our real-time system, nor tell how well the implementation would scale to CPUs with more cores than our test machine. In this paper we report on our experience with using a queueing Petri net performance model to predict the impact of memory contention in a multi-core CPU on architecture-level performance. We instantiated our model with benchmark data and prototype measurements. The results of our model simulation provide valuable feedback for design decisions and point at potential bottlenecks. Comparison of the prototype's performance with our simulation results increases the credibility of our work. This paper supports other practitioners who consider applying performance modeling to quantify the quality of their architectures.
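A queueing Petri net is beyond a short example, but the effect being modeled can be sketched with exact mean value analysis (MVA) of a much simpler closed network, in which each core alternates between computing and one access to a single shared memory station; the timing constants below are invented, and this is a back-of-the-envelope stand-in, not the paper's model.

```python
# Simplified contention model: memory queueing erodes scaling as cores
# are added. Constants are invented; this is not the paper's QPN model.

def mva(cores: int, compute_s: float, mem_service_s: float):
    """System throughput and memory residence time for the given core count."""
    queue = 0.0                                     # mean queue length at memory
    for n in range(1, cores + 1):
        residence = mem_service_s * (1.0 + queue)   # arrival theorem
        throughput = n / (compute_s + residence)    # cycles per second
        queue = throughput * residence              # Little's law at memory
    return throughput, residence

for cores in (1, 2, 4, 8, 16):
    x, r = mva(cores, compute_s=100e-9, mem_service_s=20e-9)
    print(f"{cores:2d} cores: {x / 1e6:7.1f}M ops/s, memory residence {r * 1e9:5.1f} ns")
```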
{"title":"Experiences with modeling memory contention for multi-core industrial real-time systems","authors":"Thijmen de Gooijer, K. Eric Harper","doi":"10.1145/2602576.2602584","DOIUrl":"https://doi.org/10.1145/2602576.2602584","url":null,"abstract":"Wide availability of multicore CPUs makes concurrency a critical design factor for the software architecture and execution models of industrial controllers, especially with messages passing between tasks running on different cores. To improve performance, we refactored a standardized shared memory IPC mechanism implemented with traditional kernel locks to use lock-free algorithms. Prototyping the changes made it possible to determine the speed-up when the locks were removed, but we could neither easily confirm whether the IPC performance would suffice for the communication patterns in our real-time system, nor could we tell how well the implementation would scale to CPUs with more cores than our test machine. In this paper we report on our experience with using a queuing petri net performance model to predict the impact of memory contention in a multi-core CPU on architecture level performance. We instantiated our model with benchmark data and prototype measurements. The results from our model simulation provide valuable feedback for design decisions and point at potential bottlenecks. Comparison of the prototype's performance with our model simulation results increases credibility of our work. This paper supports other practitioners who consider applying performance modeling to quantify the quality of their architectures.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130548793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formalizing correspondence rules for automotive architecture views
Y. Dajsuren, Christine M. Gerpheide, Alexander Serebrenik, Anton Wijs, Bogdan Vasilescu, M. Brand
DOI: 10.1145/2602576.2602588
Architecture views have long been used in the software industry to systematically model complex systems by representing them from the perspective of related stakeholder concerns. However, no consensus has been reached on architecture views across automotive architecture description languages and automotive architecture frameworks. This paper therefore presents a set of automotive architecture views based on a detailed study of existing automotive architecture description techniques. Furthermore, we propose a method to formalize correspondence rules between architecture views in order to enforce consistency between them. The approach was implemented as a Java plugin for IBM Rational Rhapsody and evaluated in a case study based on an Adaptive Cruise Control system. The evaluation indicates that the approach is useful for formalizing correspondences between different views and that the tool supports automotive architects.
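As a toy illustration of what a correspondence rule can look like, the check below verifies that every component in a hypothetical functional view is allocated to an ECU in the hardware view; all names are invented, and the paper's rules are formalized and checked inside IBM Rational Rhapsody rather than in standalone code like this.

```python
# Toy correspondence rule: every functional component must be allocated
# to an ECU in the hardware view. Component and ECU names are invented.

functional_view = {"CruiseControl", "RadarFusion", "BrakeManager"}
allocation = {                    # correspondence: component -> ECU
    "CruiseControl": "ECU_Front",
    "RadarFusion": "ECU_Front",
}

for component in sorted(functional_view - allocation.keys()):
    print(f"inconsistency: {component} has no ECU allocation")
```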
{"title":"Formalizing correspondence rules for automotive architecture views","authors":"Y. Dajsuren, Christine M. Gerpheide, Alexander Serebrenik, Anton Wijs, Bogdan Vasilescu, M. Brand","doi":"10.1145/2602576.2602588","DOIUrl":"https://doi.org/10.1145/2602576.2602588","url":null,"abstract":"Architecture views have long been used in software industry to systematically model complex systems by representing them from the perspective of related stakeholder concerns. However, consensus has not been reached for the architecture views between automotive architecture description languages and automotive architecture frameworks. Therefore, this paper presents the automotive architecture views based on an elaborate study of existing automotive architecture description techniques. Furthermore, we propose a method to formalize correspondence rules between architecture views to enforce consistency between architecture views. The approach was implemented in a Java plugin for IBM Rational Rhapsody and evaluated in a case study based on the Adaptive Cruise Control system. The outcome of the evaluation is considered to be a useful approach for formalizing correspondences between different views and a useful tool for automotive architects.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121401005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dealing with uncertainties in the performance modelling of software systems
Diego Perez-Palacin, R. Mirandola
DOI: 10.1145/2602576.2602582

Models play a central role in the assessment of software non-functional properties such as performance and reliability. Models can be used both in the initial phases of development, to support designer decisions, and at runtime, to evaluate the impact of changes to the existing software. However, being abstractions, models inherently include a certain degree of uncertainty. Nevertheless, this aspect is often neglected, and models are used beyond their capabilities. Recognising the presence of uncertainties and managing them would increase the level of trust in a given software model. In this paper we exploit a recently defined taxonomy that classifies the different types of uncertainty, and we define a method that, starting from a given model, helps in recognising the existence of uncertainty and in classifying and managing it. We show the method at work on an example application, with performance as the target non-functional property.
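One common way to make such uncertainty explicit, sketched below, is to let an uncertain model parameter carry a distribution instead of a point estimate and propagate it by Monte Carlo sampling; the single M/M/1 resource and the demand interval are illustrative assumptions, and this shows a quantification step only, not the paper's taxonomy-driven method.

```python
# Propagating parameter uncertainty through a simple performance model:
# the predicted response time becomes a distribution rather than a number.
import random

def response_time(demand: float, arrival_rate: float) -> float:
    """M/M/1 response time for one sampled service demand."""
    return demand / (1.0 - arrival_rate * demand)

random.seed(1)
# Epistemic uncertainty: only a plausible range of the demand is known.
samples = sorted(
    response_time(random.uniform(0.010, 0.016), arrival_rate=50.0)
    for _ in range(10_000)
)
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
print(f"median {median * 1000:.1f} ms, 95th percentile {p95 * 1000:.1f} ms")
```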
{"title":"Dealing with uncertainties in the performance modelling of software systems","authors":"Diego Perez-Palacin, R. Mirandola","doi":"10.1145/2602576.2602582","DOIUrl":"https://doi.org/10.1145/2602576.2602582","url":null,"abstract":"Models play a central role in the assessment of software non-functional properties like performance and reliability. Models can be used both in the initial phases of development to support the designer decisions and at runtime to evaluate the impact of changes in the existing software. However, being abstraction, the models include per-se a certain degree of uncertainty. Nevertheless, often this aspect is neglected and models are used beyond their capabilities. Recognising the presence of uncertainties and managing them, would increase the level of trust in a given software model. In this paper we exploit a recently defined taxonomy that classifies the different types of uncertainties and we define a method that, starting from a given model, helps in recognising the existence of uncertainty, in classifying and managing it. We show the method at work on an example application considering the performance of the application as target non-functional property.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116903923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing and evolving distributed architecture using Kevoree
François Fouquet, Grégory Nain, Erwan Daubert, Johann Bourcier, Olivier Barais, N. Plouzeau, Brice Morin
DOI: 10.1145/2602576.2611461
Modern software applications are distributed and often operate in dynamic contexts, where requirements, assumptions about the environment, and usage profiles continuously change. These changes are difficult to predict and to anticipate at design time. The running software system should thus be able to react on its own, by dynamically adapting its behavior, in order to sustain a required quality of service. A key challenge is to provide the system with the flexibility necessary to perform self-adaptation without compromising dependability. Models@Runtime is an emerging paradigm that aims to transfer traditional modeling activities performed by humans (focusing on quality, verification, and so on) to the running system. In this trend, Kevoree provides a models@runtime platform for designing heterogeneous, distributed, and adaptive applications based on the component-based software engineering paradigm. At the end of this tutorial, participants will be able to develop and assemble new components and communication channels to design complex self-adaptable distributed architectures by reusing existing pieces of code.
{"title":"Designing and evolving distributed architecture using kevoree","authors":"François Fouquet, Grégory Nain, Erwan Daubert, Johann Bourcier, Olivier Barais, N. Plouzeau, Brice Morin","doi":"10.1145/2602576.2611461","DOIUrl":"https://doi.org/10.1145/2602576.2611461","url":null,"abstract":"Modern software applications are distributed and often operate in dynamic contexts, where requirements, assumptions about the environment, and usage profiles continuously change. These changes are difficult to predict and to anticipate at design time. The running software system should thus be able to react on its own, by dynamically adapting its behavior, in order to sustain a required quality of service. A key challenge is to provide the system with the necessary flexibility to perform self-adaptation, without compromising dependability. Models@Runtime is an emerging paradigm aiming at transferring traditional modeling activities (focusing on quality, verification, and so on) performed by humans, to the running system. In this trend, Kevoree provides a models@ runtime platform to design heterogeneous, distributed and adaptive applications based on the component based software engineering paradigm. At the end of this tutorial, applicants will be able to develop and assemble new components and communication channel to design complex self-adaptable distributed architectures by reusing existing piece of code.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"123 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121191135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empirical resilience evaluation of an architecture-based self-adaptive software system
J. Cámara, Pedro Correia, R. Lemos, M. Vieira
DOI: 10.1145/2602576.2602577

Architecture-based self-adaptation is considered a promising approach to drive down the development and operation costs of complex software systems operating in ever-changing environments. However, there is still a lack of evidence supporting the argument that architecture-based self-adaptation benefits resilience compared with other customary approaches, such as adaptation embedded at the code level. In this paper, we report on an empirical study of the impact on resilience of incorporating architecture-based self-adaptation into an industrial middleware used to collect data in highly populated networks of devices. To this end, we compare the results of a resilience evaluation between the original version of the middleware, in which the adaptation mechanisms are embedded at the code level, and a modified version in which the adaptation mechanisms are implemented using Rainbow, a framework for architecture-based self-adaptation. Our results show improved levels of resilience for architecture-based self-adaptation compared to embedded, code-level self-adaptation.
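To illustrate the kind of measure such a comparison rests on, resilience can be summarised as the fraction of runs in which the system keeps meeting its performance requirement while changes are injected; the latency traces and the 200 ms requirement below are invented placeholders, not the study's data.

```python
# Toy resilience measure over two hypothetical versions of a system.
REQUIREMENT_MS = 200.0

def resilience(latencies_ms):
    """Fraction of observed runs that satisfy the requirement."""
    return sum(1 for r in latencies_ms if r <= REQUIREMENT_MS) / len(latencies_ms)

embedded = [150, 420, 180, 510, 190, 230, 170, 480]  # code-level adaptation
rainbow = [160, 210, 175, 190, 185, 205, 170, 195]   # architecture-based

print(f"embedded adaptation: {resilience(embedded):.0%} of runs within requirement")
print(f"architecture-based:  {resilience(rainbow):.0%} of runs within requirement")
```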
{"title":"Empirical resilience evaluation of an architecture-based self-adaptive software system","authors":"J. Cámara, Pedro Correia, R. Lemos, M. Vieira","doi":"10.1145/2602576.2602577","DOIUrl":"https://doi.org/10.1145/2602576.2602577","url":null,"abstract":"Architecture-based self-adaptation is considered as a promising approach to drive down the development and operation costs of complex software systems operating in ever changing environments. However, there is still a lack of evidence supporting the arguments for the beneficial impact of architecture-based self-adaptation on resilience with respect to other customary approaches, such as embedded code-based adaptation. In this paper, we report on an empirical study about the impact on resilience of incorporating architecture-based self-adaptation in an industrial middleware used to collect data in highly populated networks of devices. To this end, we compare the results of resilience evaluation between the original version of the middleware, in which adaptation mechanisms are embedded at the code-level, and a modified version of that middleware in which the adaptation mechanisms are implemented using Rainbow, a framework for architecture-based self-adaptation. Our results show improved levels of resilience in architecture-based compared to embedded code-based self-adaptation.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134024310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}