F. Chagas, R. Durelli, Ricardo Terra, V. V. D. Camargo
There are two important artifacts in any Architecture-Conformance Checking (ACC) approach: i) the representation of the planned architecture (PA) and ii) the representation of the current architecture (CA). Often, distinct meta-models are adopted for representing the PA and the CA within the same ACC approach. Moreover, meta-models that are unsuitable for representing architectural details are commonly adopted. This heterogeneity makes the checking algorithms complex, since they must cope with instances that comply with two different meta-models or lack proper architectural abstractions. KDM is an ISO meta-model proposed by the OMG whose goal is to become the standard representation of systems in modernization tools. It is able to represent many aspects of a software system, including source-code details, architectural abstractions, and the dependencies between them. However, to date, no research has shown how KDM can be used in ACC approaches. Therefore, we present an investigation of adopting KDM as the single meta-model for representing both the PA and the CA in ACC approaches. We have developed a three-step ACC approach called ArchKDM. In the first step, a DSL assists in the PA specification; in the second step, an Eclipse plug-in provides the necessary support; and in the last step, the checking is conducted. We have also evaluated our approach using two real-world systems, and the results were very promising, revealing no false positives or false negatives.
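The core check the abstract describes can be pictured with a small sketch. The following Python fragment illustrates architecture-conformance checking in general, not ArchKDM itself: the module names and allowed-dependency rules are invented, and a real implementation would extract the current-architecture dependencies from a KDM instance rather than from a hard-coded list.

```python
# Minimal sketch of architecture-conformance checking (hypothetical modules).
# PA: each module maps to the set of modules it is allowed to depend on.
ALLOWED = {
    "view": {"controller"},
    "controller": {"model"},
    "model": set(),
}

# CA: dependencies extracted from the implementation (here, hard-coded;
# in ArchKDM they would come from a KDM model of the system).
extracted = [
    ("view", "controller"),
    ("controller", "model"),
    ("view", "model"),  # violates the planned architecture above
]

def check(allowed, dependencies):
    """Return the dependencies that the planned architecture forbids."""
    return [(src, dst) for src, dst in dependencies
            if dst not in allowed.get(src, set())]

for src, dst in check(ALLOWED, extracted):
    print(f"violation: {src} must not depend on {dst}")
```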
{"title":"KDM as the Underlying Metamodel in Architecture-Conformance Checking","authors":"F. Chagas, R. Durelli, Ricardo Terra, V. V. D. Camargo","doi":"10.1145/2973839.2973851","DOIUrl":"https://doi.org/10.1145/2973839.2973851","url":null,"abstract":"There are two important artifacts in any Architecture-Conformance Checking (ACC) approach: i) the representation of the PA and ii) the representation of the CA. Many times, inside the same ACC approach, distinct meta-models are adopted for representing the PA and the CA. Besides, it is common the adoption of meta-models unsuitable for representing architectural details. This heterogeneity makes the checking algorithms complex since they must cope with instances that comply with two different meta-models or do not have proper architectural abstractions. KDM is an ISO meta-model proposed by OMG whose goal is to become the standard representation of systems in modernization tools. It is able to represent many aspects of a software system, including source code details, architectural abstractions and the dependencies between them. However, up to this moment, there is no research showing how KDM can be used in ACC approaches. Therefore we present an investigation of adopting KDM as the unique meta-model for representing PA and CA in ACC approaches. We have developed a three-steps ACC approach called ArchKDM. In the first step a DSL assists in the PA specification; in the second step an Eclipse plug-in provides the necessary support and in the last step the checking is conducted. We have also evaluate our approach using two real world systems and the results were very promising, revealing no false positives or negatives.","PeriodicalId":415612,"journal":{"name":"Proceedings of the XXX Brazilian Symposium on Software Engineering","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124092311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Juliana Oliveira, N. Cacho, D. Borges, Thaisa Silva, F. C. Filho
Previous work has shown that robustness-related issues, such as functional errors and app crashes, rank among the most common causes of complaints about mobile phone apps. Since most Android applications are written in Java, exception handling is the main mechanism they employ to report and handle errors, just as in standard Java applications. Thus, the proper use of this mechanism is closely linked to app robustness. Nonetheless, to the best of our knowledge, no previous study has analyzed the relationship between source code changes and uncaught exceptions, a common cause of bugs in Android apps, nor whether exception handling code in these apps evolves in the same way as in standard Java applications. This paper presents an empirical study aimed at understanding the relationship between changes in Android programs and their robustness, and at comparing the evolution of exception handling code in Android and standard Java applications.
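To make the object of study concrete, here is a hedged sketch of one way such a relationship could be mined: classifying a commit diff as touching exception-handling code by scanning changed Java lines for exception-related keywords. The heuristic and the diff text are invented for illustration; the paper does not describe its tooling at this level.

```python
# Hypothetical heuristic: does a commit diff touch exception-handling code?
import re

EH_TOKEN = re.compile(r"\b(try|catch|finally|throw|throws)\b")

def touches_exception_handling(diff_text):
    # Keep only added/removed lines, dropping the +++/--- file headers.
    changed = [line[1:] for line in diff_text.splitlines()
               if line.startswith(("+", "-"))
               and not line.startswith(("+++", "---"))]
    return any(EH_TOKEN.search(line) for line in changed)

diff = """\
+    try {
+        parse(input);
+    } catch (IOException e) {
+        Log.w(TAG, "bad input", e);
+    }
"""
print(touches_exception_handling(diff))  # True
```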
{"title":"An Exploratory Study of Exception Handling Behavior in Evolving Android and Java Applications","authors":"Juliana Oliveira, N. Cacho, D. Borges, Thaisa Silva, F. C. Filho","doi":"10.1145/2973839.2973843","DOIUrl":"https://doi.org/10.1145/2973839.2973843","url":null,"abstract":"Previous work has shown that robustness-related issues like functional errors and app crashes rank among the most common causes for complaints about mobile phone apps. Since most Android applications are written in Java, exception handling is the main mechanism they employ to report and handle errors, similarly to standard Java applications. Thus, the proper use of this mechanism is closely linked to app robustness. Nonetheless, to the best of our knowledge, no previous study analyzed the relationship between source code changes and uncaught exceptions, a common cause of bugs in Android apps, nor whether exception handling code in these apps evolves in the same way as in standard Java applications. This paper presents an empirical study aimed at understanding the relationship between changes in Android programs and their robustness and comparing the evolution of the exception handling code in Android and standard Java applications.","PeriodicalId":415612,"journal":{"name":"Proceedings of the XXX Brazilian Symposium on Software Engineering","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123159253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diego Cedrim, L. Sousa, Alessandro F. Garcia, Rohit Gheyi
Code smells in a program are indications of structural quality problems that can be addressed by software refactoring. Refactoring is widely practiced by developers, and considerable development effort has been invested in refactoring tooling support. There is an explicit assumption that software refactoring improves the structural quality of a program by reducing its density of code smells. However, little has been reported about whether, and to what extent, developers successfully remove code smells through refactoring. This paper reports the first longitudinal study intended to address this gap. We analyze how often the commonly used refactoring types affect the density of five types of code smells along the version histories of 25 projects. Our findings are based on the analysis of 2,635 refactorings distributed across 11 different types. Surprisingly, 2,506 refactorings (95.1%) neither reduced nor introduced code smells. These findings suggest that refactorings lead to smell reduction less often than has been reported. According to our data, only 2.24% of refactoring changes removed code smells, and 2.66% introduced new ones. Moreover, several smells induced by refactoring tended to live long, i.e., 146 days on average. These smells were eventually removed only when the smelly elements started to exhibit poor structural quality and, as a consequence, became more costly to get rid of.
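The headline percentages can be sanity-checked from the reported counts. The snippet below assumes the percentages are taken over the 2,635 analyzed refactorings, so the implied absolute counts are approximate.

```python
# Cross-checking the abstract's figures (assuming 2,635 as the base).
total = 2635
neutral = 2506
print(round(100 * neutral / total, 1))  # 95.1 -> matches the reported 95.1%
print(round(total * 0.0224))            # ~59 refactorings removed smells
print(round(total * 0.0266))            # ~70 refactorings introduced smells
```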
{"title":"Does refactoring improve software structural quality? A longitudinal study of 25 projects","authors":"Diego Cedrim, L. Sousa, Alessandro F. Garcia, Rohit Gheyi","doi":"10.1145/2973839.2973848","DOIUrl":"https://doi.org/10.1145/2973839.2973848","url":null,"abstract":"Code smells in a program represent indications of structural quality problems, which can be addressed by software refactoring. Refactoring is widely practiced by developers, and considerable development effort has been invested in refactoring tooling support. There is an explicit assumption that software refactoring improves the structural quality of a program by reducing its density of code smells. However, little has been reported about whether and to what extent developers successfully remove code smells through refactoring. This paper reports a first longitudinal study intended to address this gap. We analyze how often the commonly-used refactoring types affect the density of 5 types of code smells along the version histories of 25 projects. Our findings are based on the analysis of 2,635 refactorings distributed in 11 different types. Surprisingly, 2,506 refactorings (95.1%) did not reduce or introduce code smells. Thus, these findings suggest that refactorings lead to smell reduction less often than what has been reported. According to our data, only 2.24% of refactoring changes removed code smells and 2.66% introduced new ones. Moreover, several smells induced by refactoring tended to live long, i.e., 146 days on average. These smells were only eventually removed when smelly elements started to exhibit poor structural quality and, as a consequence, started to be more costly to get rid of.","PeriodicalId":415612,"journal":{"name":"Proceedings of the XXX Brazilian Symposium on Software Engineering","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134642863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adriano M. Rocha, M. Maia
Software reuse provides benefits during the software development and maintenance processes, and the use of APIs is one of the most common forms of reuse. However, obtaining easy-to-understand documentation is a challenge faced by developers. Several papers have proposed alternatives to make API documentation more understandable, or even more detailed. However, these studies have not taken into account the complexity of examples, which would make documentation adaptable to different levels of developer experience. In this work, we developed and evaluated four different methodologies for generating API tutorials from the contents of Stack Overflow and organizing them according to the complexity of understanding. The methodologies were evaluated through tutorials generated for the Swing API. A survey was conducted to evaluate eight different features of the generated tutorials. The overall outcome was positive on several characteristics, showing the feasibility of automatically generated tutorials. In addition, the methodologies that present tutorial elements in order of complexity, separate the tutorial into basic and advanced parts, and select posts with a tutorial nature and didactic source code achieved significantly better results.
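One ingredient of such a methodology, ordering candidate tutorial units by complexity and splitting them into basic and advanced parts, can be sketched as below. The scoring rule (code length plus distinct API types used) and the post data are illustrative assumptions, not the paper's actual metric.

```python
# Hypothetical Stack Overflow posts and an invented complexity score.
posts = [
    {"id": 101, "code_lines": 12, "api_types": 2},
    {"id": 202, "code_lines": 60, "api_types": 9},
    {"id": 303, "code_lines": 25, "api_types": 4},
]

def complexity(post):
    # Longer snippets touching more API types are assumed harder to follow.
    return post["code_lines"] + 5 * post["api_types"]

tutorial = sorted(posts, key=complexity)      # simplest material first
basic, advanced = tutorial[:2], tutorial[2:]  # split into two parts
print([p["id"] for p in basic], [p["id"] for p in advanced])
```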
{"title":"Automated API Documentation with Tutorials Generated From Stack Overflow","authors":"Adriano M. Rocha, M. Maia","doi":"10.1145/2973839.2973847","DOIUrl":"https://doi.org/10.1145/2973839.2973847","url":null,"abstract":"Software reuse provides benefits during the software development and maintenance processes. The use of APIs is one of the most common ways to reuse. However, obtaining an easy-to-understand documentation is a challenge faced by developers. Several papers have proposed alternatives to make API documentation more understandable, or even more detailed. However, these studies have not taken into account the complexity of examples in order to make documentation adaptable to different levels of developer experience. In this work, we developed and evaluated four different methodologies to generate tutorials for APIs from the contents of Stack Overflow and organize them according to the complexity of understanding. The methodologies were evaluated through tutorials generated for the Swing API. A survey was conducted to evaluate eight different features of the generated tutorials. The overall outcome was positive on several characteristics, showing the feasibility of automatically generated tutorials. In addition, the adoption of features for presenting tutorial elements in order of complexity, for separating the tutorial in basic and advanced parts, for selecting posts with a tutorial nature and with didactic source code had significantly better results regarding the generation methodology.","PeriodicalId":415612,"journal":{"name":"Proceedings of the XXX Brazilian Symposium on Software Engineering","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121395693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cristiano M. Cesario, Leonardo Gresta Paulino Murta
Software development using distributed version control systems has recently become more frequent. Such systems bring more flexibility, but also greater complexity for managing and monitoring the multiple existing repositories and their myriad branches. In this paper, we propose DyeVC, an approach to assist developers and repository administrators in identifying dependencies among clones of distributed repositories. It allows understanding what is going on around one's clone and depicts the relationships between existing clones. DyeVC was evaluated over open source projects, showing how they could benefit from having such a tool in place. We also ran an observational study and a benchmark over DyeVC, and the results were promising: it was considered easy to use and fast for most repository history exploration operations, while providing the expected answers.
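The kind of relationship DyeVC surfaces can be illustrated with a minimal sketch: given the sets of commit ids known to two clones, their ahead/behind relationship falls out of plain set differences. Real tools walk the commit graph; the commit ids here are hypothetical, and this is not DyeVC's implementation.

```python
# Ahead/behind between two clones, from their sets of commit ids.
clone_a = {"c1", "c2", "c3", "c5"}  # hypothetical commits known to clone A
clone_b = {"c1", "c2", "c4"}        # hypothetical commits known to clone B

ahead = clone_a - clone_b           # commits B would receive by pulling from A
behind = clone_b - clone_a          # commits A would receive by pulling from B
print(f"A is ahead by {len(ahead)} and behind by {len(behind)} commits")
```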
{"title":"Topology Awareness for Distributed Version Control Systems","authors":"Cristiano M. Cesario, Leonardo Gresta Paulino Murta","doi":"10.1145/2973839.2973854","DOIUrl":"https://doi.org/10.1145/2973839.2973854","url":null,"abstract":"Software development using distributed version control systems has become more frequent recently. Such systems bring more flexibility, but also greater complexity to manage and monitor the multiple existing repositories as well as their myriad of branches. In this paper, we propose DyeVC, an approach to assist developers and repository administrators in identifying dependencies among clones of distributed repositories. It allows understanding what is going on around one's clone and depicting the relationship between existing clones. DyeVC was evaluated over open source projects, showing how they could benefit from having such kind of tool in place. We also ran an observational study and a benchmark over DyeVC, and the results were promising: it was considered easy to use and fast for most repository history exploration operations, while providing the expected answers.","PeriodicalId":415612,"journal":{"name":"Proceedings of the XXX Brazilian Symposium on Software Engineering","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126395843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crescencio Rodrigues Lima Neto, C. Chavez
Product Line Architecture (PLA) design is a key activity for developing successful Software Product Line (SPL) projects. It is a difficult task, mostly due to the complexity of the software systems that SPLs deal with and their variabilities. Metamodels have been used to support the representation of the assets that compose a PLA, of SPL variability, and of the relationships among them. The goal of this study is to characterize the use of metamodeling in PLA design, aiming to identify the main characteristics of metamodels and the elements used for PLA and variability representation, and to trace the evolution of metamodels. We conducted a systematic literature review to identify the primary studies on the use of metamodels in PLA design. Thirty-five studies that proposed metamodels to support PLA design were selected. The review's main findings are: (i) it is difficult to identify research trends because the number of publications varies and metamodels lack standardization; (ii) several metamodels support feature representation; (iii) the majority of studies addressed variability representation with variation points in UML diagrams; and (iv) five evolution lines that describe how metamodels evolved over the years were identified.
{"title":"A Systematic Review on Metamodels to Support Product Line Architecture Design","authors":"Crescencio Rodrigues Lima Neto, C. Chavez","doi":"10.1145/2973839.2973842","DOIUrl":"https://doi.org/10.1145/2973839.2973842","url":null,"abstract":"Product Line Architecture (PLA) design is a key activity for developing successful Software Product Line (SPL) projects. PLA design is a difficult task, mostly due to the complexity of the software systems that SPLs deal with, and their variabilities. Metamodels have been used to support the representation of assets that compose a PLA, SPL variability and the relationships among them. The goal of this study is to characterize the use of metamodeling on PLA design, aiming to identify the main characteristics of metamodels, the elements used for PLA and variability representation and trace the evolution of metamodels. We conducted a systematic literature review to identify the primary studies on the use of metamodels in PLA Design. Thirty-five studies that proposed metamodels to support PLA design were selected. The review main findings are: (i) it is difficult to identify the existence of research trends because the number of publication varies and metamodels lack standardization; (ii) several metamodels support feature representation; (iii) the majority of studies addressed variability representation with variation points in UML diagrams; and, (iv) five evolution lines that describe how metamodels evolved over the years were identified.","PeriodicalId":415612,"journal":{"name":"Proceedings of the XXX Brazilian Symposium on Software Engineering","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125737562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ana Patrícia Fontes Magalhães, A. Andrade, R. Maciel
In the MDD approach, a transformation chain is responsible for automating or semi-automating the MDD development process by encapsulating the strategies for converting models into other models, down to code generation. The design of transformation chains can be complex and demands software engineering facilities, such as processes, languages, and techniques, to ensure reuse and portability. MDD itself provides these facilities and can be used to improve transformation development. In this paper, we present a framework to support the development of model transformation chains using MDD. This framework comprises a development process (MDTDproc) that guides transformation development tasks; a Model Transformation Profile (MTP) as the transformation modeling language; a transformation chain, which helps automate the proposed transformation process; and an environment to support it. We particularly focus on the presentation of MDTDproc and its validation. Initial results have shown that the process is feasible and guides developers toward achieving executable model transformation code.
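As a rough intuition for what a transformation chain automates, the toy sketch below pipes a model through two steps: a platform-independent-to-platform-specific refinement and a code-generation step. The model shapes and the steps are invented; MTP and MDTDproc operate at the metamodel level rather than on plain dictionaries.

```python
# Toy transformation chain: each step consumes a model and produces a
# refined one, ending in generated code text.
def pim_to_psm(pim):
    # Platform-independent -> platform-specific: attach concrete field types.
    return {"class": pim["class"],
            "fields": [(name, "String") for name in pim["fields"]]}

def psm_to_code(psm):
    # Platform-specific model -> Java-like source text.
    body = "\n".join(f"    {ftype} {name};" for name, ftype in psm["fields"])
    return f"public class {psm['class']} {{\n{body}\n}}"

chain = [pim_to_psm, psm_to_code]
model = {"class": "Customer", "fields": ["name", "email"]}
for step in chain:
    model = step(model)
print(model)
```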
{"title":"A Model Driven Transformation Development Process for Model to Model Transformation","authors":"Ana Patrícia Fontes Magalhães, A. Andrade, R. Maciel","doi":"10.1145/2973839.2973841","DOIUrl":"https://doi.org/10.1145/2973839.2973841","url":null,"abstract":"In the MDD approach a transformation chain is responsible for the automation or semi-automation of the MDD development process by encapsulating the strategies to convert models into other models until code generation. The design of transformation chains can be complex and demand software engineering facilities such as processes, languages and techniques in order to ensure reuse and portability. The MDD itself provides these facilities and can be used to improve transformations development. In this paper we present a framework to support the development of model transformation chains using MDD. This framework is comprised of a Development Process (MDTDproc), to guide on transformation development tasks; a Model Transformation Profile (MTP) as the transformation modeling language; a transformation chain, which helps the automation of the proposed transformation process; and an environment to support it. We particularly focus on the presentation of MDTDproc and its validation. Initial results have shown that the process is feasible and guides developers to achieve an executable model transformation code.","PeriodicalId":415612,"journal":{"name":"Proceedings of the XXX Brazilian Symposium on Software Engineering","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121393000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Meira, V. Burégio, Paulo Borba, V. Garcia, Jones O. Albuquerque, S. Soares
Since the early days of computers and programs, the process and outcomes of software development have been a minefield plagued with problems and failures, even as the complexity and complication of software and its development have increased a thousandfold in half a century. Over the years, a number of theories, laws, best practices, manifestos, and methodologies have emerged, with varied degrees of (un)success. Our experience as software engineers of complex and large-scale systems shows that those guidelines are bound to previously defined and often narrow scopes. Enough is enough. Nowadays, nearly every company is in the software and services business, and everything is - or is managed by - software. It is about time, then, that the laws that govern our universe be redefined. In this context, we discuss and present a set of universal laws that leads us to propose the first commandment of software engineering for all varieties of information systems.
{"title":"Programming the Universe: The First Commandment of Software Engineering for all Varieties of Information Systems","authors":"S. Meira, V. Burégio, Paulo Borba, V. Garcia, Jones O. Albuquerque, S. Soares","doi":"10.1145/2973839.2982567","DOIUrl":"https://doi.org/10.1145/2973839.2982567","url":null,"abstract":"Since the early days of computers and programs, the process and outcomes of software development has been a minefield plagued with problems and failures, as much as the complexity and complication of software and its development has increased by a thousandfold in half a century. Over the years, a number of theories, laws, best practices, manifestos and methodologies have emerged, with varied degrees of (un)success. Our experience as software engineers of complex and large-scale systems shows that those guidelines are bound to previously defined and often narrow scopes. Enough is enough. Nowadays, nearly every company is in the software and services business and everything is - or is managed by - software. It is about time, then, that the laws that govern our universe ought to be redefined. In this context, we discuss and present a set of universal laws that leads us to propose the first commandment of software engineering for all varieties of information systems.","PeriodicalId":415612,"journal":{"name":"Proceedings of the XXX Brazilian Symposium on Software Engineering","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125186876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jaime W. Dias, E. Oliveira, Marco Aurélio Graciotto Silva
Customers' demands regarding the quality and complexity of software systems increase every day. Because of this, companies increasingly customize their software processes according to market and project needs. A systematic way to do so is the Software Process Line (SPrL) strategy, in which each product derived from the line is a specific software process. Therefore, variability management is an essential activity. This paper presents an empirical study comparing the compositional and the annotative approaches for representing variability in SPrLs, based on a SCRUM-based SPrL. The Eclipse Process Framework was chosen to represent the compositional approach, whereas SMartySPEM represented the annotative approach. The approaches were compared following a sequential exploratory mixed-methods strategy. A qualitative empirical study comparing these approaches was planned, executed, and previously published with respect to the following criteria: modularity, traceability, error detection, granularity, and systematic management of variability. That study was based on the expertise of software process experts and provided important information for formulating hypotheses about the systematic management of variability, the main dependent variable of this quantitative study. Thus, the quantitative study presented in this paper analyzes the effectiveness of variability representation. As its main contribution, we present preliminary evidence on the effectiveness of variability management, supporting the gathering of solid evidence for further research, in academic and industrial settings, on the compositional and annotative approaches for variability management in SPrLs. As a result of this quantitative empirical study, it was not statistically possible to confirm that the annotative approach is more effective than the compositional one.
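The two styles being compared can be contrasted on a toy Scrum-like process, as sketched below. The process elements are invented: the annotative style keeps one model whose elements are annotated with the variants they belong to, while the compositional style merges separate fragments into a base process.

```python
# Invented Scrum-like process elements, in two variability styles.
BASE = ["plan sprint", "develop", "review"]

# Annotative: one model; optional elements carry the variants they belong to.
ANNOTATED = BASE + [("daily meeting", {"full"}), ("retrospective", {"full"})]

# Compositional: a base process plus separate fragments composed per variant.
FRAGMENTS = {"full": ["daily meeting", "retrospective"]}

def derive_annotative(variant):
    return [e if isinstance(e, str) else e[0]
            for e in ANNOTATED
            if isinstance(e, str) or variant in e[1]]

def derive_compositional(variant):
    return BASE + FRAGMENTS.get(variant, [])

print(derive_annotative("full"))
print(derive_annotative("full") == derive_compositional("full"))  # True
```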
{"title":"Preliminary Empirical Evidence on SPrL Variability Management with EPF and SMartySPEM","authors":"Jaime W. Dias, E. Oliveira, Marco Aurélio Graciotto Silva","doi":"10.1145/2973839.2973850","DOIUrl":"https://doi.org/10.1145/2973839.2973850","url":null,"abstract":"Every day increases the level of demand by customers in regard to the quality and complexity of software systems. Because of this, companies are increasingly customizing their software processes according to market and project needs. A systematic way is the use of the Software Process Line strategy (SPrL), in which each product derived from the line is a specific software process. Therefore, variability management is an essential activity. This paper presents an empirical study comparing the compositional and the annotative approaches in representing variability in SPrLs taking into consideration a SCRUM-based SPrL. Eclipse Process Framework was chosen to represent the compositional approach, whereas SMartySPEM was considered to the annotative approach. The approaches were compared taking into account the sequential exploratory strategy based on mixed-methods. A qualitative empirical study comparing these approaches was planned, executed and previously published with relation to the following set of criteria: modularity, traceability, error detection, granularity and systematic management of variability. Such study was based on the expertise of software process experts and provided important information for the hypothesis formulation about systematic management of variability, the main dependent variable of this quantitative study. Thus, the quantitative study presented in this paper analyzes the effectiveness of variability representation. As the main contribution of this paper, we present preliminary evidence on the effectiveness of variability management, allowing supporting the gathering of solid evidence for further research in academic and industrial set about the compositional and annotative approaches for variability management in SPrLs. As a result of this quantitative empirical study it was not statistically possible to confirm that the annotative approach is more effective than the compositional approach.","PeriodicalId":415612,"journal":{"name":"Proceedings of the XXX Brazilian Symposium on Software Engineering","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131513014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Damasceno, P. Masiero, A. Simão
Access control mechanisms demand rigorous software testing approaches; otherwise, they can end up with security flaws. Finite state machines (FSMs) have been used for testing Role-Based Access Control (RBAC) mechanisms, and complete, but significantly large, test suites can be obtained. Experimental studies have shown that recent FSM testing methods can reduce the overall test suite length for random FSMs. However, since the similarity between random FSMs and those specifying RBAC mechanisms is unclear, these outcomes cannot necessarily be generalized to RBAC. In this paper, we compare the characteristics and effectiveness of test suites generated by traditional and recent FSM testing methods for RBAC policies specified as FSM models. The W, HSI, and SPY methods were applied to RBAC policies specified as FSMs, and the resulting test suites were evaluated considering test characteristics (number of resets, average test case length, and test suite length) and effectiveness on the RBAC fault domain. Our results corroborate outcomes of previous investigations in which test suites presented different characteristics. On average, the SPY method generated test suites with 32% fewer resets, average test case lengths 78% greater than those of W and HSI, and overall lengths 46% lower. There were no differences among the FSM testing methods regarding effectiveness on RBAC. However, the SPY method significantly reduced the overall test suite length and the number of resets.
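The setting can be pictured with a minimal sketch: an RBAC-like policy modeled as a Mealy-style FSM, and a test case executed as an input sequence whose observed outputs are compared against expected ones. The states, inputs, and outputs are invented; methods such as W, HSI, and SPY differ in how they generate these sequences, which drives the reset counts and lengths compared in the paper.

```python
# Invented RBAC-like policy as a Mealy-style FSM:
# (state, input) -> (next_state, output).
TRANSITIONS = {
    ("no_role", "assign"): ("role_on", "granted"),
    ("role_on", "access"): ("role_on", "allowed"),
    ("role_on", "revoke"): ("no_role", "revoked"),
    ("no_role", "access"): ("no_role", "denied"),
}

def run(machine, inputs, initial="no_role"):
    """Apply an input sequence from the initial state (i.e., after a reset)."""
    state, outputs = initial, []
    for symbol in inputs:
        state, out = machine[(state, symbol)]
        outputs.append(out)
    return outputs

# One test case: a reset, an input sequence, and its expected outputs.
assert run(TRANSITIONS, ["assign", "access", "revoke", "access"]) == \
       ["granted", "allowed", "revoked", "denied"]
print("test passed")
```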
{"title":"Evaluating test characteristics and effectiveness of FSM-based testing methods on RBAC systems","authors":"C. Damasceno, P. Masiero, A. Simão","doi":"10.1145/2973839.2973849","DOIUrl":"https://doi.org/10.1145/2973839.2973849","url":null,"abstract":"Access control mechanisms demand rigorous software testing approaches, otherwise they can end up with security flaws. Finite state machines (FSM) have been used for testing Role-Based Access Control (RBAC) mechanisms and complete, but significantly large, test suites can be obtained. Experimental studies have shown that recent FSM testing methods can reduce the overall test suite length for random FSMs. However, since the similarity between random FSMs and these specifying RBAC mechanisms is unclear, these outcomes cannot be necessarily generalized to RBAC. In this paper, we compare the characteristics and effectiveness of test suites generated by traditional and recent FSM testing methods for RBAC policies specified as FSM models. The methods W, HSI and SPY were applied on RBAC policies specified as FSMs and the test suites obtained were evaluated considering test characteristics (number of resets, average test case length, and test suite length) and effectiveness on the RBAC fault domain. Our results corroborate some outcomes of previous investigations in which test suites presented different characteristics. On average, the SPY method generated test suites with 32% less resets, average test case length 78% greater than W and HSI, and overall length 46% lower. There were no differences among FSM testing methods for RBAC regarding effectiveness. However, the SPY method significantly reduced the overall test suite length and the number of resets.","PeriodicalId":415612,"journal":{"name":"Proceedings of the XXX Brazilian Symposium on Software Engineering","volume":"XCIX 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131385969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}