Aiman Hanna, Hai Zhou Ling, J. Furlong, Zhenrong Yang, M. Debbabi
In this paper, we present a joint approach to automating software security testing that combines two techniques: team edit automata (TEA) and security chaining. Team edit automata are used to formally specify the security properties to be tested; they also constitute the monitoring engine of the vulnerability detection process. The security chaining approach is used to generate test data that prove a vulnerability is not only present in the software under test but also exploitable. The combined approach provides elements of a solution towards the automation of software security testing.
{"title":"Targeting Security Vulnerabilities: From Specification to Detection (Short Paper)","authors":"Aiman Hanna, Hai Zhou Ling, J. Furlong, Zhenrong Yang, M. Debbabi","doi":"10.1109/QSIC.2008.35","DOIUrl":"https://doi.org/10.1109/QSIC.2008.35","url":null,"abstract":"In this paper, we present a joint approach to automate software security testing using two approaches, namely team edit automata (TEA), and the security chaining approach. Team edit automata is used to formally specify the security properties to be tested. It also composes the monitoring engine of the vulnerability detection process. The security chaining approach is used to generate test-data for the purpose of proving that a vulnerability is not only present in the software being tested but it is also exploitable. The combined approach provides elements of a solution towards the automation of security testing of software.","PeriodicalId":6446,"journal":{"name":"2008 The Eighth International Conference on Quality Software","volume":"19 1","pages":"97-102"},"PeriodicalIF":0.0,"publicationDate":"2008-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89948255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the context of open source development or software evolution, developers often face test suites that have been developed with no apparent rationale and that may need to be augmented or refined to ensure sufficient dependability, or even reduced to meet tight deadlines. We refer to this process as the re-engineering of test suites. It is important to provide both methodological and tool support to help people understand the limitations of test suites and their possible redundancies, so as to be able to refine them in a cost-effective manner. To address this problem in the case of black-box testing, we propose a methodology based on machine learning that has shown promising results in a case study.
{"title":"Using Machine Learning to Refine Black-Box Test Specifications and Test Suites","authors":"L. Briand, Y. Labiche, Z. Bawar","doi":"10.1109/QSIC.2008.5","DOIUrl":"https://doi.org/10.1109/QSIC.2008.5","url":null,"abstract":"In the context of open source development or software evolution, developers often face test suites which have been developed with no apparent rationale and which may need to be augmented or refined to ensure sufficient dependability, or even reduced to meet tight deadlines. We refer to this process as the re-engineering of test suites. It is important to provide both methodological and tool support to help people understand the limitations of test suites and their possible redundancies, so as to be able to refine them in a cost effective manner. To address this problem in the case of black-box testing, we propose a methodology based on machine learning that has shown promising results on a case study.","PeriodicalId":6446,"journal":{"name":"2008 The Eighth International Conference on Quality Software","volume":"1 1","pages":"135-144"},"PeriodicalIF":0.0,"publicationDate":"2008-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89170659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software aging refers to the phenomenon whereby long-running software shows signs of an increasing failure rate, excessive resource usage, and performance degradation. Software rejuvenation is a proactive approach to dealing with this problem; however, commonly used rejuvenation methods involve relatively large overhead. An alternative is to reduce the severity of software aging by adjusting the settings of related system parameters online. In this paper, we conduct controlled experiments to analyze the severity of software aging under different parameter settings. Based on the experimental data, a metric is defined to measure the severity of software aging. A multiple-input multiple-output (MIMO) model is then constructed to capture the relationship between the severity of software aging and the related parameter settings. The proposed MIMO model gives us a way to control the severity of software aging at runtime.
{"title":"On the Relationship between Software Aging and Related Parameters (Short Paper)","authors":"Yun-Fei Jia, Xiu-E Chen, Lei Zhao, K. Cai","doi":"10.1109/QSIC.2008.54","DOIUrl":"https://doi.org/10.1109/QSIC.2008.54","url":null,"abstract":"Software aging refers to the phenomenon that long-running software shows signs of increasing failing rate, overmuch resource usage, and performance degradation. Software rejuvenation is a proactive approach to dealing with this problem. However, commonly used rejuvenation methods involve a relatively larger overhead. An alternative is to reduce the severity of software aging by online adjusting the settings of related parameters of the system. In this paper, we conduct controlled experiments to analyze severity of software aging under different settings of related parameters. Based on the experimental data, a metric is defined to measure the severity of software aging. A multiple-input and multiple-output (MIMO) model is then constructed to formulate the relationship between severity of software aging and related parameter settings. The proposed MIMO model gives us a way to control the severity of software aging at runtime.","PeriodicalId":6446,"journal":{"name":"2008 The Eighth International Conference on Quality Software","volume":"34 1","pages":"241-246"},"PeriodicalIF":0.0,"publicationDate":"2008-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75560924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A key enabling technology for the SOA-based approach is middleware, which comprises reusable building blocks that codify design patterns. In the SOA-based approach, a system is typically implemented using a composition of a group of such patterns, referred to as a vertical variation. The patterns used in a composition and their configuration options can have a profound impact on system performance. In this paper we present a model-based performance analysis methodology for a system built using a composition of the reactor, active object, and monitor object patterns. We implement the performance model using CSIM and illustrate the methodology with examples. By enabling design-time performance analysis, our methodology avoids many of the disadvantages of post-implementation performance analysis approaches. It can thus provide key guidance towards meeting the performance objectives of a system in a cost-effective manner.
{"title":"Performance Analysis of a Composition of Middleware Patterns (Short Paper)","authors":"Paul J. Vandal, S. Gokhale","doi":"10.1109/QSIC.2008.47","DOIUrl":"https://doi.org/10.1109/QSIC.2008.47","url":null,"abstract":"A key enabling technology for the SOA-based approach is middleware, which comprises reusable building blocks codifying design patterns. In the SOA-based approach, a system is typically implemented using a composition of a group of such patterns, referred to as a vertical variation. The patterns used in a composition and their configuration options can have a profound impact on system performance. In this paper we present a model-based performance analysis methodology for a system built using a composition of the reactor, active object and monitor object patterns. We implement the performance model using CSIM and illustrate the methodology using examples. By enabling design-time performance analysis, our methodology alleviates many of the disadvantages of post-implementation performance analysis approaches. The methodology can thus provide key guidance towards meeting the performance objectives of a system in a cost-effective manner.","PeriodicalId":6446,"journal":{"name":"2008 The Eighth International Conference on Quality Software","volume":"48 1","pages":"175-180"},"PeriodicalIF":0.0,"publicationDate":"2008-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88597641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Random testing (RT) is a fundamental software testing technique. Motivated by the rationale that neighbouring test cases tend to cause similar execution behaviours, adaptive random testing (ART) was proposed as an enhancement of RT that enforces an even spread of random test cases over the input domain. To date, ART has been compared with RT mainly from the perspective of failure-detection capability. Previous studies have shown that ART can use fewer test cases than RT to detect the first software failure. In this paper, we compare ART and RT from the perspective of program-based coverage. Our experimental results show that, given the same number of test cases, ART normally achieves a higher percentage of coverage than RT. In conclusion, ART outperforms RT not only in terms of failure-detection capability, but also in terms of the thoroughness of program-based coverage. Therefore, ART delivers higher confidence in the software under test than RT even when no failure has been revealed.
{"title":"Does Adaptive Random Testing Deliver a Higher Confidence than Random Testing?","authors":"T. Chen, Fei-Ching Kuo, Huai Liu, W. E. Wong","doi":"10.1109/QSIC.2008.23","DOIUrl":"https://doi.org/10.1109/QSIC.2008.23","url":null,"abstract":"Random testing (RT) is a fundamental software testing technique. Motivated by the rationale that neighbouring test cases tend to cause similar execution behaviours, adaptive random testing (ART) was proposed as an enhancement of RT, which enforces random test cases evenly spread over the input domain. ART has always been compared with RT from the perspective of the failure-detection capability. Previous studies have shown that ART can use fewer test cases to detect the first software failure than RT. In this paper, we aim to compare ART and RT from the perspective of program-based coverage. Our experimental results show that given the same number of test cases, ART normally has a higher percentage of coverage than RT. In conclusion, ART outperforms RT not only in terms of the failure-detection capability, but also in terms of the thoroughness of program-based coverage. Therefore, ART delivers a higher confidence of the software under test than RT even when no failure has been revealed.","PeriodicalId":6446,"journal":{"name":"2008 The Eighth International Conference on Quality Software","volume":"51 1","pages":"145-154"},"PeriodicalIF":0.0,"publicationDate":"2008-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83288382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
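The even-spreading idea behind ART can be illustrated with the classic fixed-size-candidate-set (FSCS) variant. The sketch below is ours, not taken from the paper, and assumes a one-dimensional numeric input domain; real ART implementations generalize the distance measure to multidimensional or structured inputs.

```python
import random

def fscs_art_next(executed, domain=(0.0, 1.0), k=10):
    """Pick the next test input: draw k random candidates and keep the
    one farthest from its nearest already-executed test case."""
    candidates = [random.uniform(*domain) for _ in range(k)]
    if not executed:
        # no history yet: any random candidate will do
        return candidates[0]
    # distance from a candidate to the closest previously executed input
    nearest = lambda c: min(abs(c - e) for e in executed)
    return max(candidates, key=nearest)

# Generate ten test inputs that tend to spread evenly over [0, 1].
tests = []
for _ in range(10):
    tests.append(fscs_art_next(tests))
```

Compared with pure RT (which would simply call `random.uniform` each time), every new input is biased away from the region already covered, which is what raises the chance of hitting a contiguous failure region early.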
We describe a process-algebraic approach to verifying process interactions for business collaborations described in the Business Process Modelling Notation (BPMN). We first give an overview of our process semantics for BPMN in the language of Communicating Sequential Processes; we then use a simple example of business collaboration to demonstrate how our semantic model may be used to verify compatibility between business participants in a collaboration.
{"title":"Verifying Business Process Compatibility (Short Paper)","authors":"Peter Y. H. Wong, J. Gibbons","doi":"10.1109/QSIC.2008.6","DOIUrl":"https://doi.org/10.1109/QSIC.2008.6","url":null,"abstract":"We describe a process-algebraic approach to verifying process interactions for business collaboration described in business process modelling notation. We first overview our process semantics for BPMN in the language of communicating sequential processes; we then use a simple example of business collaboration to demonstrate how our semantic model may be used to verify compatibility between business participants in a collaboration.","PeriodicalId":6446,"journal":{"name":"2008 The Eighth International Conference on Quality Software","volume":"93 1","pages":"126-131"},"PeriodicalIF":0.0,"publicationDate":"2008-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77191608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a solution for bridging the ontological gap between the conceptual test specification on the one side and the test implementation on the other. The gap is caused by the different ontologies used on each side and their different levels of granularity: whereas the conceptual side uses the ontology of the application, at an abstract level, the implementation side uses the ontology of the technical architecture, at a very detailed level. The author proposes a solution by which the conceptual level is brought down to a level corresponding to the implementation level and the notions of both sides are explicitly linked to one another. The key concepts are objects and use cases.
{"title":"Bridging the Concept to Implementation Gap in Software System Testing","authors":"H. Sneed","doi":"10.1109/QSIC.2008.48","DOIUrl":"https://doi.org/10.1109/QSIC.2008.48","url":null,"abstract":"The following paper proposes a solution to bridging the ontological gap between the conceptual test specification on the one side and the test implementation on the other. The cause of the gap is the different ontologies used on each side and the different levels of granularity. Whereas on the conceptual side, the ontology of the application is used, and that at an abstract level, on the implementional side, the ontology of the technical architecture is used at a very detailed level. The author proposes here a solution by which the conceptual level is brought down to a level corresponding to the implementation level and with which the notions of both sides are explicitly linked to one another. The key concepts are objects and use cases.","PeriodicalId":6446,"journal":{"name":"2008 The Eighth International Conference on Quality Software","volume":"17 1","pages":"67-73"},"PeriodicalIF":0.0,"publicationDate":"2008-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88027229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Cain, T. Chen, D. Grant, Fei-Ching Kuo, Jean-Guy Schneider
Dynamic data flow analysis is a testing technique that has been successfully used for many procedural programming languages. However, for Object-Oriented (OO) programs, previous investigations have still followed a data-oriented approach to keep track of the state information for various data elements. This paper proposes an OO approach to performing dynamic data flow analysis for OO programs. In this approach, a meta-model of an OO program's runtime structure is constructed to manage the data flow analysis for the program. An implementation of the model for the Java language is presented, illustrating the practicality and effectiveness of this approach.
{"title":"An Object Oriented Approach towards Dynamic Data Flow Analysis (Short Paper)","authors":"A. Cain, T. Chen, D. Grant, Fei-Ching Kuo, Jean-Guy Schneider","doi":"10.1109/QSIC.2008.18","DOIUrl":"https://doi.org/10.1109/QSIC.2008.18","url":null,"abstract":"Dynamic data flow analysis is a testing technique that has been successfully used for many procedural programming languages. However, for Object-Oriented (OO) programs, previous investigations have still followed a data-oriented approach to keep track of the state information for various data elements. This paper proposes an OO approach to perform dynamic data flow analysis for OO programs. In this approach, a meta-model of an OO program's runtime structure is constructed to manage the data flow analysis for the program. An implementation of the model for the Java language is presented, illustrating the practicality and effectiveness of this innovative approach.","PeriodicalId":6446,"journal":{"name":"2008 The Eighth International Conference on Quality Software","volume":"17 1","pages":"163-168"},"PeriodicalIF":0.0,"publicationDate":"2008-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84069791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
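The per-variable state tracking underlying dynamic data flow analysis can be sketched as a small state machine over define/reference/undefine events. The encoding below is our own toy illustration of that bookkeeping; the paper's actual contribution, organizing the tracking around an OO meta-model of the program's runtime structure, is not reproduced here.

```python
# States per variable: 'U' (undefined), 'D' (defined), 'R' (referenced).
# Actions in the execution trace: 'd' (define), 'r' (reference), 'u' (undefine).
# Classic data-flow anomalies: define-define ('Dd'), reference-before-define
# ('Ur'), and define-then-undefine with no intervening use ('Du').
ANOMALIES = {('D', 'd'), ('U', 'r'), ('D', 'u')}

def analyse(trace):
    """Replay a trace of (variable, action) events and report anomalies."""
    state, anomalies = {}, []
    for var, action in trace:
        prev = state.get(var, 'U')
        if (prev, action) in ANOMALIES:
            anomalies.append((var, prev + action))
        state[var] = {'d': 'D', 'r': 'R', 'u': 'U'}[action]
    return anomalies

# x is referenced before being defined; y is defined twice with no use between
print(analyse([('x', 'r'), ('y', 'd'), ('y', 'd'), ('y', 'r')]))
# → [('x', 'Ur'), ('y', 'Dd')]
```

A data-oriented implementation keeps one such state table for the whole program; the OO approach the paper describes instead attaches this information to a runtime model of objects and their members.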
SQL injection is one of the most prominent vulnerabilities in web-based applications. Exploitation of SQL injection vulnerabilities (SQLIV) through successful attacks can have severe consequences such as authentication bypass and leakage of private information. Therefore, testing an application for SQLIV is an important step in ensuring its quality. It is challenging, however, because the sources of SQLIV vary widely: the lack of effective input filters in applications, insecure coding by programmers, inappropriate use of database-manipulation APIs, and so on. Moreover, existing testing approaches do not address the generation of test data sets adequate for detecting SQLIV. In this work, we present a mutation-based approach to SQLIV testing. We propose nine mutation operators that inject SQLIV into application source code. The operators produce mutants that can be killed only with test data containing SQL injection attacks. In this way, we force the generation of an adequate test data set containing effective test cases capable of revealing SQLIV. We implement a MUtation-based SQL Injection vulnerability Checking (testing) tool (MUSIC) that automatically generates mutants for applications written in Java Server Pages (JSP) and performs mutation analysis. We validate the proposed operators on five open-source web-based applications written in JSP and show that they are effective for testing SQLIV.
{"title":"MUSIC: Mutation-based SQL Injection Vulnerability Checking","authors":"H. Shahriar, Mohammad Zulkernine","doi":"10.1109/QSIC.2008.33","DOIUrl":"https://doi.org/10.1109/QSIC.2008.33","url":null,"abstract":"SQL injection is one of the most prominent vulnerabilities for web-based applications. Exploitation of SQL injection vulnerabilities (SQLIV) through successful attacks might result in severe consequences such as authentication bypassing, leaking of private information etc. Therefore, testing an application for SQLIV is an important step for ensuring its quality. However, it is challenging as the sources of SQLIV vary widely, which include the lack of effective input filters in applications, insecure coding by programmers, inappropriate usage of APIs for manipulating databases etc. Moreover, existing testing approaches do not address the issue of generating adequate test data sets that can detect SQLIV. In this work, we present a mutation-based testing approach for SQLIV testing. We propose nine mutation operators that inject SQLIV in application source code. The operators result in mutants, which can be killed only with test data containing SQL injection attacks. By this approach, we force the generation of an adequate test data set containing effective test cases capable of revealing SQLIV. We implement a MUtation-based SQL Injection vulnerabilities Checking (testing) tool (MUSIC) that automatically generates mutants for the applications written in Java Server Pages (JSP) and performs mutation analysis. We validate the proposed operators with five open source web-based applications written in JSP. We show that the proposed operators are effective for testing SQLIV.","PeriodicalId":6446,"journal":{"name":"2008 The Eighth International Conference on Quality Software","volume":"22 1","pages":"77-86"},"PeriodicalIF":0.0,"publicationDate":"2008-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81620232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
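The kill condition at the heart of this approach, that a mutant is distinguishable from the original only by test data containing an actual injection payload, can be sketched as follows. This is our own minimal Python/SQLite illustration (table, function, and payload names are invented), not one of the paper's nine JSP operators.

```python
import sqlite3

def query_safe(conn, user_id):
    """Original statement: the value is bound, never interpolated."""
    return conn.execute(
        "SELECT name FROM accounts WHERE id = ?", (user_id,)).fetchall()

def query_mutant(conn, user_id):
    """Mutant: string concatenation introduces an SQLIV."""
    return conn.execute(
        "SELECT name FROM accounts WHERE id = " + user_id).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

# A benign test datum does not distinguish the mutant from the original ...
assert query_safe(conn, "1") == query_mutant(conn, "1")

# ... but an injection payload does, so only such a payload kills the mutant.
attack = "0 OR 1=1"
assert query_safe(conn, attack) == []                        # bound: no match
assert query_mutant(conn, attack) == [("alice",), ("bob",)]  # injected: all rows
```

Because a test suite must contain an attack-like input to kill such a mutant, mutation analysis over these operators pressures test-data generation toward inputs that actually reveal SQLIV.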
Lianshan Sun, Gang Huang, Yanchun Sun, Hui Song, Hong Mei
Access control of sensitive resources is a widely used means of achieving information security. When building large-scale systems on popular commercial component middleware, such as J2EE, the usual way to enforce access control is to define access control configurations for components in a declarative manner. These configurations are interpreted by the J2EE security service to grant or deny access requests to components. However, it is difficult for developers to define correct access control configurations from complex and sometimes ambiguous real-world access control requirements. The difficulties stem mainly from the complexity of configuring the large number of component methods in large-scale component-based systems and from quality constraints on the configurations, such as their completeness, consistency, and performance overhead. In this paper, we propose a requirements-model-driven approach for the automatic generation of J2EE access control configurations and demonstrate it on a J2EE blueprint application.
{"title":"An Approach for Generation of J2EE Access Control Configurations from Requirements Specification","authors":"Lianshan Sun, Gang Huang, Yanchun Sun, Hui Song, Hong Mei","doi":"10.1109/QSIC.2008.4","DOIUrl":"https://doi.org/10.1109/QSIC.2008.4","url":null,"abstract":"Access control of sensitive resources is a widely used means to achieve information security. When building large-scale systems based on popular commercial component middleware, such as J2EE, a usual way to enforce access control is to define access control configurations for components in a declarative manner. These configurations can be interpreted by the J2EE security service to grant or deny access requests to components. However, it is difficult for the developers to define correct access control configurations according to complex and sometimes ambiguous real-world access control requirements. The difficulties come from mainly the complexity of configuring voluminous component methods in large-scale component based systems and some quality constraints on the configurations, for example, the completeness, consistency and performance overhead of configurations. In this paper, we propose a requirements model driven approach for automatic generation of J2EE access control configurations and demonstrate the approach in a J2EE blueprint application.","PeriodicalId":6446,"journal":{"name":"2008 The Eighth International Conference on Quality Software","volume":"8 1","pages":"87-96"},"PeriodicalIF":0.0,"publicationDate":"2008-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84573357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
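The declarative configurations the paper targets are standard J2EE deployment-descriptor entries; for EJBs they take roughly the following `ejb-jar.xml` form (the bean, role, and method names here are invented for illustration):

```xml
<assembly-descriptor>
  <security-role>
    <role-name>manager</role-name>
  </security-role>
  <method-permission>
    <!-- only callers in the "manager" role may invoke AccountBean.approveLoan -->
    <role-name>manager</role-name>
    <method>
      <ejb-name>AccountBean</ejb-name>
      <method-name>approveLoan</method-name>
    </method>
  </method-permission>
</assembly-descriptor>
```

Writing such an entry for every sensitive method, completely, consistently, and without redundant permissions, is exactly the configuration burden that generating these descriptors from a requirements model aims to remove.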