BinClone: Detecting Code Clones in Malware
Mohammad Reza Farhadi, B. Fung, P. Charland, M. Debbabi
To gain an in-depth understanding of the behaviour of malware, reverse engineers have to disassemble it, analyze the resulting assembly code, and then archive the commented assembly code in a malware repository for future reference. In this paper, we develop an assembly code clone detection system called BinClone to identify code clone fragments in a collection of malware binaries, with the following major contributions. First, we introduce two deterministic clone detection methods with the goals of improving the recall rate and facilitating malware analysis. Second, our methods allow malware analysts to discover both exact and inexact clones at different token normalization levels. Third, we evaluate our proposed clone detection methods on real-life malware binaries. To the best of our knowledge, this is the first work that studies the problem of assembly code clone detection for malware analysis.
{"title":"BinClone: Detecting Code Clones in Malware","authors":"Mohammad Reza Farhadi, B. Fung, P. Charland, M. Debbabi","doi":"10.1109/SERE.2014.21","DOIUrl":"https://doi.org/10.1109/SERE.2014.21","url":null,"abstract":"To gain an in-depth understanding of the behaviour of a malware, reverse engineers have to disassemble the malware, analyze the resulting assembly code, and then archive the commented assembly code in a malware repository for future reference. In this paper, we have developed an assembly code clone detection system called BinClone to identify the code clone fragments from a collection of malware binaries with the following major contributions. First, we introduce two deterministic clone detection methods with the goals of improving the recall rate and facilitating malware analysis. Second, our methods allow malware analysts to discover both exact and inexact clones at different token normalization levels. Third, we evaluate our proposed clone detection methods on real-life malware binaries. To the best of our knowledge, this is the first work that studies the problem of assembly code clone detection for malware analysis.","PeriodicalId":248957,"journal":{"name":"2014 Eighth International Conference on Software Security and Reliability","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123371772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Effective Regression Testing Using Requirements and Risks
Charitha Hettiarachchi, Hyunsook Do, Byoungju Choi

The use of system requirements and their risks enables software testers to identify the more important test cases, those that can reveal faults associated with risky components. Having identified those test cases, software testers can manage the testing schedule more effectively by running them earlier, so that faults are fixed sooner. Some work in this area has been done, but previous approaches and studies have limitations, such as an improper use of requirements risks in prioritization and an inadequate evaluation method. To address these limitations, we implemented a new requirements risk-based prioritization technique and evaluated whether it can detect faults earlier overall, and whether it can detect faults associated with risky components earlier. Our results indicate that the proposed approach is effective at detecting faults early, and is even better than existing techniques at finding faults associated with risky components of the system earlier.
{"title":"Effective Regression Testing Using Requirements and Risks","authors":"Charitha Hettiarachchi, Hyunsook Do, Byoungju Choi","doi":"10.1109/SERE.2014.29","DOIUrl":"https://doi.org/10.1109/SERE.2014.29","url":null,"abstract":"The use of system requirements and their risks enables software testers to identify more important test cases that can reveal faults associated with risky components. Having identified those test cases, software testers can manage the testing schedule more effectively by running such test cases earlier so that they can fix faults sooner. Some work in this area has been done, but the previous approaches and studies have some limitations, such as an improper use of requirements risks in prioritization and an inadequate evaluation method. To address the limitations, we implemented a new requirements risk-based prioritization technique and evaluated it considering whether the proposed approach can detect faults earlier overall. It can also detect faults associated with risky components earlier. Our results indicate that the proposed approach is effective for detecting faults early and even better for finding faults associated with risky components of the system earlier than the existing techniques.","PeriodicalId":248957,"journal":{"name":"2014 Eighth International Conference on Software Security and Reliability","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122620959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Generating Test Cases for Context-Aware Applications Using Bigraphs
Lian Yu, W. Tsai, Yanbing Jiang, J. Gao

Context-aware applications often consist of a middleware and a collection of services that run autonomously and adapt to changing environments, where a variety of sensors are installed in physical facilities and end-users move around. Testing such context-aware applications is challenging due to the complex interactions among the components, and especially due to the complexity of environment modeling. This paper extends a bigraphical sorting predicate logic as constraints to create a meta-model, builds a data model based on the bigraphical meta-model, and proposes to use the sorted bigraphical reaction system (BRS) to model context-aware environments. Tracing the interactions between the BRS model and the middleware model generates test cases that verify the interactions between the context-aware environments and the middleware, together with the domain services. To decrease the number of test cases, this paper proposes a bigraphical pattern flow testing strategy. An airport example demonstrates the approach's fault detection capabilities and its reduction in test cases.
{"title":"Generating Test Cases for Context-Aware Applications Using Bigraphs","authors":"Lian Yu, W. Tsai, Yanbing Jiang, J. Gao","doi":"10.1109/SERE.2014.27","DOIUrl":"https://doi.org/10.1109/SERE.2014.27","url":null,"abstract":"Context-aware applications often consist of a middleware and a collection of services, which run autonomously adaptive to the changing environments, where a variety of sensors are installed in physical facilities, with end-users moving around. Testing such context-aware applications is challenging due to the complex interactions among the components, especially for the complicated environment modeling. This paper extends a bigraphical sorting predicate logic as constraints to create a meta-model, builds a data model based on the bigraphical meta-model, and proposes to use the sorted bigraphical reaction system (BRS) to model the context-aware environments. Tracing the interactions between the BRS model and the middleware model generates the test cases to verify the interactions between the context-aware environments and the middleware together with the domain services. To decrease the number of test cases, this paper proposes a bigraphical pattern flow testing strategy. An example airport is demonstrated to show fault detection capabilities and reductions of test cases.","PeriodicalId":248957,"journal":{"name":"2014 Eighth International Conference on Software Security and Reliability","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124812639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

SeTGaM: Generalized Technique for Regression Testing Based on UML/OCL Models
Elizabeta Fourneret, J. Cantenot, F. Bouquet, B. Legeard, Julien Botella
In this paper, we introduce SeTGaM, a Model-Based Regression Testing (MBRT) approach based on UML/OCL behavioral models. SeTGaM is a test selection and classification approach that also generates new tests to cover new functionalities in a new version of a system. We extract the behavior of the system from the guards/transitions of state charts, or from the pre/postconditions of operations in class diagrams, and apply impact analysis to it. This makes it possible to apply our approach both to models that use state charts and class diagrams and to models without state charts (consisting only of class diagrams), which makes the technique applicable to a larger number of industrial systems. We also propose to reduce the number of false-positive dependencies by using a constraint solver. We implemented our approach as a plug-in for IBM Rational Software Architect and evaluated the tool on two case study systems, including an industrial system from the smart card domain. The evaluation confirms that the approach is effective in identifying changes and reducing the effort needed to test a new version of the system. The results also show that the approach is efficient, with execution times of two to three minutes in most cases. SeTGaM was also able to precisely identify all modification-revealing tests.
{"title":"SeTGaM: Generalized Technique for Regression Testing Based on UML/OCL Models","authors":"Elizabeta Fourneret, J. Cantenot, F. Bouquet, B. Legeard, Julien Botella","doi":"10.1109/SERE.2014.28","DOIUrl":"https://doi.org/10.1109/SERE.2014.28","url":null,"abstract":"In this paper we introduce SeTGaM, a Model-Based Regression Testing (MBRT) approach based on UML/OCL behavioral models. SeTGaM is a test selection and classification approach that also generates new tests to cover new functionalities of a new version of a system. We extract the behavior of the system from guards/transitions of state charts or pre/post conditions in operations of class diagrams to which we apply impact analysis. This makes it possible to apply our approach to models that use state charts and class diagrams or models without state charts (that only consist of class diagrams). This makes the technique applicable to a larger number of industrial systems. We also propose to reduce the number of false positive dependencies by using a constraint solver. We implemented our approach as plug in for IBM Rational Software Architect and evaluated the tool on two case study systems including an industrial system from the smart card domain. The evaluation confirms that the approach is effective in identifying changes and reducing the effort needed to test a new version of the system. The results also show that the approach is efficient with execution times between 2-3 minutes for most cases. SeTGaM was also able to precisely identify all modification revealing tests.","PeriodicalId":248957,"journal":{"name":"2014 Eighth International Conference on Software Security and Reliability","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121141804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Rule-Based Test Input Generation from Bytecode
Weifeng Xu, Tao Ding, Dianxiang Xu

Search-based test generators, such as those using genetic algorithms and the alternating variable method, can automatically generate test inputs. They typically rely on fitness functions to calculate fitness scores that guide the search process. This paper presents a novel rule-based testing (RBT) approach to the automated generation of test inputs from Java bytecode without using fitness functions. It extracts tagged paths from the control flow graph of the given bytecode, analyzes and monitors the predicates in the tagged paths at runtime, and generates test inputs using predefined rules. Our case studies show that RBT outperforms test input generators based on genetic algorithms and the alternating variable method.
{"title":"Rule-Based Test Input Generation from Bytecode","authors":"Weifeng Xu, Tao Ding, Dianxiang Xu","doi":"10.1109/SERE.2014.24","DOIUrl":"https://doi.org/10.1109/SERE.2014.24","url":null,"abstract":"Search-based test generators, such as those using genetic algorithms and alternative variable methods, can automatically generate test inputs. They typically rely on fitness functions to calculate fitness scores for guiding the search process. This paper presents a novel rule-based testing (RBT) approach to automated generation of test inputs from Java byte code without using fitness functions. It extracts tagged paths from the control flow graph of given byte code, analyzes and monitors the predicates in the tagged paths at runtime, and generates test inputs using predefined rules. Our case studies show that RBT has outperformed the test input generators using genetic algorithms and alternative variable methods.","PeriodicalId":248957,"journal":{"name":"2014 Eighth International Conference on Software Security and Reliability","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124146953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A Modal Model of Stuxnet Attacks on Cyber-physical Systems: A Matter of Trust
Gerry Howser, B. McMillin

Multiple Security Domains Nondeducibility (MSDND) yields results even when an attack hides important information from electronic monitors and human operators. Because MSDND is based upon modal frames, it can analyze the event system as it progresses rather than relying on traces of the system. Not only does it provide results as the system evolves, MSDND can also point out attacks designed to be missed by other security models. This work examines information flow disruption attacks such as Stuxnet and formally explains the role that implicit trust in the cyber security of a cyber-physical system (CPS) plays in the success of the attack. The fact that the attack hides behind MSDND can be used to help secure the system: modifications that break MSDND leave the attack nowhere to hide. Modal operators are defined to allow the manipulation of belief and trust states within the model. We show how the attack hides and uses the operator's trust to remain undetected; in fact, trust in the CPS is key to the success of the attack.
{"title":"A Modal Model of Stuxnet Attacks on Cyber-physical Systems: A Matter of Trust","authors":"Gerry Howser, B. McMillin","doi":"10.1109/SERE.2014.36","DOIUrl":"https://doi.org/10.1109/SERE.2014.36","url":null,"abstract":"Multiple Security Domains Nondeducibility, MSDND, yields results even when the attack hides important information from electronic monitors and human operators. Because MSDND is based upon modal frames, it is able to analyze the event system as it progresses rather than relying on traces of the system. Not only does it provide results as the system evolves, MSDND can point out attacks designed to be missed in other security models. This work examines information flow disruption attacks such as Stuxnet and formally explains the role that implicit trust in the cyber security of a cyber physical system (CPS) plays in the success of the attack. The fact that the attack hides behind MSDND can be used to help secure the system by modifications to break MSDND and leave the attack nowhere to hide. Modal operators are defined to allow the manipulation of belief and trust states within the model. We show how the attack hides and uses the operator's trust to remain undetected. In fact, trust in the CPS is key to the success of the attack.","PeriodicalId":248957,"journal":{"name":"2014 Eighth International Conference on Software Security and Reliability","volume":"332 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132968602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Providing Hardware Redundancy for Highly Available Services in Virtualized Environments
Azadeh Jahanbanifar, F. Khendek, M. Toeroe

High availability requires hardware and software redundancy. Virtualization is one technique, among others, for improving the utilization of hardware resources: it creates virtual (rather than actual) versions of hardware, operating systems, etc., and collocates them on the same hardware. In virtualized environments, virtual machines (VMs) are used for the deployment of software entities. When the VMs hosting redundant software entities that provide and protect a service are collocated on the same physical node, the hardware redundancy is lost, and a failure of this physical node leads directly to a service outage. To achieve high availability, we need to avoid such single points of failure, even in the presence of VM migration. This paper tackles this issue in the context of a standardized middleware for service high availability.
{"title":"Providing Hardware Redundancy for Highly Available Services in Virtualized Environments","authors":"Azadeh Jahanbanifar, F. Khendek, M. Toeroe","doi":"10.1109/SERE.2014.17","DOIUrl":"https://doi.org/10.1109/SERE.2014.17","url":null,"abstract":"High-Availability requires hardware and software redundancy. Virtualization is a technique - among others - for improving the utilization of hardware resources by making virtual (rather than actual) versions of hardware, operating system, etc. and collocating them on the same hardware. In virtualized environments virtual machines (VMs) are used for the deployment of the software entities. When VMs hosting redundant software entities providing and protecting some service are collocated on the same physical node, the hardware redundancy is lost and the failure of this physical node certainly leads to service outage. To achieve high availability, we need to avoid such single points of failure even in the presence of VM migration. This paper tackles this issue in the context of a standardized middleware for service high-availability.","PeriodicalId":248957,"journal":{"name":"2014 Eighth International Conference on Software Security and Reliability","volume":"123 14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132399183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Security Test Generation by Answer Set Programming
Philipp Zech, M. Felderer, Basel Katt, R. Breu

Security testing is still a hard task, especially when focusing on non-functional security testing. The two main reasons for this are, first, a lack of the knowledge necessary for security testing and, second, the difficulty of managing the almost infinite number of negative test cases that result from potential security risks. To the best of our knowledge, the automatic incorporation of security expert knowledge, e.g., known vulnerabilities, exploits, and attacks, into the process of security testing is not well considered in the literature. Furthermore, well-known "de facto" security testing approaches, like fuzzing or penetration testing, lack systematic procedures regarding the order of execution of test cases, which renders security testing a cumbersome task. Hence, in this paper we propose a new method for generating negative security tests by logic programming, which applies a risk analysis to establish a set of negative requirements for later test generation.
{"title":"Security Test Generation by Answer Set Programming","authors":"Philipp Zech, M. Felderer, Basel Katt, R. Breu","doi":"10.1109/SERE.2014.22","DOIUrl":"https://doi.org/10.1109/SERE.2014.22","url":null,"abstract":"Security testing still is a hard task, especially if focusing on non-functional security testing. The two main reasons behind this are, first, at the most a lack of the necessary knowledge required for security testing, second, managing the almost infinite amount of negative test cases, which result from potential security risks. To the best of our knowledge, the issue of the automatic incorporation of security expert knowledge, e.g., known vulnerabilities, exploits and attacks, in the process of security testing is not well considered in the literature. Furthermore, well-known \"de facto\" security testing approaches, like fuzzing or penetration testing, lack systematic procedures regarding the order of execution of test cases, which renders security testing a cumbersome task. Hence, in this paper we propose a new method for generating negative security tests by logic programming, which applies a risk analysis to establish a set of negative requirements for later test generation.","PeriodicalId":248957,"journal":{"name":"2014 Eighth International Conference on Software Security and Reliability","volume":"156 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133079393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Game-Theoretic Strategy Analysis for Data Reliability Management in Cloud Storage Systems
Chung-Yi Lin, Wen-Guey Tzeng

Replication is the simplest way to achieve data reliability in cloud storage systems. Nevertheless, replication incurs storage overhead for the cloud storage provider (CSP). To verify a CSP's reliability, users can audit the CSP with remote data integrity checking. However, auditing incurs a cost for users. Thus, CSPs and users are in a conflict situation, where users prefer less auditing and CSPs prefer less replication. In this paper, we provide a game-theoretic analysis to derive optimal strategies for users and the CSP, and we use the pricing strategy adopted by Amazon S3 to illustrate our analysis. Our results show that a user should audit if the CSP's reduced number of data copies is less than 1.81. If the CSP believes that a user's probability of staying is lower, it should provide a larger discount or more copies. According to this study, a user has a criterion for paying less auditing cost, and the CSP can set its reliability and pricing policy to keep users in business.
{"title":"Game-Theoretic Strategy Analysis for Data Reliability Management in Cloud Storage Systems","authors":"Chung-Yi Lin, Wen-Guey Tzeng","doi":"10.1109/SERE.2014.32","DOIUrl":"https://doi.org/10.1109/SERE.2014.32","url":null,"abstract":"Replication is the simplest way to achieve data reliability in cloud storage systems. Nevertheless, replication incurs storage overhead to the cloud storage provider (CSP). To verify CSPs' reliability, users can audit CSPs with remote data integrity checking. However, the auditing incurs cost to users. Thus, CSPs and users involve a conflict situation, where users prefer less auditing and CSPs prefer less replication. In this paper, we provide a game-theoretic analysis to get optimal strategies for users and CSP. We use the pricing strategy adopted by Amazon S3 to explain our analysis. Our results show that a user should audit if CSP's reduced data copies are less than 1:81. If CSP believes lower user's staying probability, it should provide more discount or copies. According to this study, a user has the criterion for paying less auditing cost and CSP makes the reliability and pricing policy to keep users in business.","PeriodicalId":248957,"journal":{"name":"2014 Eighth International Conference on Software Security and Reliability","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121176521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Reliable Repair Mechanisms with Low Connection Cost for Code Based Distributed Storage Systems
Hsiao-Ying Lin, Li-Ping Tung, B. Lin

Erasure codes are applied in distributed storage systems to provide fault tolerance with lower storage overhead than replication. Decentralized erasure codes were later proposed for decentralized or loosely organized storage systems. Repair mechanisms aim at maintaining redundancy over time so that stored data remain retrievable. Two recent repair mechanisms, Noop and Coop, were designed for decentralized erasure code based distributed storage systems to minimize connection cost analytically. We propose a generalized repair framework that includes Noop and Coop as two extreme cases. We then investigate the trade-off between connection cost and data retrievability experimentally within our repair framework. Our results show that reasonable data retrievability is achievable with a constant connection cost that is lower than the previously derived analytical values. These results are valuable references for a system manager building a reliable storage system with low connection cost.
{"title":"Reliable Repair Mechanisms with Low Connection Cost for Code Based Distributed Storage Systems","authors":"Hsiao-Ying Lin, Li-Ping Tung, B. Lin","doi":"10.1109/SERE.2014.37","DOIUrl":"https://doi.org/10.1109/SERE.2014.37","url":null,"abstract":"Erasure codes are applied in distributed storage systems for fault-tolerance with lower storage overhead than replications. Later, decentralized erasure codes are proposed for decentralized or loosely-organized storage systems. Repair mechanisms aim at maintaining redundancy over time such that stored data are still retrievable. Two recent repair mechanisms, Noop and Coop, are designed for decentralized erasure code based distributed storage system to minimize connection cost in theoretical manner. We propose a generalized repair framework, which includes Noop and Coop as two extreme cases. We then investigate trade-off between connection cost and data retrievability from an experimental aspect in our repair framework. Our results show that a reasonable data retrievability is achievable with constant connection cost, which is less than previously analytical values. These results are valuable references for a system manager to build a reliable storage system with low connection cost.","PeriodicalId":248957,"journal":{"name":"2014 Eighth International Conference on Software Security and Reliability","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124959027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}