Applying selective mutation strategies to the AsmetaL language
Pub Date: 2017-01-11 | DOI: 10.1049/iet-sen.2015.0030
Osama Alkrarha, J. Hassine
Abstract state machines (ASMs) were introduced as a computation model that is more powerful and more universal than standard computation models. Early validation of ASM models helps reduce the cost and risk of defects propagating, through refinement, to other models and eventually to code, thereby degrading the quality of the end product. Mutation testing is a well-established fault-based technique for assessing and improving the quality of test suites, but little research has been devoted to mutation analysis in the context of ASMs. Mutation testing is also known to be computationally expensive because of the large number of generated mutants that must be executed against a test set. In this study, the authors empirically investigate the application of cost-reduction strategies to AsmetaL, an ASM-based formal language. They experimentally evaluate the effectiveness of, and the savings obtained from, two techniques, random mutant selection and operator-based selective mutation, in the context of the AsmetaL language. The quantitative results show that both techniques achieve good savings without a major impact on effectiveness.
{"title":"Applying selective mutation strategies to the AsmetaL language","authors":"Osama Alkrarha, J. Hassine","doi":"10.1049/iet-sen.2015.0030","DOIUrl":"https://doi.org/10.1049/iet-sen.2015.0030","url":null,"abstract":"Abstract state machines (ASMs) have been introduced as a computation model that is more powerful and more universal than standard computation models. The early validation of ASM models would help reduce the cost and risk of having defects propagate, through refinement, to other models, and eventually to code; thus, adversely affecting the quality of the end product. Mutation testing is a well-established fault-based technique for assessing and improving the quality of test suites. However, little research has been devoted to mutation analysis in the context of ASMs. Mutation testing is known to be computationally expensive due to the large number of generated mutants that are executed against a test set. In this study, the authors empirically investigate the application of cost reduction strategies to AsmetaL, an ASM-based formal language. Furthermore, they evaluate experimentally the effectiveness and the savings resulting from applying two techniques: namely, random mutants selection and operator-based selective mutation, in the context of the AsmetaL language. The quantitative results show that both techniques achieved good savings without major impact on effectiveness.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"17 1","pages":"292-300"},"PeriodicalIF":0.0,"publicationDate":"2017-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85998916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart fuzzing method for detecting stack-based buffer overflow in binary codes
Pub Date: 2016-08-01 | DOI: 10.1049/iet-sen.2015.0039
Maryam Mouzarani, B. Sadeghiyan, M. Zolfaghari
Over the past decades, several methods have been proposed to detect the stack-based buffer overflow vulnerability, yet it remains a serious threat to computer systems. Among the suggested methods, various fuzzers have been proposed to detect this vulnerability; however, many of them are not smart enough to achieve high code coverage and to detect vulnerabilities along feasible execution paths of the program. The authors present a new smart fuzzing method for detecting stack-based buffer overflows in binary code. In the proposed method, concolic (concrete + symbolic) execution is used to calculate the path and vulnerability constraints for each execution path in the program. The vulnerability constraints determine which parts of the input data should be extended, and to what length, to cause a buffer overflow in an execution path. Based on the calculated constraints, the authors generate test data that detect buffer overflows in feasible execution paths of the program. They have implemented the proposed method as a plug-in for Valgrind and tested it on three groups of benchmark programs. The results demonstrate that the calculated vulnerability constraints are accurate and that the fuzzer is able to detect the vulnerabilities in these programs. The authors have also compared the implemented fuzzer with three other fuzzers and demonstrated how calculating the path and vulnerability constraints helps to fuzz a program more efficiently.
{"title":"Smart fuzzing method for detecting stack-based buffer overflow in binary codes","authors":"Maryam Mouzarani, B. Sadeghiyan, M. Zolfaghari","doi":"10.1049/iet-sen.2015.0039","DOIUrl":"https://doi.org/10.1049/iet-sen.2015.0039","url":null,"abstract":"During the past decades several methods have been proposed to detect the stack-based buffer overflow vulnerability, though it is still a serious threat to the computer systems. Among the suggested methods, various fuzzers have been proposed to detect this vulnerability. However, many of them are not smart enough to have high code-coverage and detect vulnerabilities in feasible execution paths of the program. The authors present a new smart fuzzing method for detecting stack-based buffer overflows in binary codes. In the proposed method, concolic (concrete + symbolic) execution is used to calculate the path and vulnerability constraints for each execution path in the program. The vulnerability constraints determine which parts of input data and to what length should be extended to cause buffer overflow in an execution path. Based on the calculated constraints, the authors generate test data that detect buffer overflows in feasible execution paths of the program. The authors have implemented the proposed method as a plug-in for Valgrind and tested it on three groups of benchmark programs. The results demonstrate that the calculated vulnerability constraints are accurate and the fuzzer is able to detect the vulnerabilities in these programs. The authors have also compared the implemented fuzzer with three other fuzzers and demonstrated how calculating the path and vulnerability constraints in the method helps to fuzz a program more efficiently.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"218 1","pages":"96-107"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79767072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving semantic compression specification in large relational database
Pub Date: 2016-08-01 | DOI: 10.1049/iet-sen.2015.0054
S. Darwish
Large-scale relational databases are typically both very large and highly sparse, which makes database compression important for improving performance and saving storage space. Standard (syntactic) compression techniques such as Gzip or Zip do not take advantage of relational properties, as they do not consider the nature of the data. Semantic compression, in contrast, exploits both the meanings and allowable error ranges of individual attributes (lossy compression) and the dependencies and correlations between attributes in a table (lossless compression), which makes it very effective for table-data compression. Inspired by semantic compression, this study proposes a novel lossless compression system that uses a data-mining model to find the frequent pattern with maximum gain (the representative row), from which attribute semantics are derived, together with a modified augmented vector quantisation coder to increase the total throughput of database compression. The algorithm is fine-grained and suitable for many kinds of massive data tables, jointly considering compression ratio, space and speed. Experiments with several very large real-life datasets indicate that the system outperforms previously known lossless semantic techniques.
{"title":"Improving semantic compression specification in large relational database","authors":"S. Darwish","doi":"10.1049/iet-sen.2015.0054","DOIUrl":"https://doi.org/10.1049/iet-sen.2015.0054","url":null,"abstract":"The large-scale relational databases normally have a large size and a high degree of sparsity. This has made database compression very important to improve the performance and save storage space. Using standard compression techniques (syntactic) such as Gzip or Zip does not take advantage of the relational properties, as these techniques do not look at the nature of the data. Since semantic compression accounts for and exploits both the meanings and dynamic ranges of error for individual attributes (lossy compression); and existing data dependencies and correlations between attributes in the table (lossless compression), it is very effective for table-data compression. Inspired by semantic compression, this study proposes a novel independent lossless compression system through utilising data-mining model to find the frequent pattern with maximum gain (representative row) in order to draw attribute semantics, besides a modified version of an augmented vector quantisation coder to increase total throughput of the database compression. This algorithm enables more granular and suitable for every kind of massive data tables after synthetically considering compression ratio, space, and speed. The experimentation with several very large real-life datasets indicates the superiority of the system with respect to previously known lossless semantic techniques.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"1 1","pages":"108-115"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78510765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lightweight approach for multi-objective web service composition
Pub Date: 2016-08-01 | DOI: 10.1049/iet-sen.2014.0155
J. Liao, Yang Liu, Jing Wang, Jingyu Wang, Q. Qi
Service composition is an efficient way to implement complex business-process services in heterogeneous environments. Existing service selection methods mainly use fitness functions or constraint techniques to convert multi-objective service composition problems into single-objective ones. These methods require a priori knowledge of the problem's solution space. Moreover, each run yields only one solution, so users can hardly obtain evenly distributed solutions at an acceptable computational cost. The authors propose a lightweight particle swarm optimisation service selection algorithm for multi-objective service composition problems. Simulation results show that the proposed algorithm surpasses the comparison algorithm in approximation, coverage and execution time.
{"title":"Lightweight approach for multi-objective web service composition","authors":"J. Liao, Yang Liu, Jing Wang, Jingyu Wang, Q. Qi","doi":"10.1049/iet-sen.2014.0155","DOIUrl":"https://doi.org/10.1049/iet-sen.2014.0155","url":null,"abstract":"Service composition is an efficient way to implement a service of complex business process in heterogeneous environment. Existing service selection methods mainly utilise fitness function or constraint technique to convert multiple objectives service composition problems to single objective ones. These methods need to take effect with priori knowledge of problem's solution space. Besides, in each execution only one solution can be obtained, hence, users can hardly acquire evenly distributed solutions with acceptable computation cost. The authors also propose a lightweight particle swarm optimisation service selection algorithm for multi-objective service composition problems. Simulation results illustrate that the proposed algorithm surpasses the comparative algorithm in approximation, coverage and execution time.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"67 1","pages":"116-124"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85224438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Malware detection: program run length against detection rate
Pub Date: 2014-01-23 | DOI: 10.1049/iet-sen.2013.0020
Philip O'Kane, S. Sezer, K. Mclaughlin, E. Im
N-gram analysis is an approach that investigates the structure of a program using bytes, characters or text strings. This research uses dynamic analysis to investigate malware detection using a classification approach based on N-gram analysis. A key issue with dynamic analysis is the length of time a program has to be run to ensure a correct classification. The motivation for this research is to find the optimum subset of operational codes (opcodes) that makes the best indicator of malware, and to determine how long a program has to be monitored to ensure an accurate support vector machine (SVM) classification of benign and malicious software. The experiments in this study represent programs as opcode density histograms obtained through dynamic analysis over different program run periods. An SVM is used as the program classifier to determine the ability of different program run lengths to correctly detect the presence of malicious software. The findings show that malware can be detected with different program run lengths using a small number of opcodes.
{"title":"Malware detection: program run length against detection rate","authors":"Philip O'Kane, S. Sezer, K. Mclaughlin, E. Im","doi":"10.1049/iet-sen.2013.0020","DOIUrl":"https://doi.org/10.1049/iet-sen.2013.0020","url":null,"abstract":"N-gram analysis is an approach that investigates the structure of a program using bytes, characters or text strings. This research uses dynamic analysis to investigate malware detection using a classification approach based on N-gram analysis. A key issue with dynamic analysis is the length of time a program has to be run to ensure a correct classification. The motivation for this research is to find the optimum subset of operational codes (opcodes) that make the best indicators of malware and to determine how long a program has to be monitored to ensure an accurate support vector machine (SVM) classification of benign and malicious software. The experiments within this study represent programs as opcode density histograms gained through dynamic analysis for different program run periods. A SVM is used as the program classifier to determine the ability of different program run lengths to correctly determine the presence of malicious software. The findings show that malware can be detected with different program run lengths using a small number of opcodes.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"24 1","pages":"42-51"},"PeriodicalIF":0.0,"publicationDate":"2014-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89587458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Case study on software refactoring tactics
Pub Date: 2014-01-23 | DOI: 10.1049/iet-sen.2012.0121
Hui Liu, Yang Liu, Xue Guo, Yuanyuan Gao
Refactoring can be performed using two different tactics: root canal refactoring and floss refactoring. Root canal refactoring sets aside an extended period specifically for refactoring, whereas floss refactoring interleaves refactorings with other programming tasks. However, no large-scale case study on refactoring tactics is available. To this end, the authors carry out a case study to investigate the following research questions. (i) How often are root canal refactoring and floss refactoring employed, respectively? (ii) Are some kinds of refactorings more likely than others to be applied as floss refactorings or root canal refactorings? (iii) Do engineers who employ both tactics show a clear bias towards or against either tactic? The authors analyse usage data collected by the Eclipse Usage Data Collector. Results suggest that about 14% of refactorings are root canal refactorings. These findings reconfirm the hypothesis that, in general, floss refactoring is more common than root canal refactoring; the relative popularity of root canal refactoring, however, is much higher than expected. Some kinds of refactorings are also more likely than others to be performed as root canal refactorings. Results further suggest that engineers who have tried both tactics clearly tended towards root canal refactoring.
{"title":"Case study on software refactoring tactics","authors":"Hui Liu, Yang Liu, Xue Guo, Yuanyuan Gao","doi":"10.1049/iet-sen.2012.0121","DOIUrl":"https://doi.org/10.1049/iet-sen.2012.0121","url":null,"abstract":"Refactorings might be done using two different tactics: root canal refactoring and floss refactoring. Root canal refactoring is to set aside an extended period specially for refactoring. Floss refactoring is to interleave refactorings with other programming tasks. However, no large-scale case study on refactoring tactics is available. To this end, the authors carry out a case study to investigate the following research questions. (i) How often are root canal refactoring and floss refactoring employed, respectively? (ii) Are some kinds of refactorings more likely than others to be applied as floss refactorings or root canal refactorings? (iii) Do engineers employing both tactics have obvious bias to or against either of the tactics? They analyse the usage data information collected by Eclipse usage data collector. Results suggest that about 14% of refactorings are root canal refactorings. These findings reconfirm the hypothesis that, in general, floss refactoring is more common than root canal refactoring. The relative popularity of root canal refactoring, however, is much higher than expected. They also find that some kinds of refactorings are more likely than others to be performed as root canal refactorings. Results also suggest that engineers who have explored both tactics obviously tended towards root canal refactoring.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"1 1","pages":"1-11"},"PeriodicalIF":0.0,"publicationDate":"2014-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89877857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Power evaluation methods for data encryption algorithms
Pub Date: 2014-01-23 | DOI: 10.1049/IET-SEN.2012.0137
Tingyuan Nie, Lijian Zhou, Zhe-ming Lu
With the increasingly extensive application of networking technology, network security has become more significant than ever before. Encryption algorithms play a key role in the construction of secure network systems. However, an encryption algorithm implemented on a resource-constrained device has difficulty achieving ideal performance, so power consumption becomes essential to the performance of data encryption algorithms. Many methods have been proposed to evaluate the power consumption of encryption algorithms, yet it remains unclear which ones are effective. In this study, the authors give a comprehensive review of power evaluation methods. They then design a series of experiments to evaluate the effectiveness of three main types of methods by implementing several traditional symmetric encryption algorithms on a workstation. The experimental results show that external measurement and software profiling are more accurate than the uninterruptible power system battery method; the improvements are 27.44% and 33.53%, which implies that external measurement and software profiling are more effective for power consumption evaluation.
{"title":"Power evaluation methods for data encryption algorithms","authors":"Tingyuan Nie, Lijian Zhou, Zhe-ming Lu","doi":"10.1049/IET-SEN.2012.0137","DOIUrl":"https://doi.org/10.1049/IET-SEN.2012.0137","url":null,"abstract":"With the increasingly extensive application of networking technology, security of network becomes significant than ever before. Encryption algorithm plays a key role in construction of a secure network system. However, the encryption algorithm implemented on resource-constrained device is difficult to achieve ideal performance. The issue of power consumption becomes essential to performance of data encryption algorithm. Many methods are proposed to evaluate the power consumption of encryption algorithms yet the authors do not ensure which one is effective. In this study, they give a comprehensive review for the methods of power evaluation. They then design a series of experiments to evaluate the effectiveness of three main types of methods by implementing several traditional symmetric encryption algorithms on a workstation. The experimental results show that external measurement and software profiling are more accurate than that of uninterruptible power system battery. The improvement of power consumption is 27.44 and 33.53% which implies the method of external measurement and software profiling is more effective in power consumption evaluation.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"552 1","pages":"12-18"},"PeriodicalIF":0.0,"publicationDate":"2014-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86989765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved document ranking in ontology-based document search engine using evidential reasoning
Pub Date: 2014-01-23 | DOI: 10.1049/iet-sen.2013.0015
Wenhu Tang, Long Yan, Zhen Yang, Q. Wu
This study presents a novel approach to document ranking in an ontology-based document search engine (ODSE) using evidential reasoning (ER). First, a domain ontology model, used for query expansion, and a connection interface to an ODSE are developed. A multiple attribute decision making (MADM) tree model is proposed to organise the expanded query terms. Then, an ER algorithm based on Dempster-Shafer theory is used for evidence combination in the MADM tree model. The proposed approach is presented in a generic frame for document ranking and is evaluated using document queries in the domain of electrical substation fault diagnosis. The results show that the proposed approach provides a suitable solution to document ranking, and that the precision at the same recall levels for ODSE searches is improved significantly with ER embedded, in comparison with a traditional keyword-matching search engine, an ODSE without ER and a non-randomness-based weighting model.
{"title":"Improved document ranking in ontology-based document search engine using evidential reasoning","authors":"Wenhu Tang, Long Yan, Zhen Yang, Q. Wu","doi":"10.1049/iet-sen.2013.0015","DOIUrl":"https://doi.org/10.1049/iet-sen.2013.0015","url":null,"abstract":"This study presents a novel approach to document ranking in an ontology-based document search engine (ODSE) using evidential reasoning (ER). Firstly, a domain ontology model, used for query expansion, and a connection interface to an ODSE are developed. A multiple attribute decision making (MADM) tree model is proposed to organise expanded query terms. Then, an ER algorithm, based on the Dempster-Shafer theory, is used for evidence combination in the MADM tree model. The proposed approach is discussed in a generic frame for document ranking, which is evaluated using document queries in the domain of electrical substation fault diagnosis. The results show that the proposed approach provides a suitable solution to document ranking and the precision at the same recall levels for ODSE searches have been improved significantly with ER embedded, in comparison with a traditional keyword-matching search engine, an ODSE without ER and a non-randomness-based weighting model.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"45 1","pages":"33-41"},"PeriodicalIF":0.0,"publicationDate":"2014-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82392841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Framework for the declarative implementation of native mobile applications
Pub Date: 2014-01-23 | DOI: 10.1049/iet-sen.2012.0194
Patricia Miravet, Ignacio Marín, Francisco Ortin, Javier Rodríguez
The development of connected mobile applications for a broad audience is a complex task because of the existing device diversity. In order to alleviate this situation, device-independent approaches aim to implement platform-independent applications, hiding the differences among the diverse families and models of mobile devices. Most existing approaches are based on the imperative definition of applications, which are either compiled to a native application or executed in a Web browser. The client and server sides of applications are implemented separately, using different mechanisms for data synchronisation. In this study, the authors propose device-independent mobile application generation (DIMAG), a framework for defining native device-independent client-server applications based on the declarative specification of application workflow, state and data synchronisation, user interface and data queries. The authors have designed DIMAG considering the dynamic addition of new types of devices and facilitating the generation of applications for new target platforms. DIMAG has been implemented taking advantage of existing standards.
{"title":"Framework for the declarative implementation of native mobile applications","authors":"Patricia Miravet, Ignacio Marín, Francisco Ortin, Javier Rodríguez","doi":"10.1049/iet-sen.2012.0194","DOIUrl":"https://doi.org/10.1049/iet-sen.2012.0194","url":null,"abstract":"The development of connected mobile applications for a broad audience is a complex task because of the existing device diversity. In order to soothe this situation, device-independent approaches are aimed at implementing platform-independent applications, hiding the differences among the diverse families and models of mobile devices. Most of the existing approaches are based on the imperative definition of applications, which are either compiled to a native application, or executed in a Web browser. The client and server sides of applications are implemented separately, using different mechanisms for data synchronisation. In this study, the authors propose device-independent mobile application generation (DIMAG), a framework for defining native device-independent client-server applications based on the declarative specification of application workflow, state and data synchronisation, user interface and data queries. The authors have designed DIMAG considering the dynamic addition of new types of devices, and facilitating the generation of applications for new target platforms. DIMAG has been implemented taking advantage of existing standards.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"49 1","pages":"19-32"},"PeriodicalIF":0.0,"publicationDate":"2014-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80897169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}