
Latest publications in IET Softw.

Applying selective mutation strategies to the AsmetaL language
Pub Date : 2017-01-11 DOI: 10.1049/iet-sen.2015.0030
Osama Alkrarha, J. Hassine
Abstract state machines (ASMs) have been introduced as a computation model that is more powerful and more universal than standard computation models. Early validation of ASM models helps reduce the cost and risk of defects propagating, through refinement, to other models and eventually to code, thereby adversely affecting the quality of the end product. Mutation testing is a well-established fault-based technique for assessing and improving the quality of test suites. However, little research has been devoted to mutation analysis in the context of ASMs. Mutation testing is known to be computationally expensive due to the large number of generated mutants that must be executed against a test set. In this study, the authors empirically investigate the application of cost-reduction strategies to AsmetaL, an ASM-based formal language. Furthermore, they experimentally evaluate the effectiveness of, and the savings resulting from, two techniques in the context of the AsmetaL language: random mutant selection and operator-based selective mutation. The quantitative results show that both techniques achieve substantial savings without a major impact on effectiveness.
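The two cost-reduction strategies evaluated in the paper can be sketched in a few lines. This is illustrative only: the mutant records and the operator names (`ROR`, `AOR`, `LOR`, `UOI`) are hypothetical placeholders, not AsmetaL's actual mutation operators.

```python
import random

def random_selection(mutants, fraction, seed=0):
    """Randomly sample a fixed fraction of the generated mutants."""
    rng = random.Random(seed)
    k = max(1, int(len(mutants) * fraction))
    return rng.sample(mutants, k)

def operator_based_selection(mutants, operators):
    """Keep only mutants produced by a chosen subset of mutation operators."""
    return [m for m in mutants if m["operator"] in operators]

# 100 hypothetical mutants spread over four invented operator names
mutants = [{"id": i, "operator": op}
           for i, op in enumerate(["ROR", "AOR", "LOR", "UOI"] * 25)]

ten_percent = random_selection(mutants, 0.10)                # 10 mutants
ror_uoi = operator_based_selection(mutants, {"ROR", "UOI"})  # 50 mutants
```

Either reduced set is then executed against the test suite, trading a smaller mutant pool for a small loss in mutation-score accuracy.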
IET Softw., pp. 292-300.
Citations: 0
Smart fuzzing method for detecting stack-based buffer overflow in binary codes
Pub Date : 2016-08-01 DOI: 10.1049/iet-sen.2015.0039
Maryam Mouzarani, B. Sadeghiyan, M. Zolfaghari
During the past decades, several methods have been proposed to detect the stack-based buffer overflow vulnerability, yet it remains a serious threat to computer systems. Among the suggested methods, various fuzzers have been proposed to detect this vulnerability. However, many of them are not smart enough to achieve high code coverage and detect vulnerabilities along feasible execution paths of the program. The authors present a new smart fuzzing method for detecting stack-based buffer overflows in binary code. In the proposed method, concolic (concrete + symbolic) execution is used to calculate the path and vulnerability constraints for each execution path in the program. The vulnerability constraints determine which parts of the input data, and to what length, should be extended to cause a buffer overflow on an execution path. Based on the calculated constraints, the authors generate test data that detect buffer overflows along feasible execution paths of the program. They have implemented the proposed method as a plug-in for Valgrind and tested it on three groups of benchmark programs. The results demonstrate that the calculated vulnerability constraints are accurate and that the fuzzer is able to detect the vulnerabilities in these programs. The authors have also compared the implemented fuzzer with three other fuzzers and demonstrated how calculating the path and vulnerability constraints helps to fuzz a program more efficiently.
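The split between path constraints and vulnerability constraints can be illustrated with a toy model. A real implementation derives both from concolic execution of the binary; here the 16-byte buffer and the `GET ` prefix are invented for illustration.

```python
def vulnerability_constraint(buffer_size):
    # an input longer than the destination buffer overflows it
    return lambda length: length > buffer_size

def path_constraint(required_prefix):
    # the vulnerable path is reached only for inputs with a given prefix
    return lambda data: data.startswith(required_prefix)

def generate_test(buffer_size, prefix):
    """Smallest input satisfying both the path and the vulnerability constraint."""
    payload = prefix + b"A" * max(0, buffer_size + 1 - len(prefix))
    assert path_constraint(prefix)(payload)
    assert vulnerability_constraint(buffer_size)(len(payload))
    return payload

test_input = generate_test(buffer_size=16, prefix=b"GET ")  # 17 bytes
```

Solving both constraint sets together is what lets the fuzzer aim its overlong inputs only at paths the program can actually take.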
IET Softw., pp. 96-107.
Citations: 16
Improving semantic compression specification in large relational database
Pub Date : 2016-08-01 DOI: 10.1049/iet-sen.2015.0054
S. Darwish
Large-scale relational databases normally have a large size and a high degree of sparsity, which makes database compression important for improving performance and saving storage space. Standard (syntactic) compression techniques such as Gzip or Zip do not take advantage of relational properties, as they do not look at the nature of the data. Semantic compression, by contrast, accounts for and exploits both the meanings and dynamic error ranges of individual attributes (lossy compression) and the existing data dependencies and correlations between attributes in a table (lossless compression), making it very effective for table-data compression. Inspired by semantic compression, this study proposes a novel independent lossless compression system that uses a data-mining model to find the frequent pattern with maximum gain (the representative row) in order to derive attribute semantics, together with a modified version of an augmented vector-quantisation coder to increase the total throughput of database compression. After jointly considering compression ratio, space and speed, the algorithm is more granular and suitable for every kind of massive data table. Experiments with several very large real-life datasets indicate the superiority of the system over previously known lossless semantic techniques.
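A minimal sketch of the representative-row idea: pick the most frequent value per column as the representative, then store each row as the deltas against it. This conveys only the flavour of the approach; the paper's actual gain measure and vector-quantisation stage are not reproduced.

```python
from collections import Counter

def representative_row(rows):
    # most frequent value per column: the "representative row"
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*rows))

def compress(rows):
    rep = representative_row(rows)
    # each row stores only the cells that differ from the representative
    deltas = [{i: v for i, v in enumerate(row) if v != rep[i]} for row in rows]
    return rep, deltas

def decompress(rep, deltas):
    return [tuple(d.get(i, rep[i]) for i in range(len(rep))) for d in deltas]

rows = [("A", 1, "x"), ("A", 2, "x"), ("A", 1, "y"), ("A", 1, "x")]
rep, deltas = compress(rows)
```

On sparse, highly correlated tables most delta dictionaries are empty, which is where the space saving comes from.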
IET Softw., pp. 108-115.
Citations: 2
Lightweight approach for multi-objective web service composition
Pub Date : 2016-08-01 DOI: 10.1049/iet-sen.2014.0155
J. Liao, Yang Liu, Jing Wang, Jingyu Wang, Q. Qi
Service composition is an efficient way to implement a service for a complex business process in a heterogeneous environment. Existing service selection methods mainly use a fitness function or constraint techniques to convert multi-objective service composition problems into single-objective ones. These methods require a priori knowledge of the problem's solution space to take effect. Besides, only one solution can be obtained per execution, so users can hardly acquire evenly distributed solutions at an acceptable computation cost. The authors propose a lightweight particle swarm optimisation service selection algorithm for multi-objective service composition problems. Simulation results illustrate that the proposed algorithm surpasses the comparative algorithms in approximation, coverage and execution time.
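Multi-objective selection rests on Pareto dominance rather than a scalar fitness function. A minimal sketch of extracting the non-dominated set (the QoS numbers are invented, and the paper's particle swarm machinery is omitted):

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in one (minimisation)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# hypothetical QoS vectors: (response time in ms, cost) per candidate composition
candidates = [(120, 5.0), (100, 7.0), (150, 4.0), (110, 6.0), (130, 5.5)]
front = pareto_front(candidates)
```

Returning the whole front in one run, rather than one weighted-sum optimum per run, is what gives users the evenly distributed trade-off solutions the abstract refers to.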
IET Softw., pp. 116-124.
Citations: 6
Malware detection: program run length against detection rate
Pub Date : 2014-01-23 DOI: 10.1049/iet-sen.2013.0020
Philip O'Kane, S. Sezer, K. Mclaughlin, E. Im
N-gram analysis is an approach that investigates the structure of a program using bytes, characters or text strings. This research uses dynamic analysis to investigate malware detection with a classification approach based on N-gram analysis. A key issue with dynamic analysis is the length of time a program has to be run to ensure a correct classification. The motivation for this research is to find the optimum subset of operational codes (opcodes) that are the best indicators of malware, and to determine how long a program has to be monitored to ensure an accurate support vector machine (SVM) classification of benign and malicious software. The experiments in this study represent programs as opcode density histograms obtained through dynamic analysis over different program run periods. An SVM is used as the program classifier to determine the ability of different program run lengths to correctly detect the presence of malicious software. The findings show that malware can be detected at different program run lengths using a small number of opcodes.
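The opcode density histogram is simple to compute from a dynamic trace. The vocabulary below is a hypothetical handful of x86 opcodes (the paper selects its optimum subset empirically), and the SVM stage that would consume these vectors is omitted.

```python
from collections import Counter

# hypothetical opcode vocabulary; the optimum subset is found empirically
OPCODES = ["mov", "push", "pop", "call", "ret", "jmp", "cmp", "add"]

def opcode_density(trace, vocabulary=OPCODES):
    """Normalised opcode frequencies over a dynamic trace of one run period."""
    counts = Counter(op for op in trace if op in vocabulary)
    total = sum(counts.values()) or 1   # avoid division by zero on empty traces
    return [counts[op] / total for op in vocabulary]

trace = ["mov", "mov", "push", "call", "mov", "ret", "jmp", "mov", "cmp", "add"]
hist = opcode_density(trace)  # feature vector for an SVM classifier
```

Truncating the trace at different lengths and recomputing the histogram is how the trade-off between run length and detection rate is explored.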
IET Softw., pp. 42-51.
Citations: 16
Case study on software refactoring tactics
Pub Date : 2014-01-23 DOI: 10.1049/iet-sen.2012.0121
Hui Liu, Yang Liu, Xue Guo, Yuanyuan Gao
Refactoring might be carried out using two different tactics: root canal refactoring and floss refactoring. Root canal refactoring sets aside an extended period specifically for refactoring; floss refactoring interleaves refactorings with other programming tasks. However, no large-scale case study on refactoring tactics is available. To this end, the authors carry out a case study to investigate the following research questions: (i) How often are root canal refactoring and floss refactoring employed, respectively? (ii) Are some kinds of refactorings more likely than others to be applied as floss refactorings or root canal refactorings? (iii) Do engineers who employ both tactics have an obvious bias towards either of them? The authors analyse the usage data collected by the Eclipse usage data collector. Results suggest that about 14% of refactorings are root canal refactorings. These findings reconfirm the hypothesis that, in general, floss refactoring is more common than root canal refactoring; the relative popularity of root canal refactoring, however, is much higher than expected. They also find that some kinds of refactorings are more likely than others to be performed as root canal refactorings. Results also suggest that engineers who have explored both tactics obviously tended towards root canal refactoring.
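The abstract does not spell out its classification criterion, but a hypothetical heuristic conveys the distinction: refactorings inside a long uninterrupted run of refactoring events are "root canal", while isolated ones interleaved with edits are "floss". The event markers and the run threshold below are invented for illustration.

```python
def classify_refactorings(events, run_threshold=5):
    """Label each refactoring with a tactic: refactorings inside a long
    uninterrupted run are 'root_canal'; isolated ones are 'floss'."""
    labels, run = [], 0

    def flush():
        nonlocal run
        tactic = "root_canal" if run >= run_threshold else "floss"
        labels.extend([tactic] * run)
        run = 0

    for event in events:  # chronological stream of 'refactor' / 'edit' markers
        if event == "refactor":
            run += 1
        else:
            flush()
    flush()
    return labels

events = ["edit", "refactor", "edit", "refactor", "refactor",
          "refactor", "refactor", "refactor", "edit"]
labels = classify_refactorings(events)
```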
IET Softw., pp. 1-11.
Citations: 4
Power evaluation methods for data encryption algorithms
Pub Date : 2014-01-23 DOI: 10.1049/IET-SEN.2012.0137
Tingyuan Nie, Lijian Zhou, Zhe-ming Lu
With the increasingly extensive application of networking technology, network security has become more significant than ever before. Encryption algorithms play a key role in the construction of a secure network system. However, an encryption algorithm implemented on a resource-constrained device can struggle to achieve ideal performance, so power consumption becomes essential to the performance of a data encryption algorithm. Many methods have been proposed to evaluate the power consumption of encryption algorithms, yet it is unclear which of them is effective. In this study, the authors give a comprehensive review of power evaluation methods. They then design a series of experiments to evaluate the effectiveness of the three main types of method by implementing several traditional symmetric encryption algorithms on a workstation. The experimental results show that external measurement and software profiling are more accurate than measurement via an uninterruptible power supply battery, with improvements of 27.44 and 33.53% respectively, which implies that external measurement and software profiling are more effective for power consumption evaluation.
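Whichever measurement channel is used, the comparison reduces to integrating sampled power over the run and measuring each method's deviation from a reference. A minimal sketch; the sample values and channel labels are invented, not the paper's data:

```python
def energy_joules(power_samples_w, interval_s):
    # rectangle-rule integration of sampled power (W) over time -> joules
    return sum(power_samples_w) * interval_s

def deviation_pct(reference_j, measured_j):
    # relative deviation of a measurement method from a reference, in percent
    return abs(measured_j - reference_j) / reference_j * 100

# invented 0.5 s power samples for one encryption run, per measurement channel
reference = energy_joules([2.0, 2.2, 2.1, 1.9], 0.5)   # e.g. external power meter
ups_based = energy_joules([2.6, 2.8, 2.7, 2.5], 0.5)   # e.g. UPS battery readout
```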
IET Softw., pp. 12-18.
Citations: 11
Improved document ranking in ontology-based document search engine using evidential reasoning
Pub Date : 2014-01-23 DOI: 10.1049/iet-sen.2013.0015
Wenhu Tang, Long Yan, Zhen Yang, Q. Wu
This study presents a novel approach to document ranking in an ontology-based document search engine (ODSE) using evidential reasoning (ER). Firstly, a domain ontology model, used for query expansion, and a connection interface to an ODSE are developed. A multiple attribute decision making (MADM) tree model is proposed to organise expanded query terms. Then, an ER algorithm, based on the Dempster-Shafer theory, is used for evidence combination in the MADM tree model. The proposed approach is discussed in a generic frame for document ranking, which is evaluated using document queries in the domain of electrical substation fault diagnosis. The results show that the proposed approach provides a suitable solution to document ranking and the precision at the same recall levels for ODSE searches have been improved significantly with ER embedded, in comparison with a traditional keyword-matching search engine, an ODSE without ER and a non-randomness-based weighting model.
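The evidence-combination step of the ER algorithm is Dempster's rule. A self-contained sketch over a two-hypothesis frame of discernment; the mass assignments are invented for illustration:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses of intersecting focal elements and
    normalise away the mass assigned to empty (conflicting) intersections."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# two-hypothesis frame for a document: relevant vs irrelevant (masses invented)
REL = frozenset({"relevant"})
THETA = frozenset({"relevant", "irrelevant"})  # total ignorance
m = dempster_combine({REL: 0.6, THETA: 0.4}, {REL: 0.5, THETA: 0.5})
```

Combining the masses from each attribute in the MADM tree bottom-up yields the belief used to rank documents.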
IET Softw., pp. 33-41.
Citations: 8
Framework for the declarative implementation of native mobile applications
Pub Date : 2014-01-23 DOI: 10.1049/iet-sen.2012.0194
Patricia Miravet, Ignacio Marín, Francisco Ortin, Javier Rodríguez
The development of connected mobile applications for a broad audience is a complex task because of the existing device diversity. To alleviate this situation, device-independent approaches aim to implement platform-independent applications, hiding the differences among the diverse families and models of mobile devices. Most existing approaches are based on an imperative definition of applications, which are either compiled to a native application or executed in a Web browser, and the client and server sides of applications are implemented separately, using different mechanisms for data synchronisation. In this study, the authors propose device-independent mobile application generation (DIMAG), a framework for defining native device-independent client-server applications based on a declarative specification of application workflow, state and data synchronisation, user interface and data queries. The authors designed DIMAG to allow the dynamic addition of new types of devices and to facilitate the generation of applications for new target platforms. DIMAG has been implemented taking advantage of existing standards.
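A declarative application workflow of the kind described here can be pictured as a state-transition table interpreted at run time instead of hand-written navigation code. The screen and event names below are hypothetical, not DIMAG's actual syntax:

```python
# hypothetical declarative workflow: for each screen, events and target screens
WORKFLOW = {
    "login":  {"ok": "home", "fail": "login"},
    "home":   {"open_item": "detail", "logout": "login"},
    "detail": {"back": "home"},
}

def run(workflow, start, events):
    state = start
    for event in events:
        state = workflow[state].get(event, state)  # unknown events keep the state
    return state

final = run(WORKFLOW, "login", ["ok", "open_item", "back", "logout"])
```

Because the table is data, the same specification can be compiled or interpreted for each target platform.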
IET Softw., pp. 19-32.
Citations: 11
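The core idea of the DIMAG abstract above is that the application is expressed as declarative data (workflow states, transitions, synchronisation rules) rather than imperative code, so that a single specification can drive generators for several native targets. The following minimal Python sketch illustrates that idea with a tiny workflow interpreter; all names (`APP_SPEC`, the states, the events) are illustrative assumptions, not DIMAG's actual specification syntax.

```python
# Hypothetical sketch of a declarative app workflow, in the spirit of
# device-independent frameworks such as DIMAG. The spec is pure data:
# states, the view each state shows, and event-to-state transitions.
# None of these names come from the DIMAG paper itself.

APP_SPEC = {
    "start": "login",
    "states": {
        "login":  {"view": "LoginForm",   "on": {"ok": "inbox", "fail": "login"}},
        "inbox":  {"view": "MessageList", "on": {"open": "detail", "logout": "login"}},
        "detail": {"view": "MessageView", "on": {"back": "inbox"}},
    },
}


def next_state(spec, current, event):
    """Resolve the workflow transition for an event; stay put if undefined."""
    return spec["states"][current]["on"].get(event, current)


def run(spec, events):
    """Interpret the declarative workflow over a sequence of UI events."""
    state = spec["start"]
    trace = [state]
    for event in events:
        state = next_state(spec, state, event)
        trace.append(state)
    return trace


print(run(APP_SPEC, ["ok", "open", "back", "logout"]))
# → ['login', 'inbox', 'detail', 'inbox', 'login']
```

A code generator in a DIMAG-like framework would walk the same kind of spec and emit platform-specific native code for each target device family, instead of interpreting it at runtime as this sketch does.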
Errata 'Value of ranked voting methods for estimation by analogy', IET Softw., 2013, 7, (4), pp 195-202
Pub Date : 2014-01-23 DOI: 10.1049/iet-sen.2013.0214
Mohammad Azzeh, Marwan Alseid
{"title":"Errata 'Value of ranked voting methods for estimation by analogy', IET Softw., 2013, 7, (4), pp 195-202","authors":"Mohammad Azzeh, Marwan Alseid","doi":"10.1049/iet-sen.2013.0214","DOIUrl":"https://doi.org/10.1049/iet-sen.2013.0214","url":null,"abstract":"","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"3 2-3 1","pages":"52"},"PeriodicalIF":0.0,"publicationDate":"2014-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79721311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0