Generating test inputs that achieve high code coverage while minimizing the number of inputs is a practical but difficult problem. The application of symbolic execution in combination with SMT solvers offers a promising way to solve it. Recently, several tools have appeared that help generate test inputs for C programs, but their abilities are still limited by the particular SMT solver chosen, and most of them currently do not support real arithmetic. We propose an approach that overcomes the limitations of any single solver by using multiple SMT solvers and combining their results to obtain the best solution. We also propose a method for reasoning about real arithmetic in symbolic testing. We have implemented this approach in an open-source symbolic testing tool called real CREST. Our experimental results are very positive.
{"title":"Extending CREST with Multiple SMT Solvers and Real Arithmetic","authors":"Do Quoc Huy, Trương Anh Hoàng, N. Binh","doi":"10.1109/KSE.2010.34","DOIUrl":"https://doi.org/10.1109/KSE.2010.34","url":null,"abstract":"Generating the test inputs, that have high code coverage while minimizing the number of test inputs, is a practical but difficult problem. The application of symbolic execution in combination with SMT solvers gives a promising way to solve it. Recently, there have been several tools that help generating the test inputs for C programs, but their abilities are still limited, depending on the particular chosen SMT solver and most of them currently do not support real arithmetic. We propose an approach to overcome the limitation of unique solver’s ability by using multiple SMT solvers and combining their results to get the best solution. We also propose a method of reasoning real arithmetic for symbolic testing. We have implemented this approach in an open source symbolic testing tool called real CREST. Our experimental results are very positive.","PeriodicalId":158823,"journal":{"name":"2010 Second International Conference on Knowledge and Systems Engineering","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123571307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Military missions are highly dynamic and uncertain. This characteristic comes from the nature of battlefields, where factors such as enemies and terrain are hard to determine. Hence mission disruption is likely to occur whenever a change happens. This requires generating plans that can adapt quickly to changes during mission execution while incurring little cost. In this paper, we propose a computational approach for adapting mission plans to any possible disruption caused by changes. It first mathematically models the dynamic planning problem with two criteria: the mission execution time and the cost of operations. Based on this quantification, we introduce a computational framework with an evolutionary mechanism for adapting the current solution to new situations resulting from changes. We carried out a case study on this newly proposed approach, using a modified military mission scenario for testing. The obtained results strongly support our proposal in finding adaptive solutions that deal with the changes.
{"title":"A Computational Framework for Adaptation in Military Mission Planning","authors":"L. Bui, Z. Michalewicz","doi":"10.1109/KSE.2010.37","DOIUrl":"https://doi.org/10.1109/KSE.2010.37","url":null,"abstract":"Military missions are highly dynamic and uncertain. This characteristic comes from the nature of battlefields where such factors as enemies and terrains are not easy to be determined. Hence disruption of missions is likely to occur whenever happening a change. This requires generating plans that can adapt quickly to changes during execution of missions, while paying a less cost. In this paper, we propose a computational approach for adaptation of mission plans dealing with any possible disruption caused by changes. It first mathematically models the dynamic planning problem with two criteria: the mission execution time and the cost of operations. Based on this quantification, we introduce a computational framework, which has an evolutionary mechanism for adapting the current solution to new situations resulted from changes. We carried out a case study on this newly proposed approach. A modified military scenario of a mission was used for testing. The obtained results strongly support our proposal in finding adaptive solution dealing with the changes.","PeriodicalId":158823,"journal":{"name":"2010 Second International Conference on Knowledge and Systems Engineering","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127767195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A scenario is a synthetic description of an event or a series of actions and events. It plays an important role in software analysis and design, as well as in verification and validation. In this paper, we propose an approach to verify the correctness of scenario execution in a multi-agent system. In this approach, scenarios are specified by Protocol Diagrams in AUML (Agent Unified Modeling Language); we formalize pre- and post-conditions of the scenarios and define an extension property class in the JPF (Java PathFinder) model checker to verify whether the execution of scenarios satisfies their constraints. We use a well-known scenario of a book-trading multi-agent system to illustrate our approach.
{"title":"A Runtime Approach to Verify Scenario in Multi-agent Systems","authors":"Thanh-Binh Trinh, Quang-Thap Pham, Ninh-Thuan Truong, Viet-Ha Nguyen","doi":"10.1109/KSE.2010.13","DOIUrl":"https://doi.org/10.1109/KSE.2010.13","url":null,"abstract":"A scenario is a synthetic description of an event or series of actions and events. It plays an important role in software analysis and design, as well as verification and validation. In this paper, we propose an approach to verify the correctness of execution scenario in a multi-agent system. In this approach, scenarios are specified by Protocol Diagrams in AUML (Agent Unified Modeling Language), we formalize pre and post conditions of the scenarios and define an extension property class in JPF (Java Path Finder) model checker to verify if the execution of scenarios satisfies their constraints. We use a well-known scenario of a book trading multi-agent system to illustrate our approach.","PeriodicalId":158823,"journal":{"name":"2010 Second International Conference on Knowledge and Systems Engineering","volume":"465 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116026446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The nonstationary nature of brain signals provides a rather unstable input, resulting in uncertainty and complexity in control. Intelligent processing algorithms adapted to the task are a prerequisite for reliable BCI applications. This work presents a novel intelligent processing strategy for realizing an effective BCI that improves both classification accuracy and communication rate. A neural network training method based on sequential extended Kalman filtering is proposed for classifying extracted EEG signals. A statistically significant improvement was achieved with respect to the rates obtained from raw data.
{"title":"Neural Networks Training Based on Sequential Extended Kalman Filtering for Single Trial EEG Classification","authors":"A. Turnip, K. Hong, S. Ge, M. Jeong","doi":"10.1109/KSE.2010.42","DOIUrl":"https://doi.org/10.1109/KSE.2010.42","url":null,"abstract":"The nonstationary nature of the brain signals provides a rather unstable input resulting in uncertainty and complexity in the control. Intelligent processing algorithms adapted to the task are a prerequisite for reliable BCI applications. This work presents a novel intelligent processing strategy for the realization of an effective BCI which has the capability to improved classification accuracy and communication rate as well. A neural networks training based on sequential extended Kalman filtering analysis for classification of extracted EEG signal is proposed. A statistically significant improvement was achieved with respect to the rates provided by raw data.","PeriodicalId":158823,"journal":{"name":"2010 Second International Conference on Knowledge and Systems Engineering","volume":"04 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129471533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As an indispensable technique in the field of Information Retrieval, the Ontology-based Retrieval System (or Browsing Hierarchy) has been well studied and developed in both academia and industry. However, most current systems suffer from the following problems: (1) Constructing the mappings between documents and ontology concepts requires training robust hierarchical classifiers, and it is difficult to build such classifiers for a large-scale document corpus due to time-efficiency and precision issues. (2) The traditional Browsing Hierarchy System ignores the distribution of documents over concepts, which is unrealistic when a large number of documents are distributed unevenly over certain concepts; browsing documents under such concepts becomes time-consuming and impractical for users. Therefore, further splitting these concepts into sub-categories is necessary and critical for organizing documents in the browsing system. Aiming at building the Hierarchical Browsing System more realistically and accurately, we propose an adaptive Hierarchical Browsing System framework in this paper, designed to build a Browsing Hierarchy for CiteSeerx. In this framework, we first investigate supervised learning approaches to classify documents into the existing predefined concepts of the ontology and compare their performance on different CiteSeerx datasets. Then, we give an empirical analysis of unsupervised learning methods for adding new clusters to the existing browsing hierarchy. Experimental analysis on the CiteSeerx corpus shows the effectiveness and efficiency of our method.
{"title":"An Adaptive Ontology Based Hierarchical Browsing System for CiteSeerx","authors":"N. Ye, Susan Gauch, Qiang Wang, H. Luong","doi":"10.1109/KSE.2010.32","DOIUrl":"https://doi.org/10.1109/KSE.2010.32","url":null,"abstract":"As an indispensable technique in addition to the field of emph{Information Retrieval}, emph{Ontology based Retrieval System} (or Browsing Hierarchy) has been well studied and developed both in academia and industry. However, most of current systems suffer the following problems: (1) Constructing the mappings between documents and concepts in ontology requires the training of robust hierarchical classifiers, it's difficult to build such classifiers for large-scale documents corpus due to the time-efficiency and precision issues. (2) The traditional Browsing Hierarchical System ignores the distribution of documents over concepts, which is not realistic when a large number of documents distributed biasly on certain concepts. Browsing documents such concepts becomes time-consuming and unpractical for users. Therefore, further splitting these concepts into sub-categories is necessary and critical for organizing documents in the browsing system. Aiming at building the Hierarchical Browsing System more realistically and accurately, we propose an adpative Hierarchical Browsing System framework in this paper, which is designed to build a Browsing Hierarchy for $CiteSeer^x$. In this framework, we first investigate the supervised learning approaches to classify documents into existing predefined concepts of ontology and compare their performance on different datasets of $CiteSeer^x$. Then, we give a empirical analysis of unsupervised learning methods for adding new clusters to the existing browsing hierarchy. Experimental analysis on $CiteSeer^x$ corpus shows the effectiveness and the efficiency of our method.","PeriodicalId":158823,"journal":{"name":"2010 Second International Conference on Knowledge and Systems Engineering","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125501699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ERP offers enormous benefits to organizations in efficiency, productivity and cost reduction. However, ERP implementations are complex, with many encountering difficulty and even failure. Change management has been identified as a critical success factor in the implementation of ERP. This study surveys twelve successful ERP implementation case studies to determine which components of change management strategies are common to successful ERP implementations. The common components identified include: effective communication, top management support, effective training/knowledge transfer, project champions, and clear systematic plans.
{"title":"Change Management Strategies for the Successful Implementation of Enterprise Resource Planning Systems","authors":"Trieu Thi Van Hau, J. Kuzic","doi":"10.1109/KSE.2010.10","DOIUrl":"https://doi.org/10.1109/KSE.2010.10","url":null,"abstract":"ERP offers enormous benefits to organizations in efficiency, productivity and cost reduction. However, ERP implementations are complex, with many encountering difficulty and even failure. Change management has been identified as a critical success factor in the implementation of ERP. This study surveys twelve successful ERP implementation case studies to determine which components of change management strategies are common to successful ERP implementations. The common components identified include: effective communication, top management support, effective training/knowledge transfer, project champions, and clear systematic plans.","PeriodicalId":158823,"journal":{"name":"2010 Second International Conference on Knowledge and Systems Engineering","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133445267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, supervised learning is commonly used in many domains. Indeed, many works propose to learn new knowledge from examples that capture the expected behaviour of the considered system. A key issue of supervised learning concerns the description language used to represent the examples. In this paper, we propose a method to evaluate the feature set used to describe them. Our method is based on computing the consistency of the example base. We carried out a case study in the domain of geomatics in order to evaluate the sets of measures used to characterise geographic objects. The case study shows that our method gives relevant evaluations of measure sets.
{"title":"Supervised Feature Evaluation by Consistency Analysis: Application to Measure Sets Used to Characterise Geographic Objects","authors":"P. Taillandier, A. Drogoul","doi":"10.1109/KSE.2010.28","DOIUrl":"https://doi.org/10.1109/KSE.2010.28","url":null,"abstract":"Nowadays, supervised learning is commonly used in many domains. Indeed, many works propose to learn new knowledge from examples that translate the expected behaviour of the considered system. A key issue of supervised learning concerns the description language used to represent the examples. In this paper, we propose a method to evaluate the feature set used to describe them. Our method is based on the computation of the consistency of the example base. We carried out a case study in the domain of geomatic in order to evaluate the sets of measures used to characterise geographic objects. The case study shows that our method allows to give relevant evaluations of measure sets.","PeriodicalId":158823,"journal":{"name":"2010 Second International Conference on Knowledge and Systems Engineering","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116139286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two common hazards in supervised learning of neural networks are local minima and overfitting. The momentum technique has proved efficient in dealing with local optima but is vulnerable to overfitting. In contrast, the early stopping technique may avoid overfitting but sometimes terminates in a local minimum. This paper proposes a hybrid approach that combines the two techniques, momentum and early stopping, to tackle both hazards, aiming at improving the performance of neural networks in function approximation in terms of both accuracy and processing time. Experimental results on various kinds of non-linear functions demonstrate that the proposed approach outperforms conventional learning approaches.
{"title":"Smoothing Supervised Learning of Neural Networks for Function Approximation","authors":"T. Nguyen","doi":"10.1109/KSE.2010.15","DOIUrl":"https://doi.org/10.1109/KSE.2010.15","url":null,"abstract":"Two popular hazards in supervised learning of neural networks are local minima and over fitting. Application of the momentum technique dealing with the local optima has proved efficient but it is vulnerable to over fitting. In contrast, deployment of the early stopping technique might overcome the over fitting phenomena but it sometimes terminates into the local minima. This paper proposes a hybrid approach, which is a combination of two processing neurons: momentum and early stopping, to tackle these hazards, aiming at improving the performance of neural networks in terms of both accuracy and processing time in function approximation. Experimental results conducted on various kinds of non-linear functions have demonstrated that the proposed approach is dominant compared with conventional learning approaches.","PeriodicalId":158823,"journal":{"name":"2010 Second International Conference on Knowledge and Systems Engineering","volume":"183 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115064766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a rule-based Attribute-Oriented (AO) Induction method over rule-based concept hierarchies that can be constructed from generalization rules. Based on an analysis of major previous approaches, such as rule-based AO induction with backtracking, path-id based AO induction, and cyclic-graph-based AO induction, we propose a new approach that facilitates induction in the rule-based case, avoids the anomaly problem, and overcomes the disadvantages of the above methods. Experimental studies show that the new approach is efficient and suitable for providing condensed, high-quality summarizations.
{"title":"Rule-Based Attribute-Oriented Induction for Knowledge Discovery","authors":"N. D. Thanh, Ngo Tuan Phong, N. K. Anh","doi":"10.1109/KSE.2010.23","DOIUrl":"https://doi.org/10.1109/KSE.2010.23","url":null,"abstract":"This paper introduces a rule-based Attribute-Oriented (AO) Induction method on rule-based concept hierarchies that can be constructed from generalization rules. Based on analyzing some major previous approaches such as rule-based AO induction with backtracking, path-id based AO induction and a cyclic graph based AO induction, we propose a new approach to facilitate induction on the rule based case that can avoid a problem of anomaly and overcome disadvantages of these above methods. Experimental studies show that the new approach is efficient and suitable for providing condensed and qualified summarizations.","PeriodicalId":158823,"journal":{"name":"2010 Second International Conference on Knowledge and Systems Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127817312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we introduce a new time series dimensionality reduction method, IPIP. This method takes full advantage of the PIP (Perceptually Important Points) method proposed by Chung et al., with some improvements so that the new method can theoretically satisfy the lower bounding condition for time series dimensionality reduction methods. Furthermore, we make IPIP indexable by showing that a time series compressed by IPIP can be indexed with the support of a multidimensional index structure based on Skyline index. Our experiments show that our IPIP method with its appropriate index structure performs better than some previous schemes, namely PAA based on a traditional R*-tree.
{"title":"An Improvement of PIP for Time Series Dimensionality Reduction and Its Index Structure","authors":"N. T. Son, D. T. Anh","doi":"10.1109/KSE.2010.8","DOIUrl":"https://doi.org/10.1109/KSE.2010.8","url":null,"abstract":"In this paper, we introduce a new time series dimensionality reduction method, IPIP. This method takes full advantages of PIP (Perceptually Important Points) method, proposed by Chung et al., with some improvements in order that the new method can theoretically satisfy the lower bounding condition for time series dimensionality reduction methods. Furthermore, we can make IPIP index able by showing that a time series compressed by IPIP can be indexed with the support of a multidimensional index structure based on Skyline index. Our experiments show that our IPIP method with its appropriate index structure can perform better than to some previous schemes, namely PAA based on traditional R*- tree.","PeriodicalId":158823,"journal":{"name":"2010 Second International Conference on Knowledge and Systems Engineering","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114360373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}