Tree decomposition, introduced by Robertson and Seymour, aims to decompose a problem into clusters forming an acyclic graph. Several works have exploited tree decomposition for complete search methods. In this paper, we show how tree decomposition can be used to efficiently guide the exploration performed by local search methods that use large neighborhoods, such as VNS. We introduce tightness-dependent tree decomposition, which takes advantage of both the structure of the problem and the tightness of its constraints. Experiments performed on random instances (GRAPH) and real-life instances (CELAR and SPOT5) show the appropriateness and efficiency of our approach.
"Guiding VNS with Tree Decomposition" — Mathieu Fontaine, S. Loudni, P. Boizumault. In: 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence. DOI: 10.1109/ICTAI.2011.82
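As a concrete reading of the definition the abstract relies on, here is a minimal sketch (our own illustration, not code from the paper) that checks the three conditions a Robertson-Seymour tree decomposition must satisfy: every vertex is covered by some bag, every edge fits inside some bag, and the bags containing any given vertex form a connected subtree.

```python
from collections import defaultdict, deque

def is_tree_decomposition(graph_vertices, graph_edges, bags, tree_edges):
    """bags: dict mapping a bag id to a set of graph vertices;
    tree_edges: edges of the decomposition tree as pairs of bag ids."""
    # 1. Every vertex of the graph appears in at least one bag.
    covered = set().union(*bags.values())
    if not set(graph_vertices) <= covered:
        return False
    # 2. Every edge of the graph is contained in some bag.
    for u, v in graph_edges:
        if not any({u, v} <= b for b in bags.values()):
            return False
    # 3. For each vertex, the bags holding it induce a connected subtree.
    adj = defaultdict(set)
    for a, b in tree_edges:
        adj[a].add(b)
        adj[b].add(a)
    for v in graph_vertices:
        holding = {i for i, b in bags.items() if v in b}
        start = next(iter(holding))
        seen, queue = {start}, deque([start])
        while queue:
            i = queue.popleft()
            for j in adj[i] & holding:
                if j not in seen:
                    seen.add(j)
                    queue.append(j)
        if seen != holding:
            return False
    return True
```

For the 4-cycle, for instance, the two bags {0, 1, 2} and {0, 2, 3} joined by a single tree edge form a valid decomposition of width 2.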
Asynchronous Backtracking is the standard search procedure for distributed constraint reasoning. It requires a total ordering on the agents. All polynomial-space algorithms proposed so far to improve Asynchronous Backtracking by reordering agents during search allow only a limited amount of reordering. In this paper, we propose Agile-ABT, a search procedure able to change the ordering of agents more freely than previous approaches. This is done via the novel notion of termination value, a vector of stamps labelling the new orders exchanged by agents during search. In Agile-ABT, agents can reorder themselves as much as they want, as long as the termination value decreases as the search progresses. Our experiments show the good performance of Agile-ABT compared to other dynamic reordering techniques.
"Agile Asynchronous Backtracking for Distributed Constraint Satisfaction Problems" — C. Bessiere, E. Bouyakhf, Younes Mechqrane, M. Wahbi. In: 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence. DOI: 10.1109/ICTAI.2011.122
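The termination argument sketched in the abstract can be illustrated with a few lines of Python (names and data layout are ours, not the paper's): an agent adopts a proposed agent ordering only when the attached termination value is lexicographically smaller than its current one, and a strictly decreasing sequence over a well-founded domain cannot be infinite.

```python
def update_order(state, proposed_order, proposed_tv):
    """Adopt a proposed agent ordering only when its termination value
    (a tuple of non-negative stamps) is strictly smaller; Python tuples
    compare lexicographically. Returns True when the agent reorders."""
    if proposed_tv < state["tv"]:
        state["order"], state["tv"] = proposed_order, proposed_tv
        return True
    return False
```

Because each accepted reorder strictly decreases the termination value, agents may reorder as often as they like without endangering termination.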
This paper focuses on handling uncertain and causal information in a min-based possibility theory framework. More precisely, we study interventions from a representational point of view within a compilation framework. We propose two compilation-based inference algorithms for min-based possibilistic causal networks that encode the augmented network into a propositional theory and compile this encoding in order to efficiently compute the effect of both observations and interventions.
"An Augmented-Based Approach for Compiling Min-based Possibilistic Causal Networks" — R. Ayachi, N. B. Amor, S. Benferhat. In: 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence. DOI: 10.1109/ICTAI.2011.107
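The min-based machinery the abstract builds on can be sketched on a two-variable network (the tables, names, and numbers below are illustrative, not the paper's): the joint possibility is the minimum of the local conditional possibilities, and an intervention do(B = b) mutilates the graph by replacing B's conditional table with one that ignores its parents and forces B = b.

```python
# Prior possibility of A and conditional possibility of B given A.
pi_a = {"a": 1.0, "not_a": 0.4}
pi_b_given_a = {
    ("b", "a"): 0.3, ("not_b", "a"): 1.0,
    ("b", "not_a"): 1.0, ("not_b", "not_a"): 0.7,
}

def joint(a, b):
    """Min-based chain rule: pi(a, b) = min(Pi(a), Pi(b | a))."""
    return min(pi_a[a], pi_b_given_a[(b, a)])

def joint_do_b(a, b, forced="b"):
    """Intervention do(B = forced): graph mutilation replaces B's table
    by one that ignores A and makes only B = forced fully possible."""
    return min(pi_a[a], 1.0 if b == forced else 0.0)
```

Note how the intervention leaves the possibility of A untouched, whereas an observation of B would propagate back to A.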
There is a growing interest in building large knowledge bases. When dealing with a huge amount of knowledge, two problems can be encountered in real domains. In the first case, knowledge is originally centralized, so that one can access the whole knowledge base, but its size is too large to be handled. In the second case, knowledge is distributed over several sources, so that it is hard or impossible to immediately access all or part of it. We focus here on the case in which a single reasoner might not be able to cope with the entire database and tries to partition the data to improve its scalability, which is likely to succeed if the knowledge splits into overlapping but cohesive components. We thus consider distributed reasoning with such structures, each partition collaborating with the others to produce a coherent output. We propose a generalization of partition-based theorem proving to partition-based consequence finding (sharing a specification of ``interesting'' consequences), with a sequential and a parallel version. As termination cannot always be ensured in first-order logic, we also investigate bounded searches. Finally, we provide an experimental analysis comparing our two variants with the centralized case, using an automated process to decompose the theory, and show that for most problems partitioning the data can indeed increase efficiency, though a proper choice of decomposition (and especially of the starting point of the algorithm) can be difficult.
"Partition-Based Consequence Finding" — Gauvain Bourgne, Katsumi Inoue. In: 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence. DOI: 10.1109/ICTAI.2011.102
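To make the notion of consequence finding with a specification of "interesting" consequences concrete, here is a naive propositional sketch (ours, far simpler than the paper's first-order, partitioned setting): saturate a clause set by binary resolution, then keep only the consequences whose atoms all belong to an "interesting" vocabulary, i.e., a production field.

```python
from itertools import combinations

def resolve(c1, c2):
    """All binary resolvents of two clauses, each a frozenset of integer
    literals (negative int = negated atom). Tautologies are discarded."""
    out = []
    for lit in c1:
        if -lit in c2:
            r = (c1 - {lit}) | (c2 - {-lit})
            if not any(-l in r for l in r):
                out.append(frozenset(r))
    return out

def consequences(clauses, interesting):
    """Saturate by resolution (skipping subsumed resolvents) and return
    the consequences built only from 'interesting' atoms."""
    clauses = {frozenset(c) for c in clauses}
    changed = True
    while changed:
        changed = False
        for c1, c2 in combinations(list(clauses), 2):
            for r in resolve(c1, c2):
                if r not in clauses and not any(c <= r for c in clauses):
                    clauses.add(r)
                    changed = True
    return {c for c in clauses if all(abs(l) in interesting for l in c)}
```

From the theory {a, ¬a ∨ b, ¬b ∨ c} with only atom c deemed interesting, the procedure derives the unit consequence c and filters out everything mentioning a or b.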
Intrusion detection systems (IDSs) are continuously evolving, with the goal of improving the security of computer infrastructures. However, one of the most significant challenges in this area is the poor detection rate due to the presence of excessive features in data sets whose class distributions are imbalanced. Despite the relatively long existence and promising nature of feature selection methods, most of them fail to account for imbalanced class distributions, particularly for intrusion data, leading to poor predictions for minority-class samples. In this paper, we propose a new feature selection algorithm to enhance the accuracy of IDSs in virtual server environments. Our algorithm assigns weights to subsets of features according to the maximized area under the ROC curve (AUC) margin each induces during the boosting process over the minority and majority examples. The best subset of features is then selected by a greedy search strategy. Empirical experiments are carried out on multiple intrusion data sets using different commercial virtual appliances and real malware.
"A Novel Feature Selection for Intrusion Detection in Virtual Machine Environments" — Malak Alshawabkeh, J. Aslam, D. Kaeli, Jennifer G. Dy. In: 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence. DOI: 10.1109/ICTAI.2011.138
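The AUC that the algorithm above optimizes is a natural target under class imbalance because it is a rank statistic rather than an error count. A minimal sketch (ours, not the paper's code) computes it via the Mann-Whitney formulation:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a randomly drawn
    positive is scored above a randomly drawn negative (ties count 1/2).
    Unlike accuracy, this is insensitive to how rare the positive
    (e.g., intrusion) class is."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

A classifier scoring a single minority example above a thousand majority examples gets the same AUC credit as in a balanced setting, which is exactly the property one wants for rare-attack data.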
In this paper, we address the problem of merging argumentation systems (ASs) in a multi-agent setting. Each agent's system may be built from different sets of arguments and/or different interactions between these arguments. The merging process must resolve conflicts between the agents and identify ASs representing the knowledge of the group. Previous work [6] proposed a two-step merging process in which conflicts about an interaction result in a new kind of interaction, called ignorance. However, this merging process is computationally expensive and does not provide a single resulting AS. We propose a novel approach that overcomes these limitations by refining the ignorance relation into a weighted attack. Our merging process takes only one step and provides a single weighted AS, which is easy to compute.
"Weighted Argumentation Systems: A Tool for Merging Argumentation Systems" — C. Cayrol, M. Lagasquie-Schiex. In: 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence. DOI: 10.1109/ICTAI.2011.99
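One natural vote-based instantiation of such a one-step merge (our illustration only; the paper's actual weighting of attacks may differ) takes the union of all agents' arguments and weights each attack by the number of agents endorsing it:

```python
from collections import Counter

def merge_to_weighted(profiles):
    """profiles: list of (arguments, attacks) pairs, one per agent,
    where attacks is a set of (attacker, target) pairs. Returns a single
    weighted AS: the union of arguments plus each attack weighted by
    the number of agents whose system contains it."""
    args = set().union(*(a for a, _ in profiles))
    votes = Counter(att for _, atts in profiles for att in atts)
    return args, dict(votes)
```

Disagreement about an attack then survives as a low weight on that attack, instead of spawning a separate "ignorance" interaction and multiple candidate systems.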
We introduce a way to program adaptive reactive systems, using behavioral, scenario-based programming. Extending the semantics of live sequence charts with reinforcements allows the programmer not only to specify what the system should do or must not do, but also what it should try to do, in an intuitive and incremental way. By integrating scenario-based programs with reinforcement learning methods, the program can adapt to the environment, and try to achieve the desired goals. Visualization methods and modular learning decompositions, based on the unique structure of the program, are suggested, and result in an efficient development process and a fast learning rate.
{"title":"Adaptive Behavioral Programming","authors":"Nir Eitan, D. Harel","doi":"10.1109/ICTAI.2011.109","DOIUrl":"https://doi.org/10.1109/ICTAI.2011.109","url":null,"abstract":"We introduce a way to program adaptive reactive systems, using behavioral, scenario-based programming. Extending the semantics of live sequence charts with reinforcements allows the programmer not only to specify what the system should do or must not do, but also what it should try to do, in an intuitive and incremental way. By integrating scenario-based programs with reinforcement learning methods, the program can adapt to the environment, and try to achieve the desired goals. Visualization methods and modular learning decompositions, based on the unique structure of the program, are suggested, and result in an efficient development process and a fast learning rate.","PeriodicalId":332661,"journal":{"name":"2011 IEEE 23rd International Conference on Tools with Artificial Intelligence","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121573010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Similarity functions are essential to many learning algorithms. To be usable in support vector machines (SVM), i.e., for the convergence of the learning algorithm to be guaranteed, they must be valid kernels. In the case of structured data, similarities based on the popular edit distance often do not satisfy this requirement, which explains why they are typically used with k-nearest neighbors (k-NN). A common approach to using such edit similarities in SVM is to transform them into potentially (but not provably) valid kernels. Recently, a different theory of learning with (e,g,t)-good similarity functions was proposed, allowing the use of non-kernel similarity functions. Moreover, the resulting models are supposedly sparse, as opposed to standard SVM models, which can be unnecessarily dense. In this paper, we study the relevance and applicability of this theory in the context of string edit similarities. We show that they are naturally good for a given string classification task and provide experimental evidence that the obtained models not only clearly outperform the k-NN approach, but are also competitive with standard SVM models learned with state-of-the-art edit kernels, while being much sparser.
"An Experimental Study on Learning with Good Edit Similarity Functions" — A. Bellet, M. Sebban, Amaury Habrard. In: 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence. DOI: 10.1109/ICTAI.2011.27
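The edit distance underlying these similarities is the classic Levenshtein dynamic program; a short sketch follows, together with one common way (ours, for illustration; the paper's exact transform may differ) of rescaling the distance into a similarity in [-1, 1], the range the (e,g,t)-goodness framework works with:

```python
def levenshtein(s, t):
    """Edit distance with unit-cost insertions, deletions, and
    substitutions, using a rolling single-row dynamic program."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # delete from s
                           cur[j - 1] + 1,       # insert into s
                           prev[j - 1] + (cs != ct)))  # (mis)match
        prev = cur
    return prev[-1]

def edit_similarity(s, t):
    """Length-normalized similarity in [-1, 1]: 1 for identical strings,
    approaching -1 for maximally distant ones (illustrative choice)."""
    return 1 - 2 * levenshtein(s, t) / max(1, len(s) + len(t))
```

Note that such a similarity is not guaranteed to be positive semi-definite, which is precisely why the goodness framework, rather than the kernel framework, applies.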
Vibration signals play a valuable role in the remote monitoring of high-assurance machinery such as ocean turbines. Because they are waveforms, vibration data must be transformed prior to being incorporated into a machine condition monitoring/prognostic health monitoring (MCM/PHM) solution to detect which frequencies of oscillation are most prevalent. One downside of these transformations, especially the streaming version of the wavelet packet decomposition (denoted SWPD), is that they can produce a large number of features, hindering the model building and evaluation process. In this paper we demonstrate how feature selection techniques may be applied to the output of the SWPD transformation, vastly reducing the total number of features used to build models. The resulting data can be used to build more accurate models for use in MCM/PHM while minimizing computation time.
"Feature Selection for Vibration Sensor Data Transformed by a Streaming Wavelet Packet Decomposition" — Randall Wald, T. Khoshgoftaar, J. Sloan. In: 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence. DOI: 10.1109/ICTAI.2011.168
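The feature blow-up the abstract refers to is easy to see in a toy wavelet packet decomposition (our sketch with the simple Haar filter, not the streaming SWPD implementation): unlike the plain wavelet transform, a packet decomposition splits both the approximation and the detail branch at every level, so a depth-k analysis of an n-sample window yields 2**k nodes holding n coefficients in total per level.

```python
from math import sqrt

def haar_step(signal):
    """One Haar level: pairwise scaled sums (approximation) and
    differences (detail), each half the input length."""
    a = [(signal[i] + signal[i + 1]) / sqrt(2) for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / sqrt(2) for i in range(0, len(signal), 2)]
    return a, d

def wavelet_packet(signal, depth):
    """Full packet tree: every node, approximation and detail alike,
    is split again at each level, doubling the node count."""
    nodes = [signal]
    for _ in range(depth):
        nodes = [half for node in nodes for half in haar_step(node)]
    return nodes
```

Statistics over each of those 2**k frequency bands become candidate features, which is why a selection step is needed before model building.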
A swarm of robots deployed in dynamic, hostile environments may encounter situations that can prevent them from achieving optimality or completing certain tasks. To resolve these situations, the robots must have an adaptive software system that can proactively cope with changes. This adaptive system should emulate the intelligence of human reasoning and common sense but must not assume that the robots can communicate, be tightly coupled, or be constantly at a close range. This paper presents a path strategy evaluator (PSE) that learns an optimal path by considering not just the distance, but also how to minimize damages to each robot and enhance the likelihood that the swarm will succeed in its mission, all with minimal impositions on the functionality of the robots. Our evaluation shows that this PSE is able to learn a dynamic environment and its effect on the robots' critical components and output an optimal path for the robots.
"ROBUST Path Strategy Evaluator" — Angie Shia, F. Bastani, I. Yen. In: 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence. DOI: 10.1109/ICTAI.2011.91
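The idea of optimizing a path over more than raw distance can be sketched with Dijkstra's algorithm over a combined edge cost (an illustrative fixed weighting, not the PSE's learned model): each edge carries both a length and an expected-damage term, and the damage weight trades mission speed against robot attrition.

```python
import heapq

def safest_shortest_path(graph, start, goal, damage_weight=1.0):
    """Dijkstra over cost = length + damage_weight * damage.
    graph: {node: [(neighbor, length, damage), ...]}.
    Returns (total cost, path) or None if goal is unreachable."""
    pq = [(0.0, start, [start])]
    best = {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue
        best[node] = cost
        for nbr, length, damage in graph.get(node, []):
            heapq.heappush(
                pq, (cost + length + damage_weight * damage, nbr, path + [nbr]))
    return None
```

With the damage weight at zero the search degenerates to plain shortest path; raising it steers the swarm around hazardous but geometrically shorter routes.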