Solving E-Squaro through SAT-Coding
É. Grégoire, A. Hasni, Bertrand Mazure, Cédric Piette (ICTAI 2013, DOI: 10.1109/ICTAI.2013.145)

In this paper, we show that the E-SquarO puzzle, which is an extension of the popular SquarO game, is NP-complete. We propose a SAT encoding of E-SquarO and investigate its practical computational properties.

An Empirical Study of Robustness of Network Centrality Scores in Various Networks and Conditions
Matthew Herland, Pablo Pastran, Xingquan Zhu (ICTAI 2013, DOI: 10.1109/ICTAI.2013.42)

Network centrality scores are an important measure of the importance and roles of individual nodes in a network, and they are also vital for assessing a network's overall structure and connectivity. Indeed, nearly all network mining algorithms, such as social network community detection and link prediction, involve some type of centrality score to some extent. Despite this importance, few studies have empirically analyzed the robustness of these measures in different network environments. We know very little about how centrality scores behave at the macro (network) and micro (individual node) levels. At the network level, what are the inherent connections between network topology and centrality scores? Is a sparse network more (or less) robust in its centrality scores when changes are introduced to the network? At the node level, which types of nodes (high or low degree) are more sensitive in their centrality scores when changes are imposed on the network? And which centrality score is more reliable in revealing the genuine network structure? In this paper, we empirically analyze the robustness of three centrality scores, betweenness, closeness, and eigenvector centrality, for various types of networks. We systematically introduce biased and unbiased changes to the networks by adding and removing different percentages of edges and nodes, which allows us to compare and analyze the robustness and sensitivity of each centrality measure. Our empirical study yields important findings that help explain the behavior of centrality scores in different social networks.

Three-Valued Possibilistic Networks: Semantics & Inference
S. Benferhat, Jérôme Delobelle, Karim Tabia (ICTAI 2013, DOI: 10.1109/ICTAI.2013.17)

Possibilistic networks are belief graphical models based on possibility theory. This paper deals with a special kind of possibilistic networks, called three-valued possibilistic networks, in which only three possibility levels are used to encode uncertain information. The paper analyzes different semantics of three-valued networks and establishes precise relationships between them. More precisely, it analyzes two categories of methods for deriving a three-valued joint possibility distribution from a three-valued possibilistic network. The first category is based on viewing a three-valued possibilistic network as a family of compatible networks and defining combination rules for deriving the three-valued joint distribution. The second category is based on three-valued chain rules using three-valued operators inspired by some three-valued logics. Finally, the paper shows that inference using the well-known junction tree algorithm can only be extended to some of the three-valued chain rules.

Modified Conversational Agent Architecture
Tomás Nestorovic, V. Matousek (ICTAI 2013, DOI: 10.1109/ICTAI.2013.106)

This paper concerns BDI agent-based dialog management with novel approaches to the Beliefs and Intentions components. We show that these modifications lead to an improvement of approximately 5% in terms of information exchange rate.

A Genetic Algorithm for Optimizing the Label Ordering in Multi-label Classifier Chains
Eduardo Corrêa Gonçalves, A. Plastino, A. Freitas (ICTAI 2013, DOI: 10.1109/ICTAI.2013.76)

First proposed in 2009, the classifier chains model (CC) has become one of the most influential algorithms for multi-label classification. It is distinguished by its simple and effective approach to exploiting label dependencies. The CC method trains q single-label binary classifiers, each solely responsible for classifying a specific label in l1, ..., lq. These q classifiers are linked in a chain, so that each binary classifier can consider the labels predicted by the previous ones as additional information at classification time. The label ordering has a strong effect on predictive accuracy; however, it is usually decided at random and/or by combining random orders via an ensemble. A disadvantage of the ensemble approach is that it is not suitable when the goal is to generate interpretable classifiers. To tackle this problem, we propose a genetic algorithm for optimizing the label ordering in classifier chains. Experiments on diverse benchmark datasets, followed by the Wilcoxon test for assessing statistical significance, indicate that the proposed strategy produces more accurate classifiers.

Conflict Analysis and Branching Heuristics in the Search for Graph Automorphisms
Paolo Codenotti, H. Katebi, K. Sakallah, I. Markov (ICTAI 2013, DOI: 10.1109/ICTAI.2013.139)

We adapt techniques from the constraint programming and satisfiability literatures to expedite the search for graph automorphisms. Specifically, we implement conflict-driven backjumping, several branching heuristics, and restarts. To support backjumping, we extend high-performance search for graph automorphisms with a novel framework for conflict analysis. Empirically, these techniques improve performance by up to several orders of magnitude.

Visual Scenes Categorization Using a Flexible Hierarchical Mixture Model Supporting Users Ontology
Taoufik Bdiri, N. Bouguila, D. Ziou (ICTAI 2013, DOI: 10.1109/ICTAI.2013.48)

We introduce a novel hierarchical mixture model in which each component is composed of a set of finite probability densities forming a super-class mixture. The proposed model can be viewed as a mixture of mixtures supporting multi-level hierarchies, where the structure of the hierarchy can be altered according to users' ontological models at negligible computational cost. The approach is generalized to adopt any probability density function, and an algorithm to learn the model is proposed. In this paper, we adopt the inverted Dirichlet distribution to build the model, and a simulation study validates the proposed approach on synthetic data and on a challenging real-world application concerning visual scene categorization.

Implementing Tabled Hypothetical Datalog
F. Sáenz-Pérez (ICTAI 2013, DOI: 10.1109/ICTAI.2013.94)

Hypothetical Datalog is based on an intuitionistic rather than a classical logic semantics and allows embedded implications in rule bodies. While the usual implication (i.e., the neck of a Horn clause) stands for inferring facts, an embedded implication plays the role of assuming its premise in order to derive its consequence. Although this topic has received considerable attention over time and is now gaining renewed interest, there has been no tabled implementation of hypothetical Datalog. We present such a proposal, including the formal background and its application to a goal-oriented tabled setting with negation, where non-monotonicity due to negation and implication is handled via stratification and contexts. In addition, we implement it in the deductive system DES, also providing support for duplicates and integrity constraints in the hypothetical framework.

On the Influence of the Number of Objectives in Evolutionary Autonomous Software Agent Testing
S. Kalboussi, Slim Bechikh, M. Kessentini, L. B. Said (ICTAI 2013, DOI: 10.1109/ICTAI.2013.43)

Autonomous software agents are increasingly used in a wide range of applications, so testing these entities is crucial. However, testing autonomous agents remains a hard task, since they may react in different ways to the same input over time. To address this problem, Nguyen et al. [6] introduced the first approach that uses evolutionary optimization to search for challenging test cases. In this paper, we extend this work by studying experimentally the effect of the number of objectives on the obtained test cases. This is achieved by proposing five additional objectives and solving the resulting problem by means of a Preference-based Many-Objective Evolutionary Testing (P-MOET) method. The results show that the hardness of the test cases increases as the number of objectives rises.

Study and Development of Support Tool with Blinks for Physically Handicapped Children
Ippei Torii, Kaoruko Ohtani, T. Niwa, N. Ishii (ICTAI 2013, DOI: 10.1109/ICTAI.2013.27)

In this study, we develop a new application that allows physically handicapped children to communicate with others by blinking. Because of limited body movements and mental disorders, many of them cannot communicate with their families or caregivers. If they can operate a smartphone application with a blink, it will be a great help in telling caregivers what they really need or want to say. First, we detect the eye area using OpenCV. We then develop a method to detect the opening and closing of the eyes, combining a situation-based method with an image-complexity-based method to detect blinks more accurately. Because the level of disability varies greatly among children, we aim to make the application customizable to each user's situation. We also work to reduce blink-detection errors and to achieve high precision in the eye-tracking program.
