A deliberative and reactive diagnosis agent based on logic programming. Michael Schroeder, I. D. Móra, L. Pereira. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560771
We give a brief overview of the architecture of a diagnosis agent. We employ logic and logic programming to specify and implement the agent: the knowledge base uses extended logic programming to specify the agent's behaviour and its knowledge about the system to be diagnosed. The inference machine, which provides the algorithms for computing diagnoses, and the reactive layer, which realises a meta-interpreter for the agent's behaviour, are implemented in PVM-Prolog, which extends standard Prolog with message-passing facilities.
Effects of different types of new attribute on constructive induction. Zijian Zheng. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560459
This paper studies the effects on decision tree learning of constructing four types of new attribute (conjunctive, disjunctive, M-of-N, and X-of-N representations). To reduce the effects of other factors, such as tree learning methods, new-attribute search strategies, evaluation functions, and stopping criteria, a single tree learning algorithm is developed: with different option settings it can construct the four different types of new attribute, while all other factors remain fixed. The study reveals that conjunctive and disjunctive representations have very similar performance in terms of prediction accuracy and theory complexity on a variety of concepts. Moreover, it demonstrates that the greater representational power of M-of-N over conjunction and disjunction, and of X-of-N over the other three types, is reflected in the performance of decision tree learning.
Computer generated intelligent companions for distributed virtual environments. Mark Edwards, E. Santos, S. Banks, M. Stytz. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560779
The employment of computer generated forces (CGFs) within distributed virtual environments (DVEs) dramatically increases the number of entities in a simulated training environment. However, current CGF limitations produce behaviours that can be defeated using methods that are ineffective against humans. Our research focuses on developing aircraft CGFs, which must deal with uncertainty, ambiguity, and approximation. The Fuzzy Wingman (FW) relies on fuzzy logic to provide these abilities. In this manner, the FW offers a practical approach to populating the simulated training environment with low-cost CGFs while maintaining the realism of training with human-controlled entities.
Conflict analysis in search algorithms for satisfiability. Joao Marques-Silva, K. Sakallah. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560789
Introduces GRASP (Generic seaRch Algorithm for the Satisfiability Problem), a new search algorithm for propositional satisfiability (SAT). GRASP incorporates several search-pruning techniques, some of which are specific to SAT, whereas others have equivalents in other fields of artificial intelligence. GRASP is premised on the inevitability of conflicts during a search, and its most distinguishing feature is the augmentation of the basic backtracking search with a powerful conflict analysis procedure. Analyzing conflicts to determine their causes enables GRASP to backtrack non-chronologically to earlier levels in the search tree, potentially pruning large portions of the search space. In addition, by "recording" the causes of conflicts, GRASP can recognize and preempt the occurrence of similar conflicts later on in the search. Finally, straightforward bookkeeping of the causality chains leading up to conflicts allows GRASP to identify assignments that are necessary for a solution to be found. Experimental results obtained from a large number of benchmarks indicate that applying the proposed conflict analysis techniques to SAT algorithms can be extremely effective for a large number of representative classes of SAT instances.
Merging test and verification for rule base debugging. F. Bouali, S. Loiseau, M. Rousset. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560447
A way of formally, but partially, characterizing knowledge base correctness is to define knowledge base coherency. The first contribution of this paper is to show how taking test cases into account leads to a new definition of rule base coherency that improves on existing ones. Our second contribution is a set of extensions to model-based diagnosis that enable the complete characterization of rule base incoherencies and their possible causes. As a result, we obtain an algorithm for both detecting incoherencies and debugging rule bases.
A new genetic algorithm using large mutation rates and population-elitist selection (GALME). H. Shimodaira. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560396
Genetic algorithms (GAs) are promising for function optimization. Methods for function optimization must balance local search with global search, and it is recognized that the traditional GA is not well suited to local search. I have tested algorithms combining various ideas in order to develop a new genetic algorithm that finds the global optimum effectively. The results show that the performance of a genetic algorithm using large mutation rates and population-elitist selection (GALME) is superior. This paper describes GALME and its theoretical justification, and presents experimental results comparing it with the traditional GA. Within the range of the experiments, the performance of GALME turns out to be remarkably superior to that of the traditional GA.
Arc-consistency in dynamic CSPs is no more prohibitive. R. Debruyne. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560467
Constraint satisfaction problems (CSPs) are widely used in artificial intelligence. Since deciding whether a CSP has a solution is NP-complete, filtering techniques, and arc-consistency in particular, are essential: they remove some local inconsistencies and so make the search easier. Since many problems in AI require a dynamic environment, the model was extended to dynamic CSPs (DCSPs), and several incremental arc-consistency algorithms have been proposed. However, all of them have important drawbacks. DnAC-4 has an expensive worst-case space complexity and a bad average time complexity. AC/DC has a non-optimal worst-case time complexity, which prevents it from taking advantage of its good space complexity. The algorithm we present in this paper has both lower space requirements and better time performance than DnAC-4, while keeping an optimal worst-case time complexity.
Incremental Markov-model planning. R. Washington. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560398
This paper presents an approach to building plans using partially observable Markov decision processes. The approach begins with a base solution that assumes full observability. The partially observable solution is incrementally constructed by considering increasing amounts of information from observations. The base solution directs the expansion of the plan by providing an evaluation function for the search fringe. We show that incremental observation moves from the base solution towards the complete solution, allowing the planner to model the uncertainty about action outcomes and observations that is present in real domains.
Resolution strategies for focusing a reason maintenance system. Rachid Yacoub, M. Dumas, Gilles Arnaud. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560471
The ATMS, as defined by de Kleer and used in a problem solver, loses its efficiency due to the exponential complexity of its algorithm. Improvements have been proposed: (a) computing only the labels of interesting data; (b) computing only certain environments, characterized by a focus. We propose to control a problem solver coupling a deduction system and a reason maintenance system (RMS) based on the resolution principle. In this paper, new classes of clauses and new resolution strategies are defined that integrate these improvements and reduce the work done by the RMS.
Clustering knowledge in tabular knowledge bases. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560430
Recently, there has been growing interest in the maintenance and efficiency of large knowledge-based systems. The decomposition of knowledge-based systems is recognized as an important research issue in this respect. We discuss the decomposition of knowledge bases that consist of decision tables, and propose several algorithms for decomposing large decision tables into smaller components.