Model-based diagnosis has the disadvantage of high computational complexity. One way to overcome this disadvantage is to focus the diagnosis on a reduced diagnostic space. We propose an improved critical diagnosis reasoning method based on the method of Raiman et al. (1993). The method focuses diagnosis on finding the kernel diagnoses rather than all diagnoses. We give an updated definition of the critical cover, which we call the "critical partition"; the conditions a critical partition must satisfy are relaxed compared with those for a critical cover. Correspondingly, we also propose a non-backtracking algorithm, Searching Critical Partition (SCP), for computing the critical partition.
{"title":"An improved critical diagnosis reasoning method","authors":"Yue Xu, Chengqi Zhang","doi":"10.1109/TAI.1996.560448","DOIUrl":"https://doi.org/10.1109/TAI.1996.560448","url":null,"abstract":"Model-based diagnosis has the disadvantage of a high computational complexity. One way to overcome this disadvantage is to focus the diagnosis on a reduced diagnostic space. We propose an improved critical diagnosis reasoning method based on the method proposed by (Raiman et al., 1993). The method focuses the diagnosis on finding out the kernel diagnoses instead of the whole diagnoses. We give an updated definition of critical cover which we call \"critical partition\". The conditions satisfied by critical partition are relaxed compared with the conditions for critical cover. Correspondingly, a non-backtracking algorithm called Searching Critical Partition (SCP) to find out the critical partition is also proposed.","PeriodicalId":209171,"journal":{"name":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122435215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Argument-based reasoning is a promising approach to handling inconsistent belief bases. The basic idea is to justify each plausible conclusion by acceptable arguments. The purpose of this paper is to reinforce the concept of acceptability by integrating preference orderings. Pursuing previous work on preference-based argumentation, the authors focus on the definition of preference relations for comparing conflicting arguments. They present a comparative study of several proposals. They then propose techniques for computing and comparing arguments, taking advantage of an assumption-based truth maintenance system (ATMS).
{"title":"Comparing arguments using preference orderings for argument-based reasoning","authors":"Leila Amgoud, C. Cayrol, Daniel Le Berre","doi":"10.1109/TAI.1996.560731","DOIUrl":"https://doi.org/10.1109/TAI.1996.560731","url":null,"abstract":"Argument-based reasoning is a promising approach to handle inconsistent belief bases. The basic idea is to justify each plausible conclusion by acceptable arguments. The purpose of the paper is to enforce the concept of acceptability by the integration of preference orderings. Pursuing previous work on preference-based argumentation, the authors focus on the definition of preference relations for comparing conflicting arguments. They present a comparative study of several proposals. They then propose techniques for computing and comparing arguments, taking advantage of an assumption-based truth maintenance system (ATMS).","PeriodicalId":209171,"journal":{"name":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115409283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In many applications, such as decision support, negotiation, planning, and scheduling, one needs to express requirements that can only be partially satisfied. In order to express such requirements, we propose a technique called forward-tracking. Intuitively, forward-tracking is a kind of dual of chronological back-tracking: if a program globally fails to find a solution, then a new execution is started from a program point and a state 'forward' in the computation tree. This search technique is applied to constraint logic programming, obtaining a powerful extension that preserves all the useful properties of the original scheme. We report on the successful practical application of forward-tracking to the evolutionary training of (constrained) neural networks.
{"title":"Forward-tracking: a technique for searching beyond failure","authors":"E. Marchiori, M. Marchiori, J. Kok","doi":"10.1109/TAI.1996.560472","DOIUrl":"https://doi.org/10.1109/TAI.1996.560472","url":null,"abstract":"In many applications, such as decision support, negotiation, planning, scheduling, etc., one needs to express requirements that can only be partially satisfied. In order to express such requirements, we propose a technique called forward-tracking. Intuitively, forward-tracking is a kind of dual of chronological back-tracking: if a program globally fails to find a solution, then a new execution is started from a program point and a state 'forward' in the computation tree. This search technique is applied to constraint logic programming, obtaining a powerful extension that preserves all the useful properties of the original scheme. We report on the successful practical application of forward-tracking to the evolutionary training of(constrained) neural networks.","PeriodicalId":209171,"journal":{"name":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123425845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attribute-oriented induction summarizes the information in a relational database by repeatedly replacing specific attribute values with more general concepts according to user-defined concept hierarchies. We show how domain generalization graphs can be constructed from multiple concept hierarchies associated with an attribute, describe how these graphs can be used to control the generalization of a set of attributes, and present the Multi-Attribute Generalization algorithm for attribute-oriented induction using domain generalization graphs. Based upon a generate-and-test approach, the algorithm generates all possible combinations of nodes from the domain generalization graphs associated with the individual attributes, to produce all possible generalized relations for the set of attributes. We rank the interestingness of the resulting generalized relations using measures based upon relative entropy and variance. Our experiments show that these measures provide a basis for analyzing summary data from relational databases. Variance appears more useful because it tends to rank the less complex generalized relations (i.e., those with few attributes and/or few tuples) as more interesting.
{"title":"Attribute-oriented induction using domain generalization graphs","authors":"Howard J. Hamilton, Robert J. Hilderman, N. Cercone","doi":"10.1109/TAI.1996.560458","DOIUrl":"https://doi.org/10.1109/TAI.1996.560458","url":null,"abstract":"Attribute-oriented induction summarizes the information in a relational database by repeatedly replacing specific attribute values with more general concepts according to user-defined concept hierarchies. We show how domain generalization graphs can be constructed from multiple concept hierarchies associated with an attribute, describe how these graphs can be used to control the generalization of a set of attributes, and present the Multi-Attribute Generalization algorithm for attribute-oriented induction using domain generalization graphs. Based upon a generate-and-test approach, the algorithm generates all possible combinations of nodes from the domain generalization graphs associated with the individual attributes, to produce all possible generalized relations for the set of attributes. We rant the interestingness of the resulting generalized relations using measures based upon relative entropy and variance. Our experiments show that these measures provide a basis for analyzing summary data from relational databases. Variance appears more useful because it tends to rank the less complex generalized relations (i.e., those with few attributes and/or few tuples) as more interesting.","PeriodicalId":209171,"journal":{"name":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126593469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We examine some natural language uses of a new type of logic grammars called Assumption Grammars, particularly suitable for hypothetical reasoning. They are based on intuitionistic and linear implications scoped over the current continuation, which allow us to follow given branches of the computation under hypotheses that disappear when and if backtracking takes place. We show how Assumption Grammars can simplify the treatment of some crucial computational linguistics problems, e.g. long distance dependencies, while simultaneously facilitating more readable grammars.
{"title":"A hypothetical reasoning based framework for NL processing","authors":"V. Dahl, A. Fall, Stephen Rochefort, Paul Tarau","doi":"10.1109/TAI.1996.560402","DOIUrl":"https://doi.org/10.1109/TAI.1996.560402","url":null,"abstract":"We examine some natural language uses of a new type of logic grammars called Assumption Grammars, particularly suitable for hypothetical reasoning. They are based on intuitionistic and linear implications scoped over the current continuation, which allow us to follow given branches of the computation under hypotheses that disappear when and if backtracking takes place. We show how Assumption Grammars can simplify the treatment of some crucial computational linguistics problems, e.g. long distance dependencies, while simultaneously facilitating more readable grammars.","PeriodicalId":209171,"journal":{"name":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116873830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The problem is the transformation of a conjunctive normal form (CNF) into a disjunctive normal form (DNF) minimized with respect to inclusion, and vice versa. This operation is called the unionist product. For a CNF (resp. DNF), one pass of the unionist product provides the prime implicants (resp. implicates); two passes provide the prime implicates (resp. implicants). An algorithm built upon the classical Davis and Putnam procedure is presented for calculating this unionist product without explicit minimization with respect to inclusion.
{"title":"Computation of prime implicates and prime implicants by a variant of the Davis and Putnam procedure","authors":"T. Castell","doi":"10.1109/TAI.1996.560739","DOIUrl":"https://doi.org/10.1109/TAI.1996.560739","url":null,"abstract":"The problem is the transformation of a conjunctive normal form (CNF) into a minimized (for the inclusion operator) disjunctive normal form (DNF) and vice versa. This operation is called the unionist product. For a CNF (resp. DNF), one pass of the unionist product provides the prime implicants (resp. implicates); two passes provide the prime implicates (resp. implicants). An algorithm built upon the classical Davis and Putnam procedure is presented for calculating, without the explicit minimization for the inclusion, this unionist product.","PeriodicalId":209171,"journal":{"name":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125267187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes subdefinite models as a variety of constraint satisfaction problems. The use of the method of subdefinite calculations makes it possible to solve overdetermined and underdetermined problems, as well as problems with uncertain, imprecise and incomplete data. Constraint propagation in all these problems is supported by a single data-driven inference algorithm. Several examples are given to show the capabilities of this approach for solving a wide class of problems.
{"title":"Subdefinite models as a variety of constraint programming","authors":"V. Telerman, Dmitry Ushakov","doi":"10.1109/TAI.1996.560446","DOIUrl":"https://doi.org/10.1109/TAI.1996.560446","url":null,"abstract":"This paper describes subdefinite models as a variety of constraint satisfaction problems. The use of the method of subdefinite calculations makes it possible to solve overdetermined and underdetermined problems, as well as problems with uncertain, imprecise and incomplete data. Constraint propagation in all these problems is supported by a single data-driven inference algorithm. Several examples are given to show the capabilities of this approach for solving a wide class of problems.","PeriodicalId":209171,"journal":{"name":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114333864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper addresses the problem of efficiently updating a network of temporal constraints when constraints are removed from or added to an existing network. Such processing tasks are important in many AI applications requiring a temporal reasoning module. First we analyze the relationship between shortest-paths algorithms for directed graphs and arc-consistency techniques. Then we focus on a subclass of the Simple Temporal Problem (STP), for which we propose new fast incremental algorithms for consistency checking and for maintaining the feasible times of the temporal variables.
{"title":"Incremental algorithms for managing temporal constraints","authors":"A. Gerevini, A. Perini, F. Ricci","doi":"10.1109/TAI.1996.560477","DOIUrl":"https://doi.org/10.1109/TAI.1996.560477","url":null,"abstract":"This paper addresses the problem of efficiently updating a network of temporal constraints when constraints are removed from or added to an existing network. Such processing tasks are important in many AI applications requiring a temporal reasoning module. First we analyze the relationship between shortest-paths algorithms for directed graphs and arc-consistency techniques. Then we focus on a subclass of STP for which we propose new fast incremental algorithms for consistency checking and for maintaining the feasible times of the temporal variables.","PeriodicalId":209171,"journal":{"name":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116619065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generic expert systems are reasoning systems that can be used in many application domains, thus requiring domain independence. The user interface for a generic expert system must contain intelligence in order to maintain this domain independence and manage the complex interactions between the user and the expert system. This paper explores the uncertainty-based reasoning contained in an intelligent user interface called GESIA. GESIA's interface architecture and dynamically constructed Bayesian network are examined in detail to show how uncertainty-based reasoning enhances the capabilities of this user interface.
{"title":"GESIA: uncertainty-based reasoning for a generic expert system intelligent user interface","authors":"R. A. Harrington, S. Banks, E. Santos","doi":"10.1109/TAI.1996.560400","DOIUrl":"https://doi.org/10.1109/TAI.1996.560400","url":null,"abstract":"Generic expert systems are reasoning systems that can be used in many application domains, thus requiring domain independence. The user interface for a generic expert system must contain intelligence in order to maintain this domain independence and manage the complex interactions between the user and the expert system. This paper explores the uncertainty-based reasoning contained in an intelligent user interface called GESIA. GESIA's interface architecture and dynamically constructed Bayesian network are examined in detail to show how uncertainty-based reasoning enhances the capabilities of this user interface.","PeriodicalId":209171,"journal":{"name":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","volume":"48 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116647481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural network models of semiconductor manufacturing processes offer advantages in accuracy and generalization over traditional methods. However, model development is complicated by the fact that backpropagation neural networks contain several adjustable parameters whose optimal values are initially unknown. These include learning rate, momentum, training tolerance, and the number of hidden layer neurons. This paper investigates the use of genetic algorithms (GAs) to determine the optimal neural network parameters for modeling plasma-enhanced chemical vapor deposition (PECVD) of silicon dioxide films. To find an optimal parameter set for the PECVD models, a performance index is defined and used in the GA objective function. This index accounts for both prediction error and training error, with a higher emphasis on reducing prediction error. Results of the genetic search are compared with a similar search using the simplex algorithm. The GA search performed approximately 10% better in reducing training error and 66% better in reducing prediction error.
{"title":"Optimization of neural network structure and learning parameters using genetic algorithms","authors":"Seung-Soo Han, G. May","doi":"10.1109/TAI.1996.560452","DOIUrl":"https://doi.org/10.1109/TAI.1996.560452","url":null,"abstract":"Neural network models of semiconductor manufacturing processes offer advantages in accuracy and generalization over traditional methods. However, model development is complicated by the fact that backpropagation neural networks contain several adjustable parameters whose optimal values are initially unknown. These include learning rate, momentum, training tolerance, and the number of hidden layer neurons. This paper investigates the use of genetic algorithms (GAs) to determine the optimal neural network parameters for modeling plasma-enhanced chemical vapor deposition (PECVD) of silicon dioxide films. To find an optimal parameter set for the PECVD models, a performance matrix is defined and used in the GA objective function. This index accounts for both prediction error as well as training error, with a higher emphasis on reducing prediction error. Results of the genetic search are compared with a similar search using the simplex algorithm. The GA search performed approximately 10% better in reducing training error and 66% better in reducing prediction error.","PeriodicalId":209171,"journal":{"name":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115594948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}