Neural networks based on wavelets are constructed to study function learning problems. Two types of learning algorithms, overall multilevel learning (OML) and pyramidal multilevel learning (PML) with genetic neuron selection, are compared for convergence rate and accuracy using data samples of a piecewise-defined signal. The two algorithms are also examined with orthogonal and non-orthogonal bases. Experimental studies show that the string representation used by the genetic algorithm (GA) is a key issue in determining suitable network structures and the function-approximation performance of the two learning algorithms.
Jing-Wein Wang, J.-S. Pan, C. H. Chen, and H. Fang, "Wavelet-based signal approximation with multilevel learning algorithms using genetic neuron selection," in Proceedings Tenth IEEE International Conference on Tools with Artificial Intelligence (Cat. No.98CH36294), Nov. 1998. doi:10.1109/TAI.1998.744863
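The OML and PML algorithms themselves are not spelled out in the abstract. As a rough illustration of the underlying idea of a wavelet-based network, the sketch below fits a linear combination of Mexican-hat wavelet neurons to a piecewise-defined signal by plain LMS gradient descent; the signal, the neuron grid, and all training settings are illustrative assumptions, not the authors' method.

```python
import math

def mexican_hat(t):
    # Mexican-hat (Ricker) mother wavelet.
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

def target(x):
    # Illustrative piecewise-defined signal (not the paper's data).
    return math.sin(4.0 * x) if x < 0.5 else 0.5 * x

# Wavelet neurons on a fixed translation grid with a shared dilation;
# only the output-layer weights are trained (plain LMS gradient descent).
centers = [i / 8.0 for i in range(9)]
scale = 0.15
xs = [i / 63.0 for i in range(64)]
ys = [target(x) for x in xs]
phis = [[mexican_hat((x - b) / scale) for b in centers] for x in xs]

w = [0.0] * len(centers)
lr = 0.05
for _ in range(2000):
    for phi, y in zip(phis, ys):
        err = sum(wj * pj for wj, pj in zip(w, phi)) - y
        for j, pj in enumerate(phi):
            w[j] -= lr * err * pj

# Mean squared error of the trained network on the training samples.
mse = sum((sum(wj * pj for wj, pj in zip(w, phi)) - y) ** 2
          for phi, y in zip(phis, ys)) / len(xs)
```

Because the model is linear in its output weights, LMS converges toward the least-squares fit; most of the residual error concentrates near the signal's discontinuity, where smooth wavelets cannot match the jump exactly.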
KJ3 is the first system to integrate theorem-proving techniques with a Petri-net description scheme for knowledge validation of rule-based systems (RBSs). By converting the validation tasks of an RBS into reachability problems of an Enhanced High-Level Petri Net (EHLPN), KJ3 performs validation by proving whether the hypothesized reachability problem holds; establishing the hypothesis corresponds to accomplishing the validation task. Since properties of RBSs such as refraction, conservation of facts, variables, the closed-world assumption, and negative information can all be properly represented and handled by an EHLPN, KJ3 can process different types of RBSs. Because checking user specifications reduces to investigating reachability problems of the EHLPN, all types of validation tasks can be handled by KJ3. Validation results can be extracted directly from the inference process, allowing users to explain them. The inference process is mathematically traceable, sound, and complete, so KJ3 guarantees that the validation outcome is reliable.
Chih-Hung Wu and Shie-Jue Lee, "KJ3-a tool for proving formal specifications of rule-based expert systems," in Proceedings Tenth IEEE International Conference on Tools with Artificial Intelligence (Cat. No.98CH36294), Nov. 1998. doi:10.1109/TAI.1998.744859
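The abstract reduces validation to EHLPN reachability. As a minimal illustration of the reachability idea only, on a plain place/transition net rather than an EHLPN, and not KJ3's actual inference engine, one can search the marking space breadth-first:

```python
from collections import deque

def reachable(initial, target, transitions, limit=10000):
    """Breadth-first search over markings of a plain place/transition net.

    initial, target: tuples of token counts, one entry per place.
    transitions: list of (consume, produce) pairs of per-place token counts.
    Returns True if the target marking is reachable from the initial one.
    """
    seen = {initial}
    queue = deque([initial])
    while queue and len(seen) < limit:
        m = queue.popleft()
        if m == target:
            return True
        for consume, produce in transitions:
            if all(mi >= ci for mi, ci in zip(m, consume)):  # transition enabled
                nxt = tuple(mi - ci + pi
                            for mi, ci, pi in zip(m, consume, produce))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return target in seen

# Toy net encoding the rule "A and B -> C": one transition consuming a token
# from each of places A and B and producing one in C.  Places: (A, B, C).
t_rule = ((1, 1, 0), (0, 0, 1))
```

Checking whether conclusion C is derivable from a set of facts then amounts to asking whether the marking with a token in C is reachable, which mirrors how KJ3 turns validation tasks into reachability questions.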
This paper proposes a hybrid case-based system to assist the physician. It comprises a hypermedia human-machine interface and a hybrid case-based reasoner. The hypermedia interface provides a friendly human-body image map through which the clinician can easily enter a given consultation, and it uses a medicine-related commonsense knowledge base to help complete the input data during the consultation. The hybrid case-based reasoner selects relevant cases from the case library and adapts them into a diagnosis for the consultation. It accomplishes these tasks by hybridizing several techniques: a distributed fuzzy neural network for case retrieval, and decision theory, constrained induction trees, and relevance theory for case adaptation involving case combination. The same techniques are also used to learn new cases into the case library. Hybridizing these techniques can effectively produce a high-quality diagnosis for a given medical consultation.
Chien-Chang Hsu and Cheng-Seen Ho, "A hybrid case-based medical diagnosis system," in Proceedings Tenth IEEE International Conference on Tools with Artificial Intelligence (Cat. No.98CH36294), Nov. 1998. doi:10.1109/TAI.1998.744865
The Guided Genetic Algorithm (GGA) is a hybrid of the genetic algorithm (GA) and the meta-heuristic search algorithm Guided Local Search (GLS). It builds on the framework and robustness of the GA and integrates GLS's conceptual simplicity and effectiveness to arrive at a flexible algorithm well suited to constraint optimization problems. GGA adds to the canonical GA the concepts of a penalty operator and fitness templates; during operation, it modifies both the fitness function and the fitness templates of candidate solutions based on feedback from the constraints. The Generalized Assignment Problem (GAP) is a well-explored NP-hard problem with practical real-world instances. In the GAP, one must find the optimal assignment of a set of jobs to a group of agents, where each job can be performed by only one agent, each agent has a limited work capacity, and assigning different jobs to different agents involves different utilities and resource requirements, all of which affect the choice of job allocation. The paper reports on GGA and its successful application to the GAP.
T. Lau and E. Tsang, "The guided genetic algorithm and its application to the generalized assignment problem," in Proceedings Tenth IEEE International Conference on Tools with Artificial Intelligence (Cat. No.98CH36294), Nov. 1998. doi:10.1109/TAI.1998.744862
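GGA's penalty operator and fitness templates are described only at a high level here. The sketch below shows the related but simpler idea of a canonical GA with a static capacity-violation penalty on a toy GAP instance; the instance data, penalty weight, and genetic operators are all illustrative assumptions, not GGA itself.

```python
import random

random.seed(1)

# Toy GAP: COST[j][a] = cost of giving job j to agent a; USE is the resource
# each assignment consumes, CAP the per-agent capacity.
COST = [[4, 2], [3, 6], [5, 3], [2, 4]]
USE  = [[2, 3], [3, 2], [2, 2], [3, 3]]
CAP  = [5, 6]
N_JOBS, N_AGENTS = 4, 2

def fitness(chrom):
    # Assignment cost plus a penalty per unit of violated agent capacity
    # (lower is better); constraint feedback enters through the penalty term.
    cost = sum(COST[j][a] for j, a in enumerate(chrom))
    load = [0] * N_AGENTS
    for j, a in enumerate(chrom):
        load[a] += USE[j][a]
    penalty = sum(max(0, l - c) for l, c in zip(load, CAP))
    return cost + 10 * penalty

def evolve(pop_size=30, gens=60):
    pop = [[random.randrange(N_AGENTS) for _ in range(N_JOBS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                    # elitist truncation selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, N_JOBS)    # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:            # mutation: reassign one job
                child[random.randrange(N_JOBS)] = random.randrange(N_AGENTS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

On this 4-job, 2-agent instance the optimal feasible cost is 14; the static penalty (weight 10 here) makes every infeasible chromosome score worse than any feasible one, which is the simplest stand-in for GGA's adaptive constraint feedback.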
The traveling salesman problem (TSP), a typical combinatorial-explosion problem, has been well studied in AI, and neural-network approaches to it have been widely surveyed as well. The Hopfield neural network is commonly used for its fast convergence toward an optimal solution, but it often becomes trapped in a local minimum. Stochastic simulated annealing has the advantage of providing a chance to escape from local minima and thus of finding the optimal solution. Combining the salient characteristics of the Hopfield network structure with the stochastic simulated annealing algorithm yields the so-called mean field annealing technique. We present a complicated job-scheduling problem for a multiprocessor with multiple process instances, subject to execution-time limits, inhibited process migration, and bounded available resources. An energy-based equation, whose structure depends on the precise constraints and acceptable solutions, is first developed using an extended 3D Hopfield neural network (HNN) and the normalized mean field annealing (MFA) technique; a variant of mean field annealing was investigated as well. A modified cooling procedure that accelerates reaching equilibrium under normalized mean field annealing was also applied. Simulation results show that the derived energy function works effectively and that both schemes obtain good, valid solutions for sophisticated scheduling instances.
Ruey-Maw Chen and Yueh-Min Huang, "Multiconstraint task scheduling in multi-processor system by neural network," in Proceedings Tenth IEEE International Conference on Tools with Artificial Intelligence (Cat. No.98CH36294), Nov. 1998. doi:10.1109/TAI.1998.744856
We deal with the evaluation of attribute information for mining classification rules. In a decision tree, each internal node corresponds to a decision on an attribute and each outgoing branch corresponds to a possible value of that attribute. The ordering of attributes across the levels of a decision tree affects the efficiency of the classification process and should be determined by the relevance of these attributes to the target class. We consider in this paper two different measures of relevance to the target class: inference power and information gain. Although both relate to relevance to the group identity, the two measures can in fact lead to different branching decisions. Depending on the stage of tree branching, they should be judiciously employed so as to maximize the effects they are designed for. The inference power and information gain of multiple attributes are also evaluated.
Ming-Syan Chen, "On the evaluation of attribute information for mining classification rules," in Proceedings Tenth IEEE International Conference on Tools with Artificial Intelligence (Cat. No.98CH36294), Nov. 1998. doi:10.1109/TAI.1998.744828
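Information gain, one of the two measures discussed, can be computed directly as the entropy reduction achieved by branching on an attribute (inference power is defined in the paper itself and is not reproduced here). The data below is a made-up toy example:

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (bits) of a class-label multiset.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from branching on attribute index `attr`."""
    base = entropy(labels)
    n = len(rows)
    split = {}
    for row, y in zip(rows, labels):
        split.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return base - remainder

# Toy data: attribute 0 perfectly predicts the class, attribute 1 is noise.
rows   = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
labels = ["+", "+", "-", "-"]
```

Here branching on attribute 0 yields a gain of 1 bit (the full class entropy), while attribute 1 yields 0; a gain-driven tree would therefore place attribute 0 at the root.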
This paper presents the agent architecture of an Internet application development tool called Distributed Interactive Web-site Builder (DIWB). Together with the component object model and a layering framework, the agent architecture can be used to build Internet applications that support individualized services. The DIWB can construct pages dynamically at runtime and can be easily customized for individual users. The architecture consists of two cooperating agents that compose pages at runtime using components and data stored in various databases (agencies). The page agent composes a page by retrieving the page definition and requesting the component agent to construct the individual components. The component agent retrieves user preferences and page-component definitions from the databases and returns the results to the page agent.
W. Shao, W. Tsai, Sanjai Rayadurgam, and Robert Lai, "An agent architecture for supporting individualized services in Internet applications," in Proceedings Tenth IEEE International Conference on Tools with Artificial Intelligence (Cat. No.98CH36294), Nov. 1998. doi:10.1109/TAI.1998.744830
Mining sequential patterns in a transactional database is time-consuming due to its complexity, and maintaining the discovered patterns after a database update is a non-trivial task, since appended data sequences may invalidate old patterns and create new ones. In contrast to re-mining, the key to improving mining performance in the proposed incremental update algorithm is to utilize the discovered knowledge effectively. By counting over the appended data sequences instead of the entire updated database in most cases, fast filtering of the patterns found in the last mining, together with successive reductions of the candidate sequences, makes efficient updating of sequential patterns possible.
Ming-Yen Lin and Suh-Yin Lee, "Incremental update on sequential patterns in large databases," in Proceedings Tenth IEEE International Conference on Tools with Artificial Intelligence (Cat. No.98CH36294), Nov. 1998. doi:10.1109/TAI.1998.744749
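A much-simplified sketch of the counting idea: previously discovered patterns are re-counted only over the appended sequences and then filtered against the new support threshold. Generating patterns that become frequent only because of the appended data is omitted here, and all names and data are illustrative.

```python
def is_subsequence(pattern, sequence):
    """True if `pattern` occurs in `sequence` in order (not necessarily contiguously)."""
    it = iter(sequence)
    return all(item in it for item in pattern)  # `in` consumes the iterator

def incremental_update(old_counts, appended, min_sup, total_size):
    """Re-count candidate patterns over appended sequences only, then filter.

    old_counts: {pattern tuple: support count over the old database}
    appended:   newly appended data sequences
    min_sup:    minimum support as a fraction of the updated database
    total_size: number of sequences in the updated database
    """
    updated = {}
    threshold = min_sup * total_size
    for pattern, count in old_counts.items():
        count += sum(1 for seq in appended if is_subsequence(pattern, seq))
        if count >= threshold:  # fast filtering of previously found patterns
            updated[pattern] = count
    return updated

old = {("a", "b"): 3, ("a", "c"): 2}   # counts from the last mining, 4 sequences
appended = [["a", "x", "b"], ["c", "a"]]
patterns = incremental_update(old, appended, min_sup=0.5, total_size=6)
```

Only the two appended sequences are scanned: ("a", "b") gains one occurrence and survives the new threshold of 3, while ("a", "c") gains none and is filtered out, exactly the kind of work saving the abstract describes.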
The study presents a novel semiparametric prediction system for the Taiwan unemployment rate series. The prediction method incorporated into the system consists of a neural network model that estimates the trend and a Box-Jenkins prediction of the residual series. Response surface methodology is employed to find an appropriate setup of network parameters when the neural network is applied, and extensive studies are performed on the robustness of the built network model using different specified censoring strategies. Owing to the adaptability of the Box-Jenkins method, prediction intervals for the system can be successfully constructed. To demonstrate the effectiveness of the proposed method, the monthly unemployment rate from June 1983 to February 1992 is evaluated using the neural network model with the Box-Jenkins technique and compared against alternative methods, e.g. space-time series analysis, a univariate ARIMA model, and a state-space model. The analysis results demonstrate that the proposed method outperforms the other statistical methodologies.
Chih-Chou Chiu and C. Su, "A novel neural network model using Box-Jenkins technique and response surface methodology to predict unemployment rate," in Proceedings Tenth IEEE International Conference on Tools with Artificial Intelligence (Cat. No.98CH36294), Nov. 1998. doi:10.1109/TAI.1998.744775
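The semiparametric idea, a trend model plus a time-series model of its residuals, can be sketched with a linear trend and an AR(1) residual term standing in for the paper's neural network and Box-Jenkins components; the series below is made-up data, not the Taiwan unemployment figures.

```python
def hybrid_forecast(series):
    """One-step-ahead forecast: linear trend fit plus AR(1) model of the residuals."""
    n = len(series)
    xs = list(range(n))
    xbar, ybar = sum(xs) / n, sum(series) / n
    # Ordinary least-squares line through the series (the "trend" component).
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, series))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    resid = [y - (intercept + slope * x) for x, y in zip(xs, series)]
    # AR(1) coefficient via lag-1 least squares on the residual series.
    num = sum(resid[t - 1] * resid[t] for t in range(1, n))
    den = sum(r * r for r in resid[:-1])
    phi = num / den if den else 0.0
    trend_next = intercept + slope * n
    return trend_next + phi * resid[-1]

series = [2.0, 2.5, 2.2, 2.9, 2.7, 3.3, 3.1, 3.8]
forecast = hybrid_forecast(series)
```

The trend component extrapolates the long-run movement while the AR(1) term corrects it using the last residual, which is the same division of labor as the paper's neural-network trend plus Box-Jenkins residual model.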
Our research concerns formal, expressive, object-centred languages and tools for engineering planning applications. We extend our recent work on an object-centred language for encoding precondition planning domains to a language called OCL/sub h/, designed for HTN planning. Domain encodings for HTN planners are particularly troublesome because they tend to be used in knowledge-based applications requiring a great deal of 'domain engineering', and the abstract operators central to an HTN model do not share the fairly clear declarative semantics of concrete pre- and post-condition operators. Central to our approach is the parallel development of the abstract operator set and the hierarchical state specification of the objects the operators manipulate. We also define and illustrate a transparency property, together with a transparency-checking tool, which helps the developer encode a clear planning model in OCL/sub h/. Our encoding of the Translog domain serves as an extended example illustrating the approach.
T. McCluskey and D. Kitchin, "A tool-supported approach to engineering HTN planning models," in Proceedings Tenth IEEE International Conference on Tools with Artificial Intelligence (Cat. No.98CH36294), Nov. 1998. doi:10.1109/TAI.1998.744854