Auto-MPS: an automated master production scheduling system for large volume manufacturing
R. Arbon, G.G. Mally, T. Osborne, P.R. Riethmeier, R.L. Tharrett
Proceedings of the Tenth Conference on Artificial Intelligence for Applications. Pub Date: 1994-03-01. DOI: 10.1109/CAIA.1994.323696

The Automated Master Production Scheduler (Auto-MPS) is a hybrid expert scheduling system that schedules thousands of assemblies in a high-volume manufacturing environment. It generates schedules from a set of rules and constraint satisfaction algorithms that reflect the scheduling strategies created by management to meet customer demand while controlling inventory and shipping costs. Auto-MPS also flags significant situations that need to be analyzed by management. A graphical user interface with sophisticated graphical displays and hypertext-based editors lets the user easily understand the status of the current production schedules and rapidly identify and analyze potential problems. Auto-MPS has been in production for nearly two years and has significantly improved the scheduling processes at AlliedSignal Safety Restraint Systems.
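The abstract gives no implementation details; as a loose sketch of the rule-driven, constraint-bounded schedule generation it describes, consider a greedy scheduler where one management rule (most urgent coverage first) fills a capacity constraint. All names, rules, and data here are hypothetical, not from the paper.

```python
# Hypothetical sketch of rule-driven master scheduling under a capacity
# constraint; the prioritization rule and data are illustrative only.

def schedule(assemblies, capacity):
    """Greedily fill weekly capacity, prioritizing assemblies whose
    on-hand inventory covers the least of their outstanding demand."""
    # Rule: most urgent first = smallest inventory-to-demand ratio.
    ordered = sorted(assemblies, key=lambda a: a["on_hand"] / a["demand"])
    plan, used = {}, 0
    for a in ordered:
        qty = min(a["demand"] - a["on_hand"], capacity - used)
        if qty <= 0:
            continue
        plan[a["name"]] = qty
        used += qty
        if used == capacity:
            break
    return plan

assemblies = [
    {"name": "A1", "demand": 100, "on_hand": 20},
    {"name": "A2", "demand": 50, "on_hand": 45},
]
plan = schedule(assemblies, capacity=90)
```

A real master scheduler would layer many such rules and constraints; this shows only the shape of the idea.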
Using background knowledge to improve inductive learning of DNA sequences
H. Hirsh, M. Noordewier
Pub Date: 1994-03-01. DOI: 10.1109/CAIA.1994.323654

Successful inductive learning requires that training data be expressed in a form in which the underlying regularities can be recognized by the learning system. Unfortunately, many applications of inductive learning, especially in the domain of molecular biology, have assumed that data are provided in a form already suitable for learning, whether or not that assumption is actually justified. This paper describes the use of background knowledge of molecular biology to re-express data in a form more appropriate for learning. Our results show dramatic improvements in classification accuracy for two very different classes of DNA sequences using traditional "off-the-shelf" decision-tree and neural-network inductive-learning methods.
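The core idea, re-expressing raw sequences via background knowledge before learning, can be illustrated with a toy feature extractor. The specific features below (motif presence, composition) are hypothetical stand-ins, not the features the authors used.

```python
# Illustrative only: re-express a raw DNA string into higher-level features
# suggested by biological background knowledge, so that a downstream
# decision-tree or neural-network learner sees regularities directly.

def reexpress(seq):
    seq = seq.upper()
    return {
        "gc_content": (seq.count("G") + seq.count("C")) / len(seq),
        "has_tata_box": "TATAAA" in seq,  # classic promoter motif
        "length": len(seq),
    }

features = reexpress("ccgtataaagcgc")
```

A learner trained on such feature vectors, rather than on raw base strings, no longer has to rediscover the motif from scratch.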
A tool for the acquisition of Japanese-English machine translation rules using inductive learning techniques
H. Almuallim, Y. Akiba, T. Yamazaki, A. Yokoo, S. Kaneda
Pub Date: 1994-03-01. DOI: 10.1109/CAIA.1994.323674

Addresses the problem of constructing translation rules for ALT-J/E, a knowledge-based Japanese-English translation system developed at NTT. We introduce ATRACT, a semi-automatic knowledge acquisition tool designed to facilitate the construction of the desired translation rules through inductive machine learning techniques. Rather than building rules by hand from scratch, a user of ATRACT can obtain good candidate rules by providing the system with a collection of Japanese sentences along with their English translations. This learning task is characterized by two factors: (i) it involves exploiting a huge amount of semantic information as background knowledge, and (ii) training examples are "ambiguous". Two learning methods are currently available in ATRACT. Experiments show that, given only a reasonable number of examples, these methods produce rules very close to those composed manually by human experts. These results suggest that ATRACT will significantly reduce the cost and improve the quality of ALT-J/E translation rules.
A conventional approach to expert systems development
W. Dai, S. Wright
Pub Date: 1994-03-01. DOI: 10.1109/CAIA.1994.323637

We present a practical approach to implementing existing expert system specification concepts. The approach builds on previous work on expert system primitives, in which the focus was on the orthogonality and functionality of the primitives rather than on their application. An important objective is to formulate a software paradigm that lets existing expert system primitives be combined into various expert system tools, from which different types of expert systems can be constructed. Expert systems built this way have been tested in telecommunications and are moving toward practical use.
Automatic classification of planktonic foraminifera by a knowledge-based system
S. Liu, M. Thonnat, M. Berthod
Pub Date: 1994-03-01. DOI: 10.1109/CAIA.1994.323653

The identification of foraminifera is an important task in oil exploration, but it is tedious and time-consuming. In this work, a knowledge-based system is developed for the identification of planktonic foraminifera, with the identification process automated by means of computer vision techniques. Although still a prototype at this stage of its development, the system can already identify several important species of planktonic foraminifera from the parameters obtained by the image analysis algorithms. An overview of our method and the main components of the knowledge-based system are discussed.
Optimizing genetic algorithm parameters for multiple fault diagnosis applications
M. Juric
Pub Date: 1994-03-01. DOI: 10.1109/CAIA.1994.323643

Multiple fault diagnosis (MFD) is the process of determining the fault or faults responsible for a given set of symptoms. Exhaustive searches or statistical analyses are usually too computationally expensive to solve these problems in real time. We use a simple genetic algorithm to significantly reduce the time required to evolve a satisfactory solution. We show that when genetic algorithms are applied to these kinds of problems, the best results are achieved with higher than "normal" mutation rates. Schemata theory is used to analyze the data and to show that even though schema length increases, the Hamming distance between binary representations of best-fit chromosomes remains quite small. Hamming distance is then related to schema length to explain why the mutation rate becomes important in this type of application.
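The "simple genetic algorithm" named in the abstract can be sketched as follows: binary chromosomes encode which faults are hypothesized present, and the per-bit mutation rate is the parameter the paper singles out. The fitness function, rates, and population sizes below are illustrative assumptions, not the paper's settings.

```python
import random

# Minimal genetic-algorithm sketch in the spirit of the abstract; the
# elevated mutation_rate is the knob the paper argues matters for MFD.

def evolve(fitness, n_bits, pop_size=30, mutation_rate=0.1,
           generations=200, rng=None):
    rng = rng or random.Random(0)          # seeded for reproducibility
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]     # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = a[:cut] + b[cut:]
            # Per-bit mutation: flip each bit with probability mutation_rate.
            children.append([bit ^ (rng.random() < mutation_rate)
                             for bit in child])
        pop = parents + children
    return max(pop, key=fitness)

# Toy diagnosis: fitness rewards matching a known fault pattern.
target = [1, 0, 1, 1, 0, 0, 1, 0]
best = evolve(lambda c: sum(x == y for x, y in zip(c, target)), n_bits=8)
```

In a real MFD setting the fitness would score how well a fault hypothesis explains the observed symptoms rather than match a fixed target.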
Memory-based parsing with parallel marker-passing
Minhwa Chung, D. Moldovan
Pub Date: 1994-03-01. DOI: 10.1109/CAIA.1994.323673

Presents PARALLEL, a parallel memory-based parser implemented on the Semantic Network Array Processor (SNAP), a marker-passing parallel AI computer. PARALLEL exploits the parallelism in natural language processing through a memory-search model of parsing. Linguistic information is stored as phrasal patterns in a semantic network knowledge base distributed over the memory of the parallel computer. Parsing is performed by recognizing and linking linguistic patterns that reflect a sentence interpretation, which is achieved by propagating markers over the distributed network. We have developed a system capable of processing newswire articles about terrorism with a large knowledge base of 12,000 semantic network nodes. This paper presents the structure of the system, the memory-based parsing method used, and the performance results obtained.
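The marker-propagation step the abstract describes can be sketched sequentially: markers spread outward from activated concept nodes, and nodes reached by markers from several origins are candidate points where phrasal patterns link. The tiny network and node names below are hypothetical; on SNAP this propagation ran in parallel across the array.

```python
from collections import deque

# Loose sequential sketch of marker passing over a semantic network.
# The network, a plain adjacency map, is illustrative, not from the paper.
network = {
    "terrorist": ["attack"],
    "bomb": ["attack", "device"],
    "attack": ["event"],
    "device": [],
    "event": [],
}

def pass_markers(net, origin):
    """Breadth-first marker propagation from one origin node; returns
    the set of nodes the marker reaches (including the origin)."""
    seen, queue = {origin}, deque([origin])
    while queue:
        node = queue.popleft()
        for nxt in net.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A collision of markers from two activated concepts suggests a node
# where their patterns can be linked into an interpretation.
collisions = pass_markers(network, "terrorist") & pass_markers(network, "bomb")
```

Here markers launched from "terrorist" and "bomb" collide at "attack" (and its generalization "event"), the kind of intersection a memory-based parser would link into an interpretation.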
Knowledge reorganization. A rule model scheme for efficient reasoning
G. Biswas, G. Lee
Pub Date: 1994-03-01. DOI: 10.1109/CAIA.1994.323659

Discusses the application of conceptual clustering to restructuring large knowledge bases in order to improve their efficiency at complex problem solving. The rule base of PLAYMAKER, a system for characterizing hydrocarbon fields and plays, is restructured into a hierarchy of rule models using our conceptual clustering scheme, ITERATE. The rule models, used with a task-specific reasoning methodology, provide a more efficient, focused, and robust inferencing mechanism. A set of case studies demonstrates the improved performance of the reasoning system. PLAYMAKER is implemented on MIDST (Mixed Inferencing Dempster-Shafer Tool), a general-purpose knowledge-based system construction tool that incorporates reasoning mechanisms based on a task-specific architecture and belief functions.
Interface Lab: a case-based interface design assistant
A. Griffith, R. Simpson, L. Blatt
Pub Date: 1994-03-01. DOI: 10.1109/CAIA.1994.323634

Studies have shown that software developers tend to use existing human-computer interfaces as examples when designing new interfaces. However, if the examples are poorly designed, or the tasks in the example are inconsistent with the tasks of the new interface, then using examples can be detrimental to the design. To discourage inappropriate use of examples and to support good interface design practices, we are developing the concept of a case-based interface design assistant called Interface Lab. Interface Lab is a design environment that uses user-centered design, an interface design methodology, as the context for retrieving cases of interface examples.
EAGOL: an artificial intelligence system for process monitoring, situation assessment and response planning
H. E. Pople, W. Spangler, M. T. Pople
Pub Date: 1994-03-01. DOI: 10.1109/CAIA.1994.323661

EAGOL is an artificial intelligence system for process monitoring, situation assessment, and response planning in the real-time management of complex engineered systems. Understanding the behavior of complex systems requires two basic types of analysis, both of which are incorporated in the EAGOL model: (1) first-principles cause-and-effect analysis of the engineered system, and (2) analysis of the kinds of interventions that may be introduced into the engineered system by (a) built-in automatic safeguard mechanisms and (b) human operators, who are often guided by predefined written procedures. EAGOL includes a goal-based model of procedure generation that allows the program (1) to generate procedures based on its assessment of real or potential system states and events, and (2) to use its internal representation of procedures and goals to reason along with human operators toward resolving an emergency.