Generating programs from connections of physical models
G. S. Novak
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323628
Describes a system that constructs a computer program from a graphical specification provided by the user. The specification consists of diagrams that represent physical and mathematical models; connections between diagram ports signify that the corresponding quantities must be equal. A program (in Lisp or C) is generated from the graphical specification by data flow analysis and algebraic manipulation of the equations associated with the physical models. Equations, algebraic manipulations, and unit conversions are hidden from the user and performed automatically. This system allows more rapid generation of programs than would be possible with hand coding.
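The core generation step the abstract describes, repeatedly finding an equation whose value can be computed from known quantities, can be sketched as a small propagation loop. This is a toy illustration, not Novak's actual representation; the equation encoding and the Newton's-law example are invented for clarity:

```python
# Each equation lists its variables and, for each variable, a pre-solved
# closed form in terms of the others (standing in for the algebraic
# manipulation the real system performs automatically).
def propagate(equations, known):
    """Repeatedly solve any equation that has exactly one unknown."""
    known = dict(known)
    progress = True
    while progress:
        progress = False
        for variables, solvers in equations:
            unknown = [v for v in variables if v not in known]
            if len(unknown) == 1 and unknown[0] in solvers:
                known[unknown[0]] = solvers[unknown[0]](known)
                progress = True
    return known

# Toy physical model: Newton's second law, F = m * a.
newton = (("F", "m", "a"), {
    "F": lambda k: k["m"] * k["a"],
    "m": lambda k: k["F"] / k["a"],
    "a": lambda k: k["F"] / k["m"],
})

result = propagate([newton], {"m": 2.0, "F": 10.0})
```

Given mass and force, the loop derives the acceleration; with several connected models, each newly derived quantity can enable further equations, which is the data-flow aspect.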
Protein structure prediction using hybrid AI methods
X. Guan, R. Mural, E. Uberbacher
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323633
Describes a new approach for predicting protein structures based on artificial intelligence methods and genetic algorithms. We combine nearest neighbor searching algorithms, neural networks, heuristic rules and genetic algorithms to form an integrated system that predicts protein structures from their primary amino acid sequences. First, we describe our methods and how they are integrated, and then apply them to several protein sequences. The results are very close to the real structures obtained by crystallography. Parallel genetic algorithms are also implemented.
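The genetic-algorithm component can be sketched generically: a population of candidate conformations evolves by selection, crossover, and mutation toward lower energy. This is a minimal stand-in, not the authors' system; the genome (a list of backbone angles) and the quadratic "energy" are invented placeholders for a physically grounded score:

```python
import random

random.seed(1)

def evolve(fitness, length, pop_size=30, generations=60):
    """Minimal GA: tournament selection, one-point crossover,
    per-gene Gaussian mutation, with elitism (best-so-far kept)."""
    pop = [[random.uniform(-180, 180) for _ in range(length)]
           for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        def pick():
            a, b = random.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        nxt = [best]                       # elitism
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, length)
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.3:      # occasional mutation
                child[random.randrange(length)] += random.gauss(0, 10)
            nxt.append(child)
        pop = nxt
        best = min(pop, key=fitness)
    return best

# Hypothetical energy of a 3-angle "conformation": lower is better.
target = [60.0, -45.0, 120.0]
energy = lambda angles: sum((a - t) ** 2 for a, t in zip(angles, target))

best = evolve(energy, length=3)
```

Parallelizing such a GA, as the paper mentions, typically means evaluating the population's fitness (or evolving sub-populations) concurrently.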
A neural network expert system for diagnosing eye diseases
Mostafa Mahmoud Syiam
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323624
Presents a neural network expert system to assist a GP in the early diagnosis of eye diseases. The system bases its diagnosis on patient symptoms and signs, and uses a multilayer feedforward network with a single hidden layer. The backpropagation algorithm is employed to train the network in supervised mode. The effect of the number of hidden-layer nodes on the system's performance is discussed. Analysis of the results indicates that the system diagnoses diseases correctly in over 87 percent of cases. To evaluate its performance, a test data set was given to both GPs and specialists: the system's performance exceeds that of the GPs and reaches the level of the eye specialists.
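The network architecture named here, a single-hidden-layer feedforward net trained by backpropagation, can be sketched in a few dozen lines. This is a generic textbook implementation, not the paper's system; the XOR-style toy data stands in for symptom/sign vectors:

```python
import math
import random

random.seed(0)

def train(samples, n_hidden=4, epochs=3000, lr=0.5):
    """One-hidden-layer feedforward net, online backpropagation.
    Each weight row's last element is the bias."""
    n_in = len(samples[0][0])
    w1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
          for _ in range(n_hidden)]
    w2 = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))

    def forward(x):
        h = [sig(w[-1] + sum(wi * xi for wi, xi in zip(w, x))) for w in w1]
        o = sig(w2[-1] + sum(wi * hi for wi, hi in zip(w2, h)))
        return h, o

    for _ in range(epochs):
        for x, t in samples:
            h, o = forward(x)
            d_o = (o - t) * o * (1 - o)            # output delta
            for j in range(n_hidden):
                d_h = d_o * w2[j] * h[j] * (1 - h[j])  # hidden delta
                for i in range(n_in):
                    w1[j][i] -= lr * d_h * x[i]
                w1[j][-1] -= lr * d_h
                w2[j] -= lr * d_o * h[j]
            w2[-1] -= lr * d_o
    return lambda x: forward(x)[1]

# Toy non-linearly-separable "symptom -> diagnosis" data (XOR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
net = train(data)
```

Varying `n_hidden` here is the experiment the abstract mentions: too few hidden nodes underfit, too many slow training without improving accuracy.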
Knowledge modeling environment for job-shop scheduling problem
D. Araki, K. Narimatu, S. Kojima
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323640
The scheduling model and method must be designed to be application-domain dependent so as to reflect the constraints, objectives and preferences of the target problem. We analyzed the scheduling process of human experts at the knowledge level and developed a task-specific shell named ARES/SCH. ARES/SCH possesses a primitive task library, a collection of domain-independent, generic components of scheduling mechanisms, and a whole scheduling method can be described as a combinational flowchart of primitive tasks. Memory module mounting shop (MMS) scheduling is shown as an example ARES/SCH application. ARES/SCH was shown to contribute to the rapid development of scheduling systems and to support a wide range of scheduling domains.
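The idea of composing domain-independent primitive tasks into a scheduling flow can be illustrated with two invented primitives, a job-selection rule and a resource-assignment rule, chained together. This is a sketch of the composition style only; ARES/SCH's actual task library is not described in enough detail here to reproduce:

```python
# Primitive task 1: order jobs by earliest due date (EDD).
def select_edd(jobs):
    return sorted(jobs, key=lambda j: j["due"])

# Primitive task 2: greedily place each job on the machine
# that becomes free first.
def assign_greedy(jobs, n_machines):
    free = [0.0] * n_machines
    schedule = []
    for job in jobs:
        m = free.index(min(free))
        start = free[m]
        free[m] = start + job["dur"]
        schedule.append((job["name"], m, start))
    return schedule

jobs = [{"name": "A", "dur": 3, "due": 5},
        {"name": "B", "dur": 2, "due": 4},
        {"name": "C", "dur": 1, "due": 6}]
# The "flowchart" is just the composition of primitives:
schedule = assign_greedy(select_edd(jobs), n_machines=1)
```

Swapping `select_edd` for a different selection primitive (shortest processing time, highest priority) changes the scheduling method without touching the rest of the flow, which is the reuse the shell aims at.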
COCOS - a tool for constraint-based, dynamic configuration
M. Stumptner, A. Haselbock, G. Friedrich
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323651
The COCOS (COnfiguration through COnstraint Satisfaction) project was aimed at producing a tool that could be used for a variety of configuration applications. Traditionally, representation methods for technical configuration have focused either on reasoning about the structure of systems or on the quantity of components, which is not satisfactory in many target areas that need both. Starting from general requirements on configuration systems, we have developed a language based on an extension of the constraint satisfaction problem (CSP) model. The constraint-based approach allows a simple system architecture and a declarative description of the different types of configuration knowledge. We briefly discuss the current implementation and the experiences obtained with a real-world knowledge base.
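The base CSP model that COCOS extends can be sketched with plain backtracking search over finite domains and declaratively stated binary constraints. The component names and compatibility rules below are invented for illustration, not from the paper's knowledge base:

```python
def solve(domains, constraints, assignment=None):
    """Backtracking search: assign variables one at a time,
    pruning any partial assignment that violates a constraint."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment[a], assignment[b])
               for (a, b), check in constraints.items()
               if a in assignment and b in assignment):
            result = solve(domains, constraints, assignment)
            if result:
                return result
        del assignment[var]
    return None

# Invented toy configuration: a fast CPU needs the big board,
# and the big board needs the 300 W supply.
domains = {"cpu": ["fast"], "board": ["small", "big"], "psu": [150, 300]}
constraints = {
    ("cpu", "board"): lambda c, b: not (c == "fast" and b == "small"),
    ("board", "psu"): lambda b, p: not (b == "big" and p == 150),
}
config = solve(domains, constraints)
```

Dynamic configuration, as the title suggests, goes beyond this sketch: the set of variables itself grows as components are added, which is what the paper's CSP extension addresses.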
Predictive Analysis System: a case study of AI techniques for counternarcotics
M. Abramson, S. Bennett, W. Brooks, E. Hofmann, P. Krause, A. Temin
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323682
The Predictive Analysis System (PANS) uses knowledge of narco-trafficking behaviors to help analysts fuse all-source data into coherent pictures of activity from which predictions of future events can be made automatically. The system uses a form of model-based reasoning, plan recognition, to match reports of actual activities to expected activities. The model incorporates several sets of domain constraints, and a constraint propagation algorithm is used to project known data points into the future (i.e., predict future events). The system can track many possibilities concurrently, and also allows analysts to hypothesize activity and observe the possible effect of the hypotheses on future activities. It makes use of recent results in knowledge representation, plan recognition, and machine learning to capture analysts' expertise without suffering from the brittleness of rule-based expert systems.
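Projecting known data points into the future via constraint propagation can be illustrated with simple temporal interval arithmetic: each expected step constrains how long after its predecessor it may occur, and an observed event anchors the chain. The activity model below is invented, not PANS's actual domain constraints:

```python
def project(steps, observed):
    """Propagate [min, max] day bounds forward from observed events.
    `steps` must be listed in predecessor-first order."""
    bounds = {event: (t, t) for event, t in observed.items()}
    for prev, nxt, lo, hi in steps:
        if prev in bounds:
            plo, phi = bounds[prev]
            bounds[nxt] = (plo + lo, phi + hi)
    return bounds

# Hypothetical activity model: transport happens 2-5 days after a
# purchase, delivery 1-3 days after transport.
steps = [("purchase", "transport", 2, 5),
         ("transport", "delivery", 1, 3)]
bounds = project(steps, {"purchase": 0})
```

Anchoring a hypothesized rather than observed event works the same way, which matches the abstract's what-if use: analysts can see how a hypothesis shifts the predicted windows.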
Merging information by discourse processing for information extraction
T. Kitani
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323646
In information extraction tasks, a finite-state pattern matcher is widely used to identify individual pieces of information in a sentence. Merging related pieces of information scattered throughout a text is usually difficult, however, since semantic relations across sentences cannot be captured by sentence-level processing. The purpose of the discourse processing described in this paper is to link the individual pieces of information identified by the sentence-level processing. In the Tipster information extraction domains, correct identification of company names is the key to achieving a high level of system performance. Therefore, the discourse processor in the Textract information extraction system keeps track of missing, abbreviated, and referenced company names in order to correlate individual pieces of information throughout the text. Furthermore, the discourse is segmented, so that data can be extracted from the relevant portions of the text containing information of interest related to a particular tie-up relationship.
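Tracking abbreviated and referenced company names amounts to deciding whether a short mention plausibly refers to a previously seen full name. A minimal sketch, with two invented matching rules (truncated form and acronym), not Textract's actual heuristics:

```python
import re

SUFFIXES = {"corp", "co", "inc", "ltd", "company"}

def tokens(name):
    """Lowercase word tokens with corporate suffixes stripped."""
    toks = re.findall(r"[A-Za-z]+", name.lower())
    return [t for t in toks if t not in SUFFIXES]

def refers_to(mention, full_name):
    """True if `mention` is a truncated form or an acronym of
    `full_name` (toy coreference rules)."""
    m, f = tokens(mention), tokens(full_name)
    if m and m == f[:len(m)]:               # "Toyota" -> "Toyota Motor Corp."
        return True
    initials = "".join(t[0] for t in f)     # "IBM" -> "International ..."
    return len(m) == 1 and m[0] == initials
```

A discourse processor would apply such tests against the set of names seen so far in the text, so that facts reported under "Toyota" and "Toyota Motor Corp." merge into one record.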
Automatic generation of explanations for spreadsheet applications
D. Nardi, G. Serrecchia
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323665
Applications developed by end-users using spreadsheets cannot be effectively distributed to other users without adequate information on how the applications work, yet building an explanation facility to support an application is a time-consuming task. This paper illustrates the realization of a tool for the automatic generation of explanations in conventional spreadsheet applications. The system works in two stages: first, it constructs a knowledge base containing the mathematical relations coded into a programmed spreadsheet; second, it generates explanations, concerning the quantities used in the spreadsheet and their relationships, from the representation previously built.
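The two stages, extracting the relations between quantities and then verbalizing them, can be sketched by walking a formula dependency graph. The cell names, formula syntax, and explanation wording below are invented; the paper's knowledge representation is richer:

```python
import re

def explain(cell, formulas, seen=None):
    """One explanation line per quantity reachable from `cell`,
    derived from the spreadsheet's formula dependencies."""
    seen = seen if seen is not None else set()
    if cell in seen:
        return []
    seen.add(cell)
    if cell not in formulas:
        return [f"{cell} is an input value"]
    lines = [f"{cell} is computed as {formulas[cell]}"]
    for ref in re.findall(r"[A-Za-z_]\w*", formulas[cell]):
        lines += explain(ref, formulas, seen)
    return lines

# Toy "programmed spreadsheet": two formulas over two input cells.
formulas = {"margin": "profit / revenue", "profit": "revenue - cost"}
text = explain("margin", formulas)
```

The first stage here is the `formulas` dict (the knowledge base of coded relations); the second is the recursive traversal that turns it into explanatory text.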
Optimization of rule-based expert systems via state transition system construction
B. Zupan, A. Cheng
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323658
Embedded rule-based expert systems must satisfy stringent timing constraints when applied to real-time environments. This paper describes a novel approach to reduce the response time of rule-based expert systems. Our optimization method is based on a construction of the reduced cycle-free finite state transition system corresponding to the input rule-based system. The method makes use of rule-base system decomposition, concurrency and state equivalency. The new, optimized system is synthesized from the derived transition system. Compared with the original system, the synthesized system (1) has fewer rule firings to reach the fixed point, (2) is inherently stable and (3) has no redundant rules. The synthesis method also determines the tight response time bound of the new system. The optimized system is guaranteed to compute correct results, independent of the scheduling strategy and execution environment.
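The quantities being optimized, rule firings until a fixed point, can be made concrete with a toy forward-chaining loop over working memory. This sketch illustrates the execution model the paper analyzes, not its transition-system construction; the rules are invented:

```python
def run_to_fixed_point(rules, facts):
    """Fire rules until no rule changes working memory;
    return the final facts and the number of firings."""
    facts = set(facts)
    firings = 0
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                firings += 1
                changed = True
    return facts, firings

# Toy rule base; the last rule is redundant (same as the first)
# and never fires -- the kind of rule the synthesis eliminates.
rules = [({"a"}, "b"), ({"b"}, "c"), ({"a"}, "b")]
facts, firings = run_to_fixed_point(rules, {"a"})
```

The response time of such a system is bounded by the maximum number of firings to the fixed point; synthesizing an equivalent rule base that reaches the same fixed point in fewer firings is exactly the paper's goal.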
Learning natural language filtering under noisy conditions
S. Wermter
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323671
Describes a novel AI technique, called a plausibility network, that allows for learning to filter natural language phrases according to predefined classes under noisy conditions. We describe the automatic knowledge acquisition for representing the words of natural language phrases using significance vectors, and the learning of phrase filtering according to ten different domain classes. We particularly focus on examining the filtering performance under noisy conditions, that is, the degradation of these filtering techniques for incomplete phrases with unknown words. Furthermore, we show that this technique already scales up to a few thousand real-world phrases, that it compares favorably to some classification techniques from information retrieval, and that it can deal with unknown words as they might occur with incomplete lexicons or speech recognizers.
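One simple reading of significance vectors, each word carries a vector of class frequencies, and a phrase is scored by summing its words' vectors, can be sketched as follows. This is an assumption-laden toy, not Wermter's definition; the corpus and classes are invented:

```python
from collections import Counter, defaultdict

def train(labeled_phrases):
    """Per-word significance vector: relative frequency with which
    the word occurs in phrases of each class."""
    counts = defaultdict(Counter)
    for phrase, cls in labeled_phrases:
        for word in phrase.split():
            counts[word][cls] += 1
    return {w: {c: n / sum(cnt.values()) for c, n in cnt.items()}
            for w, cnt in counts.items()}

def classify(phrase, vectors, classes):
    """Sum word vectors; unknown words contribute nothing,
    which is how the sketch degrades gracefully under noise."""
    score = {c: 0.0 for c in classes}
    for word in phrase.split():
        for c, s in vectors.get(word, {}).items():
            score[c] += s
    return max(score, key=score.get)

corpus = [("ball game", "sports"), ("great game", "sports"),
          ("computer code", "tech"), ("fast computer", "tech")]
vectors = train(corpus)
label = classify("the big game", vectors, {"sports", "tech"})
```

The phrase "the big game" contains two words absent from the lexicon, yet classification still succeeds on the one known word, the robustness property the abstract emphasizes.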