Automatic classification of planktonic foraminifera by a knowledge-based system
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323653
S. Liu, M. Thonnat, M. Berthod
The identification of foraminifera is an important task in oil exploration, but it is tedious and time-consuming. In this work, a knowledge-based system is developed for the identification of planktonic foraminifera, and the identification process is made automatic by means of computer vision techniques. Although still a prototype at this stage of its development, the knowledge-based system can identify several important species of planktonic foraminifera from the parameters obtained by the image analysis algorithms. An overview of our method is given and the main components of the knowledge-based system are discussed.
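The paper does not name the species or the image-derived parameters it uses, so the following is only a minimal sketch of the general idea of parameter-based, knowledge-based identification; the parameter names, thresholds, and species labels are invented.

# Minimal sketch of knowledge-based identification from image-derived
# parameters. Parameter names, thresholds, and species are illustrative
# assumptions, not taken from the paper.

RULES = [
    # (species label, predicate over the measured parameters)
    ("species_A", lambda p: p["chambers"] >= 5 and p["aperture"] == "umbilical"),
    ("species_B", lambda p: p["chambers"] < 5 and p["sphericity"] > 0.8),
]

def identify(parameters):
    """Return every species whose rule matches the measured parameters."""
    return [name for name, rule in RULES if rule(parameters)]

if __name__ == "__main__":
    measured = {"chambers": 6, "aperture": "umbilical", "sphericity": 0.4}
    print(identify(measured))   # -> ['species_A']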
{"title":"Automatic classification of planktonic foraminifera by a knowledge-based system","authors":"S. Liu, M. Thonnat, M. Berthod","doi":"10.1109/CAIA.1994.323653","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323653","url":null,"abstract":"The identification of foraminifera is an important task in oil exploration. However, this task is tedious and time-consuming. In this work, a knowledge-based system is developed for the identification of planktonic foraminifera. The identification process is made automatic by means of computer vision techniques. Currently, the knowledge-based system, though just being a prototype in this stage of its development, is able to identify several important species of planktonic foraminifera based on the parameters obtained by the image analysis algorithms. An overview of our method and the main components of the knowledge-based system are discussed.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129026825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A conventional approach to expert systems development
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323637
W. Dai, S. Wright
We present a practical approach to implementing existing expert system specification concepts. The approach builds on earlier work on expert system primitives, in which the focus was on the orthogonality and functionality of the primitives rather than on their application. An important objective is to formulate a software paradigm that lets existing expert system primitives be combined into various expert system tools, from which different types of expert systems can be constructed. Expert systems built this way have been tested in telecommunications and are moving towards practical use.
{"title":"A conventional approach to expert systems development","authors":"W. Dai, S. Wright","doi":"10.1109/CAIA.1994.323637","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323637","url":null,"abstract":"We present a practical approach to implement existing expert system specification concepts. The approach is based on the previous development of expert system primitives where the focus was on the orthogonality and functionality of the primitives rather than their application aspect. An important objective is to formulate a software paradigm to enable existing expert system primitives to be combined into various expert system tools where different types of expert systems can be constructed. Expert systems built this way have been tested in telecommunications and are moving towards practical use.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128111255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EAGOL: an artificial intelligence system for process monitoring, situation assessment and response planning
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323661
H. E. Pople, W. Spangler, M. T. Pople
EAGOL is an artificial intelligence system for process monitoring, situation assessment, and response planning in the real-time management of complex engineered systems. Understanding the behavior of complex systems requires two basic types of analysis, both of which are incorporated within the EAGOL model: (1) first-principles cause-and-effect analysis of the engineered system, and (2) analysis of the types of interventions that may be introduced into the engineered system by (a) built-in automatic safeguard mechanisms and (b) human operators, who are often guided by predefined written procedures. EAGOL includes a goal-based model of procedure generation that allows the program (1) to generate procedures based on its assessment of real or potential system states and events, and (2) to use its internal representation of procedures and goals to reason along with human operators in pursuit of an emergency resolution.
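As a rough illustration of goal-based procedure generation (the plant conditions, goals, and steps below are invented, and EAGOL's actual representation is certainly richer), assessed state conditions can trigger goals, each of which contributes steps to the generated procedure:

# Illustrative sketch, not EAGOL's representation: an assessed plant state
# activates goals, and each goal contributes steps to the procedure.

GOAL_RULES = {
    # condition over the assessed state -> (goal, ordered steps)
    "pressure_high": ("reduce_pressure", ["open relief valve", "verify pressure trend"]),
    "pump_failed":   ("restore_flow",    ["start backup pump", "confirm flow rate"]),
}

def generate_procedure(assessed_state):
    """Collect steps for every goal whose triggering condition holds."""
    procedure = []
    for condition, (goal, steps) in GOAL_RULES.items():
        if assessed_state.get(condition, False):
            procedure.append((goal, steps))
    return procedure

if __name__ == "__main__":
    state = {"pressure_high": True, "pump_failed": False}
    for goal, steps in generate_procedure(state):
        print(goal, "->", "; ".join(steps))   # reduce_pressure -> open relief valve; ...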
{"title":"EAGOL: an artificial intelligence system for process monitoring, situation assessment and response planning","authors":"H. E. Pople, W. Spangler, M. T. Pople","doi":"10.1109/CAIA.1994.323661","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323661","url":null,"abstract":"EAGOL is an artificial intelligence system for process monitoring, situation assessment, and response planning in the management of complex, engineered systems in real time. Understanding the behavior of complex systems requires two basic types of analysis, both of which are incorporated within the EAGOL model: (1) first-principles cause-and-effect analysis of the engineered system, and (2) analysis of the types of interventions that may introduced into the engineered system from (a) built-in automatic safeguard mechanisms, and (b) human operators, who are often guided by pre-defined written procedures. EAGOL includes a goal-based model of procedure generation which allows the program (1) to generate procedures based on its assessment of real or potential system states and events, and (2) to use its internal representation of procedures and goals to reason along with human operators in pursuit of an emergency resolution.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133297197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of rule-based expert systems via state transition system construction
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323658
B. Zupan, A. Cheng
Embedded rule-based expert systems must satisfy stringent timing constraints when applied to real-time environments. This paper describes a novel approach to reducing the response time of rule-based expert systems. Our optimization method is based on constructing a reduced, cycle-free finite state transition system corresponding to the input rule-based system. The method makes use of rule-based system decomposition, concurrency, and state equivalence. The new, optimized system is synthesized from the derived transition system. Compared with the original system, the synthesized system (1) requires fewer rule firings to reach the fixed point, (2) is inherently stable, and (3) has no redundant rules. The synthesis method also determines a tight response-time bound for the new system. The optimized system is guaranteed to compute correct results, independent of the scheduling strategy and execution environment.
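A minimal sketch of the underlying idea, using an invented two-rule program over Boolean variables: enumerate the state transition system induced by rule firings and take the longest firing sequence to a fixed point as the response-time bound. The paper's reduction, decomposition, and synthesis steps are not shown.

# Sketch only: the rule set below is an invented example, not one from the paper.

from itertools import product

# Each rule: (name, enabling condition, effect on the state)
RULES = [
    ("r1", lambda s: s["a"] and not s["b"], lambda s: {**s, "b": True}),
    ("r2", lambda s: s["b"] and not s["c"], lambda s: {**s, "c": True}),
]

def successors(state):
    """States reachable by firing one enabled rule that changes the state."""
    out = []
    for _, cond, effect in RULES:
        if cond(state):
            nxt = effect(state)
            if nxt != state:
                out.append(nxt)
    return out

def longest_path_to_fixed_point(state):
    """Worst-case number of firings before no rule can change the state."""
    nexts = successors(state)
    if not nexts:
        return 0                      # fixed point reached
    return 1 + max(longest_path_to_fixed_point(n) for n in nexts)

if __name__ == "__main__":
    worst = max(
        longest_path_to_fixed_point(dict(zip("abc", bits)))
        for bits in product([False, True], repeat=3)
    )
    print("response-time bound (rule firings):", worst)   # -> 2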
{"title":"Optimization of rule-based expert systems via state transition system construction","authors":"B. Zupan, A. Cheng","doi":"10.1109/CAIA.1994.323658","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323658","url":null,"abstract":"Embedded rule-based expert systems must satisfy stringent timing constraints when applied to real-time environments. This paper describes a novel approach to reduce the response time of rule-based expert systems. Our optimization method is based on a construction of the reduced cycle-free finite state transition system corresponding to the input rule-based system. The method makes use of rule-base system decomposition, concurrency and state equivalency. The new and optimized system is synthesized from the derived transition system. Compared with the original system, the synthesized system (1) has fewer rule firings to reach the fixed point, (2) is inherently stable and (3) has no redundant rules. The synthesis method also determines the tight response time bound of the new system. The optimized system is guaranteed to compute correct results, independent of the scheduling strategy and execution environment.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123031784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictive Analysis System: a case study of AI techniques for counternarcotics
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323682
M. Abramson, S. Bennett, W. Brooks, E. Hofmann, P. Krause, A. Temin
The Predictive Analysis System (PANS) uses knowledge of narco-trafficking behaviors to help analysts fuse all-source data into coherent pictures of activity from which predictions of future events can be made automatically. The system uses a form of model-based reasoning, plan recognition, to match reports of actual activities to expected activities. The model incorporates several sets of domain constraints, and a constraint propagation algorithm is used to project known data points into the future (i.e., to predict future events). The system can track many possibilities concurrently, and it also allows analysts to hypothesize activity and observe the possible effects of those hypotheses on future activities. It draws on recent results in knowledge representation, plan recognition, and machine learning to capture analysts' expertise without suffering from the brittleness of rule-based expert systems.
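As a sketch of the projection step only (the activities and temporal offsets are invented, not PANS's trafficking model), observed event dates can be propagated through a plan template's temporal constraints to bound when future events should occur:

# Illustrative constraint propagation over an invented plan template.

import datetime as dt

# (earlier activity, later activity, min days after, max days after)
TEMPLATE = [
    ("purchase", "shipment", 3, 10),
    ("shipment", "delivery", 7, 21),
]

def project(observations):
    """Propagate observed dates through the template to bound future events."""
    windows = {event: (date, date) for event, date in observations.items()}
    changed = True
    while changed:
        changed = False
        for a, b, lo, hi in TEMPLATE:
            if a in windows and b not in windows:
                earliest, latest = windows[a]
                windows[b] = (earliest + dt.timedelta(days=lo),
                              latest + dt.timedelta(days=hi))
                changed = True
    return windows

if __name__ == "__main__":
    obs = {"purchase": dt.date(1994, 3, 1)}
    for event, (earliest, latest) in project(obs).items():
        print(event, earliest, "to", latest)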
{"title":"Predictive Analysis System: a case study of AI techniques for counternarcotics","authors":"M. Abramson, S. Bennett, W. Brooks, E. Hofmann, P. Krause, A. Temin","doi":"10.1109/CAIA.1994.323682","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323682","url":null,"abstract":"The Predictive Analysis System (PANS) uses knowledge of narco-trafficking behaviors to help analysts fuse all-source data into coherent pictures of activity from which predictions of future events can be made automatically. The system uses a form of model-based reasoning, plan recognition, to match reports of actual activities to expected activities. The model incorporates several sets of domain constraints and a constraint propagation algorithm is used to project known data points into the future (i.e., predict future events). The system can track many possibilities concurrently, and also allows analysts to hypothesize activity and observe the possible effect of the hypotheses on future activities. It makes use of recent results in knowledge representation, plan recognition, and machine learning to capture analysts' expertise without suffering from the brittleness of rule-based expert systems.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123626304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
COCOS - a tool for constraint-based, dynamic configuration
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323651
M. Stumptner, A. Haselbock, G. Friedrich
The COCOS (COnfiguration through COnstraint Satisfaction) project was aimed at producing a tool that could be used for a variety of configuration applications. Traditionally, representation methods for technical configuration have focused either on reasoning about the structure of systems or on the quantity of components, which is not satisfactory in the many target areas that need both. Starting from general requirements on configuration systems, we have developed a language based on an extension of the constraint satisfaction problem (CSP) model. The constraint-based approach allows a simple system architecture and a declarative description of the different types of configuration knowledge. We briefly discuss the current implementation and the experience gained with a real-world knowledge base.
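A minimal constraint-satisfaction sketch of the configuration idea, with invented component types and constraints; COCOS itself extends the basic CSP model well beyond what is shown here.

# Invented example: configure a frame and module type under structural
# (slot) and quantitative (power) constraints by enumeration.

from itertools import product

FRAME_SLOTS = {"small": 4, "large": 8}          # structure: each frame provides slots
MODULE_POWER = {"basic": 10, "fast": 25}        # quantity/type of components

def configure(required_modules, power_budget):
    """Enumerate assignments and keep those satisfying all constraints."""
    solutions = []
    for frame, module in product(FRAME_SLOTS, MODULE_POWER):
        fits = required_modules <= FRAME_SLOTS[frame]                   # structural constraint
        power_ok = required_modules * MODULE_POWER[module] <= power_budget
        if fits and power_ok:
            solutions.append({"frame": frame, "module": module,
                              "count": required_modules})
    return solutions

if __name__ == "__main__":
    for sol in configure(required_modules=6, power_budget=100):
        print(sol)   # only the 'large' frame with 'basic' modules qualifies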
{"title":"COCOS/spl minus/a tool for constraint-based, dynamic configuration","authors":"M. Stumptner, A. Haselbock, G. Friedrich","doi":"10.1109/CAIA.1994.323651","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323651","url":null,"abstract":"The COCOS (COnfiguration through COnstraint Satisfaction) project was aimed at producing a tool that could be used for a variety of configuration applications. Traditionally, representation methods for technical configuration have focused either on reasoning about the structure of systems or the quantity of components, which is not satisfactory in many target areas that need both. Starting from general requirements on configuration systems, we have developed a language based on an extension of the constraint satisfaction problem (CSP) model. The constraint-based approach allows a simple system architecture, and a declarative description of the different types of configuration knowledge. We briefly discuss the current implementation and the experiences obtained with a real-world knowledge base.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125219090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Merging information by discourse processing for information extraction
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323646
T. Kitani
In information extraction tasks, a finite-state pattern matcher is widely used to identify individual pieces of information in a sentence. Merging related pieces of information scattered throughout a text is usually difficult, however, since semantic relations across sentences cannot be captured by sentence-level processing. The purpose of the discourse processing described in this paper is to link the individual pieces of information identified by the sentence-level processing. In the Tipster information extraction domains, correct identification of company names is the key to achieving a high level of system performance. Therefore, the discourse processor in the Textract information extraction system keeps track of missing, abbreviated, and referenced company names in order to correlate individual pieces of information throughout the text. Furthermore, the discourse is segmented so that data can be extracted from the relevant portions of the text containing information of interest related to a particular tie-up relationship.
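The following sketch illustrates the name-tracking idea only (it is not Textract's algorithm): abbreviated company names and simple references such as "the company" are resolved to the most recently introduced full name, so that facts found in different sentences can be merged.

# Toy resolver for full, abbreviated, and referenced company names.

import re

def resolve_companies(sentences):
    full_names = []                       # discourse history of company names
    resolved = []
    for sent in sentences:
        for name in re.findall(r"[A-Z][A-Za-z]+ (?:Corp|Inc)\.", sent):
            full_names.append(name)       # new full name introduced here
        mention = None
        for name in reversed(full_names):
            # an abbreviation is the first token of a known full name
            if name.split()[0] in sent or "the company" in sent:
                mention = name
                break
        resolved.append((sent, mention))
    return resolved

if __name__ == "__main__":
    text = ["Acme Corp. announced a tie-up.",
            "Acme said the venture starts in May.",
            "The profits of the company rose."]
    for sent, company in resolve_companies(text):
        print(company, "<-", sent)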
{"title":"Merging information by discourse processing for information extraction","authors":"T. Kitani","doi":"10.1109/CAIA.1994.323646","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323646","url":null,"abstract":"In information extraction tasks, a finite-state pattern matcher is widely used to identify individual pieces of information in a sentence. Merging related pieces of information scattered throughout a text is usually difficult, however, since semantic relations across sentences cannot be captured by the sentence level processing. The purpose of the discourse processing described in this paper is to link individual pieces of information identified by the sentence level processing. In the Tipster information extraction domains, correct identification of company names is the key to achieving a high level of system performance. Therefore, the discourse processor in the Textract information extraction system keeps track of missing, abbreviated, and referenced company names in order to correlate individual pieces of information throughout the text. Furthermore, the discourse is segmented, so that data can be extracted from relevant portions of the text containing information of interest related to a particular tie-up relationship.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"6 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120974956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Direct memory access parsing in the Creanimate biology tutor
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323635
W. Fitzgerald
Computer tutors that engage in dialog with students require effective parsing technology to understand the students' intentions. A report is given on an implementation of a direct memory access parser (DMAP) for a computer-based biology tutor (Creanimate). The author begins by describing the Creanimate system and giving an overview of direct memory access parsing. He then describes the parser developed for Creanimate, DMAP-C, and its extensions and enhancements compared to previous DMAP parsers. Finally, he gives a qualitative evaluation of the effectiveness of the DMAP-C parser.
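As a toy illustration of direct memory access parsing (a simplification, not DMAP-C itself): phrasal patterns are attached to concepts in memory, and scanning the input left to right activates any concept whose pattern is completed.

# Toy example; the concepts and patterns are invented.

CONCEPT_PATTERNS = {
    # concept in memory -> token sequence that references it
    "m-protect-young": ["protect", "their", "young"],
    "m-bird":          ["birds"],
}

def parse(tokens):
    """Advance a marker through each pattern; report completed concepts."""
    progress = {concept: 0 for concept in CONCEPT_PATTERNS}
    activated = []
    for tok in tokens:
        for concept, pattern in CONCEPT_PATTERNS.items():
            i = progress[concept]
            if i < len(pattern) and pattern[i] == tok:
                progress[concept] = i + 1
                if progress[concept] == len(pattern):
                    activated.append(concept)
    return activated

if __name__ == "__main__":
    print(parse("do birds protect their young".split()))   # -> ['m-bird', 'm-protect-young']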
{"title":"Direct memory access parsing in the Creanimate biology tutor","authors":"W. Fitzgerald","doi":"10.1109/CAIA.1994.323635","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323635","url":null,"abstract":"Computer tutors that engage in dialog with students require effective parsing technology to understand the intentions of the students. A report is given on an implementation of a direct memory access parser (DMAP) for a computer based biology tutor (Creanimate). The author begins by describing the Creanimate system and giving an overview of direct memory access parser. He then describes the parser developed for Creanimate, DMAP-C, and the extensions and enhancements compared to previous DMAP parsers. Finally, he gives a qualitative evaluation of the effectiveness of the DMAP-C parser.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116733770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AMMP: an Automated Maintenance Manual Production system
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323631
R. Hoffman, H.R. Keshavan, J. Lankford
Generating maintenance documentation is traditionally a manual process. This is true in most industries, even though products and customer documentation requirements are frequently revised; the result is frequent documentation revision and significant labor cost. This paper describes a system that automates the generation of maintenance documentation for mechanical assemblies, the Automated Maintenance Manual Production (AMMP) system. Product and customer-requirement revisions are accessed from a central database. For a given maintenance task, a spatial planner module derives a sequence of operations to carry out the task based on the product's CAD definition. A presentation manager module converts this sequence into formatted text and illustrations.
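A rough sketch of the two-stage pipeline with an invented assembly model (the AMMP planner works from full CAD geometry, not a simple blocking table): a planner orders part removals, and a presentation stage renders the sequence as manual text.

# Invented data model; requires Python 3.9+ for graphlib.

from graphlib import TopologicalSorter

# part -> parts that must be removed before it (stand-in for CAD-derived relations)
BLOCKED_BY = {
    "pump": {"cover", "bolt"},
    "cover": {"bolt"},
    "bolt": set(),
}

def plan_removal(target):
    """Planning stage: order every part that must come off to reach the target."""
    order = list(TopologicalSorter(BLOCKED_BY).static_order())
    return order[: order.index(target) + 1]

def render(steps):
    """Presentation stage: turn the operation sequence into manual text."""
    return "\n".join(f"{i}. Remove the {part}." for i, part in enumerate(steps, 1))

if __name__ == "__main__":
    print(render(plan_removal("pump")))   # 1. bolt, 2. cover, 3. pump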
{"title":"AMMP: an Automated Maintenance Manual Production system","authors":"R. Hoffman, H.R. Keshavan, J. Lankford","doi":"10.1109/CAIA.1994.323631","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323631","url":null,"abstract":"Generating maintenance documentation is traditionally a manual process. This is true for most industries, although products and customer documentation requirements are frequently revised. The result is frequent documentation revision and significant labor costs. This paper describes a system to automate the generation of maintenance documentation for mechanical assemblies, the Automated Maintenance Manual Production (AMMP) system. Product and customer requirement revisions are accessed from a central database. For a given maintenance task, a spatial planner module derives a sequence of operations to carry out the task based on the product CAD definition. A presentation manager module converts this sequence into formatted text and illustrations.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126795109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic generation of explanations for spreadsheet applications
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323665
D. Nardi, G. Serrecchia
Applications developed by end users with spreadsheets cannot be distributed effectively to other users without adequate information on how the applications themselves work, and building an explanation facility to support an application is a time-consuming task. This paper illustrates the realization of a tool for the automatic generation of explanations in conventional spreadsheet applications. The system works in two stages: the first constructs a knowledge base containing information on the mathematical relations coded into a programmed spreadsheet; the second generates explanations (concerning the quantities used in the spreadsheet and their relationships) from the representation built in the first stage.
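A small sketch of the two stages with invented cell labels and formulas: first extract the dependency relations coded in the spreadsheet into a knowledge base, then generate a textual explanation of how a queried quantity is computed.

# Invented example sheet; not the paper's representation.

import re

LABELS   = {"B1": "unit price", "B2": "quantity", "B3": "gross total",
            "B4": "discount", "B5": "net total"}
FORMULAS = {"B3": "B1*B2", "B5": "B3-B4"}          # stage 1 input: the programmed sheet

def build_kb(formulas):
    """Stage 1: record, for each computed cell, the cells it depends on."""
    return {cell: re.findall(r"[A-Z]+\d+", expr) for cell, expr in formulas.items()}

def explain(cell, kb):
    """Stage 2: generate an explanation from the stored relations."""
    if cell not in kb:
        return f"{LABELS[cell]} is an input value."
    deps = " and ".join(LABELS[d] for d in kb[cell])
    return f"{LABELS[cell]} is computed from {deps} ({FORMULAS[cell]})."

if __name__ == "__main__":
    kb = build_kb(FORMULAS)
    print(explain("B5", kb))   # net total is computed from gross total and discount (B3-B4).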
{"title":"Automatic generation of explanations for spreadsheet applications","authors":"D. Nardi, G. Serrecchia","doi":"10.1109/CAIA.1994.323665","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323665","url":null,"abstract":"Applications developed by end-users using spreadsheets cannot be effectively distributed to other users because of the need for adequate information on the functioning of the applications themselves. In fact, building an explanation facility to support an application is a time-consuming task. This paper illustrates the realization of a tool for the automatic generation of explanations in conventional spreadsheet applications. The system works in two stages: the first one corresponds to the construction of a knowledge base containing the information on the mathematical relations coded into a programmed spreadsheet; the second one consists of the generation of explanations (concerning the quantities used in the spreadsheet and their relationships) from the representation previously built.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122014121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}