Knowledge modeling environment for job-shop scheduling problem
D. Araki, K. Narimatu, S. Kojima
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323640
The scheduling model and method must be designed to be application-domain dependent so as to reflect the constraints, objectives and preferences that reside in the target problem. We analyzed the scheduling process of human experts at the knowledge-base level and developed a task-specific shell named ARES/SCH. ARES/SCH provides a primitive task library: a collection of domain-independent, generic components of scheduling mechanisms. A complete scheduling method can then be described as a combinational flow-chart of primitive tasks. Memory module mounting shop (MMS) scheduling is presented as an example ARES/SCH application. The results show that ARES/SCH contributes to the rapid development of scheduling systems and supports a wide range of scheduling domains.
{"title":"Knowledge modeling environment for job-shop scheduling problem","authors":"D. Araki, K. Narimatu, S. Kojima","doi":"10.1109/CAIA.1994.323640","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323640","url":null,"abstract":"The scheduling model and method must be designed to be application-domain dependent so as to reflect a set of constraints, objectives and preferences which reside in the target problem. We analyzed the scheduling process of human experts in a knowledge-base level and have developed a task-specific shell named ARES/SCH. ARES/SCH possesses a primitive task library that is a collection of domain-independent and generic components of scheduling mechanisms. The whole scheduling method can be described as a combinational flow-chart of primitive tasks. Memory module mounting shop (MMS) scheduling is shown as an example of ARES/SCH applications. It was apparent that ARES/SCH contributes to the rapid development of scheduling systems and supports a wide range of scheduling domains.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124875176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Component ontological representation of function for diagnosis
A.N. Kumar, S. Upadhyaya
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323641
Using function instead of fault probabilities for candidate discrimination during model-based diagnosis has two advantages: function is more readily available, and it facilitates explanation generation. However, current representations of function have been context-dependent and state-based, making them inefficient and time-consuming. We propose classes as a scheme for representing function for diagnosis based on component-ontology principles, i.e., we define component functions (called classes) with respect to their ports. The scheme is linear in both space and time complexity, and hence efficient. It is also domain-independent and scales to the representation of complex devices. We demonstrate the utility of the representation for the diagnosis of a printer buffer board.
{"title":"Component ontological representation of function for diagnosis","authors":"A.N. Kumar, S. Upadhyaya","doi":"10.1109/CAIA.1994.323641","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323641","url":null,"abstract":"Using function instead of fault probabilities for candidate discrimination during model based diagnosis has the advantages that function is more readily available, and facilitates explanation generation. However, current representations of function have been context dependent and state based, making them inefficient and time consuming. We propose classes as a scheme of representation of function for diagnosis based on component ontology principles, i.e., we define component functions (called classes) with respect to their ports. The scheme is space and time-wise linear in complexity, and hence, efficient. It is also domain-independent and scalable to representation of complex devices. We demonstrate the utility of the representation for the diagnosis of a printer buffer board.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116547924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Learning natural language filtering under noisy conditions
S. Wermter
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323671
Describes a novel AI technique, called a plausibility network, that learns to filter natural language phrases into predefined classes under noisy conditions. We describe the automatic acquisition of knowledge for representing the words of natural language phrases using significance vectors, and the learning of phrase filtering according to ten different domain classes. We focus in particular on filtering performance under noisy conditions, that is, the degradation of these filtering techniques on incomplete phrases with unknown words. Furthermore, we show that the technique already scales to a few thousand real-world phrases, that it compares favorably with some classification techniques from information retrieval, and that it can deal with unknown words such as those produced by incomplete lexicons or speech recognizers.
{"title":"Learning natural language filtering under noisy conditions","authors":"S. Wermter","doi":"10.1109/CAIA.1994.323671","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323671","url":null,"abstract":"Describes a novel AI technique, called a plausibility network, that allows for learning to filter natural language phrases according to predefined classes under noisy conditions. We describe the automatic knowledge acquisition for representing the words of natural language phrases using significance vectors and the learning of filtering of phrases according to ten different domain classes. We particularly focus on examining the filtering performance under noisy conditions, that is the degradation of these filtering techniques for incomplete phrases with unknown words. Furthermore, we show that this technique already scales up for a few thousand real-world phrases, that it compares favorably to some classification techniques from information retrieval, and that it can deal with unknown words as they might occur based on incomplete lexicons or speech recognizers.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"11 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128996851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

From external requirements to appropriate knowledge representations: a case study
M. Stolze
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323679
Describes a knowledge engineering project in which external requirements, like the maintainability of the knowledge base, the flexibility of the interface and the completeness/correctness of the advice generated by the system, were found to be crucial for the development of efficient knowledge-based systems. As a result, quite in opposition to traditional approaches, the external requirements were considered the determining factor for the choice of a particular knowledge representation. It is discussed why rules, "repair plans" and model-based representations were not appropriate representations for building a system which at the same time was easily maintainable, supported flexible interaction, and generated complete and correct advice. It is then shown how the problem was solved by building a cooperative knowledge-based system which uses a relatively simple representational formalism.
{"title":"From external requirements to appropriate knowledge representations: a case study","authors":"M. Stolze","doi":"10.1109/CAIA.1994.323679","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323679","url":null,"abstract":"Describes a knowledge engineering project in which external requirements, like the maintainability of the knowledge base, the flexibility of the interface and the completeness/correctness of the advice generated by the system, were found to be crucial for the development of efficient knowledge-based systems. As a result/spl minus/quite in opposition to traditional approaches/spl minus/the external requirements were considered the determining factor for the choice of a particular knowledge representation. It is discussed why rules, \"repair plans\" and model-based representations were not appropriate representations for building a system which at the same time was easily maintainable, supported flexible interaction, and generated complete and correct advice. It is then shown how the problem was solved by building a cooperative knowledge-based system which uses a relatively simple representational formalism.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133014891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Integration of multiple knowledge sources in a system for brain CT-scan interpretation based on the blackboard model
Hongyi Li, R. Deklerck, J. Cornelis
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323656
Medical image interpretation is a complex task that requires the integration of knowledge acquired from different domains, such as medicine, computer vision and image processing. This paper describes a knowledge-based brain CT scan interpretation system that uses the blackboard model to integrate various sources of knowledge. A frame-based representation technique is employed to represent the geometric model of the human brain. The knowledge about low-level image processing algorithms and high-level interpretation is partitioned into knowledge sources (KSs) that operate on and communicate through the domain blackboard. Several numeric image processing algorithms are coded into KSs that segment the images or extract features from the image primitives. For the mapping of image primitives to brain objects, there are two groups of mapping KSs, namely model-directed and data-directed. The system achieves the successful labeling and delineation of about 25 brain objects.
{"title":"Integration of multiple knowledge sources in a system for brain CT-scan interpretation based on the blackboard model","authors":"Hongyi Li, R. Deklerck, J. Cornelis","doi":"10.1109/CAIA.1994.323656","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323656","url":null,"abstract":"Medical image interpretation is a complex task that requires the integration of knowledge acquired from different domains, such as medicine, computer vision and image processing. This paper describes a knowledge based brain CT scan interpretation system that uses the blackboard model to integrate various sources of knowledge. The frame-based representation technique is employed to represent the geometric model of the human brain. The knowledge on low level image processing algorithms and high level interpretation is partitioned into knowledge sources (KSs) that operate on and communicate through the domain blackboard. Several numeric image processing algorithms are coded into KSs that segment the images or extract features from the image primitives. For the mapping of image primitives to brain objects, there are two groups of mapping KSs, namely model-directed and data-directed. The system achieves the successful labeling and delineation of about 25 brain objects.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132338222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Integrating case-based reasoning, knowledge-based approach and Dijkstra algorithm for route finding
Bing Liu, Siew-Hwee Choo, Shee-Ling Lok, Sing-Meng Leong, Soo-Chee Lee, F. Poon, Hwee-Har Tan
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323680
Imagine you rent a car and plan to drive around an unfamiliar city. Before you go from one place to another, you need to know a good route. In network theory, this is the shortest path problem, and Dijkstra's algorithm is often used to solve it. However, when the road network of the city is complicated and dense, which is usually the case, the algorithm takes too long to find the shortest path. Furthermore, in reality, things are not as simple as network theory assumes. For instance, the cost of traveling through the same part of the city at different times may not be the same. In this project, we have integrated Dijkstra's algorithm with a knowledge-based approach and case-based reasoning to solve the problem. With this integration, knowledge about the geographical information and past cases is used to help Dijkstra's algorithm find a solution. This approach dramatically reduces the computation time required for route finding. A prototype system has been implemented for route finding in Singapore.
{"title":"Integrating case-based reasoning, knowledge-based approach and Dijkstra algorithm for route finding","authors":"Bing Liu, Siew-Hwee Choo, Shee-Ling Lok, Sing-Meng Leong, Soo-Chee Lee, F. Poon, Hwee-Har Tan","doi":"10.1109/CAIA.1994.323680","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323680","url":null,"abstract":"Imagine you rent a car and plan to drive around an unfamiliar city. Before you go from one place to another, you need to know a good route. In network theory, this is the shortest path problem. Dijkstra's algorithm is often used for solving this problem. However, when the road network of the city is very complicated and dense, which is usually the case, it will take too long for the algorithm to find the shortest path. Furthermore, in reality, things are not as simple as those stated in network theory. For instance, the cost of travel for the same part of the city at different times may not be the same. In this project, we have integrated Dijkstra's algorithm with a knowledge-based approach and case-based reasoning in solving the problem. With this integration, knowledge about the geographical information and past cases are used to help Dijkstra's algorithm in finding a solution. This approach dramatically reduces the computation time required for route finding. A prototype system has been implemented for route finding in Singapore.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114536458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Automating workflows for service provisioning: integrating AI and database technologies
M. Huhns, Munindar P. Singh
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323647
Workflows are the structured activities that take place in information systems in typical business environments. These activities frequently involve several database systems, user interfaces, and application programs. Traditional database systems do not support workflows to any reasonable extent; usually, human beings must intervene to ensure their proper execution. We have developed an architecture based on AI technology that automatically manages workflows. This architecture executes on top of a distributed computing environment. It has been applied to automating service provisioning workflows, and an implementation that operates on one such workflow has been developed. This work advances the Carnot project's goal of developing technologies for integrating heterogeneous database systems. It is notable in its marriage of AI approaches with standard distributed database techniques.
{"title":"Automating workflows for service provisioning: integrating AI and database technologies","authors":"M. Huhns, Munindar P. Singh","doi":"10.1109/CAIA.1994.323647","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323647","url":null,"abstract":"Workflows are the structured activities that take place in information systems in typical business environments. These activities frequently involve several database systems, user interfaces, and application programs. Traditional database systems do not support workflows to any reasonable extent. Usually, human beings must intervene to ensure their proper execution. We have developed an architecture based on AI technology that automatically manages workflows. This architecture executes on top of a distributed computing environment. It has been applied to automating service provisioning workflows; an implementation that operates on one such workflow has been developed. This work advances the Camel Project's goal of developing technologies for integrating heterogeneous database systems. It is notable in its marriage of AI approaches with standard distributed database techniques.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122749573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Using machine learning and expert systems to predict preterm delivery in pregnant women
M. Van Dyne, L. Woolery, J. Gryzmala-Busse, C. Tsatsoulis
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323655
Machine learning and statistical analysis were performed on 9,419 perinatal records with the goal of building a prototype expert system that would improve on the accuracy rates currently achieved by manual pre-term labor and delivery risk-scoring tools. Current manual scoring techniques have reported accuracy rates of 17-38%. The prototype expert system produced in this effort achieved overall accuracy rates of 53-88% when tested on records that were used in neither the statistical analysis nor the machine learning. Based on the success of this initial effort, the development of a full expert system to assist in pre-term delivery risk decision support, using the methods described in this paper, is planned.
{"title":"Using machine learning and expert systems to predict preterm delivery in pregnant women","authors":"M. Van Dyne, L. Woolery, J. Gryzmala-Busse, C. Tsatsoulis","doi":"10.1109/CAIA.1994.323655","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323655","url":null,"abstract":"Machine learning and statistical analysis were performed on 9,419 perinatal records with the goal of building a prototype expert system that would improve on the current accuracy rates achieved by manual pre-term labor and delivery risk scoring tools. Current manual scoring techniques have reported accuracy rates of 17-38%. The prototype expert system produced in this effort achieve overall accuracy rates of 53%-88% when tested on records that were not used in either statistical analysis or machine learning. Based on the success of this initial effort, the development of a full expert system to assist in pre-term delivery risk decision support, using the methods described in this paper, is planned.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127631686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Learning contextual rules for document understanding
G. Semeraro, F. Esposito, D. Malerba
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323685
We propose a supervised inductive learning approach to the problem of document understanding, that is, recognizing the logical components of a document. For this purpose, FOCL and NDUBI/H, two systems that learn Horn clauses, have been employed. Several experimental results are reported, and a critical view of the underlying independence assumption, made by almost all systems that learn from examples, is presented. This led us to redefine the problem of document understanding in terms of a new strategy of supervised inductive learning, called contextual learning. Experiments in which a dependency hierarchy between concepts is defined show that contextual rules increase predictive accuracy and decrease learning time for labelling problems such as document understanding. Encouraging results have been obtained when we tried to discover a linear dependency order by means of statistical methods.
{"title":"Learning contextual rules for document understanding","authors":"G. Semeraro, F. Esposito, D. Malerba","doi":"10.1109/CAIA.1994.323685","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323685","url":null,"abstract":"We propose a supervised inductive learning approach for the problem of document understanding, that is, recognizing logical components of a document. For this purpose, FOCL and NDUBI/H, two systems that learn Horn clauses, have been employed. Several experimental results are reported and a critical view of the underlying independence assumption, made by almost all systems that learn from examples, is presented. This led us to redefine the problem of document understanding in terms of a new strategy of supervised inductive learning, called contextual learning. Experiments, in which a dependency hierarchy between concepts is defined, show that contextual rules increase predictive accuracy and decrease learning time for labelling problems, like document understanding. Encouraging results have been obtained when we tried to discover a linear dependency order by means of statistical methods.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126349425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Intelligent validation and routing of electronic forms in a distributed workflow environment
M. Compton, S. Wolfe
Pub Date: 1994-03-01 | DOI: 10.1109/CAIA.1994.323683
This paper describes a knowledge-based system for improving the efficiency of automated workflow systems by 1) ensuring the correctness and completeness of data contained on forms that are originated and transmitted electronically, and 2) generating an electronic "routing slip" that reflects who must approve the form. The system uses a form-independent validation engine and form-specific constraints to check that electronic forms are filled out correctly. If no errors are detected during validation, the system uses information on the form to generate a list of individuals and/or organizations that must approve it. The system, implemented in CLIPS and running on Macintosh computers, communicates with an off-the-shelf electronic forms package via AppleScript and can operate within the Apple Open Collaboration Environment (AOCE). The system has successfully validated and generated approval paths for approximately ten different types of forms, and is easily extended to new forms.
{"title":"Intelligent validation and routing of electronic forms in a distributed workflow environment","authors":"M. Compton, S. Wolfe","doi":"10.1109/CAIA.1994.323683","DOIUrl":"https://doi.org/10.1109/CAIA.1994.323683","url":null,"abstract":"This paper describes a knowledge-based system for improving the efficiency of automated workflow systems by 1) ensuring the correctness and completeness of data contained on forms that are originated and transmitted electronically, and 2) generating an electronic \"routing slip\" that reflects who must approve the form. The system uses a form-independent validation engine and form-specific constraints to check that electronic forms are filled out correctly. If no errors are detected during validation, the system uses information on the form to generate a list of individuals and/or organizations that must approve it. The system, implemented in CLIPS and running on Macintosh computers, communicates with an off-the-shelf electronic forms package via AppleScript and can operate within the Apple Open Collaboration Environment (AOCE). The system has successfully validated and generated approval paths for approximately ten different types of forms, and is easily extended to new forms.<<ETX>>","PeriodicalId":297396,"journal":{"name":"Proceedings of the Tenth Conference on Artificial Intelligence for Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126785468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}