R. Gaizauskas, H. Cunningham, Y. Wilks, P. Rodgers, K. Humphreys
We describe a software environment to support research and development in natural language (NL) engineering. This environment, GATE (General Architecture for Text Engineering), aims to advance research into the machine processing of natural languages by providing a software infrastructure on top of which heterogeneous NL component modules may be evaluated and refined individually, or combined into larger application systems. GATE thus aims to support both researchers and developers working on component technologies (e.g. parsing, tagging, morphological analysis) and those developing end-user applications (e.g. information extraction, text summarisation, document generation, machine translation, and second-language learning). GATE will promote reuse of component technology, permit specialisation and collaboration in large-scale projects, and allow alternative technologies to be compared and evaluated. The first release of GATE is now available.
GATE: an environment to support research and development in natural language engineering. R. Gaizauskas, H. Cunningham, Y. Wilks, P. Rodgers, K. Humphreys. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560401
In the framework of classical constraint satisfaction problems (CSPs), backtrack tree search combined with learning methods offers a double advantage: for static solving, it improves search speed by avoiding redundant exploration; for dynamic solving (after a slight change to the problem), it reuses previous searches to build a new solution quickly. Backtrack reasoning concludes with the rejection of certain combinatorial choices; Nogood Recording memorizes these choices so that they are not reproduced. We aim to use Nogood Recording in the wider scope of the Valued CSP (VCSP) framework to enhance the branch-and-bound algorithm: nogoods are used to raise the lower bound that branch and bound uses to prune the search. This leads to the definition of "valued nogoods" and their use. The study focuses particularly on penalty and dynamic VCSPs, which require special developments, but our results extend Nogood Recording to the general VCSP framework.
Nogood recording for valued constraint satisfaction problems. Pierre Dago, G. Verfaillie. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560443
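The way recorded nogoods can raise the branch-and-bound lower bound may be sketched as follows. This is a minimal illustration, not the paper's valued-nogood machinery: the `solve` function, the toy cost function, and the choice to key nogoods on whole pruned partial assignments are all simplifying assumptions.

```python
# Sketch: branch and bound over a valued CSP, where pruned branches are
# recorded as nogoods (partial assignment -> proven cost lower bound) and
# reused to raise the lower bound of later, subsuming assignments.

def solve(variables, domains, cost, ub=float("inf")):
    nogoods = {}  # frozenset of (var, val) pairs -> known cost lower bound

    def bound(assignment):
        lb = cost(assignment)
        items = set(assignment.items())
        # Any recorded nogood contained in the assignment raises the bound.
        for ng, ng_lb in nogoods.items():
            if ng <= items:
                lb = max(lb, ng_lb)
        return lb

    best = [ub, None]  # [best cost so far, best assignment so far]

    def search(assignment, remaining):
        lb = bound(assignment)
        if lb >= best[0]:
            # Record why this branch was pruned, for reuse later.
            nogoods[frozenset(assignment.items())] = lb
            return
        if not remaining:
            best[0], best[1] = lb, dict(assignment)
            return
        var, rest = remaining[0], remaining[1:]
        for val in domains[var]:
            assignment[var] = val
            search(assignment, rest)
            del assignment[var]

    search({}, list(variables))
    return best[1], best[0]

# Toy instance: a penalty of 1 whenever x == y; the optimum separates them.
doms = {"x": [0, 1], "y": [0, 1]}
def clash(a):
    return 1 if a.get("x") is not None and a.get("x") == a.get("y") else 0
sol, cost_found = solve(["x", "y"], doms, clash)
```

In a dynamic setting, the point of the paper is that the `nogoods` table can outlive one search and seed the bounds of the next one after a slight change to the problem.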
We propose a software development toolkit named ADAM (Adaptive Distributed Applications Manager) which creates operational distributed applications. To increase the performance of these applications, they are created according to network, machine and software constraints, and are updated in response to dynamic environment changes. Using a graphical interface, the user provides functional specifications of the distributed application they want to build. ADAM derives a structural scheme of this application: it adds missing software components (for example, data transformation software) in order to create an effective distributed application, and selects available software components located on various machines and sites, subject to varying machine performance and different inter-site distances. In a second step, the scheme is transformed into a distributed application.
Assistant agents for creation and management of distributed applications. M. Girard. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560783
This paper presents an inferential system based on the abductive interpretation of text. Inference to the best explanation is performed by recognising the most economical semantic paths produced by the propagation of markers over a very large linguistic knowledge base. The propagation of markers is controlled by their intrinsic propagation rules, devised from plausible chains of semantic relations. An interpretation is inferred whenever two markers collide. Using a very large knowledge base, our inferential system aims to produce interpretations that account for common-sense reasoning. The novelty is that the inference rules model a large variety of implications, as suggested by the knowledge-base relations. Textual implicatures are recognized as pragmatic inferences.
PARIS: a parallel inference system. S. Harabagiu, D. Moldovan. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560454
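The marker-collision idea can be sketched on a toy semantic network. This is a hedged stand-in, not the PARIS system: the graph, the relation costs, and the single-source shortest-path formulation of "most economical path" are all assumptions made for illustration.

```python
import heapq

# Toy semantic network: directed relations, each with a plausibility cost.
GRAPH = {
    "waiter": [("restaurant", 1), ("serve", 1)],
    "serve": [("food", 1)],
    "restaurant": [("food", 2), ("eat", 2)],
    "customer": [("restaurant", 1), ("eat", 1)],
    "eat": [("food", 1)],
}

def spread(source):
    """Propagate a marker from source, recording the cheapest cost to each node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in GRAPH.get(node, []):
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt))
    return dist

def best_collision(a, b):
    """Markers spread from a and b; the cheapest common node is where they collide."""
    da, db = spread(a), spread(b)
    common = set(da) & set(db)
    return min(common, key=lambda n: da[n] + db[n]) if common else None
```

For instance, markers from "waiter" and "customer" collide most cheaply at "restaurant", the kind of connecting concept an abductive interpretation would propose.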
The authors present a mechanism for recovering consistent data from an inconsistent set of assertions. For a common family of knowledge bases they also provide an efficient algorithm for doing so automatically. The method is nonmonotonic and paraconsistent, and is particularly useful for diagnosing faulty devices.
Automatic diagnoses for properly stratified knowledge-bases. O. Arieli, A. Avron. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560481
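What "recovering consistent data" means can be illustrated with a much simpler (and exponential) stand-in: treat assertions as propositional literals and enumerate the maximal consistent subsets. This is not the authors' paraconsistent, properly-stratified algorithm, just a sketch of the recovery idea under that simplifying assumption.

```python
from itertools import combinations

def consistent(literals):
    """A set of literal assertions is consistent iff no complementary pair occurs."""
    return not any(("~" + a) in literals for a in literals if not a.startswith("~"))

def recoveries(assertions):
    """Every maximal consistent subset of a set of literal assertions."""
    assertions = set(assertions)
    found = []
    # Scan subsets from largest to smallest: a consistent subset that is not
    # contained in an already-found one must be maximal.
    for k in range(len(assertions), 0, -1):
        for sub in combinations(sorted(assertions), k):
            s = set(sub)
            if consistent(s) and not any(s < f for f in found):
                found.append(s)
    return found
```

Each maximal consistent subset is one candidate diagnosis: the assertions left out are the ones suspected of being faulty. The brute-force enumeration here is exactly what the paper's efficient algorithm avoids for its family of knowledge bases.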
The processor configuration problem (PCP) is a constraint optimization problem: the task is to link a finite set of processors into a network while minimizing the maximum distance between processors. Since each processor has a limited number of communication channels, a carefully planned layout can minimize the overhead of message switching. We present a genetic algorithm (GA) approach to the PCP. Our technique uses a mutation-based GA, a function that produces schemata by analyzing previous solutions, and an effective data representation. Our approach has been shown to outperform other published techniques on this problem.
Applying a mutation-based genetic algorithm to processor configuration problems. T. Lau, E. Tsang. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560395
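The general shape of a mutation-driven GA (selection plus point mutation, no crossover) can be sketched as below. The stand-in bitstring objective, population size, and truncation selection are illustrative assumptions; the paper's PCP encoding and its schema-producing function are not reproduced here.

```python
import random

def mutation_ga(fitness, genome_len, pop_size=30, generations=200, seed=0):
    """Mutation-only GA: truncation selection plus single-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)              # lower fitness is better
        survivors = pop[: pop_size // 2]   # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one random bit
            children.append(child)
        pop = survivors + children         # survivors persist (elitism)
    return min(pop, key=fitness)

# Stand-in objective: count of zero bits (all-ones target, as minimization).
best = mutation_ga(lambda g: g.count(0), genome_len=20)
```

Because the best individual always survives, fitness is monotonically non-increasing; the role crossover would normally play is taken over here by the sheer volume of cheap point mutations.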
This article discusses the implementation of a hand-written character recognition task using neural networks. Two logic neural networks, the WISARD (I. Aleksander and H. Morton, 1990) and the SOLNN (G. Tambouratzis and T.J. Stonham, 1993), are compared on the basis of their classification accuracy. The results obtained are compared to those of other researchers in order to assess objectively the success of the neural networks in classifying the dataset.
Applying logic neural networks to hand-written character recognition tasks. G. Tambouratzis. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560461
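The WISARD family of logic networks classifies by n-tuple RAM lookups rather than weighted sums, which a small sketch makes concrete. The 3x3 "images", the tuple size, and the use of Python sets in place of RAM cells are assumptions for illustration only.

```python
import random

class Discriminator:
    """A WISARD-style discriminator: random n-tuples of pixels address RAM cells."""
    def __init__(self, n_pixels, tuple_size, rng):
        idx = list(range(n_pixels))
        rng.shuffle(idx)  # random but fixed input-to-tuple mapping
        self.tuples = [idx[i:i + tuple_size]
                       for i in range(0, n_pixels, tuple_size)]
        self.rams = [set() for _ in self.tuples]  # a set stands in for a RAM

    def _address(self, image, tup):
        return tuple(image[i] for i in tup)

    def train(self, image):
        # Training just sets the addressed cell in each RAM.
        for ram, tup in zip(self.rams, self.tuples):
            ram.add(self._address(image, tup))

    def score(self, image):
        # Response = number of tuples whose addressed cell was set in training.
        return sum(self._address(image, tup) in ram
                   for ram, tup in zip(self.rams, self.tuples))

def classify(discriminators, image):
    """Pick the class whose discriminator responds most strongly."""
    return max(discriminators, key=lambda name: discriminators[name].score(image))

# Hypothetical toy data: 3x3 binary images of a vertical and a horizontal bar.
vert = [0, 1, 0, 0, 1, 0, 0, 1, 0]
horiz = [0, 0, 0, 1, 1, 1, 0, 0, 0]
d_v = Discriminator(9, 3, random.Random(1)); d_v.train(vert)
d_h = Discriminator(9, 3, random.Random(2)); d_h.train(horiz)
```

One discriminator is trained per class; a trained pattern always yields a full score against its own discriminator, which is what makes the maximum-response rule work.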
The symbolic level of a dynamic scene interpretation system is presented. This symbolic level is based on plan prototypes represented by Petri nets, whose interpretation is expressed by means of first-order constrained cubes, and on reasoning that aims to instantiate the plan prototypes with objects delivered by the numerical processing of sensor data. An example on real-world data is given.
1st order C-cubes for the interpretation of Petri nets: an application to dynamic scene understanding. Charles Castel, L. Chaudron, C. Tessier. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560478
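The Petri-net backbone of such a plan prototype is just standard marking-and-firing semantics, sketched below. The surveillance-style place names are hypothetical; the constrained-cube interpretation layer from the paper is not modelled here.

```python
# Minimal Petri net: a plan prototype whose transitions fire as the scene evolves.

def enabled(marking, pre):
    """A transition is enabled if every pre-place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume tokens from pre-places, produce tokens in post-places."""
    assert enabled(marking, pre), "transition not enabled"
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical plan: an observed object is first identified, then tracked.
m0 = {"observed": 1}
m1 = fire(m0, {"observed": 1}, {"identified": 1})
m2 = fire(m1, {"identified": 1}, {"tracked": 1})
```

In the paper's setting, each firing would additionally be constrained by first-order conditions on the objects delivered by the sensor-data processing.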
We consider fuzzy implication operators which extend the two-valued logic implication operator and are non-decreasing with respect to their second argument. First, we analyze some features of these operators with regard to fuzzy reasoning with one rule. Then, for approximate reasoning with multiple rules, we demonstrate that, for an inference process using sup-T composition in the context of the Compositional Rule of Inference (CRI), a necessary condition for inferring a reasonable conclusion is that the minimum be used as the aggregation operator. Finally, when the minimum is used as the aggregation operator, we provide a sufficient condition for obtaining a reasonable inference result.
Reasonable conclusions in fuzzy reasoning. B. Lazzerini, F. Marcelloni. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560774
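CRI with sup-T composition and minimum aggregation can be sketched on finite universes. The choice of the Gödel implication, T = min, and the toy fuzzy sets are assumptions for illustration; the paper's necessity and sufficiency conditions are about the whole operator class, not this one instance.

```python
# Sup-min compositional rule of inference with the Goedel implication,
# aggregating the conclusions of multiple rules by minimum.

def godel_implication(a, b):
    return 1.0 if a <= b else b

def cri(rules, a_prime, xs, ys):
    """B'(y) = min over rules of sup_x min(A'(x), I(A_i(x), B_i(y)))."""
    def one_rule(A, B):
        return {y: max(min(a_prime[x], godel_implication(A[x], B[y]))
                       for x in xs)
                for y in ys}
    outs = [one_rule(A, B) for A, B in rules]
    return {y: min(out[y] for out in outs) for y in ys}

# One rule "x is A -> y is B"; feeding A back in reproduces B exactly here,
# one hallmark of a "reasonable" conclusion.
xs = [0, 1, 2]
A = {0: 1.0, 1: 0.5, 2: 0.0}
B = {0: 1.0, 1: 0.5, 2: 0.0}
b_prime = cri([(A, B)], A, xs, xs)
```

With several rules, replacing the final `min` by, say, `max` is exactly the kind of aggregation the paper shows can produce unreasonable conclusions.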
Dent and Mercer (1996) introduced an algorithm called minimal forward checking (MFC) which never performs worse than forward checking (FC) in terms of the number of compatibility checks and nodes expanded, given the same variable and value orderings. In this paper we describe an algorithm which extends MFC with backmarking and conflict-directed backjumping. The new algorithm has a smaller space complexity than MFC. Experiments were conducted to compare it with MFC and some regular FC-based algorithms. The results show that the new algorithm always performs at least as well as its "non-lazy" counterpart. It outperforms MFC on average, and its edge over MFC is particularly clear for problems near phase transitions. Interestingly, the minimum-width variable ordering heuristic appears to be a better choice than the fail-first heuristic for the new algorithm on many occasions, particularly for sparsely constrained problems.
Minimal forward checking with backmarking and conflict-directed backjumping. A. Kwan, E. Tsang. In Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996. doi:10.1109/TAI.1996.560466
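The lazy look-ahead at the heart of MFC can be sketched as follows: after each assignment, a future domain is scanned only until one value consistent with all past assignments is found. This simplified sketch recomputes supports instead of memoizing the suspended checks (the bookkeeping that makes real MFC provably no worse than FC), and the backmarking and backjumping extensions are omitted.

```python
# Simplified lazy forward checking on a binary CSP, illustrated on 4-queens.

def mfc(compatible, domains, order, assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(order):
        return dict(assignment)
    var = order[len(assignment)]
    for val in domains[var]:
        # Check the candidate against all past assignments.
        if all(compatible(pv, pval, var, val) for pv, pval in assignment.items()):
            assignment[var] = val
            future = order[len(assignment):]
            # Lazy look-ahead: each future variable needs just one known support;
            # the rest of its domain is left unchecked for now.
            if all(any(all(compatible(pv, pval, f, fval)
                           for pv, pval in assignment.items())
                       for fval in domains[f])
                   for f in future):
                result = mfc(compatible, domains, order, assignment)
                if result is not None:
                    return result
            del assignment[var]
    return None

def no_attack(c1, r1, c2, r2):
    """Queens in columns c1, c2 at rows r1, r2 do not attack each other."""
    return r1 != r2 and abs(r1 - r2) != abs(c1 - c2)

sol = mfc(no_attack, {c: [0, 1, 2, 3] for c in range(4)}, list(range(4)))
```

Because the look-ahead only ever removes branches that cannot lead to a solution, the first solution found is the same one plain backtracking would return.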