Pub Date: 1994-03-01  DOI: 10.1109/CAIA.1994.323687
Using machine learning to monitor network performance
R. Sasisekharan, V. Seshadri, S. Weiss
We describe a new approach, using machine learning, to automate performance monitoring in massively interconnected communications networks. The information obtained from monitoring network performance over time can be used to maintain the network proactively by detecting and predicting chronic failures and identifying potentially serious problems in their early stages, before they degrade. We have applied this machine learning approach to the detection and prediction of chronic transmission faults in AT&T's digital communications network. A windowing technique was applied to large volumes of diagnostic data; the data were analyzed and decision rules were induced. A set of conditions has been found that is highly predictive of chronic circuit problems. Through continuous monitoring of the network at regular intervals using the new approach, we have also been able to identify several local network trends of specific chronic problems while they were in progress.
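The windowing-and-rule-induction pipeline described in this abstract can be illustrated with a small sketch. The feature names, the window length, and the use of a decision-tree learner as the rule inducer are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: aggregate per-circuit diagnostic counts over a
# sliding time window, then induce decision rules with a tree learner.
# Feature names, window length, and the learner are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic daily error counts for 200 circuits over 14 days.
daily_errors = rng.poisson(lam=2.0, size=(200, 14))

def window_features(counts, window=7):
    """Summarize the most recent `window` days for each circuit."""
    recent = counts[:, -window:]
    return np.column_stack([
        recent.sum(axis=1),          # total errors in the window
        recent.max(axis=1),          # worst single day
        (recent > 0).sum(axis=1),    # number of days with any error
    ])

X = window_features(daily_errors)
# Synthetic label purely for illustration: "chronic" if errors persist on most days.
y = (X[:, 2] >= 5).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["total", "worst_day", "days_with_errors"]))
```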
Pub Date: 1994-03-01  DOI: 10.1109/CAIA.1994.323664
MeteoAssert: generation and organization of weather assertions from gridded data
S. Kerpedjiev
MeteoAssert, a system developed at the Forecast System Laboratory, analyzes gridded data sets and produces descriptions, which are organized sets of assertions representing the content of weather messages. Each assertion conveys a single weather characteristic with a certain spatial and temporal scope. The assertions in a description are linked by discourse relations that predetermine the structure of the weather message: a natural language text, a piece of graphics, a table, or a mixture of these elements. The descriptions are generated in response to queries representing the information needs of the user. Three models drive the system: territory, time, and parameter. Each model defines the objects in terms of which the descriptions are created. MeteoAssert works as a server to several systems dealing with different applications and preparing various weather displays.
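A minimal sketch of the assertion idea follows: a weather characteristic with a spatial and temporal scope, produced by thresholding a gridded field per region. The dataclass fields, region masks, and threshold are assumptions for illustration, not MeteoAssert's actual design.

```python
# Illustrative sketch: turn a gridded field into assertions, each covering a
# single weather characteristic with a spatial and temporal scope.
from dataclasses import dataclass
import numpy as np

@dataclass
class Assertion:
    parameter: str      # e.g. "precipitation"
    value: str          # qualitative characteristic, e.g. "heavy"
    region: str         # spatial scope from the territory model
    period: str         # temporal scope from the time model

def assert_heavy_rain(grid, regions, period, threshold=10.0):
    """Emit one assertion per region whose mean gridded value exceeds the threshold."""
    out = []
    for name, mask in regions.items():
        if grid[mask].mean() > threshold:
            out.append(Assertion("precipitation", "heavy", name, period))
    return out

grid = np.random.default_rng(1).uniform(0, 20, size=(10, 10))
regions = {"north": np.s_[:5, :], "south": np.s_[5:, :]}
for a in assert_heavy_rain(grid, regions, "Tuesday 06-12 UTC"):
    print(a)
```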
Pub Date: 1994-03-01  DOI: 10.1109/CAIA.1994.323619
A neural network expert system shell
T. Quah, C. Tan, H. Teh
Presents the architecture of a hybrid neural network expert system shell. The system, structured around the concept of a "network element", is aimed at preserving the semantic structure of the expert system rules whilst incorporating the learning capability of neural networks into the inferencing mechanism. Using this architecture, every rule of the knowledge base is represented by a one- or two-layer neural network element. These network elements are dynamically linked up to form a rule-tree during the inferencing process. The system is also able to adjust its inferencing strategy according to different users and situations. A rule editor is also provided to enable easy maintenance of the neural network rule elements.
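One way to picture a rule held as a trainable one-layer network element is sketched below. The thresholded linear unit, the perceptron-style update, and the rule names are assumptions about how such a hybrid shell might work, not the paper's implementation.

```python
# Illustrative sketch: a rule such as "IF a AND b THEN c" stored as a small
# trainable network element whose weights can be adjusted from feedback.
import numpy as np

class RuleElement:
    def __init__(self, antecedents, consequent):
        self.antecedents = antecedents            # names of input conditions
        self.consequent = consequent
        self.weights = np.ones(len(antecedents))  # start as a crisp AND
        self.bias = -len(antecedents) + 0.5

    def fire(self, facts):
        x = np.array([facts.get(a, 0.0) for a in self.antecedents])
        return float(self.weights @ x + self.bias > 0)

    def train(self, facts, target, lr=0.1):
        """Nudge the weights when the rule's conclusion disagrees with the expert."""
        x = np.array([facts.get(a, 0.0) for a in self.antecedents])
        error = target - self.fire(facts)
        self.weights += lr * error * x
        self.bias += lr * error

rule = RuleElement(["high_fever", "stiff_neck"], "suspect_meningitis")
print(rule.fire({"high_fever": 1.0, "stiff_neck": 1.0}))  # 1.0: both conditions hold
```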
Pub Date: 1994-03-01  DOI: 10.1109/CAIA.1994.323672
Application of explanation-based learning for efficient processing of constraint-based grammars
Gunter Neumann
Describes the application of explanation-based learning (EBL) for efficient processing of constraint-based grammars. The idea is to automatically generalize the derivations of training instances created by normal parsing and to use these generalized derivations (called templates) during the run-time mode of the system. When a template can be instantiated for a new input, no further grammatical analysis is necessary. The approach is not restricted to the sentential level but can also be applied to arbitrary phrases. Therefore, the EBL method can be interleaved straightforwardly with normal processing to recover flexibility that would otherwise be lost.
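The template-reuse mechanism can be sketched as a cache keyed by the input's sequence of lexical categories: if a matching template exists, the stored derivation is instantiated with the new words and the full parser is skipped. Keying by category sequence and the toy stand-in parser are assumptions, not the paper's grammar machinery.

```python
# Illustrative sketch: cache a generalized derivation ("template") and reuse it
# instead of re-parsing inputs with the same category sequence.
template_cache = {}

def categories(tokens, lexicon):
    return tuple(lexicon[t] for t in tokens)

def full_parse(tokens, lexicon):
    """Stand-in for the expensive constraint-based parser."""
    return ("S", ("NP", tokens[0]), ("VP", tokens[1], ("NP", tokens[2])))

def parse_with_ebl(tokens, lexicon):
    key = categories(tokens, lexicon)
    if key in template_cache:
        # Instantiate the stored template with the new words: no parsing needed.
        return template_cache[key](tokens)
    tree = full_parse(tokens, lexicon)
    template_cache[key] = lambda toks: ("S", ("NP", toks[0]), ("VP", toks[1], ("NP", toks[2])))
    return tree

lexicon = {"John": "N", "Mary": "N", "sees": "V", "likes": "V"}
parse_with_ebl(["John", "sees", "Mary"], lexicon)          # parsed normally, template stored
print(parse_with_ebl(["Mary", "likes", "John"], lexicon))  # instantiated from the template
```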
Pub Date: 1994-03-01  DOI: 10.1109/CAIA.1994.323670
A blackboard approach to the integration of crankshaft analysis applications
D. Fagan
The design and analysis of engine crankshafts involves a myriad of computer-aided engineering design and analysis tools. To be successful in reducing cycle time to the finished product, these tools must be integrated into an environment in which common data is shared and the analyst need not be an expert in all areas. Currently, these different analyses are completed independently of each other by different engineers. This process requires the manual exchange of data, which has proven to be both time-consuming and error-prone. A process of integrating the various analysis applications is presented, which uses a blackboard approach as an implementation tool. This paper also presents enhancements to various analysis applications, made possible by their integration into a common environment.
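The blackboard style of integration can be pictured with a minimal control loop: each analysis acts as a knowledge source that runs once its inputs appear on the shared blackboard and posts its result for the others. The analysis names, data keys, and dummy results are hypothetical, not the paper's methodology.

```python
# Illustrative sketch: a minimal blackboard loop. Each knowledge source fires
# when its required data is present and posts its output to the shared store.
blackboard = {"geometry": "crankshaft.igs"}

knowledge_sources = [
    # (name, inputs it needs, output it posts, action)
    ("mass_properties", ["geometry"], "mass", lambda bb: 42.0),
    ("balance_analysis", ["geometry", "mass"], "imbalance", lambda bb: 0.03),
    ("durability_check", ["imbalance"], "fatigue_margin", lambda bb: 1.7),
]

progress = True
while progress:
    progress = False
    for name, needs, output, action in knowledge_sources:
        if output not in blackboard and all(k in blackboard for k in needs):
            blackboard[output] = action(blackboard)   # post result for the other sources
            print(f"{name} posted {output} = {blackboard[output]}")
            progress = True
```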
Pub Date: 1994-03-01  DOI: 10.1109/CAIA.1994.323697
An intelligent control shell for CAD tools
S. Fujita, M. Otsubo, M. Watanabe
Describes an intelligent control shell for CAD tools, which can automatically create a command sequence to control CAD systems using symbolic knowledge of general command flows and non-symbolic knowledge of past execution data. Users define a model of possible control flows, which is transformed into a state transition graph from which executable command sequences are inferred. The control system statistically analyzes non-deterministic branches, where a final result is predicted from the current state of a design object, the command history, and the succeeding commands. The most promising command for optimizing the design objects is then selected and executed. Circuits synthesized by the LSI CAD system under the proposed shell are about 5% faster than those synthesized with a standard script for delay minimization.
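The branch-selection step can be sketched as follows: at a non-deterministic branch, each candidate command's effect is predicted from past execution records and the most promising one is run. The command names and the simple ratio-averaging predictor are assumptions for illustration, not the shell's actual statistics.

```python
# Illustrative sketch: predict each candidate command's effect on delay from
# past runs and pick the one with the best predicted outcome.
from statistics import mean

# Past runs: (command, delay before, delay after) gathered from execution logs.
history = [
    ("resize_gates", 12.0, 10.8), ("resize_gates", 11.5, 10.9),
    ("buffer_insertion", 12.2, 10.1), ("buffer_insertion", 11.8, 10.4),
]

def predicted_delay(command, current_delay):
    ratios = [after / before for cmd, before, after in history if cmd == command]
    return current_delay * mean(ratios) if ratios else current_delay

def choose_command(candidates, current_delay):
    return min(candidates, key=lambda c: predicted_delay(c, current_delay))

print(choose_command(["resize_gates", "buffer_insertion"], current_delay=11.0))
```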
Pub Date: 1994-03-01  DOI: 10.1109/CAIA.1994.323632
Routing heuristics for Cayley graph topologies
M. Hitz, T. Mueck
In general, a routing algorithm has to map virtual paths to sequences of physical data transfer operations. The number of physical transmission steps needed to transfer a particular data volume is proportional to the resulting transmission time. In the context of the corresponding optimization process, the Cayley graph model is used to generate and evaluate a large number of different interconnection topologies. Candidates are further evaluated with respect to fast and efficient routing heuristics using A* traversals. Simulated annealing techniques are used to find accurate traversal heuristics for each candidate. The results largely justify the application of these techniques: the resulting heuristics provide a significant reduction in the number of search nodes expanded during path finding at run-time.
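A small sketch of A* routing on a Cayley graph follows: vertices are permutations, edges are given by a generator set, and a heuristic prunes the search. The generator set (adjacent transpositions) and the heuristic (mismatched positions divided by two, which is admissible for this generator set) are illustrative choices, not the paper's topologies or its annealed heuristics.

```python
# Illustrative sketch: A* path finding on a Cayley graph whose vertices are
# permutations of 0..3 and whose edges are adjacent transpositions.
import heapq

N = 4
GENERATORS = [(i, i + 1) for i in range(N - 1)]   # adjacent transpositions

def neighbors(node):
    for i, j in GENERATORS:
        p = list(node)
        p[i], p[j] = p[j], p[i]
        yield tuple(p)

def heuristic(node, goal):
    # Each transposition fixes at most two positions, so this never overestimates.
    return sum(a != b for a, b in zip(node, goal)) / 2

def astar(start, goal):
    frontier = [(heuristic(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in neighbors(node):
            heapq.heappush(frontier, (cost + 1 + heuristic(nxt, goal), cost + 1, nxt, path + [nxt]))
    return None

print(astar((3, 2, 1, 0), (0, 1, 2, 3)))
```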
Pub Date: 1994-03-01  DOI: 10.1109/CAIA.1994.323676
A study of an expert system for interpreting human walking disorders
T. Bylander, M. Weintraub, S. Simon
QUAWDS is a system for interpreting human gait. The interpretation of gait data produced by gait analysis laboratories is difficult to learn and is a time-consuming activity for a clinician. As a result, gait analysis is currently practiced by only a few "experts", limiting its widespread use in clinical practice. In this paper, we discuss a clinical evaluation of QUAWDS for a patient population with cerebral palsy and gait disorders. The implemented computer system performed at about 75% effectiveness when compared to the gait study reports produced by our expert.
Pub Date: 1994-03-01  DOI: 10.1109/CAIA.1994.323622
Rapid integration of CAE analysis programs using a blackboard approach
Nanxin Wang, Jie Cheng, S. Staley, G. C. Davis
This paper presents an approach for rapidly integrating CAE analysis programs into complex engineering methodologies. The blackboard technique has been adopted in the implementation of this approach, especially for process scheduling, monitoring, and, potentially, diagnosis. The approach has now been implemented and tested in prototyping projects on several engineering methodologies involving multiple analysis programs.
Pub Date: 1994-03-01  DOI: 10.1109/CAIA.1994.323630
Do-ahead replaces run-time: a neural network forecasts options volatility
M. Malliaris, L. Salchenberger
Compares three methods of estimating the volatility of daily S&P 100 Index stock market options. The implied volatility, calculated via the Black-Scholes model, is currently the most popular method of estimating volatility and is used by traders in the pricing of options. Historical volatility has been used to predict the implied volatility, but the estimates are poor predictors. A neural network for predicting volatility is shown to be far superior to the historical method.
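The comparison in this abstract rests on backing implied volatility out of quoted option prices via the Black-Scholes formula. Below is a minimal sketch of that standard inversion by bisection, assuming a European call with no dividends; it is the textbook calculation, not the paper's code or its neural-network forecaster, and the numbers are made up for illustration.

```python
# Illustrative sketch: recover implied volatility from a quoted call price by
# bisection on the Black-Scholes formula (European call, no dividends assumed).
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    # The call price is increasing in sigma, so bisection converges.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical example: a 30-day at-the-money call on an index level of 440.
print(round(implied_vol(price=9.10, S=440.0, K=440.0, T=30 / 365, r=0.03), 4))
```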