An adaptive recurrent neural network system for multi-step-ahead hourly prediction of power system loads
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374781
A. Khotanzad, A. Abaye, D. Maratukulam
In this paper, a new recurrent neural network (RNN) based system for hourly prediction of power system loads up to two days ahead is developed. The system is modular, consisting of 24 non-fully connected RNNs; each RNN predicts the one- and two-day-ahead load values for a particular hour of the day. The RNNs are trained with a backpropagation-through-time algorithm using a teacher-forcing strategy. To handle non-stationarities, an adaptive scheme adjusts the RNN weights during the forecasting phase. The forecaster is tested on one year of real data from two utilities, with excellent results. This recurrent system outperforms another modular feedforward NN-based forecaster that is in beta testing at several electric utilities.
{"title":"An adaptive recurrent neural network system for multi-step-ahead hourly prediction of power system loads","authors":"A. Khotanzad, A. Abaye, D. Maratukulam","doi":"10.1109/ICNN.1994.374781","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374781","url":null,"abstract":"In this paper a new recurrent neural network (RNN) based system for hourly prediction of power system loads for up to two days ahead is developed. The system is a modular one consisting of 24 non-fully connected RNNs. Each RNN predicts the one and two-day-ahead load values of a particular hour of the day. The RNNs are trained with a backpropagation through time algorithm using a teacher forcing strategy. To handle non-stationarities, an adaptive scheme is used to adjust the RNN weights during the forecasting phase. The performance of the forecaster is tested on one year of real data from two utilities and the results are excellent. This recurrent system outperforms another modular feedforward NN-based forecaster which is in beta testing at several electric utilities.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116577239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An incremental concept formation approach to learn and discover from a clinical database
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374706
V. Soo, Jan-Sing Wang, Shih-Pu Wang
The main interest of this research is to discover clinical implications from a large PTCA (percutaneous transluminal coronary angioplasty) database. A case-based concept formation model, D-UNIMEM, modified from Lebowitz's UNIMEM, is proposed for this purpose. The model integrates two kinds of class membership: a polythetic one and an index-conjunction one. The former is a polythetic clustering approach that serves the early stage of concept formation; the latter, which allows only relevant instances to be placed in the same cluster, serves the later stage. D-UNIMEM can extract interesting correlations among features from the learned concept hierarchy.
{"title":"An incremental concept formation approach to learn and discover from a clinical database","authors":"V. Soo, Jan-Sing Wang, Shih-Pu Wang","doi":"10.1109/ICNN.1994.374706","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374706","url":null,"abstract":"The main interest of this research is to discover clinical implications from a large PTCA (Percutaneous Transluminal Coronary Angioplasty) database. A case-based concept formation model D-UNIMEM, modified from Lebowitz's UNIMEM, is proposed for this purpose. In this model, we integrated two kinds of class membership and the index-conjunction class membership. The former is a polythetic clustering approach that serves at the early stage of concept formation. The latter that allows only relevant instances to be placed in the same cluster serves as the later stage of concept formation. D-UNIMEM could extract interesting correlation among features from the learned concept hierarchy.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122723980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A modular artificial neural network system for the classification and selection of coatings for a chemical sensor array
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374772
G. Chu, ChengXin Cui, D. Stacey
An application in the area of chemical and biosensor design has provided the inspiration for research into some of the issues involved in the design and application of modular artificial neural networks (ANNs) for pattern classification tasks. The development of modular ANNs can be divided into two main components: (1) the topological design of the individual modular ANNs and the construction of the assembly of modules; and (2) the analysis of the data sets used to train the individual modules. The chemical sensor design task allows us to explore this second component and identify some of the implications for the capture and analysis of data appropriate for training modular ANN systems.
{"title":"A modular artificial neural network system for the classification and selection of coatings for a chemical sensor array","authors":"G. Chu, ChengXin Cui, D. Stacey","doi":"10.1109/ICNN.1994.374772","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374772","url":null,"abstract":"An application in the area of chemical and biosensor design has provided the inspiration for research into some of the issues involved with the design and application of modular artificial neural networks (ANNs) for pattern classification tasks. We can divide the development of modular ANNs into two main components: (1) the topological design of the individual modular ANNs and the construction of the assembly of modules; and (2) the analysis of the data sets to be used to train the individual modules. The chemical sensor design task allows us to explore this second component to identify some of the implications for the capture and analysis of data appropriate for the training of modular ANN systems.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122115866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aspects of information detection using entropy
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374746
J. Mrsic-Flogel
An evolving learning system should be able to self-organize on its input vector continuously through time. This paper presents initial simulation results showing that entropy is a measure that can be employed to uncover coding-structure information by inspecting a binary input channel through time. It also shows that source information must be sparsely coded for entropy to detect which code bitstring lengths are being employed to communicate source information to a self-organizing system.
{"title":"Aspects of information detection using entropy","authors":"J. Mrsic-Flogel","doi":"10.1109/ICNN.1994.374746","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374746","url":null,"abstract":"An evolving learning system should be able to self-organise on its input vector continuously through time. This paper presents initial simulation results which show that entropy is a measure which could be employed to find various coding structure information by inspection of a binary input channel through time. It also shows that source information needs to be sparsely coded for entropy to be able to detect which code bitstring lengths are being employed to communicate source information to a self-organizing system.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116739308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot tracking in task space using neural networks
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374684
G. Feng, C. K. Chak
This paper considers tracking control of robots in task space. A new control scheme is proposed that combines a conventional controller with a neural-network-based compensating controller. The scheme retains the simplicity of the model-based control approach and uses the neural network controller to compensate for robot modelling uncertainties. The neural network is trained online based on Lyapunov theory, and its convergence is thus guaranteed.
{"title":"Robot tracking in task space using neural networks","authors":"G. Feng, C. K. Chak","doi":"10.1109/ICNN.1994.374684","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374684","url":null,"abstract":"This paper considers tracking control of robots in task space. A new control scheme is proposed based on a kind of conventional controller and a neural network based compensating controller. This scheme takes advantages of simplicity of the model based control approach and uses the neural network controller to compensate for the robot modelling uncertainties. The neural network is trained online based on Lyapunov theory and thus its convergence is guaranteed.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117040190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The best approximation to C^2 functions and its error bounds using regular-center Gaussian networks
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374595
Binfan Liu, J. Si
Gaussian neural networks are considered for approximating any C^2 function with support on the unit hypercube I_m = [0,1]^m in the sense of best approximation. An upper bound of O(N^-2) on the approximation error is obtained in the present paper for a Gaussian network having N^m hidden neurons with centers defined on a regular mesh in I_m.
{"title":"The best approximation to C/sup 2/ functions and its error bounds using regular-center Gaussian networks","authors":"Binfan Liu, J. Si","doi":"10.1109/ICNN.1994.374595","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374595","url":null,"abstract":"Gaussian neural networks are considered to approximate any C/sup 2/ function with support on the unit hypercube I/sub m/=[0,1]/sup m/ in the sense of best approximation. An upper bound (0(N/sup -2/)) of the approximation error is obtained in the present paper for a Gaussian network having N/sup m/ hidden neurons with centers defined on a regular mesh in I/sub m/.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117283756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information capacity and fault tolerance of binary weights Hopfield nets
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374327
A. Jagota, A. Negatu, D. Kaznachey
We define a measure for the fault-tolerance of binary weights Hopfield networks and relate it to a measure of information capacity. Using these measures, we compute results on the fault-tolerance and information capacity of certain Hopfield networks employing binary-valued weights. These Hopfield networks are governed by a single scalar parameter that controls their weights and biases. At one extreme value of this parameter, we show that the information capacity is optimal whereas the fault-tolerance is zero. At the other extreme, our results are inexact: we are only able to show that the information capacity is at least of the order of N log_2 N and N, respectively, where N is the number of units. Our fault-tolerance results are even poorer, though nonzero. Nevertheless, they do indicate a trade-off between information capacity and fault-tolerance as this parameter is varied from the first extreme to the second. We are also able to show that particular collections of patterns remain stable states as this parameter is varied, and their fault-tolerance goes from zero at one extreme of this parameter to Θ(N^2) at the other.
{"title":"Information capacity and fault tolerance of binary weights Hopfield nets","authors":"A. Jagota, A. Negatu, D. Kaznachey","doi":"10.1109/ICNN.1994.374327","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374327","url":null,"abstract":"We define a measure for the fault-tolerance of binary weights Hopfield networks and relate it to a measure of information capacity. Using these measures, we compute results on the fault-tolerance and information capacity of certain Hopfield networks employing binary-valued weights. These Hopfield networks are governed by a single scalar parameter that controls their weights and biases. In one extreme value of this parameter, we show that the information capacity is optimal whereas the fault-tolerance is zero. At the other extreme, our results are inexact. We are only able to show that the information capacity is at least of the order of N log/sub 2/ N and N respectively, where N is the number of units. Our fault-tolerance results are even poorer, though nonzero. Nevertheless they do indicate a trade-off between information capacity and fault-tolerance as this parameter is varied from the first extreme to the second. We are also able to show that particular collections of patterns remain stable states as this parameter is varied, and fault-tolerance for them goes from zero at one extreme of this parameter to /spl Theta/(N/sup 2/) at the other extreme.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129516381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Phoneme recognition using a time-sliced recurrent recognizer
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374984
I. Kirschning, H. Tomabechi
This paper presents a new method for phoneme recognition using neural networks, the time-sliced recurrent recognizer (TSRR). The method employs Elman's recurrent network with error backpropagation, adding an extra group of units that are trained to give a specific representation of each phoneme while it is being recognized. The purpose of this architecture is to obtain an immediate hypothesis about the speech input without having to pre-label the phonemes or segment them before input. The input signal is divided into time slices that are recognized in a linear, sequential fashion. The generated hypothesis appears in the extra group of units at the same moment the time slices pass through the network and are recognized as a particular phoneme. Thus the TSRR is capable of recognizing phonemes in real time without discriminatory learning.
{"title":"Phoneme recognition using a time-sliced recurrent recognizer","authors":"I. Kirschning, H. Tomabechi","doi":"10.1109/ICNN.1994.374984","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374984","url":null,"abstract":"This paper presents a new method for phoneme recognition using neural networks, the time-sliced recurrent recognizer (TSRR). In this method we employ Elman's recurrent network with error-backpropagation, adding an extra group of units that are trained to give a specific representation of each phoneme while it is recognizing it. The purpose of this architecture is to obtain an immediate hypothesis of the speech input without having to pre-label each phoneme or separate them before the input. The input signal is divided into time-slices which are recognized in a linear sequential fashion. The generated hypothesis is shown in the extra group of units at the same moment the time-slices are passed through the network and being recognized as a certain phoneme. Thus the TSRR is capable of recognizing the phonemes in real-time without discriminatory learning.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129606701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solving vehicle routing problems using elastic nets
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.375004
Andrew Vakhutinsky, B. Golden
Using neural networks to find an approximate solution to difficult optimization problems is a very attractive prospect. The traveling salesman problem (TSP), probably the best-known problem in combinatorial optimization, has been attacked by a variety of neural network approaches. The main purpose of this paper is to show how elastic network ideas can be applied to two TSP generalizations: the multiple traveling salesmen problem (MTSP) and the vehicle routing problem (VRP).
{"title":"Solving vehicle routing problems using elastic nets","authors":"Andrew Vakhutinsky, B. Golden","doi":"10.1109/ICNN.1994.375004","DOIUrl":"https://doi.org/10.1109/ICNN.1994.375004","url":null,"abstract":"Using neural networks to find an approximate solution to difficult optimization problems is a very attractive prospect. The traveling salesman problem (TSP), probably the best-known problem in combinatorial optimization, has been attacked by a variety of neural network approaches. The main purpose of this paper is to show how elastic network ideas can be applied to two TSP generalizations: the multiple traveling salesmen problem (MTSP) and the vehicle routing problem (VRP).<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129728481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generalized autoregressive prediction with application to speech coding
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374287
Zhicheng Wang
Linear prediction is a major technique of signal processing and has been applied in many areas. Although nonlinear prediction has been investigated with techniques such as multilayer backpropagation neural networks, their computational and storage expenses are usually very high. Moreover, they are deficient in nonlinear analysis, leaving no way to improve them except by choosing parameters and sizes experimentally, in an ad hoc fashion. In this paper, the author presents new architectures for autoregressive prediction based upon a statistical analysis of nonlinearity, together with a design algorithm based on a steepest-descent scheme and correlation maximization. Instead of having a fixed configuration, a prediction model begins as a linear model, then learns and grows into a more sophisticated structure step by step, creating a minimal structure for a given objective. It learns adaptively and much faster than existing algorithms, determines its own size and topology, and retains a minimal structure. The proposed scheme is called generalized autoregressive prediction. The technique can also be applied to general ARMA nonlinear prediction. A new speech coding system using generalized AR prediction is presented, which takes advantage of the nonlinearity and parallelism of the proposed AR model. The system outperforms the corresponding linear coders.
{"title":"Generalized autoregressive prediction with application to speech coding","authors":"Zhicheng Wang","doi":"10.1109/ICNN.1994.374287","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374287","url":null,"abstract":"Linear prediction is a major technique of signal processing and has been applied to many areas. Although nonlinear prediction has been investigated with some techniques such as multilayer backpropagation neural networks, the computational and storage expenses are usually very high. Moreover, they are deficient in nonlinear analysis, leading to no way to improvement but experimentally choosing parameters and sizes in ad hoc fashion. In this paper, the author presents new architectures for autoregressive prediction based upon statistical analysis of nonlinearity and design algorithm based on steepest descent scheme and correlation maximization. Instead of a fixed configuration, a prediction model begins with a linear model, then learns and grows to a more sophisticated structure step by step, creating a minimal structure for a certain objective. It adaptively learns much faster than existing algorithms. The model determines its own size and topology and retains a minimal structure. The proposed scheme is called generalized antoregressive prediction. This technique can be also applied to general ARMA nonlinear prediction. A new speech coding system using the generalised AR prediction is presented, which takes advantages of nonlinearity and parallelism of the proposed AR model. The system outperforms the corresponding linear coders.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128510825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}