VLSI implementation of a fully parallel stochastic neural network
J. Quero, J. G. Ortega, C. Janer, L. Franquelo
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374527
Presents a purely digital stochastic implementation of multilayer neural networks. The authors have developed this implementation using an architecture that permits the addition of a very large number of synaptic connections, provided that the neuron's transfer function is the hard-limiting function. The expression relating the design parameter, the maximum pulse density, to the accuracy of the operations is used as the design criterion. The resulting circuit is easily configurable and expandable.
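The abstract leaves the central trick of pulse-density (stochastic) arithmetic implicit: values in [0, 1] encoded as Bernoulli pulse streams can be multiplied with a single AND gate. A minimal Python sketch of that principle follows; it illustrates generic stochastic computing, not the authors' circuit, and the stream length `n` stands in for the density/accuracy trade-off the abstract mentions.

```python
import random

def to_stream(p, n):
    """Encode a value p in [0, 1] as a Bernoulli pulse stream of length n."""
    return [1 if random.random() < p else 0 for _ in range(n)]

# Stochastic multiplication: AND-ing two independent pulse streams gives a
# stream whose pulse density is the product of the input densities.
n = 10_000
a, b = 0.8, 0.5
prod = [p & q for p, q in zip(to_stream(a, n), to_stream(b, n))]
print(sum(prod) / n)  # approximately 0.4; accuracy improves with n
```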
{"title":"VLSI implementation of a fully parallel stochastic neural network","authors":"J. Quero, J. G. Ortega, C. Janer, L. Franquelo","doi":"10.1109/ICNN.1994.374527","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374527","url":null,"abstract":"Presents a purely digital stochastic implementation of multilayer neural networks. The authors have developed this implementation using an architecture that permits the addition of a very large number of synaptic connections, provided that the neuron's transfer function is the hard limiting function. The expression that relates the design parameter, that is, the maximum pulse density, with the accuracy of the operations has been used as the design criterion. The resulting circuit is easily configurable and expandable.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127773623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The analysis of continuous temporal sequences by a map of sequential leaky integrators
C. Privitera, P. Morasso
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374733
The problem of detecting and recognizing the occurrence of specific events in a continually evolving environment is important in many fields, starting from motor planning. In this paper, the authors propose a two-dimensional map whose processing elements are specific instances of leaky integrators with parameters (or tops) learned in a self-organizing manner. In this way the map becomes a topological representation of temporal sequences, whose presence in a continuous temporal data flow can be detected from the activation level of the corresponding neurons.
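For reference, the dynamics underlying each processing element is the discrete-time leaky integrator y[t] = decay * y[t-1] + x[t]; the decay constant sets the time scale over which a unit retains input history. A minimal sketch (the paper's self-organizing learning of the parameters is omitted):

```python
def leaky_integrator(xs, decay):
    """Discrete-time leaky integrator: y[t] = decay * y[t-1] + x[t]."""
    y, out = 0.0, []
    for x in xs:
        y = decay * y + x
        out.append(round(y, 3))
    return out

# Units with different decay constants retain input history over different
# time scales; their activation levels can thus signal the presence of
# specific temporal patterns in a continuous data flow.
pulse = [1, 0, 0, 0, 0, 0]
print(leaky_integrator(pulse, 0.9))  # slow decay: long memory
print(leaky_integrator(pulse, 0.3))  # fast decay: short memory
```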
{"title":"The analysis of continuous temporal sequences by a map of sequential leaky integrators","authors":"C. Privitera, P. Morasso","doi":"10.1109/ICNN.1994.374733","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374733","url":null,"abstract":"The problem to detect and recognize the occurrence of specific events in a continually evolving environment, is particularly important in many fields, starting from motor planning. In this paper, the authors propose a two-dimensional map, where the processing elements correspond to specific instances of leaky integrators whose parameters (or tops) are learned in a self-organizing manner: in this way the map becomes a topologic representation of temporal sequences whose presence in a continuous temporal data flow is detectable by means of the activation level of the corresponding neurons.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126358927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A decomposition approach to forecasting electric power system commercial load using an artificial neural network
G. Mbamalu, M. El-Hawary
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.375040
We use a multilayer neural network with a backpropagation algorithm to forecast the commercial-sector portion of the load obtained by decomposing the system load of the Nova Scotia Power Inc. system. To minimize the effect of weather on the commercial-load forecast, the load is further decomposed into four autonomous sections of six-hour duration. The optimal input for a training set is determined from the sum of squared residuals of the predicted loads. The input patterns consist of the immediately preceding four or five hours of load, and the output is the fifth- or sixth-hour load. The results obtained with the proposed approach provide evidence that, in the absence of influential variables such as temperature, careful selection of training patterns enhances the performance of an artificial neural network in predicting power system load.
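The sliding-window construction of training patterns the abstract describes is straightforward to reproduce; a sketch with made-up hourly values (not the Nova Scotia data):

```python
def make_patterns(load, window):
    """Pair each run of `window` past hourly loads with the next hour's
    load, mirroring the input/output layout described in the abstract."""
    return [(load[i:i + window], load[i + window])
            for i in range(len(load) - window)]

hourly_load = [310.0, 295.5, 288.0, 301.2, 330.8, 356.1, 342.9]  # illustrative
for x, y in make_patterns(hourly_load, window=4):
    print(x, "->", y)
```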
{"title":"A decomposition approach to forecasting electric power system commercial load using an artificial neural network","authors":"G. Mbamalu, M. El-Hawary","doi":"10.1109/ICNN.1994.375040","DOIUrl":"https://doi.org/10.1109/ICNN.1994.375040","url":null,"abstract":"We use a multilayer neural network with a backpropagation algorithm to forecast the commercial sector load portion resulting from decomposing the system load of the Nova Scotia Power Inc. system. To minimize the effect of weather on the forecast of the commercial load, it is further decomposed into four autonomous sections of six hour durations. The optimal input for a training set is determined based on the sum of the squared residuals of the predicted loads. The input patterns are made up of the immediate past four or five hours load and the output is the fifth or the sixth hour load. The results obtained using the proposed approach provide evidence that in the absence of some influential variables such as temperature, a careful selection of training patterns will enhance the performance of the artificial neural network in predicting the power system load.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126443993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sparse adaptive memory and handwritten digit recognition
B. Flachs, M. Flynn
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374336
Pattern recognition is a budding field with many possible approaches. This article describes sparse adaptive memory (SAM), an associative memory built upon the strengths of Parzen classifiers, nearest-neighbor classifiers, and feedforward neural networks, and related to learning vector quantization. A key feature of this learning architecture is its ability to adaptively change its prototype patterns in addition to its output mapping. As SAM changes the prototype patterns in the list, it isolates modes in the density functions to produce a classifier that is in some senses optimal. Some important interactions of gradient-descent learning are exposed, providing conditions under which gradient descent will converge to an admissible solution in an associative memory structure. A layer of learning heuristics can be built upon the basic gradient-descent learning algorithm to improve memory efficiency in terms of error rate, and therefore hardware requirements. A simulation study examines the effects of one such heuristic in the context of handwritten digit recognition.
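The abstract's link to learning vector quantization suggests what adaptive prototype patterns look like in their simplest form. The sketch below is a generic LVQ1-style update, not SAM's specific gradient-descent rule:

```python
import numpy as np

def lvq_step(prototypes, labels, x, y, lr=0.05):
    """LVQ1-style update: move the nearest prototype toward sample x when
    its label matches y, and away from it otherwise."""
    i = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    sign = 1.0 if labels[i] == y else -1.0
    prototypes[i] += sign * lr * (x - prototypes[i])
    return prototypes

protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = [0, 1]
protos = lvq_step(protos, labels, x=np.array([0.2, 0.1]), y=0)
print(protos)  # the class-0 prototype has moved toward the sample
```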
{"title":"Sparse adaptive memory and handwritten digit recognition","authors":"B. Flachs, M. Flynn","doi":"10.1109/ICNN.1994.374336","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374336","url":null,"abstract":"Pattern recognition is a budding field with many possible approaches. This article describes sparse adaptive memory (SARI), an associative memory built upon the strengths of Parzen classifiers, nearest neighbor classifiers, feedforward neural networks, and is related to learning vector quantization. A key feature of this learning architecture is the ability to adaptively change its prototype patterns in addition to its output mapping. As SAM changes the prototype patterns in the list, it isolates modes in the density functions to produce a classifier that is in some senses optimal. Some very important interactions of gradient descent learning are exposed, providing conditions under which gradient descent will converge to an admissible solution in an associative memory structure. A layer of learning heuristics can be built upon the basic gradient descent learning algorithm to improve memory efficiency in terms of error rate, and therefore hardware requirements. A simulation study examines the effects of one such heuristic in the context of handwritten digit recognition.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125485066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An architecture for learning to behave
A. M. Aitken
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374286
The SAM architecture is a novel neural network architecture, based on the cerebral neocortex, for combining unsupervised learning modules. When used as part of the control system for an agent, the architecture enables the agent to learn the functional semantics of its motor outputs and sensory inputs, and to acquire behavioral sequences by imitating other agents (learning by 'watching'). This involves attempting to recreate the sensory sequences the agent has been exposed to. The architecture scales well to multiple motor and sensory modalities, and to more complex behavioral requirements. The SAM architecture may also hint at an explanation of several features of the operation of the cerebral neocortex.
{"title":"An architecture for learning to behave","authors":"A. M. Aitken","doi":"10.1109/ICNN.1994.374286","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374286","url":null,"abstract":"The SAM architecture is a novel neural network architecture, based on the cerebral neocortex, for combining unsupervised learning modules. When used as part of the control system for an agent, the architecture enables the agent to learn the functional semantics of its motor outputs and sensory inputs, and to acquire behavioral sequences by imitating other agents (learning by 'watching'). This involves attempting to recreate the sensory sequences the agent has been exposed to. The architecture scales well to multiple motor and sensory modalities, and to more complex behavioral requirements. The SAM architecture may also hint at an explanation of several features of the operation of the cerebral neocortex.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125601082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A modified backpropagation algorithm
B. K. Verma, J. Mulawka
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374289
A long and uncertain training process is one of the most important problems for a multilayer neural network trained with the backpropagation algorithm. In this paper, a modified backpropagation algorithm with a reliable and fast training process is presented. The modification is based on solving for the output-layer weight matrix using the theory of equations and least-squares techniques.
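The idea of solving the output layer directly rather than iterating it is easy to illustrate. A sketch under the assumption of a linear output layer (the paper's exact formulation via the theory of equations is not given in the abstract):

```python
import numpy as np

# Suppose H holds the hidden-layer activations for all training patterns
# (one row per pattern) and T the corresponding targets. For a linear
# output layer, the weights W minimizing ||H @ W - T||^2 come from one
# least-squares solve instead of many backpropagation iterations.
rng = np.random.default_rng(0)
H = rng.random((100, 8))    # hypothetical hidden activations
T = rng.random((100, 3))    # hypothetical targets
W, *_ = np.linalg.lstsq(H, T, rcond=None)
print(np.linalg.norm(H @ W - T))  # residual of the direct solve
```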
{"title":"A modified backpropagation algorithm","authors":"B. K. Verma, J. Mulawka","doi":"10.1109/ICNN.1994.374289","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374289","url":null,"abstract":"A long and uncertain training process is one of the most important problems for a multilayer neural network using the backpropagation algorithm. In this paper, a modified backpropagation algorithm for a certain and fast training process is presented. The modification is based on the solving of the weight matrix for the output layer using theory of equations and least squares techniques.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122009612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VLSI implementation of the hippocampus on nonlinear system model
O. Chen, T. Berger, B. Sheu
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374521
A nonlinear model of the functional properties of the hippocampal formation has been developed. The architecture of the proposed hardware implementation has a topology highly similar to the anatomical structure of the hippocampus, and the dynamical properties of its components are based on experimental characterization of individual hippocampal neurons. The design scheme of an analog cellular neural network has been extensively applied. Using a 1-µm CMOS technology, a 5×5 neuron array with some testing modules has been designed for fabrication. According to the SPICE-3 circuit simulator, the response time of each neuron, which memorizes 4 time units, is around 0.5 µs.
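For context, the textbook Chua-Yang cellular neural network dynamics that such an analog design scheme builds on can be simulated in a few lines. This is the standard model, not the authors' hippocampal circuit, and the template values below are arbitrary:

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, I, dt=0.05):
    """One Euler step of the Chua-Yang cellular neural network:
    dx/dt = -x + A*y + B*u + I, where y = 0.5*(|x+1| - |x-1|) and the
    3x3 templates A (feedback) and B (control) act on each cell's
    neighborhood."""
    y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))
    dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + I
    return x + dt * dx

# A 5x5 grid, matching the array size reported in the abstract.
x = np.zeros((5, 5))                          # cell states
u = np.random.default_rng(1).random((5, 5))   # input pattern
A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])  # self-feedback only
B = np.full((3, 3), 0.1)
for _ in range(200):
    x = cnn_step(x, u, A, B, I=-0.5)
print(0.5 * (np.abs(x + 1) - np.abs(x - 1)))  # settled cell outputs
```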
{"title":"VLSI implementation of the hippocampus on nonlinear system model","authors":"O. Chen, T. Berger, B. Sheu","doi":"10.1109/ICNN.1994.374521","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374521","url":null,"abstract":"The nonlinear model of the functional properties of the hippocampal formation has been developed. The architecture of the proposed hardware implementation has a topology highly similar to the anatomical structure of the hippocampus, and the dynamical properties of its components are based on experimental characterization of individual hippocampal neurons. The design scheme of a analog cellular neural network has been extensively applied. By using a 1-/spl mu/m CMOS technology, the 5/spl times/5 neuron array with some testing modules has been designed for fabrication. According to the SPICE-3 circuit simulator, the response time of each neuron with memorizing 4 time units is around 0.5 /spl mu/sec.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122266372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-level backpropagation network for pattern recognition systems
C.Y. Chen, C. Hwang
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374724
The backpropagation network (BPN) is now widely used in the field of pattern recognition because this artificial neural network can classify complex patterns and perform nontrivial mapping functions. In this paper, we propose a multi-level backpropagation network (MLBPN) model as a classifier for practical pattern recognition systems. The proposed model retains the benefits of the BPN and adds two further advantages: (1) the MLBPN reduces the complexity of the BPN, and (2) it speeds up the recognition process. The experimental results verify these characteristics and show that the MLBPN model is a practical classifier for pattern recognition systems.
{"title":"A multi-level backpropagation network for pattern recognition systems","authors":"C.Y. Chen, C. Hwang","doi":"10.1109/ICNN.1994.374724","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374724","url":null,"abstract":"The backpropagation network (BPN) is now widely used in the field of pattern recognition because this artificial neural network can classify complex patterns and perform nontrivial mapping functions. In this paper, we propose a multi-level backpropagation network (MLBPN) model as a classifier for practical pattern recognition systems. The described model reserves the benefits of the BPN and derives the extra benefits of this MLBPN with two fold: (1) the MLBPN can reduce the complexity of BPN, and (2) a speed-up of the recognition process is attained. The experimental results verify these characteristics and show that the MLBPN model is a practical classifier for pattern recognition systems.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127977354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diminishing the number of nodes in multi-layered neural networks
P. Nocera, R. Quélavoine
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374981
In this paper we propose two ways of diminishing the size of a multilayered neural network trained to recognise French vowels. The first deals with the hidden layers: studying the variation of each node's outputs gives us information on its discriminative power and allows us to reduce the size of the network. The second involves the input nodes: by examining the connection weights between the input nodes and the first hidden layer, we can determine which features are actually relevant to our classification problem and eliminate the useless ones. On the problem of recognising the French vowel /a/, we show that we can obtain a reduced structure that can still learn.
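A simple stand-in for the second criterion is to score each input node by the total magnitude of its outgoing weights and prune the low scorers. The paper's exact rule is not spelled out in the abstract, so this is a generic magnitude heuristic:

```python
import numpy as np

def input_relevance(w_in):
    """Score each input node by the summed magnitude of its connections
    to the first hidden layer; low scores mark pruning candidates."""
    return np.abs(w_in).sum(axis=1)

w = np.array([[0.9, -1.2],   # input 0: strongly connected
              [0.01, 0.03],  # input 1: nearly disconnected
              [0.5, 0.4]])   # rows = inputs, columns = hidden nodes
print(input_relevance(w))    # input 1 contributes little and could be cut
```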
{"title":"Diminishing the number of nodes in multi-layered neural networks","authors":"P. Nocera, R. Quélavoine","doi":"10.1109/ICNN.1994.374981","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374981","url":null,"abstract":"We propose in this paper two ways for diminishing the size of a multilayered neural network trained to recognise French vowels. The first deals with the hidden layers: the study of the variation of the outputs of each node gives us information on its very discrimination power and then allows us to reduce the size of the network. The second involves the input nodes: by the examination of the connecting weights between the input nodes and the following hidden layer, we can determinate which features are actually relevant for our classification problem, and then eliminate the useless ones. Through the problem of recognising the French vowel /a/, we show that we can obtain a reduced structure that still can learn.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128177867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An adaptive VLSI neural network chip
R. Zaman, D. Wunsch
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374523
Presents an adaptive neural network that uses multiplying digital-to-analog converters (MDACs) as synaptic weights. The chip takes advantage of digital processing to learn weights, but retains the parallel asynchronous behavior of analog systems, since part of the neuron functions are analog. The authors use MDAC units of 6-bit accuracy for this chip. Hebbian learning is employed, which is very attractive for electronic neural networks since it uses only local information in adapting weights.
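The locality that makes Hebbian learning attractive in hardware is visible in the update rule itself: each weight change depends only on the activity of the two neurons it connects. A sketch pairing the outer-product Hebbian update with 6-bit weight quantization, mimicking the MDAC resolution (the chip's exact rule and weight range are assumptions here):

```python
import numpy as np

def hebbian_step(w, x, y, lr=0.1, bits=6):
    """Outer-product Hebbian update dw = lr * y x^T, then quantize each
    weight to a `bits`-bit grid on [-1, 1), mimicking an MDAC weight.
    The update is purely local: each weight sees only its own pre- and
    post-synaptic activities."""
    step = 2.0 / 2 ** bits               # 6 bits -> 64 levels over [-1, 1)
    w = w + lr * np.outer(y, x)
    return np.clip(np.round(w / step) * step, -1.0, 1.0 - step)

w = np.zeros((2, 3))
w = hebbian_step(w, x=np.array([1.0, 0.0, 1.0]), y=np.array([1.0, -1.0]))
print(w)  # weights snapped to multiples of 1/32
```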
{"title":"An adaptive VLSI neural network chip","authors":"R. Zaman, D. Wunsch","doi":"10.1109/ICNN.1994.374523","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374523","url":null,"abstract":"Presents an adaptive neural network, which uses multiplying-digital-to-analog converters (MDACs) as synaptic weights. The chip takes advantage of digital processing to learn weights, but retains the parallel asynchronous behavior of analog systems, since part of the neuron functions are analog. The authors use MDAC units of 6 bit accuracy for this chip. Hebbian learning is employed, which is very attractive for electronic neural networks since it only uses local information in adapting weights.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128181149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}