SDNN-3: A simple processor architecture for O(1) parallel processing in combinatorial optimization with strictly digital neural networks
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170755
T. Nakagawa, H. Kitagawa, E. Page, G. Tagliarini
An architecture for high-speed, low-cost processors based upon SDNNs (strictly digital neural networks) to solve combinatorial optimization problems within O(1) time is presented. Combinatorial optimization problems were programmed as set selection problems with the k-out-of-n design rule and solved by a cluster of SDN elementary processors operating discretely under TOH (traveling on hypercube), a rule for synchronized parallel execution. In all simulation cases, the latest SDNN-3 hardware achieved O(1) parallel processing in solving large-scale N-queen problems of up to 1200 queens. It was confirmed that all of the solutions are optimal and that the SDNN processor always converges to global minima without external intervention.
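As a concrete illustration of the k-out-of-n formulation (not of the TOH rule or the SDNN-3 hardware, which the paper defines), here is a minimal numpy sketch: the N-queen problem becomes a binary set-selection problem with 1-out-of-n constraints on rows and columns and at-most-1 constraints on diagonals, and a toy bit-flip descent stands in for the synchronized SDNN update.

```python
import numpy as np

def energy(board):
    """Penalty energy: zero iff every k-out-of-n constraint is satisfied."""
    n = board.shape[0]
    e = np.sum((board.sum(axis=0) - 1) ** 2)      # exactly 1 queen per column
    e += np.sum((board.sum(axis=1) - 1) ** 2)     # exactly 1 queen per row
    for d in range(-n + 1, n):                    # at most 1 queen per diagonal
        s = np.trace(board, offset=d)
        e += s * (s - 1)
        s = np.trace(np.fliplr(board), offset=d)
        e += s * (s - 1)
    return e

def solve(n, seed=0, max_flips=50000):
    """Toy bit-flip descent; a stand-in for the synchronized SDNN update."""
    rng = np.random.default_rng(seed)
    board = (rng.random((n, n)) < 1.0 / n).astype(int)
    for _ in range(max_flips):
        if energy(board) == 0:
            return board
        i, j = rng.integers(n), rng.integers(n)
        candidate = board.copy()
        candidate[i, j] ^= 1                      # strictly digital: flip one bit
        if energy(candidate) <= energy(board):    # accept non-increasing moves
            board = candidate
    return None

board = solve(8)
print(board if board is not None else "no solution within the flip budget")
```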
On benchmarks for learning algorithms
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170485
YoungJu Choie, Y. Kwon, T. Poston, Chung-Nim Lee
In typical benchmark problems with binary output targets, comparisons of learning algorithms are often dominated by the time taken to approach optimal weights that lie at infinity. It is suggested that this slow final convergence be replaced by a scaling step, shown to reduce the error arbitrarily, for a clearer comparison of searching power. Stopping a benchmark test by the good-point criterion, rather than by a small sum of squared errors, concentrates the test on this more difficult challenge, and thus reveals more about the promise of an algorithm for practical engineering use.
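A minimal sketch of the reasoning behind the good-point criterion, assuming a single sigmoid unit on synthetic data (all names here are illustrative): once every example is on the correct side of 0.5, simply scaling the weights drives the sum of squared errors toward zero, so a small-SSE stopping rule mostly measures this scaling tail rather than search power.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))            # toy inputs
w = np.array([1.5, -2.0, 0.7])          # weights of a single sigmoid unit
targets = (X @ w > 0).astype(float)     # binary targets consistent with w

def sse(scale):
    return np.sum((sigmoid(X @ (scale * w)) - targets) ** 2)

def all_good_points(scale):
    return np.all((sigmoid(X @ (scale * w)) > 0.5) == (targets > 0.5))

# The good-point criterion is already met at scale 1, yet the SSE keeps
# shrinking as the weights grow -- reaching any fixed SSE threshold is a
# matter of scaling, not of search.
for c in (1, 5, 25):
    print(f"scale={c:3d}  good points={all_good_points(c)}  SSE={sse(c):.6f}")
```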
Computer aided investigations of artificial neural systems
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170735
D. Wang, B. Schurmann
An attempt is made to demonstrate how symbolic computation can be applied to aid in the analysis and derivation of neural systems. The authors review the general techniques of the Lyapunov method for the stability analysis of artificial neural systems. They present strategies for using computer algebra systems and their extensions to analyze the stability of known neural systems and to derive novel stable ones. A brief description of a toolkit developed in MACSYMA is also provided, and an illustration sketches the derivation of neural learning dynamics with the toolkit. A discussion of future developments is included.
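The core computation such a toolkit automates can be sketched in a few lines of symbolic algebra; the example below uses sympy rather than MACSYMA, and a hypothetical two-unit gradient system, purely for illustration.

```python
import sympy as sp

# Hypothetical sketch: verify symbolically that a candidate Lyapunov
# function V decreases along trajectories of a gradient neural system
# dx/dt = -grad V(x), the core step in computer-aided stability analysis.

x1, x2 = sp.symbols('x1 x2', real=True)
V = sp.Rational(1, 2) * (x1**2 + x2**2) + sp.Rational(1, 4) * (x1**4 + x2**4)

grad = [sp.diff(V, v) for v in (x1, x2)]
xdot = [-g for g in grad]                 # gradient descent dynamics

# dV/dt along trajectories = grad(V) . xdot = -|grad V|^2
Vdot = sum(g * f for g, f in zip(grad, xdot))
print("dV/dt =", Vdot)                    # -(x1 + x1**3)**2 - (x2 + x2**3)**2
# The result is manifestly nonpositive, so V certifies stability of the
# origin for this system.
```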
A neural network based control scheme with an adaptive neural model reference structure
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170702
M. Khalid, S. Omatu
A neural-network-based control scheme with an adaptive neural model reference structure is described. A neural net emulator is first trained to model the plant's behavior. The neural net controller is then trained to learn the plant's inverse dynamics by backpropagating the error at the output of the plant through the emulator. The proposed structure allows both the controller and the emulator to be trained continuously online. Simulation results for a nonlinear temperature control process showed that the proposed method is easily implemented for a wide variety of control problems.
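A minimal sketch of this signal flow, with a deliberately tiny one-parameter controller and two-parameter emulator (all functional forms here are assumptions, not the paper's networks): the controller cannot get gradients through the real plant, so the error at the plant output is propagated through the trained emulator instead.

```python
import numpy as np

rng = np.random.default_rng(1)

def plant(u):                       # unknown nonlinear plant (simulation only)
    return np.tanh(1.7 * u) + 0.3 * u

# --- Stage 1: train the emulator to mimic the plant -----------------------
w_e = np.array([0.1, 0.1])          # emulator y_hat = w0 * tanh(w1 * u)
def emulator(u, w):
    return w[0] * np.tanh(w[1] * u)

lr = 0.05
for _ in range(3000):
    u = rng.uniform(-2, 2)
    err = emulator(u, w_e) - plant(u)
    w_e -= lr * err * np.array([np.tanh(w_e[1] * u),
                                w_e[0] * u / np.cosh(w_e[1] * u) ** 2])

# --- Stage 2: train the controller through the frozen emulator ------------
w_c = 0.1                           # controller u = w_c * r (deliberately simple)
for _ in range(3000):
    r = rng.uniform(-0.8, 0.8)      # reference the plant output should track
    u = w_c * r
    err = plant(u) - r              # error measured at the real plant output
    # chain rule THROUGH THE EMULATOR: d y_hat / d u approximates d y / d u
    dy_du = w_e[0] * w_e[1] / np.cosh(w_e[1] * u) ** 2
    w_c -= lr * err * dy_du * r

r = 0.5
print("target", r, "-> plant output", plant(w_c * r))
```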
Terminal attractor learning algorithms for back propagation neural networks
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170401
S.-D. Wang, Chia-Hung Hsu
Novel learning algorithms for multilayer networks, called terminal attractor backpropagation (TABP) and heuristic terminal attractor backpropagation (HTABP), are proposed. The algorithms are based on the concept of terminal attractors: fixed points of a dynamical system at which the Lipschitz condition is violated. The key idea in the proposed algorithms is the introduction of time-varying gains in the weight update law. The proposed algorithms preserve the parallel and distributed features of neurocomputing, guarantee that the learning process converges in finite time, and find the set of weights that globally minimizes the error function, provided such a set exists. Simulations demonstrate the global optimization properties and the superiority of the proposed algorithms over the standard backpropagation algorithm.
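The terminal-attractor property that underlies both algorithms can be illustrated directly; the sketch below contrasts the non-Lipschitz dynamics x' = -k x^(1/3), which reach their fixed point in finite time, with ordinary linear dynamics, which only approach it asymptotically. The paper's actual weight update laws are not reproduced here.

```python
import numpy as np

# x' = -k * x**(1/3) violates the Lipschitz condition at x = 0 and hits the
# fixed point at the FINITE time t* = 3 * x0**(2/3) / (2k); x' = -k * x
# never reaches it. In TABP this idea enters as a time-varying gain.

k, dt = 1.0, 1e-3
x_lin, x_term, t = 1.0, 1.0, 0.0
t_hit = None

while t < 10.0:
    x_lin += dt * (-k * x_lin)
    step = dt * k * abs(x_term) ** (1.0 / 3.0)
    # snap to the attractor when a step would overshoot it
    x_term = 0.0 if step >= abs(x_term) else x_term - np.sign(x_term) * step
    if t_hit is None and x_term == 0.0:
        t_hit = t                 # theory predicts t* = 3/(2k) = 1.5
    t += dt

print("terminal attractor hit 0 at t ~", t_hit)
print("linear dynamics at t = 10:", x_lin)   # still nonzero, only asymptotic
```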
Time-warping neural network for phoneme recognition
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170701
K. Aikawa
The author investigates a feedforward neural network that can accept phonemes of arbitrary duration while coping with nonlinear time warping. The time-warping neural network is characterized by time-warping functions embedded between the input layer and the first hidden layer. The input layer accesses three different time points, which are determined by the time-warping functions; the input spectrum sequence itself is not warped, but the sequence of accessing points is. The advantage of this architecture is that the input layer can access the original spectrum sequence. The proposed network demonstrated higher phoneme recognition accuracy than a baseline recognizer based on conventional feedforward neural networks, and even higher accuracy than discrete hidden Markov models.
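A minimal sketch of the accessing-point idea (the paper's actual warping functions are not specified here, so a simple power-law family stands in): the stored spectrum sequence is never modified; only the indices that the fixed input slots read are warped.

```python
import numpy as np

def warped_indices(n_frames, n_slots=3, alpha=1.0):
    """Map n_slots input slots onto frame indices 0..n_frames-1.
    alpha > 1 compresses the early frames, alpha < 1 the late ones."""
    u = np.linspace(0.0, 1.0, n_slots) ** alpha      # illustrative warping family
    return np.round(u * (n_frames - 1)).astype(int)

spectrum = np.random.default_rng(0).normal(size=(37, 16))  # 37 frames x 16 bins

for alpha in (0.5, 1.0, 2.0):
    idx = warped_indices(len(spectrum), n_slots=3, alpha=alpha)
    net_input = spectrum[idx].ravel()   # fixed-size vector fed to the network
    print(f"alpha={alpha}: slots read frames {idx}, input dim {net_input.size}")
```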
Curious model-building control systems
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170605
J. Schmidhuber
A novel curious model-building control system is described which actively tries to provoke situations in which it has learned to expect to learn something about the environment. The system has been implemented as a four-network architecture based on Watkins' Q-learning algorithm and can be used to maximize the expectation of the temporal derivative of the adaptively estimated reliability of future predictions. An experiment with an artificial nondeterministic environment demonstrates that the system can be superior to previous model-building control systems, which do not address the problem of modeling the reliability of the world model's predictions in uncertain environments and which use ad hoc methods (like random search) to train the world model.
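A toy sketch of the curiosity signal, reduced from the paper's four networks to a tabular Q-learner plus a scalar world model; note that the sketch measures model improvement against the true environment probabilities, a simulation shortcut standing in for the learned reliability estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
model = np.full((n_states, n_actions), 0.5)   # predicted outcome probabilities
true_p = rng.random((n_states, n_actions))    # unknown environment

alpha, gamma, eps, lr_m = 0.1, 0.9, 0.1, 0.05
s = 0
for _ in range(20000):
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    outcome = float(rng.random() < true_p[s, a])
    # curiosity reward = IMPROVEMENT of the world model caused by this step;
    # oracle access to true_p stands in for the learned reliability network
    err_before = (model[s, a] - true_p[s, a]) ** 2
    model[s, a] += lr_m * (outcome - model[s, a])   # train the world model
    err_after = (model[s, a] - true_p[s, a]) ** 2
    r = err_before - err_after
    s2 = int(rng.integers(n_states))                # toy random transitions
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

print("max |model - truth| after curious exploration:",
      np.abs(model - true_p).max())
```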
Hybrid calibration of CCD cameras using artificial neural nets
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170424
J. Wen, G. Schweitzer
The authors first discuss the physical and mathematical model of CCD (charge-coupled device) cameras on which standard photogrammetric calibration is based. They then introduce artificial neural networks to improve the classical calibration and thereby develop a new method for calibrating CCD cameras. In this setup, a feedforward artificial neural network is used. Three advantages of the hybrid calibration are discussed: feasibility, applicability, and efficiency. To judge the quality of the calibration, the calibration error of a camera is defined. It is shown experimentally that the accuracy of the image frame coordinates is improved by a factor of two through the hybrid calibration. Adding an artificial neural network to the physical and mathematical model of a system in order to improve the overall description appears to be a new idea.
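A minimal sketch of the hybrid idea on synthetic data (the network size, training procedure, and distortion model are all assumptions): a classical least-squares camera model is fitted first, and a small network then learns only the residual distortion that the parametric model cannot express.

```python
import numpy as np

rng = np.random.default_rng(0)
world = rng.uniform(-1, 1, size=(200, 2))        # planar calibration targets

def true_camera(p):
    # synthetic "real" camera: affine mapping plus cubic radial distortion
    r2 = np.sum(p**2, axis=1, keepdims=True)
    return p @ np.array([[1.05, 0.02], [-0.03, 0.98]]) + 0.08 * p * r2

image = true_camera(world) + 0.002 * rng.normal(size=world.shape)

# Stage 1: classical calibration -- fit an affine model by least squares
A = np.hstack([world, np.ones((len(world), 1))])
M, *_ = np.linalg.lstsq(A, image, rcond=None)
pred = A @ M
residual = image - pred
print("RMS error, classical model:", np.sqrt(np.mean(residual**2)))

# Stage 2: a small feedforward net learns the residual distortion
H, lr = 16, 0.05
W1 = 0.5 * rng.normal(size=(2, H))
b1 = np.zeros(H)
W2 = 0.5 * rng.normal(size=(H, 2))
b2 = np.zeros(2)
for _ in range(4000):                            # full-batch gradient descent
    h = np.tanh(world @ W1 + b1)
    out = h @ W2 + b2
    g = 2.0 * (out - residual) / len(world)      # d(MSE)/d(out)
    gh = (g @ W2.T) * (1.0 - h**2)
    W2 -= lr * (h.T @ g)
    b2 -= lr * g.sum(axis=0)
    W1 -= lr * (world.T @ gh)
    b1 -= lr * gh.sum(axis=0)

hybrid = pred + np.tanh(world @ W1 + b1) @ W2 + b2
print("RMS error, hybrid model:   ", np.sqrt(np.mean((image - hybrid)**2)))
```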
Behaviors of transform domain backpropagation (BP) algorithm
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170426
Xiahua Yang, P. Xue
Several discrete orthogonal transforms have been used to study the behavior of transform-domain backpropagation (BP) algorithms. Two computer simulation examples show that, when appropriate parameters and a suitable network structure are selected, the transform-domain BP algorithm performs somewhat better than the original time-domain BP algorithm, regardless of which discrete orthogonal transform is applied. Among the transforms tested, the discrete cosine transform (DCT) and an alternative version of it are believed to behave best.
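The property that makes an orthogonal transform attractive here is easy to demonstrate: it preserves all information and energy while compacting correlated inputs into few coefficients. A short sketch using scipy's DCT (the paper's exact network and parameters are not reproduced):

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=64))          # a correlated (random-walk) input

X = dct(x, norm='ortho')                    # transform-domain representation
print("energy preserved:", np.allclose(np.sum(x**2), np.sum(X**2)))
print("top 8 DCT coeffs hold",
      f"{np.sort(X**2)[-8:].sum() / np.sum(X**2):.1%} of the energy")
print("perfect reconstruction:", np.allclose(idct(X, norm='ortho'), x))
```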
Pattern extraction and recognition for noisy images using the three-layered BP model
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170414
K. Imai, K. Gouhara, Y. Uchikawa
The authors present a novel pattern recognition architecture using three-layered backpropagation (BP) models. The proposed architecture consists of two completely separate functions: extraction of a target pattern and recognition of the extracted pattern, so it can detect both where the target pattern is and what it is. To realize these functions, the following networks are introduced: a filtering network, a position network, a size network, a frame-working network, and categorizing networks. Results of handwritten-letter recognition experiments show that the proposed architecture can recognize a deformed target pattern in an original image containing considerable noise, especially lumped noise.
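A toy sketch of the where/what separation, with simple non-neural stand-ins for the paper's subnetworks (window summation for the position network, template matching for the categorizing networks), purely to show the pipeline structure:

```python
import numpy as np

rng = np.random.default_rng(0)
templates = {"T": np.array([[1, 1, 1], [0, 1, 0], [0, 1, 0]], float),
             "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]], float)}

img = 0.2 * rng.random((12, 12))            # background noise
r0, c0, truth = 4, 7, "L"
img[r0:r0 + 3, c0:c0 + 3] += templates[truth]

# Stage 1 (extraction): slide a window and pick the most pattern-like spot
best, pos = -np.inf, None
for r in range(10):
    for c in range(10):
        score = img[r:r + 3, c:c + 3].sum()   # position-network stand-in
        if score > best:
            best, pos = score, (r, c)

patch = img[pos[0]:pos[0] + 3, pos[1]:pos[1] + 3]   # frame-working stage

# Stage 2 (recognition): categorize the extracted patch
label = min(templates, key=lambda k: np.sum((patch - templates[k]) ** 2))
print("found at", pos, "classified as", label, "(truth:", truth + ")")
```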