Neural networks and the traveling salesman problem
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939080
Bryan Bemley
Gives a brief definition of a neural network, briefly discusses two different neural networks, names some people who worked in the field of neural networks, gives a brief history of neural networks, provides an explanation of the traveling salesman problem, and offers a brief conclusion.
{"title":"Neural networks and the traveling salesman problem","authors":"Bryan Bemley","doi":"10.1109/IJCNN.2001.939080","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939080","url":null,"abstract":"Gives a brief definition of a neural network, talks a little about two different neural networks, names some people that were in the field of neural networks, gives a brief history on neural networks, an explanation of the traveling salesman problems, and a brief conclusion.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"1226 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115908674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new adaptive learning algorithm using magnified gradient function
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939009
S. Ng, C. Cheung, S. Leung, A. Luk
An algorithm is proposed to solve the "flat spot" problem in backpropagation networks by magnifying the gradient function. The idea of the learning algorithm is to vary the gradient of the activation function so as to magnify the backward-propagated error signal, especially when the output approaches a wrong value, so that the convergence rate is accelerated and the flat-spot problem is eliminated. Simulation results show that, in terms of convergence rate and global search capability, the new algorithm consistently outperforms other traditional methods.
{"title":"A new adaptive learning algorithm using magnified gradient function","authors":"S. Ng, C. Cheung, S. Leung, A. Luk","doi":"10.1109/IJCNN.2001.939009","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939009","url":null,"abstract":"An algorithm is proposed to solve the \"flat spot\" problem in backpropagation networks by magnifying the gradient function. The idea of the learning algorithm is to vary the gradient of the activation function so as to magnify the backward propagated error signal gradient function especially when the output approaches a wrong value, thus the convergence rate can be accelerated and the flat spot problem can be eliminated. Simulation results show that, in terms of the convergence rate and global search capability, the new algorithm always outperforms the other traditional methods.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131392754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Italian Lira classification by LVQ
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938846
S. Omatu, T. Fujinaka, T. Kosaka, H. Yanagimoto, M. Yoshioka
In this paper, a new method to classify Italian Lira banknotes using learning vector quantization (LVQ) is proposed. Eight kinds of Italian Lira bills are used: 1000, 2000, 5000, 10000, 50000 (new), 50000 (old), 100000 (new), and 100000 (old) Liras, each in four directions A, B, C, and D, where A and B denote the normal and upside-down orientations and C and D denote the reverse side of A and B. The original image of 128 by 64 pixels is observed at the transaction machine, where rotation and shift may occur. After correcting for these effects, we select a suitable area containing the bill image and feed a 64 by 15 pixel image to the neural network. Although an LVQ-type neural network can process input data of any dimensionality, a smaller size is better for achieving faster convergence.
{"title":"Italian Lira classification by LVQ","authors":"S. Omatu, T. Fujinaka, T. Kosaka, H. Yanagimoto, M. Yoshioka","doi":"10.1109/IJCNN.2001.938846","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.938846","url":null,"abstract":"In this paper, a new method to classify the Italian Liras by using the learning vector quantization (LVQ) is proposed. The Italian Liras of 8 kinds, 1000, 2000, 5000, 10000, 50000 (new), 50000 (old), 100000 (new), 100000 (old) Liras with four directions A,B,C, and D are used, where A and B mean the normal direction and the upside down direction and C and D mean the reverse version of A and B. The original image with 128 by 64 pixels is observed at the transaction machine in which rotation and shift are included. After correction of these effects, we select a suitable area which shows the bill image and feed the image with 64 by 15 pixels to a neural network. Although the neural network of the LVQ type can process in any order of the dimension of the input data, the smaller size is better to achieve a faster convergence.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"195 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132021875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparison of BSS algorithms
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939484
Y. Singh, C. Rai
Several gradient-based algorithms exist for performing blind source separation (BSS). In this paper we compare three of the most popular neural algorithms: EASI, the natural gradient algorithm, and the Bell-Sejnowski algorithm. The effectiveness of these algorithms depends on the nonlinear activation function used. The algorithms were evaluated with different nonlinear functions for sub-Gaussian and super-Gaussian sources.
{"title":"A comparison of BSS algorithms","authors":"Y. Singh, C. Rai","doi":"10.1109/IJCNN.2001.939484","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939484","url":null,"abstract":"Several gradient-based algorithms exist for performing blind source separation (BSS). In this paper we compare three most popular neural algorithms: EASI, natural gradient and Bell-Sejnowski algorithms. The effectiveness of these algorithms depends upon the nonlinear activation function. These algorithms were evaluated with different nonlinear functions for sub-Gaussian and super-Gaussian sources.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132523294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A modified PI control action with a robust adaptive fuzzy controller applied to DC motor
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939071
N. Almutairi, M. Chow
In this paper, a robust adaptive fuzzy controller that modifies a well-known PI controller in order to improve system performance is presented. Robustness is established by showing that the sensitivity function of the output is bounded. Based on this robust adaptive fuzzy controller, several performance measures of the controlled plant, including the rise time, delay time, settling time, and percentage of maximum overshoot, are shown to be strongly related to the parameters of certain fuzzy rules of the controller. The sensitivity functions of the relative errors in the performance measures are used to derive these relations. A DC motor with a nonlinear friction model is used to illustrate the method.
{"title":"A modified PI control action with a robust adaptive fuzzy controller applied to DC motor","authors":"N. Almutairi, M. Chow","doi":"10.1109/IJCNN.2001.939071","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939071","url":null,"abstract":"In this paper, a robust and adaptive fuzzy controller to modify a well-known PI controller in order to improve the system performance is presented. Robustness is concluded showing that the sensitivity function of the output is bounded. Based on this robust adaptive fuzzy controller, several performance measures of the controlled plant, including the rise time, delay time, settling time, and percentage of maximum overshoot is shown to be highly related to the parameters in certain fuzzy rules of the controller. The sensitivity functions of the relative errors in the performance measures is used to derive these relations. A DC motor with a nonlinear friction model is used to illustrate this method.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130038345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rosenblatt perceptrons for handwritten digit recognition
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939589
E. Kussul
The Rosenblatt perceptron was used for handwritten digit recognition. Its performance was tested on the MNIST database: 60,000 samples of handwritten digits were used for perceptron training, and 10,000 samples for testing. A recognition rate of 99.2% was obtained. The critical parameter of Rosenblatt perceptrons is the number of neurons N in the associative neuron layer. We varied N from 1,000 to 512,000 and investigated the influence of this parameter on the performance of the Rosenblatt perceptron. Increasing N from 1,000 to 512,000 reduces the test error by a factor of 5 to 8. It was shown that a large-scale Rosenblatt perceptron is comparable with the best classifiers evaluated on the MNIST database (98.9%-99.3%).
{"title":"Rosenblatt perceptrons for handwritten digit recognition","authors":"Kussul Emst","doi":"10.1109/IJCNN.2001.939589","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939589","url":null,"abstract":"The Rosenblatt perceptron was used for handwritten digit recognition. For testing its performance the MNIST database was used. 60,000 samples of handwritten digits were used for perceptron training, and 10,000 samples for testing. A recognition rate of 99.2% was obtained. The critical parameter of Rosenblatt perceptrons is the number of neurons N in the associative neuron layer. We changed the parameter N from 1,000 to 512,000. We investigated the influence of this parameter on the performance of the Rosenblatt perceptron. Increasing N from 1,000 to 512,000 involves decreasing of test errors from 5 to 8 times. It was shown that a large scale Rosenblatt perceptron is comparable with the best classifiers checked on MNIST database (98.9%-99.3%).","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134347518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparative analysis of backpropagation and extended Kalman filter in pattern and batch forms for training neural networks
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939007
Shuhui Li
The extended Kalman filter (EKF) algorithm has been used for training neural networks. Like the backpropagation (BP) algorithm, the EKF algorithm can be applied in pattern or batch form, but the batch-form EKF differs from the gradient averaging of standard batch-mode BP. The paper compares backpropagation and the extended Kalman filter in pattern and batch forms for neural network training. For each comparison between batch-mode EKF and BP, the same batch size is used. An overall RMS error computed over all training examples is adopted for the comparison, which is found to be especially useful for pattern-mode EKF and BP training. Simulations of network training with different batch sizes show that EKF and BP in batch form are usually more stable and achieve a smaller RMS error than in pattern form. However, too large a batch size can cause BP to become trapped in a local minimum, and can also reduce the training effectiveness of the EKF algorithm.
{"title":"Comparative analysis of backpropagation and extended Kalman filter in pattern and batch forms for training neural networks","authors":"Shuhui Li","doi":"10.1109/IJCNN.2001.939007","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939007","url":null,"abstract":"The extended Kalman filter (EKF) algorithm has been used for training neural networks. Like the backpropagation (BP) algorithm, the EKF algorithm can be in pattern or batch form. But the batch form EKF is different from the gradient averaging in standard batch mode BP. The paper compares backpropagation and extended Kalman filter in pattern and batch forms for neural network trainings. For each comparison between the batch-mode EKF and BP, the same batch data size is used. An overall RMS error computed for all training examples is adopted in the paper for the comparison, which is found to be especially beneficial to pattern mode EKF and BP trainings. Simulation of the network training with different batch data sizes shows that EKF and BP in batch-form usually are more stable and can obtain smaller RMS error than in pattern-form. However, too large batch data size can let the BP trap to a \"local minimum\", and can also reduces the network training effect of the EKF algorithm.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"478 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134369067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictability analysis of the heart rate variability
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938407
Zhijie Cai, Liping Tang, J. Ruan, Shixiong Xu, Fanji Gu
Sixteen acute myocardial infarction (AMI) inpatients were selected randomly. Sixteen normal subjects, matched to the AMI group by age and sex, were selected as a control group. The patients' HRV was recorded 9 times during the half year following AMI. Predictability measures based on neural network learning were used to analyze these data. It was found that the HRV of normal subjects was chaotic, whereas the HRV of AMI patients could be either periodic or stochastic. For all sixteen AMI patients, some of the predictability measures remained an order of magnitude lower than those of normal subjects for at least six months after AMI. These measures can therefore serve as a sensitive index of the damage to the heart caused by the AMI attack.
{"title":"Predictability analysis of the heart rate variability","authors":"Zhijie Cai, Liping Tang, J. Ruan, Shixiong Xu, Fanji Gu","doi":"10.1109/IJCNN.2001.938407","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.938407","url":null,"abstract":"Sixteen acute myocardial infarction (AMI) inpatients were selected randomly. Sixteen normal subjects were selected as a control group, their age and sex matched to the AMI group. The patient's HRV were recorded 9 times after AMI in half a year. Some predictability measures based on neural network learning were used to analyze these data. It was found that for normal subjects, their HRVs were chaotic, but for AMI patients, their HRVs could be either periodic or stochastic. For all the sixteen AMI patients, some of the predictability measures kept one order lower than the one for normal subjects at least for six months after AMI. Therefore, it can be a sensitive index for measuring the damage of the heart due to the AMI attack.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131500553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Memory-guided exploration in reinforcement learning
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939497
J. Carroll, T. Peterson, N. Owens
We focus on task transfer in reinforcement learning, and specifically in Q-learning. There are three main model-free methods for performing task transfer in Q-learning: direct transfer, soft transfer, and memory-guided exploration. In direct transfer, the Q-values from a previous task are used to initialize the Q-values of the next task. Soft transfer initializes the Q-values of the new task with a weighted average of the standard initialization value and the Q-values of the previous task. In memory-guided exploration, the Q-values of previous tasks are used as a guide during the agent's initial exploration, and the weight the agent gives to its past experience decreases over time. We explore stability issues related to the off-policy nature of memory-guided exploration and compare memory-guided exploration to soft transfer and direct transfer in three different environments.
{"title":"Memory-guided exploration in reinforcement learning","authors":"J. Carroll, T. Peterson, N. Owens","doi":"10.1109/IJCNN.2001.939497","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939497","url":null,"abstract":"We focus on the task transfer in reinforcement learning and specifically in Q-learning. There are three main model free methods for performing task transfer in Q-learning: direct transfer, soft transfer and memory-guided exploration. In direct transfer, the Q-values from a previous task are used to initialize the Q-values of the next task. The soft transfer initializes the Q-values of the new task with a weighted average of the standard initialization value and the Q-values of the previous task. In memory-guided exploration the Q-values of previous tasks are used as a guide in the initial exploration of the agent. The weight that the agent gives to its past experience decreases over time. We explore stability issues related to the off-policy nature of memory-guided exploration and compare memory-guided exploration to soft transfer and direct transfer in three different environments.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129409920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On-chip learning of FPGA-inspired neural nets
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939021
B. Girau
Neural networks are usually considered to be naturally parallel computing models, but the number of operators and the complex connection graphs of standard neural models cannot be handled by digital hardware devices. A new theoretical and practical framework makes it possible to reconcile simple hardware topologies with complex neural architectures: field programmable neural arrays (FPNAs) lead to powerful neural architectures that are easy to map onto digital hardware, thanks to a simplified topology and an original data exchange scheme. The paper focuses on a class of synchronous FPNAs for which an efficient implementation with on-chip learning is described. Application and implementation results are discussed.
{"title":"On-chip learning of FPGA-inspired neural nets","authors":"B. Girau","doi":"10.1109/IJCNN.2001.939021","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939021","url":null,"abstract":"Neural networks are usually considered as naturally parallel computing models. But the number of operators and the complex connection graphs of standard neural models can not be handled by digital hardware devices. A new theoretical and practical framework allows to reconcile simple hardware topologies with complex neural architectures: field programmable neural arrays (FPNA) lead to powerful neural architectures that are easy to map onto digital hardware, thanks to a simplified topology and an original data exchange scheme. The paper focuses on a class of synchronous FPNAs, for which an efficient implementation with on-chip learning is described. Application and implementation results are discussed.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131209810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}