Successive adaptation of neural networks in a multi-agent model
H. Ishibuchi, T. Seguchi
Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN'02)
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007527
This paper examines the adaptability of neural networks to gradual and sudden changes in the environment of a non-cooperative repeated market selection game. Neural networks are used as the decision-making systems of agents for iterative game playing. Training data are successively generated from each round of the game by the neural networks.
Efficient nonparametric importance sampling for Bayesian learning
M. Zlochin, Y. Baram
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007535
Monte Carlo methods, such as importance sampling, have become a major tool in Bayesian inference. However, to produce an accurate estimate, the sampling distribution must be close to the target distribution. Several adaptive importance sampling algorithms, proposed over the last few years, attempt to learn a good sampling distribution automatically, but their performance is often unsatisfactory. In addition, a theoretical analysis that takes into account the computational cost of the sampling algorithms is still lacking. In this paper, we present a first attempt at such an analysis, and we propose modifications to existing adaptive importance sampling algorithms that produce significantly more accurate estimates.
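The self-normalized importance sampling estimate discussed above can be sketched as follows. This is a minimal generic illustration, not the authors' adaptive nonparametric algorithm; the standard-normal target, the wider Gaussian proposal, and the quantity estimated (E[x^2]) are all assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E[x^2] under a standard normal target, drawing samples from
# a wider Gaussian proposal N(0, 2^2).
def target_logpdf(x):
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def proposal_logpdf(x, sigma=2.0):
    return -0.5 * (x / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

n = 100_000
x = rng.normal(0.0, 2.0, size=n)            # draws from the proposal
logw = target_logpdf(x) - proposal_logpdf(x)
w = np.exp(logw - logw.max())               # stabilized importance weights
estimate = np.sum(w * x ** 2) / np.sum(w)   # self-normalized estimate of E[x^2]
print(estimate)                             # close to 1.0
```

The accuracy of this estimator degrades as the proposal drifts away from the target, which is exactly the failure mode the adaptive schemes in the paper try to avoid.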
Face detection using neural networks and image decomposition
H. El-Bakry
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1005620
An approach to reducing the computation time taken by fast neural nets during the search process is presented. The divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images, and each sub-image is then tested separately using a fast neural network. Compared to conventional and fast neural networks, experimental results show that a speed-up is achieved when this technique is applied to locate human faces automatically in cluttered scenes. Furthermore, faster face detection is obtained by using parallel processing techniques to test the resulting sub-images simultaneously with the same number of fast neural networks. Moreover, the problem of sub-image centering and normalization in the Fourier space is solved.
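The image-decomposition step described above amounts to tiling the input image into small sub-images that can be classified independently (and hence in parallel). A minimal sketch, in which the tile size and the non-overlapping, edge-dropping policy are assumptions rather than details from the paper:

```python
import numpy as np

def decompose(image, tile=25):
    """Split a 2-D image into non-overlapping tile x tile sub-images.

    Edge regions that do not fill a whole tile are dropped for
    simplicity; a real detector would pad or use overlapping windows.
    """
    h, w = image.shape
    subs = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            subs.append(image[r:r + tile, c:c + tile])
    return subs

# Example: a 100x75 image yields (100//25) * (75//25) = 4 * 3 = 12 tiles,
# each of which would be fed to a fast neural network independently.
tiles = decompose(np.zeros((100, 75)), tile=25)
print(len(tiles))  # 12
```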
Partially weight minimization approach for fault tolerant multilayer neural networks
T. Haruhiko, K. Hidehiko, H. Terumine
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007646
We propose a new learning algorithm to enhance the fault tolerance of multilayer neural networks (MLNs). The method is based on the fact that strong weights make MLNs sensitive to faults. To decrease the number of strong connections, we introduce a new evaluation function for the learning algorithm. The function consists of two terms: one is the output error, and the other is the squared sum of the HO-weights (the weights between the hidden layer and the output layer). The second term aims to decrease the magnitude of the HO-weights. By decreasing only the HO-weights, we enhance fault tolerance compared with the previous method.
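The two-term evaluation function described above is a form of partial weight decay. A hedged sketch follows; the penalty coefficient `lam` and the example weight values are assumptions for illustration, not values from the paper:

```python
import numpy as np

def evaluation(y_pred, y_true, W_ho, lam=0.01):
    """Output error plus a penalty on hidden-to-output (HO) weights only.

    Penalizing only W_ho discourages individually strong connections
    into the output layer, which the paper identifies as the
    fault-sensitive ones.
    """
    output_error = 0.5 * np.sum((y_pred - y_true) ** 2)
    ho_penalty = lam * np.sum(W_ho ** 2)  # squared sum of HO-weights
    return output_error + ho_penalty

# The input-to-hidden weights do not appear in the penalty, so gradient
# descent on this function shrinks only the HO-weights.
W_ho = np.array([[2.0, -1.0], [0.5, 0.0]])
print(evaluation(np.array([0.9, 0.1]), np.array([1.0, 0.0]), W_ho))  # 0.0625
```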
Model-based incipient fault diagnosis - multi-step neuro-predictors and multiresolution signal processing
A. Parlos, Kyusung Kim
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1005490
Timely detection and diagnosis of incipient faults is desirable for online condition assessment purposes. In this paper, a model-based fault diagnosis system is developed for induction motors, using recurrent neural networks for multistep transient response prediction and multiresolution signal processing for nonstationary signal feature extraction. The proposed diagnosis system uses only measured motor terminal currents and voltages, and motor speed. The effectiveness of the diagnosis system is demonstrated through staged motor faults of electrical and mechanical origin. Scaling of the diagnosis system to machines with different power ratings is demonstrated with data from 2.2 kW, 373 kW and 597 kW induction motors.
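A model-based diagnosis scheme of this kind flags a fault when the measured signal departs from the healthy model's multistep prediction. A generic residual sketch, in which the signals, frequencies, and amplitudes are hypothetical placeholders rather than data from the paper:

```python
import numpy as np

def fault_score(measured, predicted):
    """Root-mean-square prediction residual; it grows when the measured
    signal departs from the healthy model's multistep prediction."""
    residual = measured - predicted
    return np.sqrt(np.mean(residual ** 2))

# Hypothetical healthy motor current vs. the same signal with an added
# fault-frequency component (a stand-in for an incipient fault signature).
t = np.linspace(0.0, 1.0, 1000)
healthy = np.sin(2 * np.pi * 50 * t)
predicted = healthy.copy()                            # healthy model output
faulty = healthy + 0.5 * np.sin(2 * np.pi * 120 * t)  # assumed fault signature
print(fault_score(healthy, predicted), fault_score(faulty, predicted))
```

In the paper this residual analysis is combined with multiresolution (wavelet-based) feature extraction, which a simple RMS score does not capture.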
Opportunistically cooperative neural learning in mobile agents
Yanli Yang, M. Polycarpou, A. Minai
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007560
Searching a spatially extended environment using autonomous mobile agents is a problem that arises in many applications, e.g., search-and-rescue, search-and-destroy, intelligence gathering, surveillance, disaster response, and exploration. Since agents such as UAVs are often energy-limited and operate in hostile environments, there is a premium on efficient cooperative search without superfluous communication. In this paper, we consider how a group of mobile agents, using only limited messages and incomplete information, can learn to search an environment efficiently. In particular, we consider the issue of centralized vs. decentralized intelligence and the effect of opportunistic sharing of learned information on search performance.
A neuro-fuzzy based oil/gas producibility estimation method
H. Malki, J. Baldwin
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1005593
We present a hybrid neuro-fuzzy technique for predicting the producibility of a well. First, multilayer neural networks are used to compute petrophysical parameters such as quality control curves and permeability. In particular, neural networks are used to predict permeability from nuclear magnetic resonance (NMR) logs. Next, the permeability is used as one of the inputs to a fuzzy logic inference engine that determines producibility and suggests a production ranking for multiple zones in a well. The technique is tested on well logs, and the results are comparable to expert identification of producible zones. The main advantages of the proposed model are faster processing time and less dependence on experts during application.
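The fuzzy inference stage can be illustrated with a toy two-rule system over a single permeability input. The membership shapes, breakpoints, and consequent values below are invented for illustration and are not the paper's rule base:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def producibility(permeability_md):
    """Two hypothetical rules: low permeability -> low producibility (0.2),
    high permeability -> high producibility (0.9), defuzzified by a
    weighted average of the rule consequents."""
    low = tri(permeability_md, -1.0, 0.0, 100.0)
    high = tri(permeability_md, 0.0, 100.0, 201.0)
    return (low * 0.2 + high * 0.9) / (low + high + 1e-12)

print(producibility(0.0), producibility(100.0))  # low score, then high score
```

In the paper, the permeability input to such an engine comes from the neural network's prediction on NMR logs rather than being measured directly.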
Robot motion simulation using wavelet neural network
Qingjie Zhao, Zeng-qi Sun
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1005589
A technique for robot motion simulation based on image-based view synthesis is proposed. An eigenspace method is used to acquire compact representations of the images, and a wavelet neural network is utilized to map joint positions into these compact representations. The trajectory in the joint space is first planned to generate a joint sequence, and the image sequence of the robot motion is then synthesized directly from reference images. No calibration is needed. Experimental results are demonstrated.
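The eigenspace compression step can be sketched with plain PCA via SVD: project each image onto the top-k eigenvectors of the centered data to get a compact code. This is a generic illustration (the image sizes and k are assumptions), and it does not reproduce the wavelet-network mapping from joint positions to codes:

```python
import numpy as np

rng = np.random.default_rng(0)

def eigenspace(images, k=3):
    """Project flattened images onto the top-k principal directions,
    yielding a compact k-dimensional code per image."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]
    codes = Xc @ basis.T  # compact representations
    return mean, basis, codes

images = rng.random((20, 8, 8))  # 20 tiny 8x8 stand-in "images"
mean, basis, codes = eigenspace(images, k=3)
print(codes.shape)  # (20, 3)
```

View synthesis then amounts to predicting a code for a new joint position and mapping it back through the basis: `mean + code @ basis`.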
A comparative study of statistical ensemble methods on mismatch conditions
D. Luo, Ke Chen
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1005442
Unlike previous comparative studies, we present an empirical evaluation of three typical statistical ensemble methods - boosting, bagging, and combination of weak perceptrons - on speaker identification under miscellaneous mismatch conditions. Moreover, different combination strategies for creating an ensemble are also investigated. The results characterize the generalization capabilities of these methods under mismatch conditions, providing an alternative insight into understanding them.
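Of the three ensemble methods compared, bagging is the simplest to sketch: train each base model on a bootstrap resample and aggregate by majority vote. A minimal generic illustration, not the paper's speaker-identification setup; the 1-nearest-neighbour base learner is chosen only because it needs no training loop:

```python
import numpy as np

rng = np.random.default_rng(1)

def bagging_predict(X_train, y_train, x, n_models=25):
    """Bagging: each base model sees a bootstrap resample of the
    training set; predictions are combined by majority vote."""
    n = len(X_train)
    votes = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)     # bootstrap resample
        Xb, yb = X_train[idx], y_train[idx]
        nearest = np.argmin(np.abs(Xb - x))  # 1-NN base prediction
        votes.append(yb[nearest])
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]         # majority vote

X = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])
y = np.array([0, 0, 0, 1, 1, 1])
print(bagging_predict(X, y, 0.15))  # → 0
```

Boosting differs in that resampling (or reweighting) is driven by the errors of earlier models rather than being uniform.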
Cooperative information control and second language learning: a new information theoretic approach to self-organizing maps
R. Kamimura, T. Kamimura
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007825
We propose an information theoretic approach called cooperative information control. The new method realizes self-organizing maps in a way completely different from the conventional SOM and can create clearer neuron firing patterns. In the method, competition is realized by maximizing the information content in neurons, while cooperation is implemented by having neurons behave similarly to their neighbors. These two processes are unified and controlled within the framework of cooperative information control. We applied the new method to a data analysis task in applied linguistics. Experimental results confirmed that the method yields more explicit neuron firing patterns than conventional self-organizing maps.
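For contrast, the conventional SOM that the method departs from implements competition via a best-matching unit and cooperation via a neighborhood function. A standard textbook sketch (grid size, learning rate, and neighborhood width are assumptions), not the authors' information-control algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def som_train(data, grid=5, dim=2, epochs=200, lr=0.5, sigma=1.5):
    """Conventional SOM: competition picks the best-matching unit (BMU);
    cooperation moves the BMU's grid neighbors toward the input too."""
    weights = rng.random((grid, grid, dim))
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        x = data[rng.integers(len(data))]
        # Competition: the unit closest to x wins.
        bmu = np.unravel_index(
            np.argmin(np.sum((weights - x) ** 2, axis=-1)), (grid, grid))
        # Cooperation: a Gaussian neighborhood around the winner.
        d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * (1 - t / epochs) * h * (x - weights)
    return weights

data = rng.random((500, 2))
w = som_train(data)
print(w.shape)  # (5, 5, 2)
```

The paper replaces this distance-based competition with information maximization in the neurons, which is what sharpens the firing patterns.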