A new approach for solving large traveling salesman problem using evolutionary ant rules
Cheng-Fa Tsai, Chun-Wei Tsai
Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN'02) | Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007746
This paper presents a new metaheuristic, the EA algorithm, for solving the traveling salesman problem (TSP). We introduce a genetic exploitation mechanism from genetic algorithms into the ant colony system to search the solution space. In addition, we apply nearest-neighbor (NN) tour construction in EA to obtain good solutions quickly. According to our simulation results, the EA algorithm outperforms the ant colony system (ACS) in tour length on the TSP. We also observe that EA or ACS seeded with NN initial solutions yields a significant improvement in reaching a global or near-global optimum on large TSP instances.
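The nearest-neighbor tour construction the abstract uses for seeding can be sketched as follows; this is a minimal illustration of the standard NN heuristic, and the function names are ours, not the paper's:

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Greedy nearest-neighbor construction: from the current city,
    always visit the closest unvisited city next."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    """Length of the closed tour (returns to the start city)."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

Such a tour is cheap to build (O(n²)) and gives the ant colony or evolutionary search a much better starting point than a random permutation.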
Increased performance with neural nets - an example from the marketing domain
U. Johansson, L. Niklasson
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007771
This paper shows that artificial neural networks can exploit the temporal structure of marketing-investment data. Two architectures are compared: a tapped delay neural network and a simple recurrent network. Their performance is evaluated, and a method for improving it is suggested. The method uses sensitivity analysis to identify which input parameters can be removed to increase performance.
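The input side of a tapped delay network can be sketched as a windowing step: each training pair consists of the most recent observations and the next value. This is a generic illustration of tapped-delay preprocessing, not code from the paper:

```python
def tapped_delay_windows(series, n_taps):
    """Turn a univariate series into (input window, target) pairs:
    each input is the n_taps most recent observations, and the
    target is the value that follows them."""
    pairs = []
    for t in range(n_taps, len(series)):
        pairs.append((series[t - n_taps:t], series[t]))
    return pairs
```

A simple recurrent network, by contrast, keeps the history in its hidden state instead of in the input window, which is what the paper's comparison turns on.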
Training a kind of hybrid universal learning networks with classification problems
D. Li, K. Hirasawa, J. Hu, J. Murata
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1005559
In the search for more parsimonious neural network models, this paper describes a novel approach that exploits the redundancy found in conventional sigmoidal networks. A hybrid universal learning network, constructed by combining the proposed multiplication units with summation units, is trained on several classification problems. The results clarify that multiplication units placed in different layers of the network improve its performance.
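The two unit types can be contrasted with a minimal sketch, assuming the simplest reading of the abstract: a summation unit computes a sigmoid of a weighted sum, while a multiplication unit outputs the product of its inputs, representing high-order interactions directly. The exact unit definitions in the paper may differ:

```python
import math

def summation_unit(inputs, weights, bias=0.0):
    """Conventional sigmoidal unit: weighted sum passed through a sigmoid."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def multiplication_unit(inputs):
    """Multiplication unit: outputs the product of its inputs, so a single
    unit can express interactions a sum-of-sigmoids would need many units for."""
    p = 1.0
    for x in inputs:
        p *= x
    return p
```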
Silicon retina system applicable to robot vision
K. Shimonomura, S. Kameda, T. Yagi
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007496
A novel robot vision system was configured using a silicon retina and an FPGA circuit. The silicon retina was developed to mimic the parallel circuit structure of the vertebrate retina; the one used here is an analog CMOS very-large-scale integrated circuit that performs Laplacian-of-Gaussian (∇²G)-like filtering and frame subtraction on the image in real time. The FPGA circuit controls the silicon retina and performs application-dependent image processing. The system achieves real-time, robust computation under natural illumination with compact hardware and low power consumption.
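The two operations the chip computes in analog hardware can be illustrated in software; this is a generic digital approximation (a 3x3 Laplacian as a stand-in for the ∇²G-like center-surround response), not a model of the actual circuit:

```python
# 3x3 Laplacian kernel: a crude digital stand-in for the chip's
# center-surround (Laplacian-of-Gaussian-like) spatial filtering.
LAPLACIAN = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]

def convolve3x3(img, kernel):
    """'Valid' 2-D convolution of a grayscale image with a 3x3 kernel."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h - 2):
        row = []
        for x in range(w - 2):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky][x + kx]
            row.append(acc)
        out.append(row)
    return out

def frame_difference(prev, cur):
    """Frame subtraction: per-pixel difference between consecutive
    frames highlights moving regions."""
    return [[c - p for p, c in zip(pr, cr)] for pr, cr in zip(prev, cur)]
```

On a uniform region the Laplacian response is zero, which is why this kind of filtering is robust to the absolute illumination level.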
A SAM-SOM family: incorporating spatial access methods into constructive self-organizing maps
E. Cuadros-Vargas, R.A.F. Romero
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007660
Self-organizing maps (SOMs) support similarity-based information retrieval, but they cannot easily answer queries such as k-nearest neighbors. This paper presents a new family of constructive SOMs, the SAM-SOM family, which incorporates spatial access methods to answer more specific queries such as k-NN and range queries. With this family of networks, each pattern has to be presented only once, which dramatically speeds up SOM training while requiring a minimal number of parameters.
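The two query types in question can be sketched as linear scans over a SOM codebook; the whole point of SAM-SOM is to replace these O(n) scans with a spatial access method (e.g. a tree-structured index) so they scale, so treat this only as the baseline being improved upon:

```python
import math

def knn_query(codebook, query, k):
    """Baseline k-nearest-neighbor query: rank all codebook vectors by
    distance to the query and return the k closest indices."""
    ranked = sorted(range(len(codebook)),
                    key=lambda i: math.dist(codebook[i], query))
    return ranked[:k]

def range_query(codebook, query, radius):
    """Baseline range query: indices of all codebook vectors within
    `radius` of the query point."""
    return [i for i, v in enumerate(codebook) if math.dist(v, query) <= radius]
```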
Adaptive behavior with fixed weights in RNN: an overview
D. V. Prokhorov, L. A. Feldkamp, I. Tyukin
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007449
In this paper we review recent results on the adaptive behavior attained with fixed-weight recurrent neural networks (meta-learning). We argue that such behavior is a natural consequence of prior training.
Experimental analysis of support vector machines with different kernels based on non-intrusive monitoring data
T. Onoda, H. Murata, Gunnar Rätsch, K. Muller
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007480
Estimating the states of household electric appliances was the first application of support vector machines in power-system research, so it is important to evaluate support vector machines on this task from a practical point of view. We use the data proposed by Onoda and Rätsch (2000) for this purpose. We place particular emphasis on comparing support vector machines with different kernels, reporting results for polynomial, radial basis function, and sigmoid kernels; on this task, the three kernels achieve different error rates. We also compare machines of different capacity, obtained by varying the regularization constant and the kernel parameters. The results show that choosing the regularization constant and kernel parameters is as important as choosing the kernel function in real-world applications.
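The three kernel families being compared can be written down directly; these are the standard definitions (with illustrative default hyperparameters, not the values tuned in the paper):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def poly_kernel(x, y, degree=3, c=1.0):
    """Polynomial kernel: (x . y + c)^degree."""
    return (dot(x, y) + c) ** degree

def rbf_kernel(x, y, gamma=0.5):
    """Radial basis function kernel: exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def sigmoid_kernel(x, y, alpha=0.1, c=0.0):
    """Sigmoid kernel: tanh(alpha * x . y + c)."""
    return math.tanh(alpha * dot(x, y) + c)
```

The paper's point is that `degree`, `gamma`, `alpha`, and the SVM regularization constant C matter as much as which of these three functions is chosen.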
Genetic evolution of neural networks that remember
J. Dávila
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007656
The GENDALC system has previously been used to evolve neural network topologies for natural language tasks. This paper presents results on additional tasks that require remembering and processing previous input patterns. The results indicate that GENDALC is particularly well suited to tasks that require memory.
Motivation for a genetically-trained topography-preserving map
J. S. Kirk, J. Zurada
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1005504
It is often observed that the lattice of a well-trained self-organizing map (SOM) preserves the topology of the data set. In this paper, we examine what is meant by this claim and discuss a related goal for a dimension-reducing mapping. We term this goal "topography preservation" and attempt to fulfill it using a two-stage training method called genetically-trained topographic mapping. In the first stage of training, a clustering algorithm maps a set of input data points to each neuron. In the second stage, a genetic algorithm assigns adjacencies between the neurons of the output lattice according to a fitness function defined by the topography-preservation goal. Stock market data and an artificial data set are used to illustrate the relative strengths of the standard SOM and the new algorithm.
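The first stage (assigning a set of input points to each neuron via clustering) can be sketched with plain k-means; the abstract does not say which clustering algorithm is used, so take this as one possible instantiation:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Stage one of the two-stage method: cluster the data so that each
    output neuron is assigned one cluster of input points (plain k-means
    used here as an illustrative choice of clustering algorithm)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[j].append(p)
        # Update step: move each center to its cluster's mean.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centers, clusters
```

Stage two would then search over lattice adjacencies between these neuron/cluster assignments with a genetic algorithm, which is where the paper's contribution lies.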
A multi-level and multi-scale evolutionary modeling system for scientific data
Zhou Kang, Yan Li, H. de Garis, Lishan Kang
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1005565
The discovery of scientific laws is always built on scientific experiments and observed data. Any real-world complex system is governed by basic laws at the macroscopic, submicroscopic, and microscopic levels, and discovering such laws from observed data is a central task of data mining (DM) and knowledge discovery in databases (KDD). Based on evolutionary computation, this paper proposes a multi-level, multi-scale evolutionary modeling system that models the macro-behavior of a system with ordinary differential equations and its micro-behavior with natural fractals. The system can be used to model and predict observed scientific time series, such as sunspot data and flood-season precipitation, and consistently obtains good results.
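The macro-level idea (evolving an ODE model to fit an observed series) can be sketched in miniature: a (1+1)-evolution strategy tuning the single parameter of dx/dt = a*x against data. The paper evolves whole ODE structures across multiple levels and scales; this toy fits only one coefficient and is ours, not the paper's system:

```python
import random

def simulate(a, x0, dt, steps):
    """Forward-Euler integration of the candidate macro-level ODE dx/dt = a*x."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * a * xs[-1])
    return xs

def fit_parameter(data, dt, generations=300, seed=0):
    """(1+1)-evolution strategy: mutate the ODE parameter with Gaussian
    noise and keep the mutant only if it reduces the squared error
    between the simulated and observed series."""
    rng = random.Random(seed)

    def err(a):
        sim = simulate(a, data[0], dt, len(data) - 1)
        return sum((s - d) ** 2 for s, d in zip(sim, data))

    a, best = 0.0, err(0.0)
    for _ in range(generations):
        cand = a + rng.gauss(0.0, 0.1)
        e = err(cand)
        if e < best:
            a, best = cand, e
    return a
```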