A new approach for solving large traveling salesman problem using evolutionary ant rules
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007746
Cheng-Fa Tsai, Chun-Wei Tsai
This paper presents a new metaheuristic, called the EA algorithm, for solving the traveling salesman problem (TSP). We introduce a genetic exploitation mechanism, borrowed from the genetic algorithm, into the ant colony system to search the solution space. In addition, we add a nearest-neighbor (NN) construction step to EA so that good solutions are obtained quickly. According to our simulation results, the EA algorithm outperforms the ant colony system (ACS) in tour-length comparisons on the TSP. We also observe that seeding EA or ACS with NN tours as initial solutions yields a significant improvement in reaching a global or near-global optimum on large TSP instances.
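To make the NN seeding step concrete, here is a minimal sketch of greedy nearest-neighbor tour construction on Euclidean city coordinates; the function names are illustrative, and the paper's exact variant may differ.

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Greedy NN tour: from the current city, always visit the closest
    unvisited city. Used here only to seed a metaheuristic."""
    unvisited = set(range(len(cities)))
    unvisited.remove(start)
    tour = [start]
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(cities[cur], cities[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def tour_length(cities, tour):
    # Closed-tour length, wrapping from the last city back to the first.
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```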
{"title":"A new approach for solving large traveling salesman problem using evolutionary ant rules","authors":"Cheng-Fa Tsai, Chun-Wei Tsai","doi":"10.1109/IJCNN.2002.1007746","DOIUrl":"https://doi.org/10.1109/IJCNN.2002.1007746","url":null,"abstract":"This paper presents a new metaheuristic method called EA algorithm for solving the TSP (traveling salesman problem). We introduce a genetic exploitation mechanism in ant colony system from genetic algorithm to search solutions space for solving the traveling salesman problem. In addition, we present a method called nearest neighbor (NN) to EA to improve TSPs thus obtain good solutions quickly. According to our simulation results, the EA algorithm outperforms the ant colony system (ACS) in tour length comparison of traveling salesman problem. In this work it is observed that EA or ACS with NN approach as initial solutions can provide a significant improvement for obtaining a global optimum solution or a near global optimum solution in large TSPs.","PeriodicalId":382771,"journal":{"name":"Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133209264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Increased performance with neural nets - an example from the marketing domain
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007771
U. Johansson, L. Niklasson
This paper shows that artificial neural networks can exploit the temporal structure in marketing-investment data. Two architectures are compared: a tapped-delay neural network and a simple recurrent network. Their performance is evaluated, and a method for improving it is suggested. The method uses a sensitivity analysis to identify which input parameters could be removed to increase performance.
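As an illustration of the pruning idea, the sketch below shows one common form of input sensitivity analysis: perturb each input in turn and rank inputs by how much the model's output moves. The abstract does not specify the paper's exact procedure, so this is an assumed variant.

```python
import numpy as np

def input_sensitivity(predict, X, eps=1e-2):
    """Crude sensitivity analysis (an assumption: the paper's exact
    procedure may differ): perturb one input feature at a time and
    measure the mean change in the model's output. Inputs with
    near-zero scores are candidates for removal."""
    base = predict(X)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        scores.append(np.mean(np.abs(predict(Xp) - base)) / eps)
    return np.array(scores)
```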
{"title":"Increased performance with neural nets - an example from the marketing domain","authors":"U. Johansson, L. Niklasson","doi":"10.1109/IJCNN.2002.1007771","DOIUrl":"https://doi.org/10.1109/IJCNN.2002.1007771","url":null,"abstract":"This paper shows that artificial neural networks can exploit the temporal structure in the domain of marketing investments. Two architectures are compared; a tapped delay neural network and simple recurrent net. The performance is evaluated, and the method for extending it is suggested. The method uses a sensitivity analysis and identifies which input parameters that could be removed for increased performance.","PeriodicalId":382771,"journal":{"name":"Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133980345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Training a kind of hybrid universal learning networks with classification problems
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1005559
D. Li, K. Hirasawa, J. Hu, J. Murata
In search of more parsimonious neural network models, this paper describes a novel approach that exploits the redundancy found in conventional sigmoidal networks. A hybrid universal learning network, built by combining the proposed multiplication units with summation units, is trained on several classification problems. The results show that multiplication units placed in different layers of the network improve its performance.
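The abstract does not define the multiplication unit precisely; a common form in the literature multiplies its weighted inputs instead of summing them, as in this sketch shown alongside a conventional summation unit.

```python
import numpy as np

def summation_unit(x, w, b):
    # Conventional unit: weighted sum followed by a sigmoid.
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def multiplication_unit(x, w):
    # One common form of a multiplication unit (an assumption: the
    # paper's exact definition is not given in the abstract): the unit
    # multiplies its weighted inputs instead of summing them, letting a
    # single unit capture input interactions such as x1 * x2.
    return np.prod(w * x)
```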
{"title":"Training a kind of hybrid universal learning networks with classification problems","authors":"D. Li, K. Hirasawa, J. Hu, J. Murata","doi":"10.1109/IJCNN.2002.1005559","DOIUrl":"https://doi.org/10.1109/IJCNN.2002.1005559","url":null,"abstract":"In the search for even better parsimonious neural network modeling, this paper describes a novel approach which attempts to exploit redundancy found in the conventional sigmoidal networks. A hybrid universal learning network constructed by the combination of proposed multiplication units with summation units is trained for several classification problems. It is clarified that the multiplication units in different layers in the network improve the performance of the network.","PeriodicalId":382771,"journal":{"name":"Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134499373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Silicon retina system applicable to robot vision
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007496
K. Shimonomura, S. Kameda, T. Yagi
A novel robot vision system was configured using a silicon retina and an FPGA circuit. The silicon retina was developed to mimic the parallel circuit structure of the vertebrate retina. The silicon retina used here is an analog CMOS very-large-scale integrated circuit that performs Laplacian-of-Gaussian (∇²G)-like filtering and frame subtraction on the image in real time. The FPGA circuit controls the silicon retina and executes image processing that depends on the application of the system. The resulting robot vision system achieves real-time, robust computation under natural illumination with compact hardware and low power consumption.
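The two retina operations named above can be emulated in software; this sketch uses SciPy's gaussian_laplace as a stand-in for the chip's analog ∇²G-like filtering, plus simple frame subtraction. It is a rough functional emulation, not a model of the actual circuit.

```python
import numpy as np
from scipy import ndimage

def retina_like_step(frame, prev_frame, sigma=1.5):
    """Software sketch of the two operations the abstract attributes to
    the silicon retina: Laplacian-of-Gaussian spatial filtering (edge
    enhancement) and frame subtraction (motion detection)."""
    log = ndimage.gaussian_laplace(frame.astype(float), sigma=sigma)
    motion = frame.astype(float) - prev_frame.astype(float)
    return log, motion
```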
{"title":"Silicon retina system applicable to robot vision","authors":"K. Shimonomura, S. Kameda, T. Yagi","doi":"10.1109/IJCNN.2002.1007496","DOIUrl":"https://doi.org/10.1109/IJCNN.2002.1007496","url":null,"abstract":"A novel robot vision system was configured using a silicon retina and FPGA circuit. Silicon retina has been developed to mimic the parallel circuit structure of the vertebrate retina. The silicon retina used here is an analog CMOS very largescale integrated circuit which executes Laplacian-Gaussian (/spl nabla//sup 2/G)-like filtering and frame subtraction on the image in real time. FPGA circuit controls a silicon retina and executes image processing depending on application of the system. This robot vision system can achieve real time and robust computations under natural illumination with a compact hardware and a low power consumption.","PeriodicalId":382771,"journal":{"name":"Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134525944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A SAM-SOM family: incorporating spatial access methods into constructive self-organizing maps
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007660
E. Cuadros-Vargas, R.A.F. Romero
Self-organizing maps (SOM) support similarity-based information retrieval, but they cannot easily answer queries such as k-nearest neighbors. This paper presents a new family of constructive SOM, called the SAM-SOM family, which incorporates spatial access methods to support more specific queries such as k-NN and range queries. With this family of networks, each pattern has to be presented only once. The approach dramatically speeds up SOM training while requiring a minimal number of parameters.
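To illustrate the query types involved, the sketch below uses a k-d tree, one example of a spatial access method (the abstract does not name the specific structure used), to answer k-NN and range queries over a set of prototype vectors.

```python
import numpy as np
from scipy.spatial import cKDTree

# A k-d tree answers the two query types the SAM-SOM family targets;
# the prototype data here is synthetic, standing in for SOM codebook vectors.
prototypes = np.random.rand(1000, 8)
tree = cKDTree(prototypes)

query = np.random.rand(8)
dists, idx = tree.query(query, k=5)             # k-NN query
in_range = tree.query_ball_point(query, r=0.3)  # range query
```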
{"title":"A SAM-SOM family: incorporating spatial access methods into constructive self-organizing maps","authors":"E. Cuadros-Vargas, R.A.F. Romero","doi":"10.1109/IJCNN.2002.1007660","DOIUrl":"https://doi.org/10.1109/IJCNN.2002.1007660","url":null,"abstract":"Self-organizing maps (SOM) perform similarity information retrieval, but they cannot answer questions like k-nearest neighbors easily. This paper presents a new family of constructive SOM called SAM-SOM family which incorporates spatial access methods to perform more specific queries like k-NN and range queries. Using this family of networks, the patterns have to be presented only once. This approach speeds up dramatically the SOM training process with a minimal number of parameters.","PeriodicalId":382771,"journal":{"name":"Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134543788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive behavior with fixed weights in RNN: an overview
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007449
D. V. Prokhorov, L. A. Feldkamp, I. Tyukin
In this paper we review recent results on the adaptive behavior attained with fixed-weight recurrent neural networks (meta-learning). We argue that such behavior is a natural consequence of prior training.
{"title":"Adaptive behavior with fixed weights in RNN: an overview","authors":"D. V. Prokhorov, L.A. Feldkarnp, I. Tyukin","doi":"10.1109/IJCNN.2002.1007449","DOIUrl":"https://doi.org/10.1109/IJCNN.2002.1007449","url":null,"abstract":"In this paper we review recent results on the adaptive behavior attained with fixed-weight recurrent neural networks (meta-learning). We argue that such behavior is a natural consequence of prior training.","PeriodicalId":382771,"journal":{"name":"Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134623763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experimental analysis of support vector machines with different kernels based on non-intrusive monitoring data
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007480
T. Onoda, H. Murata, Gunnar Rätsch, K. Müller
The estimation of the states of household electric appliances was the first application of support vector machines in the power-system research field. It is therefore important for that field to evaluate support vector machines on this task from a practical point of view. We use the data proposed in Onoda and Ratsch (2000) for this purpose. We place particular emphasis on comparing the types of support vector machines obtained by choosing different kernels, reporting results for polynomial, radial basis function, and sigmoid kernels. On the appliance-state estimation task, the three kernels achieved different error rates. We also compare the capacities of support vector machines obtained by choosing different regularization constants and kernel parameters. The results show that the choice of regularization constant and kernel parameters is as important as the choice of kernel function for real-world applications.
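A modern reconstruction of this experimental protocol might look like the following scikit-learn sketch (an assumption: the library postdates the paper, and the authors used their own implementation). It jointly sweeps kernel type, kernel parameters, and the regularization constant C, mirroring the comparison described above.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# One grid entry per kernel family, each with its own parameters plus C.
param_grid = [
    {"kernel": ["poly"],    "degree": [2, 3, 4],     "C": [0.1, 1, 10]},
    {"kernel": ["rbf"],     "gamma":  [0.01, 0.1, 1], "C": [0.1, 1, 10]},
    {"kernel": ["sigmoid"], "gamma":  [0.01, 0.1],    "C": [0.1, 1, 10]},
]
search = GridSearchCV(SVC(), param_grid, cv=5)
# search.fit(X_train, y_train); search.best_params_ then holds the
# kernel / parameter / C combination with the lowest cross-validated error.
```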
{"title":"Experimental analysis of support vector machines with different kernels based on non-intrusive monitoring data","authors":"T. Onoda, H. Murata, Gunnar Rätsch, K. Muller","doi":"10.1109/IJCNN.2002.1007480","DOIUrl":"https://doi.org/10.1109/IJCNN.2002.1007480","url":null,"abstract":"The estimation of the states of household electric appliances has served as the first application of support vector machines in the power system research field. Thus, it is imperative for power system research field to evaluate the support vector machine on this task from a practical point of view. We use the data proposed in Onoda and Ratsch (2000) for this purpose. We put particular emphasis on comparing different types of support vector machines obtained by choosing different kernels. We report results for polynomial kernels, radial basis function kernels, and sigmoid kernels. In the estimation of the states of household electric appliances, the results for the three different kernels achieved different error rates. We also put particular emphasis on comparing the different capacity of support vector machines obtained by choosing different regularization constants and parameters of kernels. The results show that the choice of regularization constants and parameters of kernels is as important as the choice of kernel functions for real world applications.","PeriodicalId":382771,"journal":{"name":"Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133087513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frustrated chaos in neural networks
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007577
H. Bersini, P. Sener
Frustrated chaos is one of the most frequent dynamical regimes encountered in basic neural networks of any size. This chaotic regime results from an intertwining of almost-stable attractors and leads to an unpredictable itinerancy among them. Similarities with the classical intermittency and crisis-induced intermittency regimes are highlighted. Original aspects of this chaos include: its induction by a logical frustration of the connectivity structure; the recursive nature of the bifurcation diagram, in which new cycles of increasing size appear continuously as the resolution of the diagram is increased; and its description as a weighted combination of the cycles at both ends of the chaotic window, the weight of each cycle depending on the distance to the critical points. The study of learning should benefit from a better understanding of the bifurcations that occur as the connection values are varied.
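A bifurcation diagram of the kind described can be generated numerically; the sketch below scans one connection weight of a hypothetical two-neuron network with a sign-conflicting (frustrated) loop and records post-transient states. The authors' actual network and parameters are not specified in the abstract.

```python
import numpy as np

def bifurcation_scan(w_values, gain=4.0, steps=1200, keep=80):
    """For each connection weight w, iterate a small discrete-time
    network past its transient and record the last `keep` states of
    neuron 0; plotting those points against w gives a bifurcation
    diagram of the kind discussed above."""
    points = []
    for w in w_values:
        W = np.array([[0.0, -w], [w, 0.0]])  # opposing signs: a frustrated loop
        x = np.array([0.1, 0.2])
        for _ in range(steps):               # discard the transient
            x = np.tanh(gain * (W @ x))
        orbit = []
        for _ in range(keep):                # record post-transient iterates
            x = np.tanh(gain * (W @ x))
            orbit.append(x[0])
        points.append((w, orbit))
    return points
```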
{"title":"Frustrated chaos in neural networks","authors":"H. Bersini, P. Sener","doi":"10.1109/IJCNN.2002.1007577","DOIUrl":"https://doi.org/10.1109/IJCNN.2002.1007577","url":null,"abstract":"Frustrated chaos is one of the most frequent dynamical regimes encountered in basic neural networks of any size. This chaotic regime results from an intertwining of almost stable attractors and leads to an unpredictable itinerancy among these attractors. Similarities with the classical intermittency and crisis-induced intermittency chaotic regimes are underlined. Original aspects of this chaos are the induction of this regime by a logical frustration of the connectivity structure, the recursive nature of the bifurcation diagram in which new cycles of increasing size appears continuously by increasing the resolution of the diagram, the description of this chaos as a weighted combination of the cycles at both ends of the chaotic window (the importance of each cycle being dependent on the distance to the critical points). The problematic of learning should draw some benefits from a better understanding of the bifurcations occurring by varying the connection values.","PeriodicalId":382771,"journal":{"name":"Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115328732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Refined simulated annealing method for solving unit commitment problem
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1005493
C. Rajan, M. R. Mohan, K. Manivannan
The objective of this paper is to find a generation schedule that minimizes total operating cost subject to a variety of constraints, with temperature and demand as control parameters. A case study on Neyveli Thermal Power Station - II in India demonstrates the effectiveness of the proposed approach.
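For reference, a generic simulated-annealing skeleton is sketched below; the paper's specific refinements, cost model, and constraint handling are not given in the abstract, so the cost and neighbor functions are left abstract.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, T0=100.0, alpha=0.95, iters=5000):
    """Generic SA skeleton (not the paper's refined variant). `cost`
    evaluates a candidate schedule; `neighbor` proposes a modified one,
    e.g. toggling a unit's on/off status in one hour."""
    x, fx, T = x0, cost(x0), T0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x)
        fy = cost(y)
        # Accept improvements always; accept uphill moves with
        # probability exp(-(fy - fx) / T), which shrinks as T cools.
        if fy < fx or random.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha  # geometric cooling schedule
    return best, fbest
```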
{"title":"Refined simulated annealing method for solving unit commitment problem","authors":"C. Rajan, M. R. Mohan, K. Manivannan","doi":"10.1109/IJCNN.2002.1005493","DOIUrl":"https://doi.org/10.1109/IJCNN.2002.1005493","url":null,"abstract":"The objective of this paper is to find the generation scheduling such that the total operating cost can be minimized, when subjected to a variety of constraints with temperature and demand as control parameter. Neyveli Thermal Power Station - II in India, demonstrates the effectiveness of the proposed approach.","PeriodicalId":382771,"journal":{"name":"Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115530609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A growing parallel self-organizing map for unsupervised learning
Pub Date: 2002-08-07 | DOI: 10.1109/IJCNN.2002.1007813
I. Valova, D. Szer, N. Georgieva
A SOM approximates an unknown high-dimensional input distribution with a lower-dimensional neural network structure, modeling the topology of the input space as closely as possible. We present a SOM that processes the whole input in parallel and organizes itself over time. In this way, networks can be built that do not reorganize their structure from scratch each time a new set of input vectors is presented, but instead adjust their internal architecture in accordance with previous mappings.
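One standard way to process the whole input in parallel is a batch-SOM update, sketched below; this is a generic illustration of that idea, not the authors' growing algorithm.

```python
import numpy as np

def batch_som_step(W, X, grid, sigma):
    """One batch-SOM update: all inputs are processed together, and each
    prototype moves to the neighborhood-weighted mean of the inputs.
    W: (n_units, dim) prototypes; X: (n_inputs, dim) inputs;
    grid: (n_units, 2) unit coordinates on the map lattice."""
    # Best-matching unit for every input at once.
    d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)     # (n_inputs, n_units)
    bmu = d.argmin(axis=1)
    # Gaussian neighborhood weights based on lattice distance to each BMU.
    g = ((grid[bmu][:, None, :] - grid[None, :, :]) ** 2).sum(-1)
    h = np.exp(-g / (2 * sigma ** 2))                       # (n_inputs, n_units)
    # Weighted mean of the inputs claimed by each unit's neighborhood.
    return (h.T @ X) / np.maximum(h.sum(0)[:, None], 1e-12)
```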
{"title":"A growing parallel self-organizing map for unsupervised learning","authors":"I. Valova, D. Szer, N. Georgieva","doi":"10.1109/IJCNN.2002.1007813","DOIUrl":"https://doi.org/10.1109/IJCNN.2002.1007813","url":null,"abstract":"SOM approximates a high dimensional unknown input distribution with lower dimensional neural network structure to model the topology of the input space as closely as possible. We present a SOM that processes the whole input in parallel and organizes itself over time. This way, networks can be developed that do not reorganize their structure from scratch every time a new set of input vectors is presented but rather adjust their internal architecture in accordance with previous mappings.","PeriodicalId":382771,"journal":{"name":"Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114622307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}