Training product unit networks using cooperative particle swarm optimisers
Pub Date: 2001-07-15. DOI: 10.1109/IJCNN.2001.939004
F. van den Bergh, A. Engelbrecht
The cooperative particle swarm optimiser (CPSO) is a variant of the particle swarm optimiser (PSO) that splits the problem vector, for example a neural network weight vector, across several swarms. The paper investigates the influence that the number of swarms used (also called the split factor) has on the training performance of a product unit neural network. Results are presented comparing the training performance of the two algorithms, PSO and CPSO, on the task of training the weight vector of a product unit neural network.
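A minimal sketch may make the cooperative split concrete. Everything below is an assumption for illustration: a sphere function stands in for the product-unit network's training error, the context-vector scheme is one common reading of cooperative PSO, and the inertia/acceleration constants are standard PSO defaults rather than the paper's settings.

```python
import numpy as np

def cpso(fitness, dim, n_swarms=4, n_particles=10, iters=200, seed=0):
    """Sketch of cooperative PSO: the solution vector is split across
    n_swarms swarms (the split factor); each swarm optimises its own
    slice while the other slices come from a shared context vector."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(np.arange(dim), n_swarms)
    pos = [rng.uniform(-1, 1, (n_particles, len(p))) for p in parts]
    vel = [np.zeros_like(x) for x in pos]
    pbest = [x.copy() for x in pos]
    pbest_f = [np.full(n_particles, np.inf) for _ in parts]
    context = rng.uniform(-1, 1, dim)          # best-known full vector

    def eval_slice(k, slice_vec):
        trial = context.copy()
        trial[parts[k]] = slice_vec            # plug the slice into the context
        return fitness(trial)

    for _ in range(iters):
        for k in range(n_swarms):
            for i in range(n_particles):
                f = eval_slice(k, pos[k][i])
                if f < pbest_f[k][i]:
                    pbest_f[k][i], pbest[k][i] = f, pos[k][i].copy()
            best_i = int(np.argmin(pbest_f[k]))
            if pbest_f[k][best_i] < fitness(context):
                context[parts[k]] = pbest[k][best_i]   # update shared context
            r1 = rng.random(pos[k].shape)
            r2 = rng.random(pos[k].shape)
            vel[k] = (0.72 * vel[k]
                      + 1.49 * r1 * (pbest[k] - pos[k])
                      + 1.49 * r2 * (context[parts[k]] - pos[k]))
            pos[k] = pos[k] + vel[k]
    return context, fitness(context)

# Example: minimise a quadratic standing in for the network training error.
w, err = cpso(lambda v: float(np.sum(v ** 2)), dim=12, n_swarms=3)
print(err)
```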
{"title":"Training product unit networks using cooperative particle swarm optimisers","authors":"F. van den Bergh, A. Engelbrecht","doi":"10.1109/IJCNN.2001.939004","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939004","url":null,"abstract":"The cooperative particle swarm optimiser (CPSO) is a variant of the particle swarm optimiser (PSO) that splits the problem vector, for example a neural network weight vector, across several swarms. The paper investigates the influence that the number of swarms used (also called the split factor) has on the training performance of a product unit neural network. Results are presented, comparing the training performance of the two algorithms, PSO and CPSO, as applied to the task of training the weight vector of a product unit neural network.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116466295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How does our neural system represent an object in brain (Recognition-by-Element)
Pub Date: 2001-07-15. DOI: 10.1109/IJCNN.2001.939000
Tianzhen Wang
The paper presents a theory of human image understanding, Recognition-by-Element (RBE), which suggests that the representation of an object is a set of many members, called elements. Most elements are 2D projections (images) of the 3D object onto the retina from a specific viewpoint, under specific illumination, against a specific background, and so on; these 2D images are encoded, stored, and retrieved globally. The remaining elements are other sensory inputs evoked by the object, such as sounds, tastes, smells, and tactile sensations. The model can explain implicit memory, perceptual constancies, hand and face detectors, and cases of object recognition impairment. A computer program has been developed to simulate RBE, with satisfactory results.
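As a loose illustration only (not the paper's program), the element-set idea can be read as nearest-neighbour matching of an input against every stored element of every object; the vectors and labels below are invented for the example.

```python
import numpy as np

class RBEMemory:
    """Toy reading of Recognition-by-Element: an object is represented by a
    set of stored elements (e.g. 2D view vectors); recognition picks the
    object owning the element closest to the probe."""
    def __init__(self):
        self.elements = []          # list of (object_label, element_vector)

    def learn(self, label, element):
        self.elements.append((label, np.asarray(element, dtype=float)))

    def recognise(self, probe):
        probe = np.asarray(probe, dtype=float)
        # global matching against every stored element of every object
        label, _ = min(self.elements,
                       key=lambda le: np.linalg.norm(le[1] - probe))
        return label

mem = RBEMemory()
mem.learn("cup", [1.0, 0.0, 0.2])   # one viewpoint of the object
mem.learn("cup", [0.9, 0.1, 0.3])   # another viewpoint
mem.learn("pen", [0.0, 1.0, 0.8])
print(mem.recognise([0.95, 0.05, 0.25]))   # -> "cup"
```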
{"title":"How does our neural system represent an object in brain (Recognition-by-Element)","authors":"Tianzhen Wang","doi":"10.1109/IJCNN.2001.939000","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939000","url":null,"abstract":"In the paper a theory of human image understanding, Recognition-by-Element (RBE), is presented that suggests that the representation of an object may be a set that consists of many members, called elements, most of them are 2D projections (images) of the 3D object to the retina from a specific viewpoint, in specific illumination, specific background, and so on, these 2D images encoding, storing, and retrieving globally, the rest are other sensor inputs evoked by the object, such as voices, tastes, smells, tactilities, etc. The model can explain the implicit memory, perceptual constancies, hand and face detectors, and the cases of object recognition impairment. A computer program has been developed to simulate the RBE, the result is satisfactory.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"85 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122709051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sliding mode control of nonlinear systems using Gaussian radial basis function neural networks
Pub Date: 2001-07-15. DOI: 10.1109/IJCNN.2001.939066
M. O. Efe, O. Kaynak, Xinghuo Yu, Bogdan M. Wilamowski
A method for driving the dynamics of a nonlinear system to a sliding mode is discussed. The approach is based on a sliding mode control methodology: the system under control is driven towards a sliding mode by tuning the parameters of the controller. In this loop, the parameters of the controller are adjusted so that a zero learning-error level is reached in a one-dimensional phase space defined on the output of the controller. A Gaussian radial basis function neural network is used as the controller.
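The following toy simulation sketches the idea under stated assumptions: a double-integrator plant, a sliding function s = e' + lam*e, and a simple gradient-style tuning law with a reaching term. The paper's actual adaptation law is not reproduced here.

```python
import numpy as np

# Toy tracking problem: a double-integrator plant x'' = u must reach the
# origin. A Gaussian RBF network supplies u, and its weights are tuned so
# the sliding function s is driven toward zero. The plant, the reaching
# gain k, and the law w <- w - gamma*s*phi*dt are assumptions of this sketch.
centres = np.linspace(-2.0, 2.0, 9)
sigma, lam, gamma, k, dt = 0.5, 2.0, 5.0, 4.0, 0.01
w = np.zeros_like(centres)

def phi(z):
    """Gaussian radial basis activations for scalar input z."""
    return np.exp(-((z - centres) ** 2) / (2.0 * sigma ** 2))

x, xdot = 1.0, 0.0
for _ in range(2000):
    e, edot = x, xdot              # tracking error w.r.t. the zero reference
    s = edot + lam * e             # sliding function
    feats = phi(e)
    u = float(w @ feats) - k * s   # network output plus a reaching term
    w -= gamma * s * feats * dt    # tune weights to push s toward zero
    xdot += u * dt                 # integrate the plant
    x += xdot * dt
print(f"final |s| = {abs(xdot + lam * x):.4f}")
```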
{"title":"Sliding mode control of nonlinear systems using Gaussian radial basis function neural networks","authors":"M. O. Efe, O. Kaynak, Xinghuo Yu, Bogdan M. Wilamowski","doi":"10.1109/IJCNN.2001.939066","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939066","url":null,"abstract":"A method for driving the dynamics of a nonlinear system to a sliding mode is discussed. The approach is based on a sliding mode control methodology, i.e., the system under control is driven towards a sliding mode by tuning the parameters of the controller. In this loop, the parameters of the controller are adjusted such that a zero learning error level is reached in one dimensional phase space defined on the output of the controller. A Gaussian radial basis function neural network is used as the controller.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"41 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114039287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A neural hybrid system for large memory association
Pub Date: 2001-07-15. DOI: 10.1109/IJCNN.2001.939527
S.X. Souza, A. D. Doria Neto, J.A.F. Costa, M.L. de Andrade Netto
A neural hybrid system based on Kohonen and Hopfield networks is proposed for memory association. It uses a heuristic approach to split a total set of patterns into several subsets, with the aim of increasing the performance of the parallel architecture of Hopfield networks (PAHN). This architecture avoids many spurious states, enabling a pattern storage capacity larger than that permitted by a typical Hopfield network. The strategy consists of a method that sorts patterns with the SOM algorithm and distributes them into subsets in such a way that the patterns within a subset are as close to mutually orthogonal as possible. The results show that this strategy for distributing patterns into subsets performs well compared with random distributions and with the exhaustive approach. The results also show that the proposed heuristic leads to pattern subsets that enable more robust memory retrieval.
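A greedy stand-in for the SOM-based sorting step illustrates the goal of near-orthogonal subsets; the assignment rule and sizes below are assumptions for the sketch, not the authors' method.

```python
import numpy as np

def split_patterns(patterns, k):
    """Assign each bipolar pattern to whichever of k subsets it is most
    nearly orthogonal to (smallest total |inner product| with the patterns
    already there). Each subset would then be stored in its own Hopfield
    network of the parallel architecture (PAHN)."""
    subsets = [[] for _ in range(k)]
    for p in patterns:
        cost = [sum(abs(int(p @ q)) for q in s) for s in subsets]
        subsets[int(np.argmin(cost))].append(p)
    return subsets

rng = np.random.default_rng(1)
pats = [rng.choice([-1, 1], size=64) for _ in range(20)]
for i, s in enumerate(split_patterns(pats, k=4)):
    # crosstalk proxy: largest |overlap| between distinct subset members
    worst = max((abs(int(a @ b)) for a in s for b in s if a is not b),
                default=0)
    print(f"subset {i}: {len(s)} patterns, worst overlap {worst}")
```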
{"title":"A neural hybrid system for large memory association","authors":"S.X. Souza, A. D. Doria Neto, J.A.F. Costa, M.L. de Andrade Netto","doi":"10.1109/IJCNN.2001.939527","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939527","url":null,"abstract":"A neural hybrid system based on Kohonen and Hopfield networks is proposed for memory association. It uses a heuristic approach to split a total set of patterns into various subsets with the aim to increase performance of the parallel architecture of Hopfield networks (PAHN). This architecture avoids several spurious states enabling a pattern storage capacity larger then permitted by a typical Hopfield network. The strategy consists of a method to sort patterns with the SOM algorithm and distribute them into these subsets in such a way that the patterns of the same subset are to be as more orthogonal as possible among themselves. The results show that the strategy employed to distribute patterns in subsets works well when compared with the random distributions and with the exhaustive approach. The results also show that the proposed heuristic lead to patterns subsets that enable more robust memory retrieval.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117073042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid learning algorithm for multilayer perceptrons to improve generalization under sparse training data conditions
Pub Date: 2001-07-15. DOI: 10.1109/IJCNN.2001.939491
M. Tonomura, K. Nakayama
The backpropagation (BP) algorithm is the method most commonly used to train multilayer perceptrons. This algorithm, however, has difficulty achieving high generalization when the number of training data is limited, i.e. when training data are sparse. In this paper, a new learning algorithm is proposed. It combines the BP algorithm with a mechanism that modifies the hyperplanes, taking internal information into account. In other words, the hyperplanes are controlled by the distance between the hyperplanes and the critical training data, which lie close to the decision boundary. This algorithm works well for sparse training data, achieving high generalization. In order to evaluate generalization, it is assumed that all data are normally distributed around the training data. Several pattern classification simulations demonstrate the efficiency of the proposed algorithm.
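The hyperplane-adjustment idea can be illustrated on a single unit: after gradient training, re-centre the hyperplane between the closest ("critical") samples of each class. The update below is a toy margin-style stand-in under that reading, not the paper's exact rule.

```python
import numpy as np

def recenter_hyperplane(w, b, X, y):
    """Shift the bias so the hyperplane w.x + b = 0 sits midway between the
    closest positive and closest negative training samples."""
    d = (X @ w + b) / np.linalg.norm(w)      # signed distances to hyperplane
    near_pos = d[y == 1].min()               # closest positive sample
    near_neg = d[y == -1].max()              # closest negative sample
    shift = (near_pos + near_neg) / 2.0      # midpoint of the two margins
    return b - shift * np.linalg.norm(w)

X = np.array([[2.0, 2.0], [3.0, 1.5], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w, b = np.array([1.0, 1.0]), -2.5            # e.g. the result of BP training
b = recenter_hyperplane(w, b, X, y)
print("new bias:", round(b, 3))              # boundary now equidistant
```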
{"title":"A hybrid learning algorithm for multilayer perceptrons to improve generalization under sparse training data conditions","authors":"M. Tonomura, K. Nakayama","doi":"10.1109/IJCNN.2001.939491","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939491","url":null,"abstract":"The backpropagation algorithm is mainly used for multilayer perceptrons. This algorithm is, however, difficult to achieve high generalization when the number of training data is limited, i.e. sparse training data. In this paper, a new learning algorithm is proposed. It combines the BP algorithm and modifies hyperplanes taking internal information into account. In other words, the hyperplanes are controlled by the distance between the hyperplanes and the critical training data, which locate close to the boundary. This algorithm works well for the sparse training data to achieve high generalization. In order to evaluate generalization, it is assumed that all data are normally distributed around the training data. Several simulations of pattern classification demonstrate the efficiency of the proposed algorithm.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129712961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A soft probabilistic neural network for implementation of Bayesian classifiers
Pub Date: 2001-07-15. DOI: 10.1109/IJCNN.2001.939062
M. Menhaj, F. Delgosha
The Bayesian classifier, the classifier with the optimum decision rule, can be implemented with probabilistic neural networks (PNNs). The authors previously presented a new competitive learning algorithm for training such a network when all classes are completely separated. This paper generalizes that work to the case of overlapping categories. In the new perspective, the network is in effect made blind with respect to the overlapping training samples, so the new training algorithm is called the soft PNN (SPNN). The usefulness of SPNN is demonstrated on two 2D classification problems. The simulation results highlight the merit of the proposed method.
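For context, a standard PNN classifies by comparing Parzen-window density estimates per class, as in the sketch below; the soft/competitive training rule for overlapping classes is the paper's contribution and is not reproduced here. Data and the smoothing width are invented for the example.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Standard PNN decision: pick the class maximising prior * Parzen
    density, with Gaussian kernels centred on the training samples."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        k = np.exp(-np.sum((Xc - x) ** 2, axis=1) / (2 * sigma ** 2))
        scores[c] = k.mean() * (len(Xc) / len(X_train))  # density * prior
    return max(scores, key=scores.get)

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.2]])
y = np.array([0, 0, 1, 1])
print(pnn_predict(X, y, np.array([0.8, 0.9])))   # -> 1
```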
{"title":"A soft probabilistic neural network for implementation of Bayesian classifiers","authors":"M. Menhaj, F. Delgosha","doi":"10.1109/IJCNN.2001.939062","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939062","url":null,"abstract":"A classifier with the optimum decision, Bayesian classifier could be implemented with probabilistic neural networks (PNNs). The authors presented a new competitive learning algorithm for training such a network when all classes are completely separated. This paper generalizes our previous work to the case of overlapping categories. In our new perspective, the network is, in fact, made blind with respect to the overlapping training samples, so the new training algorithm is called soft PNN (or SPNN). The usefulness of SPNN has been proved by two 2-D classification problems. The simulation results highlight the merit of the proposed method.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129881856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Team optimization of cooperating systems: application to maximal area coverage
Pub Date: 2001-07-15. DOI: 10.1109/IJCNN.2001.938510
Jae-Byung Jung, M. El-Sharkawi, G. Anderson, R. Miyamoto, R. Marks, W. Fox, C. Eggen
In a team of cooperating systems, the composite effort of the team is significantly more important than any single player's individual performance. We consider the case wherein each player's performance is tuned to yield maximal team performance, for the specific case of maximal area coverage (MAC). The approach is first illustrated through the solution of MAC by a fixed number of deformable shapes. An application to sonar is then presented. Here, sonar control parameters determine a range-depth area of coverage. The coverage is also affected by known but uncontrollable environmental parameters. The problem is to determine K sets of sonar ping parameters that result in MAC. The forward problem of determining coverage given control and environmental parameters is computationally intensive. To facilitate real-time cooperative optimization among a number of such systems, the sonar input-output mapping is captured in a feedforward layered perceptron neural network.
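A toy version of the MAC selection step, under two assumptions of this sketch: a greedy strategy, and circular footprints standing in for the neural-network surrogate that maps ping parameters (plus fixed environmental parameters) to a range-depth coverage map.

```python
import numpy as np
from itertools import product

GRID = np.stack(np.meshgrid(np.linspace(0, 10, 60),
                            np.linspace(0, 10, 60)), axis=-1)

def footprint(params):
    """Coverage map for one parameter set; stands in for the NN surrogate."""
    cx, cy, r = params
    return np.linalg.norm(GRID - np.array([cx, cy]), axis=-1) <= r

def greedy_mac(cands, k):
    """Pick k parameter sets whose combined footprints cover the most area."""
    covered = np.zeros(GRID.shape[:2], dtype=bool)
    chosen, remaining = [], list(cands)
    for _ in range(k):
        best = max(remaining, key=lambda p: np.sum(covered | footprint(p)))
        remaining.remove(best)
        chosen.append(best)
        covered |= footprint(best)
    return chosen, covered.mean()

candidates = [(x, y, 2.0) for x, y in product(range(1, 10, 2), repeat=2)]
pings, frac = greedy_mac(candidates, k=3)
print(pings, f"coverage {frac:.0%}")
```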
{"title":"Team optimization of cooperating systems: application to maximal area coverage","authors":"Jae-Byung Jung, M. El-Sharkawi, G. Anderson, R. Miyamoto, R. Marks, W. Fox, C. Eggen","doi":"10.1109/IJCNN.2001.938510","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.938510","url":null,"abstract":"The composite effort of the system team, rather, is significantly more important than a single player's individual performance. We consider the case wherein each player's performance is tuned to result in maximal team performance for the specific case of maximal area coverage (MAC). The approach is first illustrated through solution of MAC by a fixed number of deformable shapes. An application to sonar is then presented. Here, sonar control parameters determine a range-depth area of coverage. The coverage is also affected by known but uncontrollable environmental parameters. The problem is to determine K sets of sonar ping parameters that result in MAC. The forward problem of determining coverage given control and environmental parameters is computationally intensive. To facilitate real time cooperative optimization among a number of such systems, the sonar input-output is captured in a feedforward layered perceptron neural network.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128748294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A silicon retina calculating high-precision spatial and temporal derivatives
Pub Date: 2001-07-15. DOI: 10.1109/IJCNN.2001.939017
S. Kameda, T. Yagi
A silicon retina was fabricated to emulate two fundamental types of response in the vertebrate retinal circuit: the sustained response and the transient response. The outputs of the silicon retina emulating the sustained response exhibit a Laplacian-of-Gaussian-like receptive field and therefore carry out smoothing and contrast enhancement on input images. The outputs emulating the transient response are obtained by subtracting successive images that have been smoothed by a resistive network, and are therefore sensitive to moving objects. The chip was applied to real-time image processing under indoor illumination.
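In software, the two channels can be approximated as below; this is a sketch with assumed filter parameters, using standard scipy filters in place of the analogue resistive network.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def sustained(frame, sigma=1.5):
    """Sustained channel: Laplacian-of-Gaussian-like receptive field
    (smoothing plus contrast enhancement)."""
    return -gaussian_laplace(frame.astype(float), sigma)

def transient(prev_frame, frame, sigma=1.5):
    """Transient channel: difference of successively smoothed frames,
    responding mainly to moving objects."""
    return (gaussian_filter(frame.astype(float), sigma)
            - gaussian_filter(prev_frame.astype(float), sigma))

rng = np.random.default_rng(0)
f0 = rng.random((64, 64))
f1 = np.roll(f0, 2, axis=1)            # a "moving" scene
print(sustained(f1).shape, np.abs(transient(f0, f1)).max())
```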
{"title":"A silicon retina calculating high-precision spatial and temporal derivatives","authors":"S. Kameda, T. Yagi","doi":"10.1109/IJCNN.2001.939017","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.939017","url":null,"abstract":"A silicon retina was fabricated to emulate two fundamental types of response in the vertebrate retinal circuit, i.e. the sustained response and the transient response. The outputs of the silicon retina emulating the sustained response exhibit a Laplacian-Gaussian-like receptive field and therefore carry out a smoothing and contrast enhancement on input images. The outputs emulating the transient response were obtained by subtracting subsequent images that were smoothed by a resistive network and therefore are sensitive to moving object. The chip was applied for a real time image processing in indoor illumination.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128631445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Methods for improving protein disorder prediction
Pub Date: 2001-07-15. DOI: 10.1109/IJCNN.2001.938802
S. Vucetic, P. Radivojac, Z. Obradovic, Celeste J. Brown, Dunker Ak
In this paper we propose several methods for improving the prediction of protein disorder. These include attribute construction from the protein sequence, choice of classifier, and postprocessing. While ensembles of neural networks achieved the higher accuracy, the difference compared to logistic regression classifiers was less than 1%. Bagging of neural networks, with moving averages over windows of length 61 used for attribute construction, combined with postprocessing that averages predictions over windows of length 81, resulted in 82.6% accuracy on a larger set of ordered and disordered proteins than used previously. This result is a significant improvement over the previous methodology, which gave an accuracy of 70.2%. Moreover, unlike the previous methodology, the modified attribute construction allows prediction at protein ends.
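The two windowing steps carry the concrete numbers in this abstract; the sketch below wires them together around a mock per-residue score that stands in for the bagged neural-network ensemble (an assumption for illustration). Shrinking the window near the termini is one plausible reading of how prediction at protein ends becomes possible.

```python
import numpy as np

def moving_average(x, w):
    """Symmetric moving average, shrinking the window near the sequence ends
    so the termini still receive values."""
    half = w // 2
    return np.array([x[max(0, i - half): i + half + 1].mean()
                     for i in range(len(x))])

rng = np.random.default_rng(0)
raw = rng.random(300)                      # a mock per-residue sequence feature
attrs = moving_average(raw, 61)            # attribute construction, window 61
scores = np.clip(attrs + rng.normal(0, 0.05, len(attrs)), 0, 1)  # mock ensemble
smoothed = moving_average(scores, 81)      # postprocessing, window 81
pred_disordered = smoothed > 0.5
print(pred_disordered[:10])
```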
{"title":"Methods for improving protein disorder prediction","authors":"S. Vucetic, P. Radivojac, Z. Obradovic, Celeste J. Brown, Dunker Ak","doi":"10.1109/IJCNN.2001.938802","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.938802","url":null,"abstract":"In this paper we propose several methods for improving prediction of protein disorder. These include attribute construction from protein sequence, choice of classifier and postprocessing. While ensembles of neural networks achieved the higher accuracy, the difference as compared to logistic regression classifiers was smaller than 1%. Bagging of neural networks, where moving averages over windows of length 61 were used for attribute construction, combined with postprocessing by averaging predictions over windows of length 81 resulted in 82.6% accuracy for a larger set of ordered and disordered proteins than used previously. This result was a significant improvement over previous methodology, which gave an accuracy of 70.2%. Moreover, unlike the previous methodology, the modified attribute construction allowed prediction at protein ends.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124657874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Use of clustering to improve performance in fuzzy gene expression analysis
Pub Date: 2001-07-15. DOI: 10.1109/IJCNN.2001.938806
R. Reynolds, H. Ressom, M. Musavi, C. Domnisoru
This paper proposes the use of fuzzy modeling algorithms to analyze gene expression data. Current algorithms apply all potential combinations of genes to a fuzzy model of gene interaction (for example, activator/inhibitor/target) and evaluate each combination on the basis of how well it fits the model. This approach is computationally intensive: the activator/inhibitor model has an algorithmic complexity of O(N^3), and more complex models (multiple activators/inhibitors) have even higher complexities. As a result, the algorithm takes a significant amount of time to analyze an entire genome. The purpose of this paper is to propose the use of clustering as a preprocessing step to reduce the total number of gene combinations analyzed. By first analyzing how well the cluster centers fit the model, the algorithm can ignore combinations of genes that are unlikely to fit. This allows the algorithm to run in a shorter amount of time with minimal effect on the results.
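A sketch of the screening idea, with k-means and a simple correlation score standing in as assumptions for the paper's clustering method and fuzzy-model fit evaluation: triplets are scored on cluster centres first, and only gene combinations from the most promising clusters need be tested.

```python
import numpy as np
from itertools import permutations

def fit_score(a, i, t):
    """Stand-in for the fuzzy model fit: higher when the target tracks the
    activator and opposes the inhibitor."""
    return np.corrcoef(a, t)[0, 1] - np.corrcoef(i, t)[0, 1]

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means returning centres and per-gene labels."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([X[lab == j].mean(0) if np.any(lab == j)
                            else centres[j] for j in range(k)])
    return centres, lab

rng = np.random.default_rng(1)
X = rng.random((60, 12))                   # 60 genes, 12 expression samples
centres, labels = kmeans(X, k=6)
# Screen the O(k^3) centre triplets instead of the O(N^3) gene triplets.
best = max(permutations(range(len(centres)), 3),
           key=lambda p: fit_score(*centres[list(p)]))
candidate_genes = [np.where(labels == c)[0] for c in best]
print("promising clusters:", best,
      "gene-level combinations left to test:",
      int(np.prod([len(g) for g in candidate_genes])))
```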
{"title":"Use of clustering to improve performance in fuzzy gene expression analysis","authors":"R. Reynolds, H. Ressom, M. Musavi, C. Domnisoru","doi":"10.1109/IJCNN.2001.938806","DOIUrl":"https://doi.org/10.1109/IJCNN.2001.938806","url":null,"abstract":"This paper proposes the use of fuzzy modeling algorithms to analyze gene expression data. Current algorithms apply all potential combinations of genes to a fuzzy model of gene interaction (for example, activator/inhibitor/target) and are evaluated on the basis of how well they fit the model. However, the algorithm is computationally intensive; the activator/inhibitor model has an algorithmic complexity of O(N/sup 3/), while more complex models (multiple activators/inhibitors) have even higher complexities. As a result, the algorithm takes a significant amount of time to analyze an entire genome. The purpose of this paper is to propose the use of clustering as a preprocessing method to reduce the total number of gene combinations analyzed. By first analyzing how well cluster centers fit the model, the algorithm can ignore combinations of genes that are unlikely to fit. This will allow the algorithm to run in a shorter amount of time with minimal effect on the results.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129900807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}