Chaotic analog associative memory
Hisao Imai, Y. Osana, M. Hagiwara
Pub Date: 2002-03-01 | DOI: 10.1109/IJCNN.2001.939522
We propose a chaotic analog associative memory (CAAM) that can handle associations of analog patterns, including common patterns. The proposed model has the following features: (1) it can deal with associations of analog patterns; (2) it can deal with one-to-many associations; (3) it is robust to noisy input and neuron damage.

Cluster-weighted modeling with multiclusters
L. Feldkamp, D. Prokhorov, T. Feldkamp
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938419
Cluster-weighted modeling (CWM) was proposed by Gershenfeld (1999) for density estimation in joint input-output space. In the base CWM algorithm there is a single output cluster for each input cluster. We extend the base CWM to the structure in which multiple output clusters are associated with the same input cluster. We call this CWM with multiclusters and illustrate it with an example.

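The multicluster structure described above can be illustrated with a small numerical sketch: the joint density attaches several output clusters to each input cluster, and the predictive mean averages the output-cluster means by responsibility. The parameter layout and values below are hypothetical (the abstract gives no formulas), and EM fitting is omitted.

```python
import numpy as np

def gauss(z, mu, var):
    """Univariate Gaussian density."""
    return np.exp(-(z - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def predict(x, clusters):
    """Predictive mean of a multicluster CWM sketch (scalar x and y).

    clusters is a list of (pi, mu_x, var_x, outs) tuples, where outs is a
    list of (rho, mu_y, var_y) output clusters attached to that input
    cluster. All parameters here are illustrative, not fitted.
    """
    num = den = 0.0
    for pi, mu_x, var_x, outs in clusters:
        w = pi * gauss(x, mu_x, var_x)   # responsibility of the input cluster
        for rho, mu_y, var_y in outs:
            num += w * rho * mu_y        # each output cluster contributes its mean
            den += w * rho
    return num / den
```

With one input cluster carrying two equally weighted output clusters (means 0 and 2), the prediction is their average, 1.0; with two well-separated input clusters, the prediction near either center approaches that cluster's output mean.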
A new validation index for determining the number of clusters in a data set
Hao-jun Sun, Shengrui Wang, Q. Jiang
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938445
Cluster analysis plays an important role in solving practical problems in domains such as data mining in large databases. In this paper, we are interested in fuzzy c-means (FCM) based algorithms. Our main purpose is to design an effective validity function that measures the quality of a clustering result and detects the best number of clusters for a given data set in practical applications. After a review of the relevant literature, we present the new validity function. Experimental results and comparisons illustrate its performance.

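The abstract does not reproduce the proposed validity function, but a classic FCM validity index such as Xie-Beni illustrates what such a function measures: compactness of the fuzzy partition traded off against separation of the cluster centers. The sketch below is for orientation only; it is not the paper's index.

```python
import numpy as np

def xie_beni(X, centers, U, m=2.0):
    """Xie-Beni validity index for a fuzzy c-means partition.

    X: (n, d) data, centers: (c, d) cluster centers, U: (c, n) membership
    matrix, m: fuzzifier. Lower values indicate a more compact and
    better-separated clustering, so one picks the number of clusters c
    that minimizes the index.
    """
    # Squared distances from every center to every point: shape (c, n).
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(-1)
    compactness = ((U ** m) * d2).sum()
    # Separation: smallest squared distance between any two centers.
    sep = min(((centers[i] - centers[j]) ** 2).sum()
              for i in range(len(centers)) for j in range(len(centers)) if i != j)
    return compactness / (len(X) * sep)
```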
Performance analysis of neural CDMA multiuser detector
Toshiyuki TANAKA
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938825
We analyze the performance of neural code-division multiple-access (CDMA) multiuser detectors. A formal correspondence between the CDMA multiuser detection problem and recurrent neural networks, such as the Hopfield network and the Boltzmann machine, is established; based on this correspondence, a replica analysis of the bit-error rate of the neural multiuser detectors is presented. The detection dynamics of the neural multiuser detectors are also analyzed using statistical neurodynamics.

Classifiability based pruning of decision trees
M. Dong, R. Kothari
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938424
Decision tree pruning is useful for improving the generalization performance of decision trees. As opposed to explicit pruning, in which nodes are removed from fully constructed decision trees, implicit pruning uses a stopping criterion to label a node as a leaf when splitting it further would not yield an acceptable improvement in performance. The stopping criterion, often also called the pre-pruning criterion, is typically based on the pattern instances available at the node (i.e., local information). We propose a new pre-pruning criterion based on a classifiability measure. The proposed criterion considers not only the number of pattern instances of different classes at a node (node purity) but also the spatial distribution of these instances, to estimate the effect of further splitting the node. The algorithm and some experimental results are presented.

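The paper's classifiability measure is not specified in the abstract. As a hedged stand-in, a nearest-neighbour label-agreement rate captures the same idea: both the class proportions and the spatial mixing of classes at a node should drive the stopping decision. The measure and threshold below are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def nn_classifiability(X, y):
    """Fraction of points whose nearest neighbour shares their label.

    A simple proxy for a classifiability measure: unlike pure node
    purity, it is sensitive to the spatial distribution of the classes.
    (Stand-in only; the paper's actual measure is not given here.)
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude each point itself
    nn = d2.argmin(axis=1)                # index of each point's nearest neighbour
    return float((y[nn] == y).mean())

def should_stop(X, y, threshold=0.95):
    # Pre-pruning rule: declare a leaf when the node is already highly
    # classifiable, so further splitting promises little improvement.
    return nn_classifiability(X, y) >= threshold
```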
Improved Hopfield networks by training with noisy data
F. Clift, T. Martinez
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939521
An approach to training a generalized Hopfield network is developed and evaluated. Both the weight-symmetry constraint and the zero self-connection constraint are removed from standard Hopfield networks. Training is accomplished with backpropagation through time, using noisy versions of the memorized patterns; training in this way is referred to as noisy associative training (NAT). Performance of NAT is evaluated on both random and correlated data, with a large number of training runs for each experiment. The data sets used include uniformly distributed random data and several data sets adapted from the U.C. Irvine Machine Learning Repository. Results show that for random patterns, Hopfield networks trained with NAT have an average overall recall accuracy 6.1 times greater than networks produced with either Hebbian or pseudo-inverse training. These networks also have 13% fewer spurious memories on average than networks trained with pseudo-inverse or Hebbian training. Typically, networks memorizing over 2N patterns (where N is the number of nodes in the network) are produced. Performance on correlated data shows an even greater improvement over networks produced with either Hebbian or pseudo-inverse training: an average of 27.8 times greater recall accuracy, with 14% fewer spurious memories.

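Full backpropagation through time over a generalized Hopfield network is beyond a short sketch, but the core of NAT, training an unconstrained (asymmetric, nonzero-diagonal) weight matrix on noisy versions of the memorized patterns, can be illustrated with a simplified one-step variant. All hyperparameters here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_nat(patterns, noise_flips=1, epochs=500, lr=0.05):
    """One-step sketch of noisy associative training (NAT).

    Trains an unconstrained weight matrix W (no symmetry, no zero
    diagonal) by gradient descent so that a single tanh update maps
    bit-flipped versions of each stored +/-1 pattern back to the clean
    pattern. (Simplified stand-in for the paper's BPTT training.)
    """
    P = np.array(patterns, dtype=float)
    n = P.shape[1]
    W = rng.normal(scale=0.1, size=(n, n))
    for _ in range(epochs):
        for p in P:
            x = p.copy()
            flip = rng.choice(n, size=noise_flips, replace=False)
            x[flip] *= -1                 # noisy version of the stored pattern
            y = np.tanh(W @ x)
            # Gradient of 0.5 * ||y - p||^2 with respect to W.
            W -= lr * np.outer((y - p) * (1 - y ** 2), x)
    return W

def recall(W, x, steps=5):
    """Iterated thresholded update, as in Hopfield recall."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1.0, -1.0)
    return x
```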
Specifications and FPGA implementation of a systolic Hopfield-type associative memory
I.Z. Mihu, R. Brad, M. Breazu
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939022
Neural networks are nonlinear static or dynamical systems that learn to solve problems from examples. Most learning algorithms require substantial computing power and could therefore benefit from fast dedicated hardware. One of the most common architectures for such special-purpose hardware is the systolic array. The design and implementation of different neural network architectures on systolic arrays can be complex, however. The paper shows how the Hopfield neural network can be mapped onto a 2-D systolic array and presents an FPGA implementation of the proposed array.

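The operation such a systolic array accelerates is the synchronous Hopfield iteration: a matrix-vector multiply-accumulate followed by a threshold. A plain software reference of that iteration, with standard Hebbian storage (not taken from the paper), might look like this:

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian weight matrix with zero diagonal for +/-1 patterns."""
    P = np.array(patterns, dtype=float)   # rows are stored patterns
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, x, steps=10):
    # The synchronous update x <- sign(W x) is exactly the
    # multiply-accumulate pipeline a 2-D systolic array parallelizes:
    # each processing element performs one W[i, j] * x[j] product.
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1.0, -1.0)
    return x
```

Starting from a one-bit-corrupted pattern, a few updates restore the stored memory.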
Agent-environment approach to the simulation of Turing machines by neural networks
W.R. de Oliveira, M.C.P. de Souto, Teresa B Ludermir
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938994
We propose a way to simulate Turing machines (TMs) by neural networks (NNs) which is in agreement with the correct interpretation of Turing's analysis of computation; compatible with the current approaches to analyze cognition as an interactive agent-environment process; and physically realizable since it does not use connection weights with unbounded precision. We give a full description of an implementation of a universal TM into a recurrent sigmoid NN focusing on the TM finite state control, leaving the tape, an infinite resource, as an external non-intrinsic feature.

Effects of initialization on structure formation and generalization of neural networks
H. Shiratsuchi, H. Gotanda, K. Inoue, K. Kumamaru
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938787
In this paper, we propose an initialization method for multilayer neural networks (NNs) employing structure learning with forgetting. The proposed initialization consists of two steps: the weights of hidden units are initialized so that their hyperplanes pass through the center of the input pattern set, and those of output units are initialized to zero. Several simulations were performed to study how the initialization affects the structure-forming process of the NN. The results confirm that the initialization yields a better network structure and higher generalization ability.

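The two-step initialization described above is easy to state concretely: draw hidden weights, shift each bias so the unit's hyperplane passes through the centroid of the input patterns, and zero the output layer. The random draw for the hidden weights is an assumption; the abstract does not specify it.

```python
import numpy as np

rng = np.random.default_rng(42)

def init_network(X, n_hidden, n_out):
    """Two-step initialization sketch for a one-hidden-layer network.

    X: (n, d) input patterns. Each hidden bias is chosen so that the
    unit's hyperplane w.x + b = 0 passes through the centroid of X;
    the output layer starts at zero, as in the paper's scheme.
    """
    n_in = X.shape[1]
    c = X.mean(axis=0)                    # center of the input pattern set
    W_h = rng.normal(size=(n_hidden, n_in))   # assumed: Gaussian draw
    b_h = -W_h @ c                        # w.c + b = 0 for every hidden unit
    W_o = np.zeros((n_out, n_hidden))     # output weights initialized to zero
    b_o = np.zeros(n_out)
    return W_h, b_h, W_o, b_o
```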
Design of new biologically active molecules by recursive neural networks
A. Micheli, A. Sperduti, A. Starita, A. Bianucci
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938805
In this paper, we address the design of novel molecules belonging to the class of adenine analogues (8-azaadenine derivatives), which are of widespread potential therapeutic interest, from the new perspective offered by recursive neural networks for quantitative structure-activity relationship analysis. The generality and flexibility of the method used to process structured domains allow us to propose new solutions to the representation problem for this set of compounds and to obtain good prediction results, as confirmed by comparison with the values obtained a posteriori after synthesis and biological assays of the designed molecules.
