Pub Date : 1999-07-10  DOI: 10.1109/IJCNN.1999.836208
Title: Analysis and prediction of cranberry growth with dynamical neural network models
C. H. Chen, Bichuan Shen
Cranberry plants are very sensitive to weather and other conditions. In this paper, the condition of cranberry growth is analyzed through PCA (principal component analysis) of the minimum cranberry spectral match measurement data. Three neural network models are applied to one-month-ahead prediction. The simulation results show the high-performance modeling ability of these neural networks. The reliable prediction provided by the dynamic neural networks will be useful for farmers to monitor and control the cranberry growth process.
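The abstract does not give the spectral data or the three network architectures; a minimal illustrative sketch of the general PCA-then-predict pipeline (synthetic data, and scikit-learn's MLPRegressor standing in for the paper's dynamic networks) might look like this:

    # Illustrative sketch only: PCA feature reduction followed by a one-step-ahead
    # neural predictor. The spectral data, the three network models, and the
    # one-month horizon from the paper are replaced by synthetic stand-ins.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(120, 40))      # 120 monthly measurements, 40 spectral bands (made up)

    pca = PCA(n_components=3)                 # keep the leading principal components
    scores = pca.fit_transform(spectra)

    X, y = scores[:-1], scores[1:, 0]         # predict next month's first component
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit(X[:80], y[:80])
    print("test MSE:", np.mean((model.predict(X[80:]) - y[80:]) ** 2))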
{"title":"Analysis and prediction of cranberry growth with dynamical neural network models","authors":"C. H. Chen, Bichuan Shen","doi":"10.1109/IJCNN.1999.836208","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.836208","url":null,"abstract":"Cranberry plants are very sensitive to weather and other conditions. In this paper, the condition of cranberry growth is analyzed through PCA (principle component analysis) of the minimum cranberry spectral match measurement data. Three neural network models are applied to the one-month ahead prediction. The simulation results show the high performance modeling ability of these neural networks. The reliable prediction provided by the dynamic neural networks will be useful for the farmers to monitor and control the cranberry growth process.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115234532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1999-07-10  DOI: 10.1109/IJCNN.1999.831061
Title: On the conditions of outer-supervised feedforward neural networks for null cost learning
De-shuang Huang
This paper investigates, from the viewpoint of linear algebra, the local minima of the least-squares error cost functions defined at the outputs of outer-supervised feedforward neural networks (FNN). For a specific case, we also show that spacedly collinear samples (possibly output by the final hidden layer) can easily be separated with a null-cost error function even if the condition M ≥ N is not satisfied. In light of these conclusions, we give a general method for designing a suitable network architecture to solve a specific problem.
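The abstract does not restate the cost function or define M and N; assuming the usual reading (M hidden units, N training samples, an assumption not confirmed by the abstract), the quantity under study is the standard sum-of-squares error at the network outputs,

    E(\mathbf{W}) \;=\; \frac{1}{2}\sum_{p=1}^{N}\bigl\|\mathbf{y}_p(\mathbf{W})-\mathbf{t}_p\bigr\|^{2},

where y_p(W) is the network output for training pattern p and t_p the corresponding target; "null-cost learning" means reaching E(W) = 0.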
{"title":"On the conditions of outer-supervised feedforward neural networks for null cost learning","authors":"De-shuang Huang","doi":"10.1109/IJCNN.1999.831061","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831061","url":null,"abstract":"This paper investigates, from the viewpoint of linear algebra, the local minima of least square error cost functions defined at the outputs of outer-supervised feedforward neural networks (FNN). For a specific case, we also show that those spacedly colinear samples (probably output by the final hidden layer) will be easily separated with null-cost error function even if the condition M/spl ges/N is not satisfied. In the light of these conclusions we shall give a general method for designing a suitable architecture network to solve a specific problem.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115413075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1999-07-10  DOI: 10.1109/IJCNN.1999.831583
Title: Design and analysis of neural networks for systems optimization
I. Silva, M. E. Bordon, A. Souza
Artificial neural networks are dynamic systems consisting of highly interconnected, parallel, nonlinear processing elements that have been shown to be extremely effective in computation. This paper presents an artificial neural network architecture that can be used to solve several classes of optimization problems. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. The problems that can be treated by the proposed approach include combinatorial optimization and dynamic programming problems.
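The abstract does not reproduce how the internal parameters are computed; as a hedged sketch, the valid-subspace idea is often realized by alternating a confinement step (projection onto the affine subspace of feasible states) with a small step on the optimization energy. A toy numpy version, with invented matrices that are not the paper's parameters, might be:

    # Hedged toy sketch of a modified Hopfield iteration with a valid-subspace
    # confinement step. T_val, s, T_opt, i_opt are placeholders, not the paper's values.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 8
    ones = np.ones((n, 1))
    T_val = np.eye(n) - ones @ ones.T / n     # removes the mean component of v
    s = ones[:, 0] * 0.5                      # shift so that sum(v) = n/2 (toy validity constraint)
    T_opt = rng.normal(size=(n, n))
    T_opt = -(T_opt + T_opt.T) / 2            # symmetric weights encoding a made-up cost
    i_opt = rng.normal(size=n)

    v = rng.uniform(size=n)
    for _ in range(200):
        v = T_val @ v + s                     # confinement to the valid subspace
        v = v + 0.01 * (T_opt @ v + i_opt)    # small step on the optimization term
        v = np.clip(v, 0.0, 1.0)              # keep activations in [0, 1]
    print(v)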
{"title":"Design and analysis of neural networks for systems optimization","authors":"I. Silva, M. E. Bordon, A. Souza","doi":"10.1109/IJCNN.1999.831583","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831583","url":null,"abstract":"Artificial neural networks are dynamic systems consisting of highly interconnected and parallel nonlinear processing elements that are shown to be extremely effective in computation. This paper presents an architecture of artificial neural networks that can be used to solve several classes of optimization problems. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. Among the problems that can be treated by the proposed approach include combinational optimization problems and dynamic programming problems.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115430109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1999-07-10  DOI: 10.1109/IJCNN.1999.831462
Title: Neural networks for consciousness: the central representation
John G. Taylor
A framework is developed, and criteria thereby deduced, for a neural site to be regarded as essential for the creation of consciousness. Various sites in the brain are considered, but only very few are found to satisfy all of the criteria. The framework proposed here is based on the notion of the central representation, regarded as being composed of information deemed intrinsic to awareness. In particular, the central representation is suggested to lie in the inferior parietal lobes. Implications of this identification are discussed.
{"title":"Neural networks for consciousness: the central representation","authors":"John G. Taylor","doi":"10.1109/IJCNN.1999.831462","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831462","url":null,"abstract":"A framework is developed, and criteria thereby deduced, for a neural site to be regarded as essential for the creation of consciousness. Various sites in the brain are considered but only very few are found to satisfy all of the criteria. The framework proposed here is barred on the notion of the central representation regarded as being composed of information deemed intrinsic to awareness. In particular, the central representation is suggested as being in the inferior parietal lobes. Implications of this identification are discussed.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115702196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1999-07-10  DOI: 10.1109/IJCNN.1999.831586
Title: Self-trapping in an attractor neural network with nearest neighbor synapses mimics full connectivity
R. Pavloski, M. Karimi
A means of providing the feedback necessary for an associative memory is suggested by self-trapping, the development of localization phenomena and order in coupled physical systems. Following the lead of Hopfield (1982, 1984), who exploited the formal analogy of a fully connected ANN to an infinite-range interaction Ising model, we carry through a similar development to demonstrate that self-trapping networks (STNs) with only near-neighbor synapses develop attractor states through localization of a self-trapping input. The attractor states of the STN are the stored memories of this system and are analogous to the magnetization developed in a self-trapping 1D Ising system. Post-synaptic potentials for each stored memory become trapped at non-zero values, and a sparsely connected network evolves to the corresponding state. Both analytic and computational studies of the STN show that this model mimics a fully connected ANN.
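A minimal caricature of the behavior described above (not the authors' STN; the couplings, bias strength, and update rule here are invented) is a 1D chain whose units feel only their two neighbours plus a persistent input aligned with a stored pattern:

    # Toy 1D chain with nearest-neighbour, pattern-dependent couplings and a
    # persistent "self-trapping" input. Parameters are illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50
    pattern = rng.choice([-1.0, 1.0], size=n)   # one stored memory
    J = 1.0                                     # nearest-neighbour coupling strength
    h = 0.4 * pattern                           # persistent input biased toward the pattern

    s = rng.choice([-1.0, 1.0], size=n)         # random initial state
    for _ in range(100):
        for i in range(n):                      # asynchronous, zero-temperature updates
            left, right = (i - 1) % n, (i + 1) % n
            field = J * pattern[i] * (pattern[left] * s[left] + pattern[right] * s[right]) + h[i]
            s[i] = 1.0 if field >= 0 else -1.0

    print("overlap with stored pattern:", float(np.mean(s * pattern)))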
{"title":"Self-trapping in an attractor neural network with nearest neighbor synapses mimics full connectivity","authors":"R. Pavloski, M. Karimi","doi":"10.1109/IJCNN.1999.831586","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831586","url":null,"abstract":"A means of providing the feedback necessary for an associative memory is suggested by self-trapping, the development of localization phenomena and order in coupled physical systems. Following the lead of Hopfield (1982, 1984) who exploited the formal analogy of a fully-connected ANN to an infinite ranged interaction Ising model, we have carried through a similar development to demonstrate that self-trapping networks (STNs) with only near-neighbor synapses develop attractor states through localization of a self-trapping input. The attractor states of the STN are the stored memories of this system, and are analogous to the magnetization developed in a self-trapping 1D Ising system. Post-synaptic potentials for each stored memory become trapped at non-zero valves and a sparsely-connected network evolves to the corresponding state. Both analytic and computational studies of the STN show that this model mimics a fully-connected ANN.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115745958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1999-07-10  DOI: 10.1109/IJCNN.1999.830876
Title: Generation of explicit knowledge from empirical data through pruning of trainable neural networks
Alexander N Gorban, E. M. Mirkes, V. G. Tsaregorodtsev
This paper presents a generalized technology for the extraction of explicit knowledge from data. The main ideas are: 1) maximal reduction of network complexity (not only removal of neurons or synapses, but removal of all unnecessary elements and signals, and reduction of the complexity of the elements themselves); 2) use of an adjustable and flexible pruning process (the user should be able to prune the network in their own way in order to reach a network structure suited to extracting rules of the desired type and form); and 3) extraction of rules not in a predetermined form but in any desired form. Considerations concerning the network architecture, the training process, and the applicability of currently developed pruning techniques and rule extraction algorithms are discussed. This technology, which we have been developing for more than 10 years, has allowed us to create dozens of knowledge-based expert systems.
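The abstract deliberately leaves the concrete pruning criterion open (point 2 above); as one hedged example of the general "remove everything unnecessary" step, magnitude-based removal of the weakest synapses of a trained layer could be written as follows. This is only one possible criterion, not the paper's adjustable procedure.

    # Hedged example of one simple pruning criterion (smallest-magnitude weights).
    import numpy as np

    def prune_smallest(weights: np.ndarray, fraction: float) -> np.ndarray:
        """Zero out the given fraction of weights with the smallest magnitude."""
        flat = np.abs(weights).ravel()
        k = int(fraction * flat.size)
        if k == 0:
            return weights.copy()
        threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
        pruned = weights.copy()
        pruned[np.abs(pruned) <= threshold] = 0.0
        return pruned

    W = np.random.default_rng(3).normal(size=(4, 6))
    print(prune_smallest(W, 0.5))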
{"title":"Generation of explicit knowledge from empirical data through pruning of trainable neural networks","authors":"Alexander N Gorban, E. M. Mirkes, V. G. Tsaregorodtsev","doi":"10.1109/IJCNN.1999.830876","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.830876","url":null,"abstract":"This paper presents a generalized technology of extraction of explicit knowledge from data. The main ideas are: 1) maximal reduction of network complexity (not only removal of neurons or synapses, but removal all the unnecessary elements and signals and reduction of the complexity of elements); 2) using of adjustable and flexible pruning process (the user should have a possibility to prune network on his own way in order to achieve a desired network structure for the purpose of extraction of rules of desired type and form); and 3) extraction of rules not in predetermined but any desired form. Some considerations and notes about network architecture and training process and applicability of currently developed pruning techniques and rule extraction algorithms are discussed. This technology, being developed by us for more than 10 years, allowed us to create dozens of knowledge-based expert systems.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116658130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1999-07-10  DOI: 10.1109/IJCNN.1999.831162
Title: The α-EM learning and its cookbook: from mixture-of-expert neural networks to movie random field
Y. Matsuyama, T. Ikeda, Tomoaki Tanaka, S. Furukawa, N. Takeda, Takeshi Niimoto
The α-EM algorithm is a proper extension of the traditional log-EM algorithm. This new algorithm is based on the α-logarithm, while the traditional one uses the ordinary logarithm; the case α = -1 corresponds to the log-EM algorithm. The speed advantage of the α-EM algorithm has already been reported for learning problems; this paper shows that closed-form E-steps can be obtained for a wide class of problems and that there is a set of common techniques. That is, a cookbook for the α-EM algorithm is presented. The recipes include unsupervised neural networks, supervised neural networks with various gating, hidden Markov models, and Markov random fields for moving-object segmentation. Reasoning for the speedup is also given.
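The abstract does not restate the α-logarithm; in the form commonly used in the α-EM literature (quoted here for reference, not from the paper itself), it is

    L^{(\alpha)}(x) \;=\; \frac{2}{1+\alpha}\left(x^{\frac{1+\alpha}{2}} - 1\right), \qquad \alpha \neq -1,

with L^{(α)}(x) → log x as α → -1, which is how the traditional log-EM algorithm is recovered as the special case α = -1.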
{"title":"The /spl alpha/-EM learning and its cookbook: from mixture-of-expert neural networks to movie random field","authors":"Y. Matsuyama, T. Ikeda, Tomoaki Tanaka, S. Furukawa, N. Takeda, Takeshi Niimoto","doi":"10.1109/IJCNN.1999.831162","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831162","url":null,"abstract":"The /spl alpha/-EM algorithm is a proper extension of the traditional log-EM algorithm. This new algorithm is based on the /spl alpha/-logarithm, while the traditional one uses the logarithm. The case of /spl alpha/=-1 corresponds to the log-EM algorithm. Since the speed of the /spl alpha/-EM algorithm was reported for learning problems, this paper shows that closed-form E-steps can be obtained for a wide class of problems. There is a set of common techniques. That is, a cookbooks for the /spl alpha/-EM algorithm is presented. The recipes include unsupervised neural networks, supervised neural networks for various gating, hidden Markov models and Markov random fields for moving object segmentation. Reasoning for the speedup is also given.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116984822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1999-07-10  DOI: 10.1109/IJCNN.1999.832671
Title: Time topology for the self-organizing map
P. Somervuo
Time information in the input data is used to evaluate how well the self-organizing map stores and represents temporal feature-vector sequences. A new node neighborhood is defined for the map which takes the temporal order of the input samples into account. A connection is created between the two map nodes which are the best-matching units for two successive input samples in time. This results in a time-topology-preserving network.
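The linking step described above is simple enough to state directly; a minimal sketch (assuming a trained map given as a codebook array and treating the connections as undirected, neither of which the abstract specifies) is:

    # Minimal sketch of the temporal-linking step: connect the best-matching units
    # of successive input samples. The codebook and data here are synthetic stand-ins.
    import numpy as np

    rng = np.random.default_rng(4)
    codebook = rng.normal(size=(25, 3))       # 25 map nodes, 3-dimensional features
    sequence = rng.normal(size=(100, 3))      # temporal feature-vector sequence

    def bmu(x: np.ndarray) -> int:
        """Index of the best-matching unit for sample x."""
        return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

    edges = set()
    bmus = [bmu(x) for x in sequence]
    for a, b in zip(bmus[:-1], bmus[1:]):
        if a != b:
            edges.add((min(a, b), max(a, b))) # undirected temporal connection

    print(len(edges), "temporal connections")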
{"title":"Time topology for the self-organizing map","authors":"P. Somervuo","doi":"10.1109/IJCNN.1999.832671","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.832671","url":null,"abstract":"Time information of the input data is used for evaluating the goodness of the self-organizing map to store and represent temporal feature vector sequences. A new node neighborhood is defined for the map which takes the temporal order of the input samples into account. A connection is created between those two map modes which are the best-matching units for two successive input samples in time. This results in the time-topology preserving network.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":" 17","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120943266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1999-07-10  DOI: 10.1109/IJCNN.1999.831531
Title: Approximation of a function and its derivatives in feedforward neural networks
E. Basson, A. Engelbrecht
A new learning algorithm is presented that learns a function and its first-order derivatives. Derivatives are learned together with the function using gradient descent. Preliminary results show that the algorithm accurately approximates the derivatives.
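The abstract gives no architecture or update equations; a hedged numpy sketch of the idea (a one-hidden-layer network trained on both the target values and their first derivatives, with made-up data) could be the following. The paper derives gradient-descent updates; here, to keep the sketch short, the gradient of the combined loss is taken numerically rather than by backpropagation.

    # Hedged sketch: fit f(x) = sin(x) and its derivative cos(x) simultaneously
    # with a tiny one-hidden-layer tanh network.
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.linspace(-np.pi, np.pi, 64)
    t, dt = np.sin(x), np.cos(x)                        # function and derivative targets

    H = 12
    params = rng.normal(scale=0.5, size=3 * H + 1)      # w1, b1, w2, b2 packed in one vector

    def unpack(p):
        return p[:H], p[H:2*H], p[2*H:3*H], p[3*H]

    def forward(p, x):
        w1, b1, w2, b2 = unpack(p)
        a = np.tanh(np.outer(x, w1) + b1)               # hidden activations, shape (n, H)
        y = a @ w2 + b2                                 # network output
        dy = ((1 - a**2) * w1) @ w2                     # analytic dy/dx of the network
        return y, dy

    def loss(p):
        y, dy = forward(p, x)
        return np.mean((y - t)**2) + np.mean((dy - dt)**2)

    eps, lr = 1e-5, 0.05
    for _ in range(2000):
        grad = np.array([(loss(params + eps*e) - loss(params - eps*e)) / (2*eps)
                         for e in np.eye(params.size)])
        params -= lr * grad

    y, dy = forward(params, x)
    print("function MSE:", np.mean((y - t)**2), "derivative MSE:", np.mean((dy - dt)**2))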
{"title":"Approximation of a function and its derivatives in feedforward neural networks","authors":"E. Basson, A. Engelbrecht","doi":"10.1109/IJCNN.1999.831531","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831531","url":null,"abstract":"A new learning algorithm is presented that learns a function and its first-order derivatives. Derivatives are learned together with the function using gradient descent. Preliminary results show that the algorithm accurately approximates the derivatives.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"19 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120993190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1999-07-10  DOI: 10.1109/IJCNN.1999.830809
Title: A neural network endowed with symbolic processing ability
D. Vogiatzis, A. Stafylopatis
We propose a neural network method for the generation of symbolic expressions using reinforcement learning. According to the proposed method, a human decides on the kind and number of primitive functions which, with the appropriate composition (in the mathematical sense), can represent a mapping between two domains. The appropriate composition is achieved by an agent which tries many compositions and receives a reward depending on the quality of the composed function.
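A hedged toy version of the search loop described above follows; the primitive set, the target mapping, and the random proposal strategy are all invented here, and the paper's agent is a neural network trained by reinforcement learning rather than this random-search stand-in. The sketch only illustrates the reward signal (negative approximation error of the composed function).

    # Toy stand-in: propose compositions of human-chosen primitives and keep the
    # one with the highest reward.
    import numpy as np

    rng = np.random.default_rng(6)
    primitives = {"sin": np.sin, "cos": np.cos, "sq": lambda v: v**2, "neg": lambda v: -v}
    x = np.linspace(-1.0, 1.0, 200)
    target = np.cos(x)**2                       # mapping to be represented

    def compose(names):
        def f(v):
            for name in names:                  # apply primitives left to right
                v = primitives[name](v)
            return v
        return f

    best_names, best_reward = None, -np.inf
    for _ in range(500):
        names = list(rng.choice(list(primitives), size=int(rng.integers(1, 4))))
        reward = -np.mean((compose(names)(x) - target)**2)
        if reward > best_reward:
            best_names, best_reward = names, reward

    print(best_names, best_reward)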
{"title":"A neural network endowed with symbolic processing ability","authors":"D. Vogiatzis, A. Stafylopatis","doi":"10.1109/IJCNN.1999.830809","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.830809","url":null,"abstract":"We propose a neural network method for the generation of symbolic expressions using reinforcement learning. According to the proposed method, a human decides on the kind and number of primitive functions which, with the appropriate composition (in the mathematical sense), can represent a mapping between two domains. The appropriate composition is achieved by an agent which tries many compositions and receives a reward depending on the quality of the composed function.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127486303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}