Title: A hippocampal CA3 model for temporal sequences
Authors: M. Ito, S. Miyake, S. Inawashiro, J. Kuroiwa, Y. Sawada
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.845693
Abstract: We propose a pulse-neuron model with transmission delays for the field CA3 of the hippocampus, together with a new learning rule. We use temporal sequences of patterns consisting of trains of bursts. Simulations show that the model successfully learns and recalls the temporal sequences, and that the new learning rule is far more effective than the Hebbian learning rule for learning temporal sequences of patterns.
Title: Design of a multivariable neural-net based PID controller
Authors: T. Yamamoto, T. Oki, S. L. Shah
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.844681
Abstract: It is well known that most industrial processes are multivariate in nature, yet PID controllers are widely used in a multiloop framework to control such interacting systems. This paper proposes a design scheme for a neural-net-based controller with a PID structure for such multivariable systems. The proposed controller consists of a pre-compensator, designed as a static gain matrix that compensates for low-frequency interaction, and diagonally placed PID controllers whose gains are tuned by a neural network.
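The PID structure underlying this design can be sketched in a few lines. This is a minimal single-loop discrete PID law, not the authors' multivariable scheme: the static pre-compensator and the neural-network gain tuning are omitted, and the gains and the first-order plant below are hypothetical illustrative values.

```python
# Minimal sketch of a discrete PID control law (single loop, fixed gains).
# The neural-network tuning of kp/ki/kd described in the paper is omitted.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # integral term
        derivative = (error - self.prev_error) / self.dt  # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a hypothetical first-order plant y' = -y + u toward setpoint 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y = 0.0
for _ in range(2000):
    u = pid.step(1.0, y)
    y += (-y + u) * 0.01   # Euler step of the plant dynamics
```

The integral term removes the steady-state error; in the paper's multivariable setting one such loop sits on each diagonal channel after the pre-compensator has decoupled the low-frequency interaction.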
Title: Why a window-based learning algorithm using an Effective Boltzmann machine is superior to the original BM learning algorithm
Authors: M. Bellgard, R. Taplin
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.844656
Abstract: Many pattern recognition problems can be cast as problems solvable with a window-based artificial neural network (ANN). The paper details a window-based learning algorithm using the Effective Boltzmann Machine (EBM). The EBM, which is based on the Boltzmann Machine (BM), has previously been shown to perform pattern completion and to provide an energy measure for completions of any length. The paper describes why the EBM itself is a highly suitable architecture for learning window-based problems. A walk-through of a simple example, a mathematical derivation, and simulation experiments show that the EBM outperforms a window-based BM on quality of learning, speed of learning, and the generalisations produced by the network.
Title: Effect of altering the Gaussian function receptive field width in RBF neural networks on aluminium fluoride prediction in industrial reduction cells
Authors: V. Karri, F. Frost
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.843969
Abstract: Artificial neural networks are increasingly useful computational models consisting of highly interconnected parallel processing units. In particular, radial basis function (RBF) networks are emerging as important computational models for a broad range of applications. The Gaussian function used in RBF networks has an adjustable parameter, σ, which specifies the width of the receptive field of the hidden-layer neurons. σ is commonly selected using heuristic techniques, and, as shown in this paper, its value plays an important role in the predictive capability of the RBF network. Setting σ to the standard deviation of the training-pattern output vector is shown to achieve the minimum RMS error otherwise obtained with an optimum σ derived heuristically. The aluminium fluoride (AlF3) content of an industrial reduction cell for aluminium production is well predicted by an RBF network whose Gaussian σ is set to the standard deviation of the training-pattern output vector.
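The width rule the paper advocates is easy to sketch. Below is a hedged toy version, assuming one Gaussian unit per training sample (centres at the inputs) and a normalised-activation readout rather than the least-squares output fit a full RBF implementation would use; the training data are hypothetical.

```python
import math
import statistics

# Sketch: Gaussian RBF prediction with the receptive-field width sigma set to
# the standard deviation of the training-pattern output vector, as the paper
# suggests, instead of a heuristically tuned value.

def rbf_predict(x, centres, targets, sigma):
    """Normalised Gaussian RBF readout (weighted average of targets)."""
    acts = [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centres]
    return sum(a * t for a, t in zip(acts, targets)) / sum(acts)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]          # hypothetical training inputs
ys = [0.0, 0.8, 0.9, 0.1, -0.7]         # hypothetical training outputs
sigma = statistics.pstdev(ys)            # width from output spread
print(rbf_predict(1.5, xs, ys, sigma))
```

Because the readout is a convex combination of the targets, predictions stay within the range of the training outputs; the choice of σ controls how sharply the nearest centres dominate that average.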
Title: On the overfitting of the five-layered bottleneck network
Authors: K. Hiraoka, T. Shigehara, H. Mizoguchi, T. Mishima, S. Yoshizawa
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.843992
Abstract: We point out a problem of overfitting in autoassociative learning for the bottleneck neural network. This overfitting is pathological in the sense that it does not disappear even as the sample size goes to infinity; however, it is not observed in real learning processes. We therefore study the basin of attraction of the overfitting solution. First, the existence of overfitting is confirmed. Then it is shown that the basin of the overfitting solution is small compared with that of the normal solution.
Title: Genetic algorithm based multiple decision tree induction
Authors: Z. Bandar, H. Al-Attar, D. Mclean
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.845633
Abstract: Decision tree (DT) induction has two fundamental weaknesses that can greatly affect its performance: the limited ability of the DT language to represent some of the underlying patterns of the domain, and the degradation in the quality of evidence available to the induction process caused by its recursive partitioning of the training data. Their impact is greatest when the induction process tries to overcome the first weakness by partitioning the training data further, thereby increasing its vulnerability to the second. The authors investigate multiple DT models as a way of overcoming the limitations of the DT modelling language and describe a novel algorithm that automatically generates multiple DT models from the same training data. The algorithm is compared with a single-tree classifier in experiments on two well-known data sets; the results clearly demonstrate its superiority.
Title: Registration of multi-modality medical images by soft computing approach
Authors: Y. Hata, Syoji Kobashi, S. Hirano, M. Ishikawa
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.844653
Abstract: The paper introduces registration systems for multi-modality medical images and describes practical systems related to brain science, showing where soft computing techniques can be applied. We first describe a system for registering a computed tomography image with a magnetic resonance angiography image of a human brain, used to locate a vascular lesion anatomically from the surface of the skull. We then describe a system for registering a magnetic resonance (MR) image with a positron emission tomography (PET) image; the MR image provides neuroanatomical information, while the PET image quantifies metabolic pathways in vivo. For both systems, we discuss the potential of soft computing techniques.
Title: A mixture of local PCA learning algorithm for adaptive transform coding
Authors: Bai-ling Zhang, Q. Huang, Tom Gedeon
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.844647
Abstract: The Karhunen-Loeve transform (KLT) is the optimal linear transform for coding images under the assumption of stationarity. For images composed of regions with widely varied local statistics, R.D. Dony and S. Haykin (1995) proposed a transform coding method called optimally integrated adaptive learning (OIAL), in which a number of localized KLTs are adapted to regions with roughly the same statistics; this method is superior to the traditional KLT. However, the performance of OIAL depends on an estimate of the global principal components of the data, which is not only computationally expensive but also impractical in some cases. A further problem is that OIAL does not take into account the mean vector in each region, which is required to define a local PCA. The authors propose an improvement over OIAL that replaces winner-take-all (WTA) clustering with an optimal soft-competition learning algorithm called "neural gas" and incorporates the mean vector in each region. Experiments show better performance than OIAL.
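The soft-competition update the authors substitute for WTA clustering can be sketched briefly. This is a hedged toy version of the "neural gas" rule only; the local-PCA fitting inside each region is omitted, and the data, codebook size, and rate parameters are illustrative assumptions.

```python
import math
import random

# Sketch of the "neural gas" soft-competition update: for each sample, every
# codebook vector moves toward it, weighted by exp(-rank/lambda) of its
# distance rank, so poorly placed units are not starved as WTA losers are.

random.seed(0)
codebook = [[random.random(), random.random()] for _ in range(4)]
data = [[random.random(), random.random()] for _ in range(200)]

eta, lam = 0.1, 1.0
for x in data:
    # Rank units by squared distance to the sample (rank 0 = closest).
    order = sorted(range(len(codebook)),
                   key=lambda i: sum((x[d] - codebook[i][d]) ** 2
                                     for d in range(2)))
    for rank, i in enumerate(order):
        h = math.exp(-rank / lam)                    # soft-competition weight
        for d in range(2):
            codebook[i][d] += eta * h * (x[d] - codebook[i][d])
```

With a pure WTA rule only the closest unit (rank 0) would move; the rank-decayed weight is what lets all regions' representatives adapt, which is the property the paper exploits for its mixture of local PCAs.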
Title: Neural network modeling of neuronal-vascular coupling
Authors: J. Rajapakse, V. Venkatraman
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.844662
Abstract: Sensory or cognitive stimuli in functional MRI (fMRI) experiments activate neuronal populations in specific areas of the brain. Neuronal events in activated regions change blood flow and blood oxygenation level, and fMRI signals are sensitive to the hemodynamic events that follow neuronal activation. The authors use a neural network to model the neuronal-vascular coupling of the human brain from images obtained in fMRI experiments. The nonlinear mappings modeled by training a network were used to approximate time series acquired in language comprehension and visual experiments. The models of neuronal-vascular coupling realized by the neural network were better than those rendered by a linear system model.
Title: Stability of the generalised lotto-type competitive learning
Authors: A. Luk, S. Lien
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.844706
Abstract: Introduces a generalised lotto-type competitive learning (LTCL) algorithm in which one or more winners exist. The winners are divided into tiers, with each tier rewarded differently, while the losers are all penalised equally. A set of dynamic LTCL equations is then introduced to assist the study of the stability of the generalised LTCL. It is shown that if a K-orthant that is an attracting invariant set of the network's flow exists in the LTCL's state space, the flow will converge to a fixed point.
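A single generalised-LTCL step can be sketched as follows. This is a hedged one-dimensional toy, not the paper's dynamic equations: the tier rates, the penalty rate, and the choice to penalise losers by moving them away from the input are illustrative assumptions of this sketch.

```python
# Sketch of one lotto-type competitive learning step: the closest units
# ("winners") are split into tiers with tier-dependent reward rates, while
# all remaining units ("losers") receive the same penalty. All rates are
# hypothetical.

def ltcl_step(units, x, tier_rates=(0.2, 0.1), penalty=0.02):
    order = sorted(range(len(units)), key=lambda i: abs(units[i] - x))
    for tier, i in enumerate(order[:len(tier_rates)]):
        units[i] += tier_rates[tier] * (x - units[i])   # tiered reward
    for i in order[len(tier_rates):]:
        units[i] -= penalty * (x - units[i])            # equal penalty
    return units

units = [0.1, 0.4, 0.6, 0.9]
for _ in range(50):
    ltcl_step(units, 0.5)
# The two winners converge on the input; the losers drift away from it.
```

The stability question the paper studies is precisely whether such coupled reward/penalty dynamics settle into a fixed point, which they show holds when an attracting invariant K-orthant exists in the state space.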