Title: Design of a multivariable neural-net based PID controller
Authors: T. Yamamoto, T. Oki, S. L. Shah
Pub Date: 1999-11-16, DOI: 10.1109/ICONIP.1999.844681
In: ICONIP'99. ANZIIS'99 & ANNES'99 & ACNN'99. 6th International Conference on Neural Information Processing. Proceedings (Cat. No.99EX378)
Abstract: It is well known that most industrial processes are multivariate in nature, yet PID controllers are widely used in a multiloop framework for the control of such interacting systems. In this paper, a design scheme for a neural-net-based controller with a PID structure is proposed for the control of such multivariable systems. The proposed controller consists of a pre-compensator, designed from a static gain matrix, which compensates for the low-frequency interaction, and diagonally placed PID controllers whose gains are tuned by a neural network.
Title: Automatic labeling of self-organizing maps for information retrieval
Authors: D. Merkl, A. Rauber
Pub Date: 1999-11-16, DOI: 10.1109/ICONIP.1999.843958
Abstract: The self-organizing map is a very popular unsupervised neural network model for the analysis of high-dimensional input data as in information retrieval applications. However, the interpretation of the map requires much manual effort, especially as far as the analysis of the learned features and the characteristics of identified clusters is concerned. We present our novel LabelSOM method which, based on the features learned by the map, automatically selects the most descriptive features of the input patterns mapped onto a particular unit of the map, thus making the characteristics of the various clusters within the map explicit. We demonstrate the benefits of this approach on an example from text classification using a real-world document archive. In this particular case, the features correspond to keywords describing the contents of a document. The benefit of this approach is that the various document clusters are characterized in terms of shared keywords, thus making it easy for the user to explore the contents of an unknown document archive.
Title: Performance comparison of correlation matrix memory implementations
Authors: J. Young, K. Lees, J. Austin
Pub Date: 1999-11-16, DOI: 10.1109/ICONIP.1999.845657
Abstract: This paper compares the performance of software and hardware implementations of binary correlation matrix memory (CMM). CMM is a simple, one-layer neural network with a Hebbian learning rule which offers excellent speed and scalability advantages. CMM "building blocks" form the basis of the AURA neural network system, which has been applied to a broad range of practical problems. The paper presents the results of a performance comparison between recent software and hardware implementations of binary CMM. The results show that the hardware implementation provides a best-case speed-up factor of 50 over the software implementation. Finally, some areas for further improvement in the hardware implementation are identified.
Title: Why a window-based learning algorithm using an Effective Boltzmann machine is superior to the original BM learning algorithm
Authors: M. Bellgard, R. Taplin
Pub Date: 1999-11-16, DOI: 10.1109/ICONIP.1999.844656
Abstract: Many pattern recognition problems can be viewed as problems solvable with a window-based artificial neural network (ANN). The paper details a unique window-based learning algorithm using the Effective Boltzmann Machine (EBM). The EBM, which is based on the Boltzmann Machine (BM), has previously been shown to perform pattern completion and to provide an energy measure for completions of any length. The paper also describes why the EBM itself is a highly suitable architecture for learning window-based problems. A walk-through of a simple example, a mathematical derivation, and simulation experiments show that the EBM outperforms a window-based BM on quality of learning, speed of learning, and the generalisations produced by the network.
Title: Machine intelligence for crisis handling in navigating vehicles using neuro-controllers
Authors: K. Jayakumar, K. Rajaram, M. Faruqi
Pub Date: 1999-11-16, DOI: 10.1109/ICONIP.1999.845673
Abstract: The paper addresses the design and development of an intelligent neuro-controller for navigating vehicles that can respond to a crisis when the human driver fails to react appropriately. Crisis situations were simulated analytically in a program with random changes in road curvature. Using transient dynamic equations and a vehicle model, the vehicle states and responses (such as yaw rate and lateral velocity) and constructs such as the lateral position with respect to the road centre (LPRC) and the heading error with respect to local curvature (HELC) were visually updated on screen during simulation trials, enabling manual manipulation of the control inputs to guide the simulated navigation behaviour of the vehicle. A neuro-controller was made to learn the inherent dynamics by associating the vehicle states in such situations with the pattern of human reactions and the driver's choice of inputs, viz. throttle, steering, brake, and gear, in controlling the crisis. Such a neuro-controller can then invoke this learned repertoire of successful human reaction patterns at moments when the driver is unable to react. The neuro-controller guides vehicle navigation under varying crisis conditions, preventing road departures. The capability of the trained neuro-controller to improvise effectively in previously unpresented crisis situations while maintaining the stability, controllability, and safety of the vehicle has also been explored.
Title: A hippocampal CA3 model for temporal sequences
Authors: M. Ito, S. Miyake, S. Inawashiro, J. Kuroiwa, Y. Sawada
Pub Date: 1999-11-16, DOI: 10.1109/ICONIP.1999.845693
Abstract: We propose a pulse-neuron model with transmission delays for field CA3 of the hippocampus, together with a new learning rule. We use temporal sequences of patterns consisting of trains of bursts. Simulations show that the model successfully learns and recalls the temporal sequences. The new learning rule works much more effectively than the Hebbian learning rule in learning temporal sequences of patterns.
Title: Neural network modeling of neuronal-vascular coupling
Authors: J. Rajapakse, V. Venkatraman
Pub Date: 1999-11-16, DOI: 10.1109/ICONIP.1999.844662
Abstract: Sensory or cognitive stimuli in functional MRI (fMRI) experiments activate neuronal populations in specific areas of the brain. Neuronal events in activated brain regions cause changes in blood flow and blood-oxygenation level, and fMRI signals are sensitive to the hemodynamic events that follow neuronal activation. The authors use a neural network to model the neuronal-vascular coupling of the human brain from images obtained in fMRI experiments. The nonlinear mappings obtained by training the network were used to approximate time series acquired in language-comprehension and visual experiments. The models of neuronal-vascular coupling realized with the neural network were better than those rendered by a linear system model.
Title: Correlation integral estimated from a spike train
Authors: H. Suzuki, K. Aihara
Pub Date: 1999-11-16, DOI: 10.1109/ICONIP.1999.843956
Abstract: We propose a new method to calculate the correlation integral from a spike train produced by a dynamical system. Our method is based on the idea of a metric space of spike trains. Compared with interspike-interval reconstruction, our method in practice gives a better estimate of the correlation integral of the combined system formed by the original system and the neuron model.
Title: Genetic algorithm based multiple decision tree induction
Authors: Z. Bandar, H. Al-Attar, D. Mclean
Pub Date: 1999-11-16, DOI: 10.1109/ICONIP.1999.845633
Abstract: There are two fundamental weaknesses that can greatly impact the performance of decision tree (DT) induction: the limited ability of the DT language to represent some of the underlying patterns of the domain, and the degradation in the quality of evidence available to the induction process caused by its recursive partitioning of the training data. The impact of these two weaknesses is greatest when the induction process attempts to overcome the first by resorting to more partitioning of the training data, thus increasing its vulnerability to the second. The authors investigate the use of multiple DT models as a way of overcoming the limitations of the DT modeling language and describe a novel algorithm that automatically generates multiple DT models from the same training data. The algorithm is compared to a single-tree classifier in experiments on two well-known data sets. The results clearly demonstrate the superiority of our algorithm.
Title: A mixture of local PCA learning algorithm for adaptive transform coding
Authors: Bai-ling Zhang, Q. Huang, Tom Gedeon
Pub Date: 1999-11-16, DOI: 10.1109/ICONIP.1999.844647
Abstract: The Karhunen-Loeve transform (KLT) is the optimal linear transform for coding images under the assumption of stationarity. For images composed of regions with widely varying local statistics, R.D. Dony and S. Haykin (1995) proposed a transform coding method called optimally integrated adaptive learning (OIAL), in which a number of localized KLTs are adapted to regions with roughly the same statistics. This transform coding method has been shown to be superior to the traditional KLT. However, the performance of OIAL depends on an estimate of the global principal components of the data, which is not only computationally expensive but also impractical in some cases. Another problem with OIAL is that the mean vector in each region, which is required to define a local PCA, is not taken into account. The authors propose an improvement over OIAL which replaces the winner-take-all (WTA) based clustering with an optimal soft-competition learning algorithm called "neural gas". The mean vector in each region is also incorporated. Experiments show better performance than OIAL.