Using RBF neural networks and a fuzzy logic controller to stabilize wood pulp freeness
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.830848
J. Bard, J. Patton, M. Musavi
The quality of the paper produced in a papermaking process depends largely on the properties of the wood pulp used. One important property is pulp freeness: ideally, a constant, predetermined level of freeness is maintained to achieve the highest possible paper quality. The focus of this paper is on developing a system to control wood pulp freeness. A radial basis function (RBF) artificial neural network was used to model the freeness, and a fuzzy logic controller was used to adjust the input parameters to maintain a desired level of freeness. The controller is intended to reduce pulp freeness fluctuations and thereby improve overall paper sheet quality and production.
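A minimal sketch of the two ingredients in Python/NumPy. The toy freeness function, the RBF width and center count, and the triangular membership shapes are all our own illustrative assumptions, not the authors' plant model or rule base: an RBF network fitted by least squares stands in for the freeness model, and a crude fuzzy-style rule maps the freeness error to a correction of one process input.

import numpy as np

rng = np.random.default_rng(0)

def rbf_features(X, centers, width):
    # Gaussian radial basis activations for each (sample, center) pair.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Toy process: freeness as an unknown nonlinear function of two pulp inputs.
X = rng.uniform(0, 1, size=(200, 2))
y = 300 + 80 * np.sin(3 * X[:, 0]) - 50 * X[:, 1] ** 2 + rng.normal(0, 2, 200)

centers = X[rng.choice(len(X), 20, replace=False)]   # centers drawn from the data
Phi = rbf_features(X, centers, width=0.25)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # output-layer weights

def predict_freeness(x):
    return rbf_features(np.atleast_2d(x), centers, 0.25) @ w

def fuzzy_adjust(error, scale=0.01):
    # Overlapping memberships: "too high" and "too low" fire outside a +/-5 band.
    low = np.clip((-error - 5) / 20, 0, 1)    # freeness above setpoint
    high = np.clip((error - 5) / 20, 0, 1)    # freeness below setpoint
    return scale * (high - low)               # signed correction to the input

setpoint, x = 320.0, np.array([0.5, 0.5])
for _ in range(50):
    err = setpoint - predict_freeness(x)[0]
    x[0] = np.clip(x[0] + fuzzy_adjust(err), 0, 1)
print("final predicted freeness:", predict_freeness(x)[0])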
{"title":"Using RBF neural networks and a fuzzy logic controller to stabilize wood pulp freeness","authors":"J. Bard, J. Patton, M. Musavi","doi":"10.1109/IJCNN.1999.830848","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.830848","url":null,"abstract":"The quality of paper produced in a papermaking process is largely dependent on the properties of the wood pulp used. One important property is pulp freeness. Ideally, a constant, predetermined level of freeness is desired to achieve the highest quality of paper possible. The focus of this paper is on developing a system to control the wood pulp freeness. A radial basis function (RBF) artificial neural network was used to model the freeness and a fuzzy logic controller was used to control the input parameters to maintain a desired level of freeness. Ideally, the controller will reduce pulp freeness fluctuations in order to improve overall paper sheet quality and production.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133459266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of autoassociative mapping neural networks
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.836037
S. Ikbal, Hemant Misra, B. Yegnanarayana
In this paper we analyse the mapping behavior of an autoassociative neural network (AANN). The mapping in an AANN is achieved through a dimension reduction followed by a dimension expansion. One of the major results of the analysis is that the network performs better autoassociation as its size increases; this is because a network of a given size can deal with only a certain level of nonlinearity. The performance of autoassociative mapping is illustrated with 2D examples, and we show the utility of the mapping feature of an AANN for speaker verification.
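A minimal sketch of the reduction-then-expansion mapping, under our own assumptions (2-D points on a curve, a single tanh bottleneck unit, plain batch gradient descent). Widening the bottleneck or adding hidden units lets the network absorb more nonlinearity, which is the size effect described above.

import numpy as np

# Autoassociative net: 2-D input -> 1-D bottleneck (reduction) -> 2-D linear
# output (expansion), trained to reproduce its input. Sizes are illustrative.
rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, 500)
X = np.stack([t, t ** 2], axis=1)          # points on a 1-D curve in 2-D

W1 = rng.normal(0, 0.5, (2, 1)); b1 = np.zeros(1)
W2 = rng.normal(0, 0.5, (1, 2)); b2 = np.zeros(2)

lr = 0.05
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)               # compress to the bottleneck
    Y = H @ W2 + b2                        # expand back to input space
    E = Y - X                              # reconstruction error
    gW2 = H.T @ E / len(X); gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1 - H ** 2)         # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

Y = np.tanh(X @ W1 + b1) @ W2 + b2
print("mean squared reconstruction error:", np.mean((Y - X) ** 2))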
{"title":"Analysis of autoassociative mapping neural networks","authors":"S. Ikbal, Hemant Misra, B. Yegnanarayana","doi":"10.1109/IJCNN.1999.836037","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.836037","url":null,"abstract":"In this paper we analyse the mapping behavior of an autoassociative neural network (AANN). The mapping in an AANN is achieved by using a dimension reduction followed by a dimension expansion. One of the major results of the analysis is that, the network performs better autoassociation as the size increases. This is because, a network of a given size can deal with only a certain level of nonlinearity. Performance of autoassociative mapping is illustrated with 2D examples. We have shown the utility of the mapping feature of an AANN for speaker verification.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127850949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust regularized learning using distributed approximating functional networks
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.836169
Zhuoer Shi, Desheng Zhang, D. Kouri, D. Hoffman
We present a novel class of polynomial functional neural networks that use distributed approximating functional (DAF) wavelets (infinitely smooth filters in both the time and frequency domains) for signal estimation and surface fitting. A remarkable advantage of these polynomial nets is that the smoothness of the functional space is identical to the smoothness of the state space (consisting of the weighting vectors). The constrained cost energy function, obtained through optimal regularization programming, endows the networks with a natural time-varying filtering feature. Theoretical analysis and an application show that the approach is extremely stable and efficient for signal processing and curve/surface fitting.
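The following sketch illustrates only the regularized-fitting idea, with loud substitutions: Gaussian bumps stand in for the DAF wavelet filters, and plain ridge (Tikhonov) regression stands in for the paper's optimal regularization programming.

import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 3 * t)                 # clean target
noisy = signal + rng.normal(0, 0.3, t.size)        # observed samples

# Smooth basis functions on a grid of centers (Gaussian stand-ins for DAFs).
centers = np.linspace(0, 1, 30)
Phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.05) ** 2)

lam = 1e-1  # regularization weight: trades data fit against smoothness
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ noisy)
estimate = Phi @ w
print("rmse vs clean signal:", np.sqrt(np.mean((estimate - signal) ** 2)))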
{"title":"Robust regularized learning using distributed approximating functional networks","authors":"Zhuoer Shi, Desheng Zhang, D. Kouri, D. Hoffman","doi":"10.1109/IJCNN.1999.836169","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.836169","url":null,"abstract":"We present a novel polynomial functional neural networks using distributed approximating functional (DAF) wavelets (infinitely smooth filters in both time and frequency regimes), for signal estimation and surface fitting. The remarkable advantage of these polynomial nets is that the functional space smoothness is identical to the state space smoothness (consisting of the weighting vectors). The constrained cost energy function using optimal regularization programming endows the networks with a natural time-varying filtering feature. Theoretical analysis and an application show that the approach is extremely stable and efficient for signal processing and curve/surface fitting.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127064061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent reconfigurable control of robot manipulators
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.832690
J. Chung, S. Velinsky
One approach to improving the reliability of high-performance robotic systems is to allow the robot's control system to reconfigure automatically to accommodate actuator failure and/or damage. The new concept of the extended plant, and its identification in closed loop, is introduced for developing a reconfigurable robot manipulator controller; this identification is made possible through the use of artificial neural networks. A simulation study demonstrates the effectiveness of the developed control algorithm.
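A generic sketch of closed-loop identification (the paper's extended-plant formulation is not reproduced here; the scalar plant, feedback gain, and LMS update are our illustrative choices): a linear-in-parameters model is adapted online from data gathered while a feedback law is running.

import numpy as np

rng = np.random.default_rng(3)
a_true, b_true = 0.8, 0.5          # "unknown" scalar plant: x+ = a*x + b*u
theta = np.zeros(2)                # online estimate of [a, b]
x, lr, setpoint = 0.0, 0.2, 1.0

for k in range(3000):
    u = 2.0 * (setpoint - x) + 0.2 * rng.normal()  # feedback + probing noise
    x_next = a_true * x + b_true * u               # true plant response
    phi = np.array([x, u])
    err = x_next - theta @ phi                     # one-step prediction error
    theta += lr * err * phi                        # gradient (LMS) update
    x = x_next

print("estimated [a, b]:", theta)  # should approach [0.8, 0.5]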
{"title":"Intelligent reconfigurable control of robot manipulators","authors":"J. Chung, S. Velinsky","doi":"10.1109/IJCNN.1999.832690","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.832690","url":null,"abstract":"One approach towards improving the reliability of high-performance robotic systems is to allow for the automatic reconfiguration of the robot's control system to accommodate actuator failure and/or damage. The new concept of the extended plant and its identification in closed loop is introduced for developing a reconfigurable robot manipulator controller. It is made possible through the use of artificial neural networks. A simulation study demonstrates the effectiveness of the developed control algorithm.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115582471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biophysical basis of neural memory
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.831448
A. Radchenko
The model of the neural membrane describes the interaction of gating charges (GC), their conformational mobility, and their immobilization during excitation. The volt-conformational and current-voltage characteristics (VCC and CVC) of the membrane are derived analytically. Inactivation is shown to change these characteristics during excitation; this change is caused by GC immobilization, rather than the contrary. The VCC and CVC have hysteretic properties, owing to which the electroexcitable units of the somato-dendritic (SD) membrane form a memory medium well suited to recording, storing, and reconstructing afferent information. GC immobilization underlies the consolidation of memory traces. A theory of quasi-holographic associative memory is constructed in which the memory medium consists of synaptically addressed units of the electroexcitable mosaics of SD membranes. Small changes of membrane potential (slow potentials) select the modes of this memory: if the working point on the VCC is displaced inside the hysteretic loop, the neuron is in a writing mode; if outside, it is in a reading mode. The current distribution of slow potentials divides the neuron population into writing, reading, and intermediate (short-term memory) sets, which are in relative dynamic (metabolism-dependent) balance.
{"title":"Biophysical basis of neural memory","authors":"A. Radchenko","doi":"10.1109/IJCNN.1999.831448","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831448","url":null,"abstract":"The model of neural membrane describes interaction of gating charges (GC), their conformational mobility and immobilization during excitation. Volt-conformational and current-voltage characteristic (VCC and CVC) of the membrane are analytically derived. Inactivation is shown to change these characteristics during excitation; this is caused by GC immobilization, instead of the contrary. VCC and CVC have hysteretic properties. Due to them electroexcitable units of the somato-dendritic (SD) membrane arrange a memory medium well adapted to record, keep and reconstruct afferent information. GC immobilization underlies consolidation of memory traces. The theory of quasi-holographic associative memory is constructed where role of memory medium is carried out by synaptic addressed units of electroexcitable mosaics of SD-membranes. Small changes of membrane potential (slow potentials) select modes of such memory: if the working point on VCC is displaced inside the hysteretic loop, then the neuron is in writing mode, if outside then in a reading mode. Current distribution of slow potentials shares neuron population on writing, reading and intermediate sets (short-term memory), they are in relative dynamic (metabolic dependent) balance.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115590790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-source neural networks for speech recognition
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.835942
R. Gemello, D. Albesano, F. Mana
In speech recognition, the most widespread technology (hidden Markov models) is constrained by the assumption of stochastic independence among its input features. This limits the simultaneous use of features derived from the speech signal by different processing algorithms. In contrast, artificial neural networks (ANNs) are capable of incorporating multiple heterogeneous input features, which need not be treated as independent, and of finding the optimal combination of these features for classification. The purpose of this work is to exploit this characteristic of ANNs to improve speech recognition accuracy through the combined use of input features coming from different sources (different feature extraction algorithms). We integrate two input sources: Mel-based cepstral coefficients (MFCC) derived from the FFT, and RASTA-PLP cepstral coefficients. The results show that this integration leads to an error reduction of 26% on a telephone-quality test set.
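A sketch of the combination step only, with synthetic stand-ins for the MFCC and RASTA-PLP streams: the two feature vectors computed for each frame are concatenated into one input, and a single classifier is trained on the joint vector. In this toy setup the label depends on one feature from each stream, so neither source alone would suffice.

import numpy as np

rng = np.random.default_rng(4)
n_frames = 1000
mfcc = rng.normal(size=(n_frames, 13))    # placeholder for 13 MFCCs per frame
rasta = rng.normal(size=(n_frames, 9))    # placeholder for RASTA-PLP cepstra

X = np.concatenate([mfcc, rasta], axis=1) # the network sees both sources at once
labels = (X[:, 0] + X[:, 13] > 0).astype(int)  # toy labels: one feature per stream

# A single softmax layer suffices to show the combination; a real acoustic
# model would be a deeper ANN (e.g., the hybrid ANN/HMM used in such systems).
W = np.zeros((X.shape[1], 2))
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - np.eye(2)[labels]) / n_frames  # cross-entropy gradient

print("train accuracy:", ((X @ W).argmax(axis=1) == labels).mean())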
{"title":"Multi-source neural networks for speech recognition","authors":"R. Gemello, D. Albesano, F. Mana","doi":"10.1109/IJCNN.1999.835942","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.835942","url":null,"abstract":"In speech recognition the most diffused technology (hidden Markov models) is constrained by the condition of stochastic independence of its input features. That limits the simultaneous use of features derived from the speech signal with different processing algorithms. On the contrary artificial neural networks (ANN) are capable of incorporating multiple heterogeneous input features, which do not need to be treated as independent, finding the optimal combination of these features for classification. The purpose of this work is the exploitation of this characteristic of ANNs to improve the speech recognition accuracy through the combined use of input features coming from different sources (different feature extraction algorithms). We integrate two input sources: the Mel based cepstral coefficients (MFCC) derived from FFT and the RASTA-PLP cepstral coefficients. The results show that this integration leads to an error reduction of 26% on a telephone quality test set.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123899650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A neural network controller based on the rule of bang-bang control
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.833412
Chungyong Tsai, Chih-Chi Chang
Applying neural networks or fuzzy systems to optimal control encounters a difficulty: locating adequate samples with which to train the neural networks, or to modify the fuzzy rules, so that the optimal control value for a given state can be produced. Instead of an exhaustive search, this work presents a simple method, based on the rule of bang-bang control, for locating the training samples for time-optimal control. Although the samples obtained by the proposed method can be learned by multilayer perceptrons and radial basis function networks, a neural network deemed appropriate for learning these samples is proposed as well. Simulation results demonstrate the effectiveness of the proposed method.
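A sketch of the sample-generation idea for a concrete plant of our choosing, the double integrator with |u| <= 1, for which the time-optimal control is the classical bang-bang switching rule u = -sign(p + v|v|/2). Evaluating the rule at sampled states yields (state, control) training pairs directly, with no exhaustive search:

import numpy as np

def time_optimal_u(p, v):
    # Switching function for the double integrator (position p, velocity v).
    s = p + 0.5 * v * np.abs(v)
    return -np.sign(s) if s != 0 else -np.sign(v)  # on the curve, follow it in

rng = np.random.default_rng(5)
states = rng.uniform(-2, 2, size=(500, 2))               # sampled (p, v) states
controls = np.array([time_optimal_u(p, v) for p, v in states])

# 'states' -> 'controls' are now supervised pairs for an MLP or RBF network.
print(states[:3], controls[:3])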
{"title":"A neural network controller based on the rule of bang-bang control","authors":"Chungyong Tsai, Chih-Chi Chang","doi":"10.1109/IJCNN.1999.833412","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.833412","url":null,"abstract":"Applying neural networks or fuzzy systems to the field of optimal control encounters the difficulty of locating adequate samples that can be used to train the neural networks or modify the fuzzy rules such that the optimal control value for a given state can be produced. Instead of an exhaustive search, this work presents a simple method based on the rule of bang-bang control to locate the training samples for time optimal control. Although the samples obtained by the proposed method can be learned by multilayer perceptrons and radial basis networks, a neural network deemed appropriate for learning these samples is proposed as well. Simulation results demonstrate the effectiveness of the proposed method.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124226923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multilayer perceptron based dimensionality reduction
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.832629
R. Lotlikar, R. Kothari
Dimensionality reduction is the process of mapping high-dimensional patterns to a lower-dimensional manifold and is typically used for visualization or as a preprocessing step in classification applications. From a classification viewpoint, the rate of increase of the Bayes error is an ideal measure of the loss of information relevant to classification. Motivated by this, we present a multilayer perceptron that produces the lower-dimensional representation as its output. The multilayer perceptron is trained to minimize the classification error in the subspace; it thus differs from the autoassociative-style multilayer perceptrons that have previously been proposed and used for dimensionality reduction. We examine the performance of the proposed method of dimensionality reduction and the effect that varying the parameters has on the algorithm.
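A minimal sketch of the idea, on synthetic data of our own making: the 2-unit hidden layer is the low-dimensional representation, and because training minimizes classification (cross-entropy) error rather than reconstruction error, the learned 2-D subspace is shaped for class separability.

import numpy as np

rng = np.random.default_rng(6)
n, d = 300, 10
X = rng.normal(size=(n, d))
y = (X[:, :3].sum(axis=1) > 0).astype(int)         # labels from 3 of 10 dims

W1 = rng.normal(0, 0.3, (d, 2)); b1 = np.zeros(2)  # 10-D -> 2-D representation
W2 = rng.normal(0, 0.3, (2, 2)); b2 = np.zeros(2)  # 2-D -> class scores

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                       # the 2-D representation
    logits = H @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    G = (p - np.eye(2)[y]) / n                     # cross-entropy gradient
    gW2, gb2 = H.T @ G, G.sum(axis=0)
    dH = (G @ W2.T) * (1 - H ** 2)                 # backprop through tanh
    gW1, gb1 = X.T @ dH, dH.sum(axis=0)
    for P, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        P -= 0.5 * g

acc = ((np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1) == y).mean()
print("accuracy in learned 2-D subspace:", acc)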
{"title":"Multilayer perceptron based dimensionality reduction","authors":"R. Lotlikar, R. Kothari","doi":"10.1109/IJCNN.1999.832629","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.832629","url":null,"abstract":"Dimensionality reduction is the process of mapping high dimensional patterns to a lower dimensional manifold and is typically used for visualization or as a preprocessing step in classification applications. From a classification viewpoint, the rate of increase of Bayes error serves as an ideal choice to measure the loss of information relevant to classification. Motivated by that, we present a multilayer perceptron which produces as output the lower dimensional representation. The multilayer perceptron is trained so as to minimize the classification error in the subspace. It thus differs from autoassociative like multilayer perceptrons which have been proposed and used for dimensionality reduction. We examine the performance of the proposed method of dimensionality reduction and the effect that varying the parameters have on the algorithm.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124272415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fingerprint recognition using wavelet transform and probabilistic neural network
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.836183
S. Lee, B. Nam
Fingerprint recognition requires preprocessing such as smoothing, binarization, and thinning, after which minutiae features are extracted. Some fingerprint identification algorithms (such as those based on the FFT) may require so much computation as to be impractical. A wavelet-based algorithm may be the key to a low-cost fingerprint identification system that can operate on a small computer. We present a fast and effective method for fingerprint identification.
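A sketch pairing the two ingredients named in the title, on synthetic images (the one-level Haar transform, subband-energy features, and kernel width are our illustrative choices, not the paper's pipeline): wavelet subband energies are the features, and a probabilistic neural network, i.e. a Parzen-window classifier, does the identification.

import numpy as np

def haar_features(img):
    # One-level 2-D Haar transform; return the energy of each subband.
    a = (img[0::2] + img[1::2]) / 2; d = (img[0::2] - img[1::2]) / 2
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return np.array([np.mean(s ** 2) for s in (ll, lh, hl, hh)])

def pnn_classify(x, train_X, train_y, sigma=0.5):
    # PNN: class score = mean Gaussian kernel to that class's training points.
    classes = np.unique(train_y)
    scores = [np.mean(np.exp(-((train_X[train_y == c] - x) ** 2).sum(axis=1)
                             / (2 * sigma ** 2))) for c in classes]
    return classes[int(np.argmax(scores))]

rng = np.random.default_rng(7)
def sample(cls):
    # Two synthetic "fingerprint" classes with different ridge frequencies.
    t = np.arange(16)
    freq = 0.4 if cls == 0 else 1.2
    return np.sin(freq * t)[:, None] * np.ones(16) + 0.1 * rng.normal(size=(16, 16))

train_X = np.array([haar_features(sample(c)) for c in [0, 1] * 20])
train_y = np.array([0, 1] * 20)
print("predicted class:", pnn_classify(haar_features(sample(1)), train_X, train_y))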
{"title":"Fingerprint recognition using wavelet transform and probabilistic neural network","authors":"S. Lee, B. Nam","doi":"10.1109/IJCNN.1999.836183","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.836183","url":null,"abstract":"In the recognition of fingerprint, preprocessing such as smoothing, binarization and thinning is needed. Then fingerprint minutiae feature is extracted. Some fingerprint identification algorithm (such as using FFT etc.) may require so much computation as to be impractical. Wavelet based algorithm may be the key to making a low cost fingerprint identification system that would operate on a small computer. We present a fast and effective method to identify fingerprint.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124452834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relationship between fault tolerance, generalization and the Vapnik-Chervonenkis (VC) dimension of feedforward ANNs
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.831587
D. Phatak
It is demonstrated that fault tolerance, generalization, and the Vapnik-Chervonenkis (VC) dimension are interrelated attributes. It is well known that the generalization error, plotted as a function of the VC dimension h, exhibits a well-defined minimum at an optimal value of h, say h_opt. We show that if the VC dimension h of an ANN satisfies h ≤ h_opt (i.e., there is no excess capacity or redundancy), then fault tolerance and generalization are mutually conflicting attributes. On the other hand, if h > h_opt (i.e., there is excess capacity or redundancy), then fault tolerance and generalization are mutually synergistic attributes. In other words, training methods geared towards improving fault tolerance can also lead to better generalization, and vice versa, only when there is excess capacity or redundancy. This is consistent with our previous results indicating that complete fault tolerance in ANNs requires a significant amount of redundancy.
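For context, Vapnik's classical bound (a standard result, cited here for illustration; not a derivation from this paper) shows why the generalization error has a minimum in h: with probability 1 - \eta over N training samples,

R(h) \;\le\; R_{\mathrm{emp}}(h) + \sqrt{\frac{h\left(\ln\frac{2N}{h} + 1\right) - \ln\frac{\eta}{4}}{N}}.

The empirical term R_emp(h) falls as the capacity h grows, while the square-root confidence term grows with h, so their sum attains its minimum at some intermediate value, the h_opt referred to above.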
{"title":"Relationship between fault tolerance, generalization and the Vapnik-Chervonenkis (VC) dimension of feedforward ANNs","authors":"D. Phatak","doi":"10.1109/IJCNN.1999.831587","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831587","url":null,"abstract":"It is demonstrated that fault tolerance, generalization and the Vapnik-Chertonenkis (VC) dimension are inter-related attributes. It is well known that the generalization error if plotted as a function of the VC dimension h, exhibits a well defined minimum corresponding to an optimal value of h, say h/sub opt/. We show that if the VC dimension h of an ANN satisfies h/spl les/h/sub opt/ (i.e., there is no excess capacity or redundancy), then fault tolerance and generalization are mutually conflicting attributes. On the other hand, if h>h/sub opt/ (i.e., there is excess capacity or redundancy), then fault tolerance and generalization are mutually synergistic attributes. In other words, training methods geared towards improving the fault tolerance can also lead to better generalization and vice versa, only when there is excess capacity or redundancy. This is consistent with our previous results indicating that complete fault tolerance in ANNs requires a significant amount of redundancy.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114572249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}