Image segmentation based on a dynamically coupled neural oscillator network
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.833496
Ke Chen, Deliang Wang
In this paper, a dynamically coupled neural oscillator network is proposed for image segmentation. Instead of pair-wise coupling, an ensemble of oscillators coupled in a local region is used for grouping. We introduce a set of neighborhoods to generate dynamical coupling structures associated with a specific oscillator. Based on the proximity and similarity principles, two grouping rules are proposed to explicitly handle the distinct cases of an oscillator lying inside a homogeneous image region and one lying near a boundary between different regions. The use of dynamical coupling makes our segmentation network robust to noise in an image. For fast computation, a segmentation algorithm is abstracted from the underlying oscillatory dynamics and has been applied to synthetic and real images. Simulation results demonstrate the effectiveness of our oscillator network in image segmentation.
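For illustration, here is a minimal Python sketch of the grouping idea behind such oscillator networks: pixels "synchronize" (join a region) when they are spatially adjacent (proximity) and close in intensity (similarity). The tolerance threshold and the region-growing order are assumptions of this sketch, not the paper's grouping rules or its oscillatory dynamics.

```python
import numpy as np
from collections import deque

def segment(image, tolerance=10):
    """Toy region-grouping sketch in the spirit of oscillator-based
    segmentation: a 'leader' pixel starts a region and a wave of
    synchronization recruits adjacent, similar pixels. NOT the authors'
    exact algorithm, only an illustration of proximity + similarity."""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx]:
                continue
            next_label += 1                      # new leader oscillator
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:                          # synchronization wave
                y, x = queue.popleft()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                            and abs(int(image[ny, nx]) - int(image[y, x])) <= tolerance):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
    return labels

if __name__ == "__main__":
    img = np.zeros((32, 32), dtype=np.uint8)
    img[8:24, 8:24] = 200                        # bright square on dark ground
    print(np.unique(segment(img)))               # two regions found
```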
{"title":"Image segmentation based on a dynamically coupled neural oscillator network","authors":"Ke Chen, Deliang Wang","doi":"10.1109/IJCNN.1999.833496","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.833496","url":null,"abstract":"In this paper, a dynamically coupled neural oscillator network is proposed for image segmentation. Instead of pair-wise coupling, an ensemble of oscillators coupled in a local region is used for grouping. We introduce a set of neighborhoods to generate dynamical coupling structures associated with a specific oscillator. Based on the proximity and similarity principles, two grouping rules are proposed to explicitly consider the distinct cases of whether an oscillator is inside a homogeneous image region or near a boundary between different regions. The use of dynamical coupling makes our segmentation network robust to noise on an image. For fast computation, a segmentation algorithm is abstracted from the underlying oscillatory dynamics and has been applied to synthetic and real images. Simulation results demonstrate the effectiveness of our oscillator network in image segmentation.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"184 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123373773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sign-methods for training with imprecise error function and gradient values
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.832645
G. D. Magoulas, V. Plagianakos, M. Vrahatis
Training algorithms suited to working under imprecise conditions are proposed. They require only the algebraic sign of the error function or its gradient to be correct and, depending on how they update the weights, are analyzed as composite nonlinear successive overrelaxation (SOR) methods or composite nonlinear Jacobi methods applied to the gradient of the error function. The local convergence behavior of the proposed algorithms is also studied. The approach is practically useful when training is affected by technology imperfections, limited precision in operations and data, hardware component variations, and environmental changes that cause unpredictable deviations of parameter values from the designed configuration; under such conditions it may be difficult or impossible to obtain very precise values for the error function and its gradient during training.
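As a hedged illustration of the core idea, the sketch below implements a Jacobi-style sign update in Python: all weights move simultaneously by a fixed step against the sign of their partial derivatives, so only the signs of the gradient need to be correct. The toy error surface and step size are assumptions; the paper's composite SOR/Jacobi schemes are more elaborate.

```python
import numpy as np

def sign_train(grad_fn, w, lr=0.01, steps=200):
    """Jacobi-style sign update: every weight moves by a fixed step
    against the *sign* of its partial derivative, so only the algebraic
    sign of the gradient must be correct. Minimal sketch, not the
    paper's exact schemes."""
    for _ in range(steps):
        w = w - lr * np.sign(grad_fn(w))
    return w

# Toy quadratic error surface with minimum at (1, -2); the fixed step
# means the iterate settles within lr of the minimum.
grad = lambda w: 2.0 * (w - np.array([1.0, -2.0]))
print(sign_train(grad, np.zeros(2)))
```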
{"title":"Sign-methods for training with imprecise error function and gradient values","authors":"G. D. Magoulas, V. Plagianakos, M. Vrahatis","doi":"10.1109/IJCNN.1999.832645","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.832645","url":null,"abstract":"Training algorithms suitable to work under imprecise conditions are proposed. They require only the algebraic sign of the error function or its gradient to be correct, and depending on the way they update the weights, they are analyzed as composite nonlinear successive overrelaxation (SOR) methods or composite nonlinear Jacobi methods, applied to the gradient of the error function. The local convergence behavior of the proposed algorithms is also studied. The proposed approach seems practically useful when training is affected by technology imperfections, limited precision in operations and data, hardware component variations and environmental changes that cause unpredictable deviations of parameter values from the designed configuration. Therefore, it may be difficult or impossible to obtain very precise values for the error function and the gradient of the error during training.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123393790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural network for inverse mapping in eddy current testing
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.830805
G. Preda, Radu C. Popa, K. Demachi, K. Miya
A neural network mapping approach has been proposed for the inversion problem in eddy-current testing (ECT). A principal component analysis (PCA) data transformation step, a data fragmentation technique, jittering, and a data fusion approach proved to be instrumental auxiliary tools that help the basic training algorithm cope with the strong ill-posedness of the inversion problem. The present paper reports on further improvements brought by a new, randomly generated database used as the training set, proposed for the reconstruction of crack shape and conductivity distribution. Good results were obtained for four levels of conductivity and non-connected crack shapes, even in the presence of high noise levels.
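A minimal sketch of the PCA-then-network inversion pipeline, assuming a synthetic forward model as stand-in data: the crack parameterization, signal dimensions, and network size below are all invented for illustration and do not reflect the paper's database or features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-in data: each row of `signals` plays the role of an
# ECT measurement; each row of `cracks` encodes crack-shape parameters.
rng = np.random.default_rng(0)
cracks = rng.uniform(size=(500, 4))                   # 4 crack parameters
signals = np.tanh(cracks @ rng.normal(size=(4, 64)))  # fake forward model
signals += 0.05 * rng.normal(size=signals.shape)      # measurement noise

pca = PCA(n_components=8)                  # compress signals before inversion
z = pca.fit_transform(signals)

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(z, cracks)                         # learn inverse map: signal -> crack
print(net.predict(pca.transform(signals[:2])))  # compare against cracks[:2]
```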
{"title":"Neural network for inverse mapping in eddy current testing","authors":"G. Preda, Radu C. Popa, K. Demachi, K. Miya","doi":"10.1109/IJCNN.1999.830805","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.830805","url":null,"abstract":"A neural network mapping approach has been proposed for the inversion problem in eddy-current testing (ECT). The use of a principal component analysis (PCA) data transformation step, a data fragmentation technique, jittering, and of a data fusion approach proved to be instrumental auxiliary tools that support the basic training algorithm in coping with the strong ill-posedness of the inversion problem. The present paper reports on the further improvements brought by a new, randomly generated database used for the training set, proposed for the reconstruction of crack shape and conductivity distribution. Good results were obtained for four levels of conductivity and nonconnected crack shapes even in the presence of high noise levels.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"62 3-4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114118076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel fast learning algorithms for time-delay neural networks
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.831164
J. Minghu, Z. Xiaoyan
To counter the long training times required by Waibel's time-delay neural networks (TDNNs) in phoneme recognition, this paper puts forward several improved fast learning methods for TDNNs. Combining the unsupervised Oja rule with a similar error backpropagation algorithm for the initial training of TDNN weights effectively increases the convergence speed. Improving the error energy function and scaling the weight updates according to the size of the output error also increase the training speed. Replacing layer-by-layer backpropagation with averaging of the overlapping parts of the backpropagated error of the first hidden layer along a frame, while gradually increasing the number of training samples, further increases the convergence speed. For multi-class phonemic modular TDNNs, we improve the architecture of Waibel's modular networks and obtain an optimal tree-structured modular TDNN that accelerates learning; its training time is less than that of Waibel's modular TDNNs.
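The Oja-rule pass mentioned for initial weight training can be sketched as follows; this is a minimal single-unit version (the learning rate, epochs, and data are assumptions), showing how the unsupervised update pulls a weight vector toward the first principal component of the inputs before backpropagation takes over.

```python
import numpy as np

def oja_init(X, lr=0.01, epochs=20):
    """Oja's rule: w <- w + lr * y * (x - y * w), with y = w . x.
    The weight vector converges toward the first principal component
    of X, giving an unsupervised initialization. Minimal sketch."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += lr * y * (x - y * w)   # normalized Hebbian update
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
print(oja_init(X))  # roughly aligned with the dominant variance direction
```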
{"title":"A novel fast learning algorithms for time-delay neural networks","authors":"J. Minghu, Z. Xiaoyan","doi":"10.1109/IJCNN.1999.831164","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831164","url":null,"abstract":"To counter the drawbacks of long training time required by Waibel's time-delay neural networks (TDNN) in phoneme recognition, the paper puts forward several improved fast learning methods for TDNN. Merging the unsupervised Oja rule and the similar error backpropagation algorithm for initial training of TDNN weights can effectively increase the convergence speed. Improving the error energy function and updating the changing of weights according to size of output error, can increase the training speed. From backpropagation along layer, to average overlap part of backpropagation error of the first hidden layer along a frame, the training samples gradually increase the convergence speed increases. For multi-class phonemic modular TDNNs, we improve the architecture of Waibel's modular networks, and obtain an optimum modular TDNNs of tree structure to accelerate its learning. Its training time is less than Waibel's modular TDNNs.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121524201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new segmentation algorithm for handwritten word recognition
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.833544
M. Blumenstein, B. Verma
An algorithm for segmenting unconstrained printed and cursive words is proposed. The algorithm initially oversegments handwritten word images (for training and testing) using heuristics and feature detection. An artificial neural network (ANN) is then trained with global features extracted from segmentation points found in words designated for training. Segmentation points located in "test" word images are subsequently extracted and verified using the trained ANN. Two major sets of experiments were conducted, resulting in segmentation accuracies of 75.06% and 76.52%. The handwritten words used for experimentation were taken from the CEDAR CD-ROM, so the segmentation results can readily be compared with those of other researchers using the same benchmark database.
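The oversegment-then-verify structure might look like the sketch below, where a crude column-density heuristic proposes candidate split points and a trained ANN (stubbed here as a hypothetical `net`) accepts or rejects them. The paper's heuristics, feature detection, and global features are richer than this.

```python
import numpy as np

def candidate_splits(binary_word, min_gap=2):
    """Heuristic oversegmentation sketch: columns whose ink density is
    low are proposed as segmentation points. Only illustrates the
    oversegment-then-verify pipeline, not the paper's heuristics."""
    density = binary_word.sum(axis=0)             # ink per column
    threshold = 0.25 * density.max()
    candidates, last = [], -min_gap
    for x, d in enumerate(density):
        if d <= threshold and x - last >= min_gap:
            candidates.append(x)
            last = x
    return candidates

def verify(features, net):
    """Stage two: a trained ANN accepts or rejects each candidate.
    `net` stands in for the trained classifier (hypothetical stub)."""
    return [f for f in features if net.predict(f)]

# Toy 'word': two blobs of ink separated by a sparse column region.
img = np.zeros((20, 30), dtype=int)
img[5:15, 2:12] = 1
img[5:15, 18:28] = 1
print(candidate_splits(img))   # proposes split columns in the gap
```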
{"title":"A new segmentation algorithm for handwritten word recognition","authors":"M. Blumenstein, B. Verma","doi":"10.1109/IJCNN.1999.833544","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.833544","url":null,"abstract":"An algorithm for segmenting unconstrained printed and cursive words is proposed. The algorithm initially oversegments handwritten word images (for training and testing) using heuristics and feature detection. An artificial neural network (ANN) is then trained with global features extracted from segmentation points found in words designated for training. Segmentation points located in \"test\" word images are subsequently extracted and verified using the trained ANN. Two major sets of experiments were conducted, resulting in segmentation accuracies of 75.06% and 76.52%. The handwritten words used for experimentation were taken from the CEDAR CD-ROM. The results obtained for segmentation can easily be used for comparison with other researchers using the same benchmark database.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121556604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An approximate equivalence neural network to conventional neural network for the worst-case identification and control of nonlinear system
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.832711
Jin-Tsong Jeng, Tsu-Tian Lee
In this paper, we propose an approximate equivalence neural network model with a fast learning speed as well as a good function approximation capability, together with a new objective function that satisfies the H∞-induced norm, to solve the worst-case identification and control of nonlinear systems. The approximate equivalence neural network not only has the same universal approximation capability but also learns faster than conventional feedforward/recurrent neural networks. Based on this approximate transformable technique, the relationship between single-layered neural networks and multilayer perceptron networks is derived. It is shown that an approximate equivalence neural network can be represented as a functional link network based on Chebyshev polynomials. We also derive a new learning algorithm such that the infinity norm of the transfer function from the input to the output is kept under a prescribed level. It turns out that the approximate equivalence neural network can be extended to the worst-case identification and control of nonlinear systems.
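A functional link network based on Chebyshev polynomials can be sketched as below: the input is expanded with the Chebyshev recurrence and a single linear layer does the rest, which is what removes the hidden-layer training cost. The least-squares fit here stands in for the paper's learning algorithm, and the target function is an arbitrary example.

```python
import numpy as np

def chebyshev_features(x, order=5):
    """Expand a scalar input (scaled to [-1, 1]) with Chebyshev
    polynomials T_0..T_order via T_{k+1} = 2 x T_k - T_{k-1}."""
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T, axis=1)

# Functional-link sketch: a linear layer on Chebyshev features
# approximates a nonlinear map without any hidden layer to train.
x = np.linspace(-1, 1, 200)
y = np.sin(np.pi * x)                        # example target function
Phi = chebyshev_features(x, order=7)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.max(np.abs(Phi @ w - y)))           # small approximation error
```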
{"title":"An approximate equivalence neural network to conventional neural network for the worst-case identification and control of nonlinear system","authors":"Jin-Tsong Jeng, Tsu-Tian Lee","doi":"10.1109/IJCNN.1999.832711","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.832711","url":null,"abstract":"In this paper, we propose an approximate equivalence neural network model with a fast learning speed as well as a good function approximation capability, and a new objective function, which satisfies the H/sup /spl infin// induced norm to solve the worst-case identification and control of nonlinear problems. The approximate equivalence neural network not only has the same capability of universal approximator, but also has a faster learning speed than the conventional feedforward/recurrent neural networks. Based on this approximate transformable technique, the relationship between the single-layered neural network and multilayered perceptrons neural network is derived. It is shown that a approximate equivalence neural network can be represented as a functional link network that is based on Chebyshev polynomials. We also derive a new learning algorithm such that the infinity norm of the transfer function from the input to the output is under a prescribed level. It turns out that the approximate equivalence neural network can be extended to do the worst-case problem, in the identification and control of nonlinear problems.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121642708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating spatial and temporal mechanisms in auditory neural fiber's computational model
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.831501
Lu Xugang, Chen Daowen
In traditional speech signal processing methods and current auditory-based methods, features are extracted from the power spectrum; that is, a spatial or temporal mechanism is used to simulate the frequency response of the cochlea. The disadvantage of these methods is that noise and tone signals are processed equally, whereas the auditory system in fact perceives noise and periodic stimulation with different sensitivity: if the stimulation is noise, the audible threshold is high and the gain is low; on the contrary, if the stimulation is a periodic time series, the audible threshold is low and the gain is high. That is the temporal processing aspect. In this paper, spatial and temporal mechanisms are integrated in the neural firing response, so the representation not only captures the average firing rate of neural fibers but also enhances the periodic components of the stimulation. This representation thus combines the merits of the two processing methods.
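One simple way to realize such a periodicity-sensitive gain is a normalized autocorrelation peak, sketched below; this illustrates the temporal principle (high gain for periodic input, low gain for noise) and is not the paper's fiber model. The sampling rate and tone frequency are arbitrary assumptions.

```python
import numpy as np

def periodicity_gain(frame):
    """Crude temporal-processing sketch: the normalized autocorrelation
    peak (excluding lag 0) is high for periodic input and low for noise,
    so it can serve as a gain that favors periodic stimulation."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac /= ac[0] + 1e-12                      # normalize by signal energy
    return ac[1:].max()                      # peak at the dominant period

fs = 8000
t = np.arange(1024) / fs
tone = np.sin(2 * np.pi * 200 * t)           # periodic stimulation
noise = np.random.default_rng(0).normal(size=t.size)
print(periodicity_gain(tone), periodicity_gain(noise))  # high vs. low
```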
{"title":"Integrating spatial and temporal mechanisms in auditory neural fiber's computational model","authors":"Lu Xugang, Chen Daowen","doi":"10.1109/IJCNN.1999.831501","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831501","url":null,"abstract":"In traditional speech signal processing methods and current auditory based methods, features are extracted based on power spectrum, that is, spatial or temporal mechanism is used to simulate the frequency response of our cochlear function. The disadvantage of these methods are that noise and tone signals are processed equally, but, in fact, our auditory system percepts noise and periodic stimulation with different sensitivity: if the stimulation is noise, the audible threshold is high, and the gain for noise is low. On the contrary, if the stimulation is periodic time series, then the auditory system's audible threshold will be low and the gain will be high, that is the temporal processing aspect. In this paper, spatial and temporal mechanisms are integrated in neural firing response, thus the representation not only represents the average firing rate of neural fibers, but also enhances the periodic components of the stimulation. Thus, this representation can have both merits of the two processing methods.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114712009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural maps for mobile robot navigation
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.832693
M. Lagoudakis, A. Maida
Neural maps have recently been proposed as an alternative method for mobile robot path planning. However, these proposals are mostly theoretical and primarily concerned with biological plausibility. This paper addresses the applicability of neural maps to mobile robot navigation with a focus on efficient implementations. It is suggested that neural maps offer a promising alternative to the traditional distance transform and harmonic function methods. Applications of neural maps are presented for both global and local navigation. Experimental results (both simulated and real-world, on a Nomad 200 mobile robot) demonstrate the validity of the approach. Our work reveals that a key issue for the success of the method is the organization of the map, which needs to be optimized for the situation at hand.
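A toy neural-map relaxation for grid navigation, in the spirit of the activity-spreading maps discussed here: each cell is a unit whose activity relaxes toward the discounted maximum of its neighbors, the goal is clamped high, obstacles are clamped to zero, and gradient ascent on the settled activity yields a path. The discount factor, clamping scheme, and grid are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def neural_map(grid, goal, gamma=0.95, iters=200):
    """Relax each free cell toward gamma * max(4-neighbors); the goal is
    clamped to 1 and obstacle units to 0. Toy sketch of a neural map."""
    h, w = grid.shape
    act = np.zeros((h, w))
    for _ in range(iters):
        act[goal] = 1.0
        new = act.copy()
        for y in range(h):
            for x in range(w):
                if grid[y, x] or (y, x) == goal:
                    continue
                nbrs = [act[y + dy, x + dx]
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w]
                new[y, x] = gamma * max(nbrs)
        act = new
        act[grid == 1] = 0.0     # obstacles stay silent
    return act

grid = np.zeros((10, 10), dtype=int)
grid[4, 1:10] = 1                      # wall with a gap at column 0
act = neural_map(grid, goal=(9, 9))
pos, path = (0, 0), [(0, 0)]
for _ in range(40):                    # follow the activity gradient
    y, x = pos
    pos = max(((y + dy, x + dx) for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
               if 0 <= y + dy < 10 and 0 <= x + dx < 10), key=lambda p: act[p])
    path.append(pos)
    if pos == (9, 9):
        break
print(path)                            # route through the gap to the goal
```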
{"title":"Neural maps for mobile robot navigation","authors":"M. Lagoudakis, A. Maida","doi":"10.1109/IJCNN.1999.832693","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.832693","url":null,"abstract":"Neural maps have been recently proposed as an alternative method for mobile robot path planning. However, these proposals are mostly theoretical and primarily concerned with biological plausibility. This paper addresses the applicability of neural maps to mobile robot navigation with focus on efficient implementations It is suggested that neural maps offer a promising alternative compared to the traditional distance transform and harmonic function methods. Applications of neural maps are presented for both global and local navigation. Experimental results (both simulated and real-world on a Nomad 200 mobile robot) demonstrate the validity of the approach. Our work reveals that a key issue for success of the method is the organization of the map that needs to be optimized for the situation at hand.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124333413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A class of learning for optimal generalization
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.832654
A. Hirabayashi, Gintaras Ogawa
Learning a mapping from training data can be discussed from the viewpoint of function approximation. One of the authors, Ogawa (1995), proposed projection learning, partial projection learning, and averaged projection learning to obtain good generalization capability, and devised the concept of a family of projection learnings that includes these three. This provided a framework for discussing an infinite variety of learning methods. Conventional definitions of the family, however, did not represent the concept appropriately and inhibited development of the theory. In this paper, we propose a new and natural definition and discuss properties of the family, which provide the foundations for future studies of the family of projection learnings.
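For concreteness, here is a hedged LaTeX sketch of the operator setting in which projection learning is usually stated; this is our reconstruction of the standard formulation, and Ogawa (1995) gives the precise definitions (the family's improved definition being the subject of the paper itself).

```latex
% Learning as function approximation (hedged reconstruction, not the
% paper's exact notation). The target f lives in a Hilbert space H, a
% sampling operator A maps it to training outputs corrupted by noise n,
% and a learning operator X maps the samples back to an estimate of f:
\[
  y = A f + n, \qquad \hat{f} = X y .
\]
% Projection learning constrains X so that, in the noiseless case, the
% estimate is the orthogonal projection of f onto the subspace that the
% samples can determine (the range of the adjoint A^{*}):
\[
  X A = P_{\mathcal{R}(A^{*})} .
\]
% Members of a "family of projection learnings" then differ in how the
% remaining freedom in X is used to suppress the noise term X n.
```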
{"title":"A class of learning for optimal generalization","authors":"A. Hirabayashi, Gintaras Ogawa","doi":"10.1109/IJCNN.1999.832654","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.832654","url":null,"abstract":"Learning a mapping from training data can be discussed from the viewpoint of function approximation. One of the authors, Ogawa (1995), proposed projection learning, partial projection learning, and averaged projection learning to obtain good generalization capability, and devised the concept of a family of projection learnings which includes these three kinds of projection learnings. This provided a framework to discuss an infinite kind of learning. Conventional definitions of the family, however, did not represent the concept appropriately and inhibited development of the theory. In this paper, we propose a new and natural definition and discuss properties of the family, which provide the foundations of future studies of the family of projection learnings.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124391304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning-data composition and recognition using fractal parameters
Pub Date: 1999-07-10 | DOI: 10.1109/IJCNN.1999.833540
Jae-Hyun Cho, Chul-Woo Park, E. Cha
This paper describes a practical equation for estimating the fractal dimension (FD) of images and discusses the recognition model to which it is applicable. The FD is applied to pre-estimate the quantity of information that can be used to recognize images.
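A common practical FD estimator is box counting, sketched below; the paper's equation may differ, so treat this only as an example of estimating the FD of a binary image. The box sizes and test image are assumptions.

```python
import numpy as np

def box_counting_fd(binary, sizes=(1, 2, 4, 8, 16)):
    """Box-counting FD estimate: count boxes of side s that contain any
    foreground pixel, then fit the slope of log N(s) vs. log(1/s).
    One standard estimator; not necessarily the paper's equation."""
    counts = []
    h, w = binary.shape
    for s in sizes:
        n = sum(binary[y:y + s, x:x + s].any()
                for y in range(0, h, s) for x in range(0, w, s))
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

img = np.zeros((64, 64), dtype=bool)
img[32, :] = True                      # a line: FD should be close to 1
print(box_counting_fd(img))
```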
{"title":"Learning-data composition and recognition using fractal parameters","authors":"Jae-Hyun Cho, Chul-Woo Park, E. Cha","doi":"10.1109/IJCNN.1999.833540","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.833540","url":null,"abstract":"This paper describes a practical equation for estimating the fractal dimensions (FD) of images and discusses the recognition model for which it is applicable. The FD is applied to pre-estimate quantities of the information that can be used to recognize images.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124035105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}