A four-dimensional neural network model with delay is investigated. Using the theory of delay differential equations and Hopf bifurcation theory, conditions under which the equilibrium undergoes a Hopf bifurcation are worked out by choosing the delay as the bifurcation parameter. Applying normal form theory and the center manifold argument, we derive explicit formulae for determining the properties of the bifurcating periodic solutions. Numerical simulations are performed to illustrate the analytical results.
{"title":"Dynamical Behavior in a Four-Dimensional Neural Network Model with Delay","authors":"Changjin Xu, Peiluan Li","doi":"10.1155/2012/397146","DOIUrl":"https://doi.org/10.1155/2012/397146","PeriodicalId":7288,"journal":{"name":"Adv. Artif. Neural Syst.","volume":"29 1","pages":"397146:1-397146:11"},"publicationDate":"2012-01-01","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"84743224"}
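As a minimal illustration of the delay-induced Hopf mechanism the abstract describes, the sketch below integrates the scalar test equation x'(t) = -a·x(t-τ) (not the paper's four-dimensional model) with a fixed-step Euler scheme and a history buffer. The critical delay τ* = π/(2a) comes from the standard characteristic-equation analysis λ = -a·e^(-λτ).

```python
import numpy as np

def simulate_dde(a=1.0, tau=1.0, x0=1.0, T=50.0, dt=0.01):
    """Fixed-step Euler for x'(t) = -a * x(t - tau) with constant pre-history x0."""
    n = int(T / dt)
    d = int(tau / dt)                 # delay expressed in time steps
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x_delayed = x[k - d] if k >= d else x0   # history buffer lookup
        x[k + 1] = x[k] + dt * (-a * x_delayed)
    return x

# Below the critical delay tau* = pi/(2a) the equilibrium is stable;
# beyond it a Hopf bifurcation produces growing oscillations.
stable = simulate_dde(a=1.0, tau=1.0)    # a*tau = 1.0 < pi/2: decays
critical = simulate_dde(a=1.0, tau=2.0)  # a*tau = 2.0 > pi/2: oscillates and grows
```

Choosing the delay as the bifurcation parameter, as in the paper, amounts to sweeping `tau` and watching the equilibrium lose stability at τ*.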
This paper considers applications of resampling methods to support vector machines (SVMs). Leave-one-out cross-validation (CV) is used to determine the optimal tuning parameters, and the deviance is bootstrapped to summarize the goodness of fit of SVMs. Leave-one-out CV is also adapted to estimate the bias of the excess error in a prediction rule constructed from training samples. We analyze data from a mackerel-egg survey and a liver-disease study.
{"title":"Cross-Validation, Bootstrap, and Support Vector Machines","authors":"M. Tsujitani, Yusuke Tanaka","doi":"10.1155/2011/302572","DOIUrl":"https://doi.org/10.1155/2011/302572","PeriodicalId":7288,"journal":{"name":"Adv. Artif. Neural Syst.","volume":"63 1","pages":"302572:1-302572:6"},"publicationDate":"2011-01-01","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"73200415"}
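A minimal sketch of leave-one-out CV for selecting SVM tuning parameters, assuming scikit-learn and toy synthetic data (the paper's mackerel-egg and liver-disease data sets, and its deviance bootstrap, are not reproduced here):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Toy stand-in data for illustration only.
X, y = make_classification(n_samples=60, n_features=4, random_state=0)

# Leave-one-out CV: each sample is held out once, so the score for a
# candidate (C, gamma) pair is the fraction of correctly predicted holdouts.
best = None
for C in (0.1, 1.0, 10.0):
    for gamma in (0.01, 0.1, 1.0):
        acc = cross_val_score(SVC(C=C, gamma=gamma), X, y,
                              cv=LeaveOneOut()).mean()
        if best is None or acc > best[0]:
            best = (acc, C, gamma)

best_acc, best_C, best_gamma = best
```

The grid of (C, gamma) values here is arbitrary; in practice it would be chosen per data set.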
For the first time, the global dissipativity of a class of cellular neural networks with multipantograph delays is studied. On the one hand, delay-dependent sufficient conditions are obtained by directly constructing suitable Lyapunov functionals; on the other hand, a change of variables first transforms the networks with multipantograph delays into networks with constant delays and variable coefficients, and constructing Lyapunov functionals for the transformed networks then yields delay-independent sufficient conditions. These new sufficient conditions ensure global dissipativity, provide the corresponding attracting sets, can be used to design globally dissipative cellular neural networks with multipantograph delays, and are easily checked in practice by simple algebraic methods. An example illustrates the correctness of the results.
{"title":"On the Global Dissipativity of a Class of Cellular Neural Networks with Multipantograph Delays","authors":"Liqun Zhou","doi":"10.1155/2011/941426","DOIUrl":"https://doi.org/10.1155/2011/941426","PeriodicalId":7288,"journal":{"name":"Adv. Artif. Neural Syst.","volume":"2011 1","pages":"941426:1-941426:7"},"publicationDate":"2011-01-01","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"73763668"}
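The transformation the abstract alludes to is presumably the standard exponential change of variables for pantograph (proportional) delays; a sketch for a scalar delayed term $x(qt)$ with $0 < q < 1$:

```latex
% Substituting t = e^{s} turns the proportional delay qt into a constant lag:
\begin{aligned}
y(s) &:= x(e^{s}), \\
y'(s) &= e^{s}\, x'(e^{s}), \\
x(q e^{s}) &= x\!\left(e^{\,s + \ln q}\right) = y(s - \tau),
  \qquad \tau := -\ln q > 0 .
\end{aligned}
% The factor e^{s} multiplying x'(e^{s}) is what produces the
% variable coefficients in the transformed constant-delay system.
```

This is why the transformed networks have constant delays but variable coefficients, as the abstract states.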
Adaptive natural gradient learning avoids singularities in the parameter space of multilayer perceptrons. However, it requires a larger number of additional parameters than ordinary backpropagation, in the form of the Fisher information matrix. This article describes a new approach to natural gradient learning that uses a smaller Fisher information matrix. It also uses a prior distribution on the neural network parameters and an annealed learning rate. While this new approach is computationally simpler, its performance is comparable to that of adaptive natural gradient learning.
{"title":"A Simplified Natural Gradient Learning Algorithm","authors":"Michael R. Bastian, J. Gunther, T. Moon","doi":"10.1155/2011/407497","DOIUrl":"https://doi.org/10.1155/2011/407497","PeriodicalId":7288,"journal":{"name":"Adv. Artif. Neural Syst.","volume":"44 1","pages":"407497:1-407497:9"},"publicationDate":"2011-01-01","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"87236480"}
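The Fisher-preconditioned update underlying natural gradient learning can be sketched for a model whose Fisher matrix has a closed form; logistic regression is used below as a stand-in (the article's simplified MLP Fisher matrix, prior, and annealed learning rate are not reproduced here):

```python
import numpy as np

def natural_gradient_step(w, X, y, lr=0.5, damping=1e-3):
    """One natural-gradient update for logistic regression.

    For this model the Fisher information matrix is X^T D X with
    D = diag(p * (1 - p)); preconditioning the ordinary gradient by its
    inverse gives the natural-gradient direction (damped for stability).
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
    grad = X.T @ (p - y) / len(y)             # ordinary gradient
    D = p * (1.0 - p)
    fisher = (X * D[:, None]).T @ X / len(y)  # Fisher information matrix
    fisher += damping * np.eye(len(w))        # keep it invertible
    return w - lr * np.linalg.solve(fisher, grad)

# Synthetic data from a noisy linear rule, for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(3)
for _ in range(50):
    w = natural_gradient_step(w, X, y)
```

Contrast with plain gradient descent: the `np.linalg.solve(fisher, grad)` step is what makes the update invariant (approximately) to reparameterization, at the cost of forming and inverting the Fisher matrix, which is the cost the article reduces.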
A novel and effective approach to synchronization analysis of neural networks is investigated using a nonlinear operator, the generalized Dahlquist constant, together with general intermittent control. The proposed approach offers a design procedure for synchronization of a large class of neural networks. Numerical simulations, in which the theoretical results are applied to typical neural networks with and without delay terms, demonstrate the effectiveness and feasibility of the proposed technique.
{"title":"The Generalized Dahlquist Constant with Applications in Synchronization Analysis of Typical Neural Networks via General Intermittent Control","authors":"Zhang Qunli","doi":"10.1155/2011/249136","DOIUrl":"https://doi.org/10.1155/2011/249136","PeriodicalId":7288,"journal":{"name":"Adv. Artif. Neural Syst.","volume":"15 1","pages":"249136:1-249136:7"},"publicationDate":"2011-01-01","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"84105509"}
This paper introduces novel models for all steps of a face recognition system. In the face detection step, we propose a hybrid model combining AdaBoost and an Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, labeled faces detected by ABANN are aligned by an Active Shape Model and a Multilayer Perceptron. In this alignment step, we propose a new 2D local texture model based on a Multilayer Perceptron. The classifier of the model significantly improves the accuracy and robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving efficiency by combining two methods: a geometric-feature-based method and Independent Component Analysis. In the face matching step, we apply a model combining many Neural Networks for matching geometric features of human faces. The model links many Neural Networks together, so we call it Multi Artificial Neural Network. The MIT+CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on the Caltech database show the feasibility of our proposed model.
{"title":"Applying Artificial Neural Networks for Face Recognition","authors":"T. Le","doi":"10.1155/2011/673016","DOIUrl":"https://doi.org/10.1155/2011/673016","PeriodicalId":7288,"journal":{"name":"Adv. Artif. Neural Syst.","volume":"11 1","pages":"673016:1-673016:16"},"publicationDate":"2011-01-01","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"75936356"}
The Self-Organizing Map (SOM) algorithm is widely used for building topographic maps of data represented in a vectorial space, but it does not operate on dissimilarity data. The Soft Topographic Map (STM) algorithm is an extension of SOM to arbitrary distance measures; it creates a map using a set of units, organized in a rectangular lattice, that defines data neighbourhood relationships. In recent years, a new standard for identifying bacteria using genotypic information has been developed. In this approach, phylogenetic relationships of bacteria can be determined by comparing a stable part of the bacterial genetic code, the so-called "housekeeping genes." The goal of this work is to build a topographic representation of bacteria clusters, by means of self-organizing maps, starting from genotypic features of housekeeping genes.
{"title":"Soft Topographic Maps for Clustering and Classifying Bacteria Using Housekeeping Genes","authors":"M. L. Rosa, R. Rizzo, A. Urso","doi":"10.1155/2011/617427","DOIUrl":"https://doi.org/10.1155/2011/617427","PeriodicalId":7288,"journal":{"name":"Adv. Artif. Neural Syst.","volume":"177 1","pages":"617427:1-617427:8"},"publicationDate":"2011-01-01","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"73120611"}
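For orientation, a minimal classical SOM on a rectangular lattice (the paper's STM replaces the Euclidean quantization step with a dissimilarity-based, deterministic-annealing formulation; the data and lattice size below are illustrative):

```python
import numpy as np

def train_som(data, rows=4, cols=4, epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM on a rows x cols rectangular lattice in Euclidean input space."""
    rng = np.random.default_rng(seed)
    # lattice coordinates of the units, used for neighbourhood relationships
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    W = rng.normal(size=(rows * cols, data.shape[1]))   # prototype vectors
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / n_steps
            lr = lr0 * (1 - frac)                        # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5            # shrinking neighbourhood
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best matching unit
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)   # lattice distances to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))           # neighbourhood function
            W += lr * h[:, None] * (x - W)               # pull prototypes toward x
            t += 1
    return W

rng = np.random.default_rng(1)
# two well-separated clusters; a trained map should place units near both
data = np.vstack([rng.normal(-3, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
W = train_som(data)
```

In the paper's setting, the per-sample Euclidean comparison would be replaced by a precomputed dissimilarity between housekeeping-gene sequences.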
A multilayer perceptron (MLP) with the back-propagation learning rule is adopted to predict the winning rates of two teams from their official statistical data at the earlier stages of the 2006 World Cup football games. There are training samples from three classes: win, draw, and loss. At each new stage, new training samples are selected from the previous stages and added to the training set, and the neural network is retrained; this is a type of online learning. The 8 features are selected ad hoc. We use the theorem of Mirchandani and Cao to determine the number of hidden nodes, and after testing for learning convergence the MLP is fixed as an 8-2-3 model. The learning rate and momentum coefficient are determined by cross-learning. The prediction accuracy reaches 75% when draw games are excluded.
{"title":"Multilayer Perceptron for Prediction of 2006 World Cup Football Game","authors":"Kou-Yuan Huang, Kai-Ju Chen","doi":"10.1155/2011/374816","DOIUrl":"https://doi.org/10.1155/2011/374816","PeriodicalId":7288,"journal":{"name":"Adv. Artif. Neural Syst.","volume":"31 1","pages":"374816:1-374816:8"},"publicationDate":"2011-01-01","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"83345554"}
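The 8-2-3 topology and the retrain-when-a-new-stage-arrives scheme can be sketched with scikit-learn's MLPClassifier on synthetic stand-in features (the real inputs are official match statistics, which are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 8 per-match statistical features.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))
y = rng.integers(0, 3, size=120)          # 0 = win, 1 = draw, 2 = loss

# 8-2-3 topology: 8 inputs, one hidden layer of 2 units, 3 output classes,
# trained by SGD back-propagation with a momentum term.
mlp = MLPClassifier(hidden_layer_sizes=(2,), activation="logistic",
                    solver="sgd", learning_rate_init=0.1, momentum=0.9,
                    max_iter=2000, random_state=0)
mlp.fit(X, y)

# Online-style retraining as in the paper: when matches from a new stage
# arrive, append them to the training set and refit on the enlarged set.
X_new = rng.normal(size=(16, 8))
y_new = rng.integers(0, 3, size=16)
mlp.fit(np.vstack([X, X_new]), np.concatenate([y, y_new]))
```

The learning rate and momentum values above are placeholders; the paper tunes both by cross-learning.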
The associative Hopfield memory is a form of recurrent Artificial Neural Network (ANN) that can be used in applications such as pattern recognition, noise removal, information retrieval, and combinatorial optimization problems. This paper presents the implementation of a parallel Hopfield Neural Network (HNN) architecture on an SRAM-based FPGA. The main advantage of the proposed implementation is its high performance and cost effectiveness: it requires O(1) multiplications and O(log N) additions, whereas most others require O(N) multiplications and O(N) additions.
{"title":"An Optimal Implementation on FPGA of a Hopfield Neural Network","authors":"W. Mansour, R. Ayoubi, H. Ziade, R. Velazco, W. Falou","doi":"10.1155/2011/189368","DOIUrl":"https://doi.org/10.1155/2011/189368","PeriodicalId":7288,"journal":{"name":"Adv. Artif. Neural Syst.","volume":"2 1","pages":"189368:1-189368:9"},"publicationDate":"2011-01-01","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"81346083"}
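For reference, the software version of the recall operation that the FPGA architecture parallelizes: Hebbian storage plus iterated sign updates. This is a plain per-neuron software sketch, not the paper's reduced-multiplication hardware scheme:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for bipolar (+/-1) patterns; zero self-connections."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=10):
    """Synchronous sign updates until a fixed point (or the step limit)."""
    x = np.asarray(x, dtype=float)
    for _ in range(steps):
        nxt = np.where(W @ x >= 0, 1.0, -1.0)
        if np.array_equal(nxt, x):          # converged to a stored attractor
            break
        x = nxt
    return x

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)
W = train_hopfield([pattern])
noisy = pattern.copy()
noisy[0] = -noisy[0]                        # corrupt one bit
restored = recall(W, noisy)                 # noise removal via recall
```

Each update computes N weighted sums of N terms; the paper's contribution is restructuring exactly this step so the hardware needs O(1) multiplications and O(log N) additions.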
The increased complexity of plants and the development of sophisticated control systems have encouraged the parallel development of efficient rapid fault detection and isolation (FDI) systems. FDI in industrial systems has lately become of great significance. This paper proposes a new technique for short-time fault detection and diagnosis in nonlinear dynamic systems with multiple inputs and multiple outputs. The main contribution of this paper is an FDI scheme based on reference models of fault-free and faulty behaviors designed with neural networks. Fault detection is obtained from residuals that result from comparing measured signals with the outputs of the fault-free reference model. Then, the Euclidean distance from the outputs of the fault models to the measurements leads to fault isolation. The advantage of this method is that it provides not only early detection but also early diagnosis, thanks to the parallel computation of the fault models and the proposed decision algorithm. The effectiveness of this approach is illustrated with simulations on the DAMADICS benchmark.
{"title":"Early FDI Based on Residuals Design According to the Analysis of Models of Faults: Application to DAMADICS","authors":"Y. Kourd, D. Lefebvre, N. Guersi","doi":"10.1155/2011/453169","DOIUrl":"https://doi.org/10.1155/2011/453169","PeriodicalId":7288,"journal":{"name":"Adv. Artif. Neural Syst.","volume":"48 1","pages":"453169:1-453169:10"},"publicationDate":"2011-01-01","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"83108401"}
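The residual-based detection plus Euclidean-distance isolation logic can be sketched with stand-in models (the paper uses neural-network reference models of fault-free and faulty behaviour; the functions and threshold below are hypothetical placeholders):

```python
import numpy as np

# Hypothetical reference models standing in for the paper's trained networks.
def fault_free_model(u):
    return 2.0 * u                 # nominal input-output behaviour

def fault_1_model(u):
    return 2.0 * u + 1.5           # additive offset fault signature

def fault_2_model(u):
    return 0.5 * u                 # gain degradation fault signature

def detect_and_isolate(u, y_measured, threshold=0.5):
    """Detect from the fault-free residual, then isolate by nearest fault model."""
    residual = y_measured - fault_free_model(u)
    if np.linalg.norm(residual) <= threshold:
        return "no fault"
    # Isolation: the fault model closest to the measurements (Euclidean distance).
    models = {"fault 1": fault_1_model, "fault 2": fault_2_model}
    return min(models, key=lambda k: np.linalg.norm(y_measured - models[k](u)))

u = np.linspace(0.0, 1.0, 20)
healthy = detect_and_isolate(u, 2.0 * u)        # matches the fault-free model
faulty = detect_and_isolate(u, 2.0 * u + 1.5)   # matches the fault-1 signature
```

Because every model can be evaluated on the same measurements in parallel, detection and isolation happen in the same pass, which is the source of the early-diagnosis property the abstract claims.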