Pub Date: 2011-12-01. Epub Date: 2011-10-17. DOI: 10.1109/TNN.2011.2170095
Chi-Yuan Yeh, Wen-Hau Roger Jeng, Shie-Jue Lee
We propose a novel approach for building a type-2 neural-fuzzy system from a given set of input-output training data. A self-constructing fuzzy clustering method is used to partition the training dataset into clusters through input-similarity and output-similarity tests. The membership function associated with each cluster is defined by the mean and deviation of the data points included in the cluster. A type-2 fuzzy Takagi-Sugeno-Kang IF-THEN rule is then derived from each cluster to form a fuzzy rule base. A fuzzy neural network is constructed accordingly, and the associated parameters are refined by a hybrid learning algorithm that combines particle swarm optimization with least-squares estimation. For a new input, a corresponding crisp output of the system is obtained by combining the inferred results of all the rules into a type-2 fuzzy set, which is then defuzzified by applying a refined type-reduction algorithm. Experimental results are presented to demonstrate the effectiveness of the proposed approach.
Title: Data-based system modeling using a type-2 fuzzy neural network with a hybrid learning algorithm. IEEE Transactions on Neural Networks, vol. 22, no. 12, pp. 2296-2309.
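The input-similarity/output-similarity partitioning described above can be sketched in a few lines; the 1-D data, the thresholds `rho_in`/`rho_out`, and the first-fit assignment rule are all illustrative assumptions, not the paper's exact procedure:

```python
def self_construct(data, rho_in=1.0, rho_out=1.0):
    """Assign each (x, y) pair to the first cluster whose input mean is
    within rho_in and whose output mean is within rho_out; otherwise the
    pair seeds a new cluster."""
    clusters = []  # each cluster: {'xs': [...], 'ys': [...]}
    for x, y in data:
        for c in clusters:
            mx = sum(c['xs']) / len(c['xs'])
            my = sum(c['ys']) / len(c['ys'])
            if abs(x - mx) <= rho_in and abs(y - my) <= rho_out:
                c['xs'].append(x)
                c['ys'].append(y)
                break
        else:
            clusters.append({'xs': [x], 'ys': [y]})
    return clusters

# two well-separated input-output groups -> two clusters
data = [(0.0, 0.1), (0.2, 0.0), (5.0, 9.0), (5.3, 9.2)]
cl = self_construct(data)
```

In the full method, each cluster's mean and deviation would then parameterize a type-2 membership function for one rule.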
Pub Date: 2011-12-01. Epub Date: 2011-09-26. DOI: 10.1109/TNN.2011.2165729
Yu Jiang, Zhong-Ping Jiang
This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of an algebraic Riccati equation that yields the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of the time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
Title: Approximate dynamic programming for optimal stationary control with control-dependent noise. IEEE Transactions on Neural Networks, vol. 22, no. 12, pp. 2392-2398.
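The policy-iteration idea (solve a Lyapunov equation for the cost of the current gain, then improve the gain) can be illustrated on the deterministic scalar LQR problem; the noise terms and the Itô-calculus machinery of the brief are omitted, so this is only a structural sketch:

```python
def policy_iteration_scalar(a, b, q, r, k0, iters=10):
    """Kleinman-type policy iteration for the scalar continuous-time LQR
    x' = a*x + b*u with cost integrand q*x**2 + r*u**2.
    Each step solves the scalar Lyapunov equation
    2*(a - b*k)*p + q + r*k**2 = 0 for the cost p of the current gain k,
    then improves the gain: k <- b*p/r."""
    k = k0
    for _ in range(iters):
        assert a - b * k < 0, "current gain must be stabilizing"
        p = (q + r * k * k) / (2.0 * (b * k - a))  # Lyapunov solve
        k = b * p / r                              # policy improvement
    return p, k

# for a=b=q=r=1 the ARE 2*p - p**2 + 1 = 0 gives p* = 1 + sqrt(2)
p, k = policy_iteration_scalar(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0)
```

The iteration converges quadratically to the ARE solution, mirroring the convergence-in-expectation result stated in the abstract.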
Pub Date: 2011-12-01. Epub Date: 2011-10-25. DOI: 10.1109/TNN.2011.2170093
Luka Teslic, Benjamin Hartmann, Oliver Nelles, Igor Skrjanc
This paper deals with the problem of fuzzy nonlinear model identification in the framework of a local model network (LMN). A new iterative identification approach is proposed in which supervised and unsupervised learning are combined to optimize the structure of the LMN. To fit the cluster centers to the process nonlinearity, Gustafson-Kessel (GK) fuzzy clustering, i.e., unsupervised learning, is applied. In combination with the LMN learning procedure, a new incremental method is proposed to define the number and the initial locations of the cluster centers for the GK clustering algorithm. Each data cluster corresponds to a local region of the process and is modeled with a local linear model. Since the validity functions are calculated from the fuzzy covariance matrices of the clusters, they are highly adaptable, and the process can therefore be described with very few local models, i.e., with a parsimonious LMN model. The proposed method for constructing the LMN is finally tested on a drug absorption spectral process and compared with two other methods, namely Lolimot and Hilomot. A comparison of the experimental results obtained with each method shows the usefulness of the proposed identification algorithm.
Title: Nonlinear system identification by Gustafson-Kessel fuzzy clustering and supervised local model network learning for the drug absorption spectra process. IEEE Transactions on Neural Networks, pp. 1941-1951.
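The LMN prediction step (blend local linear models through validity functions) might be sketched as follows; scalar inputs and plain Gaussian validity functions stand in for the fuzzy-covariance-based validity functions used in the paper:

```python
import math

def lmn_predict(x, local_models):
    """Local model network output: each local model is a tuple
    (center, width, slope, offset). Gaussian validity functions are
    normalized to sum to one, then weight the local linear predictions."""
    w = [math.exp(-0.5 * ((x - c) / s) ** 2) for c, s, _, _ in local_models]
    total = sum(w)
    return sum(wi / total * (a * x + b)
               for wi, (_, _, a, b) in zip(w, local_models))

# two local linear models roughly approximating |x|
models = [(-2.0, 1.0, -1.0, 0.0), (2.0, 1.0, 1.0, 0.0)]
y = lmn_predict(-2.0, models)
```

Near a cluster center the matching local model dominates, which is why a few well-placed clusters can yield a parsimonious global model.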
Pub Date: 2011-12-01. Epub Date: 2011-10-28. DOI: 10.1109/TNN.2011.2168536
Bruno Apolloni, Simone Bassis, Elena Pagani, Gian Paolo Rossi, Lorenzo Valerio
We introduce a wait-and-chase scheme that models the contact times between moving agents within a connectionist construct. The idea that elementary processors move within a network to reach a proper position is borne out both by biological neurons during brain morphogenesis and by agents within social networks. From the former, we take inspiration to devise a medium-term project for new artificial neural network training procedures in which mobile neurons exchange data only when they are close to one another in a proper space (i.e., are in contact). From the latter, we draw on accumulated experience with mobility tracks. We focus on the preliminary step of characterizing the elapsed time between neuron contacts, which results from a spatial process in the family of random processes with memory, where chasing neurons are stochastically driven by the goal of hitting target neurons. We thereby add a new mobility model to the literature, introducing a distribution law for the intercontact times that merges features of both the negative exponential and the Pareto distribution laws. We give a constructive description and implementation of our model, as well as a short analytical form whose parameters are suitably estimated in terms of confidence intervals from experimental data. Numerical experiments show the model and related inference tools to be sufficiently robust to cope with two main requisites for its exploitation in a neural network: the nonindependence of the observed intercontact times and the feasibility of the model inversion problem to infer suitable mobility parameters.
Title: Mobility timing for agent communities, a cue for advanced connectionist systems. IEEE Transactions on Neural Networks, pp. 2032-2049.
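A mixture that glues an exponential body to a Pareto tail can illustrate the flavor of such a merged intercontact-time law; the rate, tail exponent, and switch point below are arbitrary choices, not the paper's fitted parameters:

```python
import random

def intercontact_sample(rate=1.0, alpha=2.0, tail_start=3.0, rng=random):
    """Draw one intercontact time: exponential body below tail_start,
    Pareto-distributed excursion (minimum tail_start) above it.
    Illustrative mixture only, not the paper's exact distribution law."""
    t = rng.expovariate(rate)
    if t <= tail_start:
        return t
    # replace the rare long draw with a heavy Pareto tail
    return tail_start * (1.0 - rng.random()) ** (-1.0 / alpha)

rng = random.Random(7)
samples = [intercontact_sample(rng=rng) for _ in range(200)]
```

Short gaps behave memorylessly, while long gaps decay polynomially, the qualitative signature of intercontact times reported for mobile agents.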
Pub Date: 2011-11-01. Epub Date: 2011-10-03. DOI: 10.1109/TNN.2011.2169426
Tatt Hee Oong, Nor Ashidi Mat Isa
This paper presents a new evolutionary approach, called the hybrid evolutionary artificial neural network (HEANN), for simultaneously evolving the topology and weights of artificial neural networks (ANNs). Evolutionary algorithms (EAs) with strong global search capabilities are likely to locate the most promising region of the search space, but they are less efficient at fine-tuning solutions locally. HEANN emphasizes balancing global search and local search in the evolutionary process by adapting the mutation probability and the step size of the weight perturbation. This distinguishes it from most previous studies, which incorporate EAs to search for the network topology and gradient learning for weight updating. Four benchmark functions were used to test the evolutionary framework of HEANN. In addition, HEANN was tested on seven classification benchmark problems from the UCI machine learning repository. Experimental results show the superior performance of HEANN in fine-tuning the network complexity within a small number of generations while preserving generalization capability, compared with other algorithms.
Title: Adaptive evolutionary artificial neural networks for pattern classification. IEEE Transactions on Neural Networks, vol. 22, no. 11, pp. 1823-1836.
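One simple way to adapt a mutation probability to the progress of the search, in the spirit of balancing exploration and exploitation, is a multiplicative update with clamping; the factors and bounds here are illustrative, not HEANN's actual adaptation rule:

```python
def adapt_mutation(p, improved, p_min=0.01, p_max=0.5):
    """Shrink the mutation probability when the best fitness improved
    (exploit the current region), grow it when the search stagnated
    (explore more broadly); clamp to [p_min, p_max]."""
    p = p * 0.9 if improved else p * 1.1
    return min(max(p, p_min), p_max)
```

An analogous multiplicative rule can drive the step size of the weight perturbation, so both knobs mentioned in the abstract respond to search progress.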
Pub Date: 2011-11-01. Epub Date: 2011-09-26. DOI: 10.1109/TNN.2011.2166275
Jianxiong Zhang, Wansheng Tang, Pengsheng Zheng
In this paper, we investigate the ultimate bound and positively invariant set for a class of Hopfield neural networks (HNNs) based on the Lyapunov stability criterion and Lagrange multiplier method. It is shown that a hyperelliptic estimate of the ultimate bound and positively invariant set for the HNNs can be calculated by solving a linear matrix inequality (LMI). Furthermore, the global stability of the unique equilibrium and the instability region of the HNNs are analyzed, respectively. Finally, the most accurate estimate of the ultimate bound and positively invariant set can be derived by solving the corresponding optimization problems involving the LMI constraints. Some numerical examples are given to illustrate the effectiveness of the proposed results.
Title: Estimating the ultimate bound and positively invariant set for a class of Hopfield networks. IEEE Transactions on Neural Networks, vol. 22, no. 11, pp. 1735-1743.
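The notion of an ultimate bound can be checked numerically on a scalar Hopfield-type unit x' = -x + w*tanh(x) + u: since |tanh| <= 1, we get |x|' <= -|x| + |w| + |u|, so trajectories eventually enter the ball of radius |w| + |u|. A quick Euler simulation (parameters chosen purely for illustration, far simpler than the paper's LMI-based estimates) confirms this:

```python
import math

def simulate_hopfield_unit(w=2.0, u=1.0, x0=10.0, dt=0.01, steps=2000):
    """Euler-integrate x' = -x + w*tanh(x) + u from x0."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + w * math.tanh(x) + u)
    return x

x = simulate_hopfield_unit()
# theoretical ultimate bound: |x| <= |w| + |u| = 3
```

The trajectory settles at the equilibrium x = 2*tanh(x) + 1 (about 2.99), strictly inside the bound, illustrating a positively invariant set.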
Pub Date: 2011-11-01. Epub Date: 2011-09-26. DOI: 10.1109/TNN.2011.2167239
Ryad Benosman, Sio-Hoï Ieng, Paul Rogister, Christoph Posch
Epipolar geometry, the cornerstone of perspective stereo vision, has been studied extensively since the advent of computer vision. Establishing such a geometric constraint is of primary importance, as it allows the recovery of the 3-D structure of scenes. Estimating the epipolar constraints of nonperspective stereo is difficult: they can no longer be defined in the usual way because of the complexity of the sensor geometry. This paper shows that these limitations are, to some extent, a consequence of the static image frames commonly used in vision. The conventional frame-based approach suffers from a lack of the dynamics present in natural scenes. We introduce the use of neuromorphic event-based, rather than frame-based, vision sensors for perspective stereo vision. This type of sensor uses the dimension of time as the main conveyor of information. In this paper, we present a model for asynchronous event-based vision, which is then used to derive a new general concept of epipolar geometry linked to the temporal activation of pixels. Practical experiments demonstrate the validity of the approach, solving the problem of estimating the fundamental matrix, applied first to classic perspective vision and then to more general cameras. Furthermore, this paper shows that the properties of event-based vision sensors allow the exploration of not-yet-defined geometric relationships. Finally, we provide a definition of general epipolar geometry deployable to almost any visual sensor.
Title: Asynchronous event-based Hebbian epipolar geometry. IEEE Transactions on Neural Networks, vol. 22, no. 11, pp. 1723-1734.
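The core temporal cue, that corresponding pixels on two event-based sensors fire at nearly the same instant, can be sketched as a greedy timestamp-coincidence matcher; the data layout and tolerance below are assumptions for illustration, not the paper's estimation procedure:

```python
def match_events(left, right, eps=1e-3):
    """Greedily pair events (timestamp, pixel) from two asynchronous
    sensors whose timestamps coincide within eps -- time, not a shared
    frame, is what associates pixels across the two views.
    Both streams are assumed sorted by timestamp."""
    matches, j = [], 0
    for tl, pl in left:
        while j < len(right) and right[j][0] < tl - eps:
            j += 1
        if j < len(right) and abs(right[j][0] - tl) <= eps:
            matches.append((pl, right[j][1]))
            j += 1
    return matches

left = [(0.010, 'a'), (0.020, 'b'), (0.035, 'c')]
right = [(0.0101, 'A'), (0.0302, 'X'), (0.0351, 'C')]
pairs = match_events(left, right)
```

Accumulating enough such temporally matched pairs is what would feed a fundamental-matrix estimate in the frame-free setting.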
Pub Date: 2011-11-01. Epub Date: 2011-09-29. DOI: 10.1109/TNN.2011.2167760
Xingbao Gao, Li-Zhi Liao
In this paper, we analyze and establish the stability and convergence of the dynamical system proposed by Xia and Feng, whose equilibria solve variational inequality and related problems. Under the pseudo-monotonicity and other conditions, this system is proved to be stable in the sense of Lyapunov and converges to one of its equilibrium points for any starting point. Meanwhile, the global exponential stability of this system is also shown under some mild conditions without the strong monotonicity of the mapping. The obtained results improve and correct some existing ones. The validity and performance of this system are demonstrated by some numerical examples.
Title: Stability and convergence analysis for a class of neural networks. IEEE Transactions on Neural Networks, vol. 22, no. 11, pp. 1770-1782.
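A projection-type dynamical system of the kind analyzed here can be simulated directly; the sketch below uses a simple box constraint and a strongly monotone scalar mapping, a much easier setting than the pseudo-monotone case treated in the paper:

```python
def project(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]."""
    return min(max(x, lo), hi)

def solve_vi(F, x0, alpha=0.5, dt=0.01, steps=5000):
    """Euler integration of the projection dynamical system
    dx/dt = P(x - alpha*F(x)) - x over [0, 1]; its equilibria solve
    the variational inequality F(x*) * (x - x*) >= 0 for all feasible x."""
    x = x0
    for _ in range(steps):
        x += dt * (project(x - alpha * F(x)) - x)
    return x

# for F(x) = x - 0.3 over [0, 1], the VI solution is x* = 0.3
x_star = solve_vi(lambda x: x - 0.3, x0=0.9)
```

The trajectory converges to the equilibrium from any feasible start, which is the Lyapunov-stability behavior the paper establishes in far greater generality.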
Pub Date: 2011-11-01. Epub Date: 2011-09-29. DOI: 10.1109/TNN.2011.2162000
Feiping Nie, Zinan Zeng, Ivor W Tsang, Dong Xu, Changshui Zhang
Spectral clustering (SC) methods have been successfully applied to many real-world applications. Their success is largely based on the manifold assumption, namely, that two nearby data points in a high-density region of a low-dimensional data manifold share the same cluster label. However, this assumption might not always hold for high-dimensional data. When the data do not exhibit a clear low-dimensional manifold structure (e.g., high-dimensional and sparse data), the clustering performance of SC degrades and can become even worse than that of K-means clustering. In this paper, motivated by the observation that the true cluster assignment matrix for high-dimensional data can always be embedded in a linear space spanned by the data, we propose the spectral embedded clustering (SEC) framework, in which a linearity regularization is explicitly added to the objective function of SC methods. More importantly, the proposed SEC framework can naturally deal with out-of-sample data. We also present a new Laplacian matrix constructed from a local regression of each pattern and incorporate it into our SEC framework to capture both local and global discriminative information for clustering. Comprehensive experiments on eight real-world high-dimensional datasets demonstrate the effectiveness and advantages of our SEC framework over existing SC methods and K-means-based clustering methods. Our SEC framework significantly outperforms SC using the Nyström algorithm on unseen data.
Title: Spectral embedded clustering: a framework for in-sample and out-of-sample spectral clustering. IEEE Transactions on Neural Networks, vol. 22, no. 11, pp. 1796-1808.
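The linearity idea at the heart of SEC, forcing cluster indicators to be (approximately) a linear function of the data so that unseen points can be labeled, can be shown in a deliberately tiny 1-D form; real SEC couples this regularization with the spectral objective rather than fitting given labels:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = w*x + b for scalar data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

def out_of_sample_cluster(xs, labels, x_new):
    """Regress the cluster indicator on the data, then label an unseen
    point by evaluating the fitted linear function (1-D toy example)."""
    w, b = fit_linear(xs, [float(l) for l in labels])
    return int(round(w * x_new + b))

xs = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
labels = [0, 0, 0, 1, 1, 1]
c = out_of_sample_cluster(xs, labels, 11.5)
```

Because the assignment is carried by a linear map of the data, any new point gets a label without rerunning the eigendecomposition, the out-of-sample property the abstract highlights.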
Pub Date: 2011-11-01. Epub Date: 2011-09-08. DOI: 10.1109/TNN.2011.2140381
Shahab Mehraeen, Sarangapani Jagannathan, Mariesa L Crow
A novel neural network (NN)-based nonlinear decentralized adaptive controller is proposed for a class of large-scale, uncertain, interconnected nonlinear systems in strict-feedback form using the dynamic surface control (DSC) principle; thus, the "explosion of complexity" problem observed in the conventional backstepping approach is relaxed in both the state and output feedback control designs. The matching condition is not assumed when considering the interconnection terms. NNs are utilized to approximate the uncertainties in both the subsystem and interconnection terms. By using novel NN weight update laws with quadratic error terms, together with the proposed control inputs, it is demonstrated via Lyapunov stability analysis that the system state errors converge to zero asymptotically with both the state and output feedback controllers, even in the presence of NN approximation errors, in contrast with the uniform ultimate boundedness result that is common in the literature on NN-based DSC and backstepping schemes. Simulation results show the effectiveness of the approach.
Title: Decentralized dynamic surface control of large-scale interconnected systems in strict-feedback form using neural networks with asymptotic stabilization. IEEE Transactions on Neural Networks, vol. 22, no. 11, pp. 1709-1722.
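The DSC ingredient that defuses the "explosion of complexity" is a first-order filter that produces each virtual control's derivative numerically instead of differentiating it analytically at every backstepping step; a minimal sketch, with an arbitrary time constant and step size:

```python
def dsc_filter(virtual, tau=0.05, dt=0.001, z0=0.0):
    """Pass a sampled virtual control v through the first-order filter
    tau*z' + z = v, returning (z, z') at each step; z' stands in for
    the analytic derivative of v that pure backstepping would require."""
    z, out = z0, []
    for v in virtual:
        zdot = (v - z) / tau
        z += dt * zdot
        out.append((z, zdot))
    return out

# a constant virtual control: the filter settles on it, derivative -> 0
traj = dsc_filter([1.0] * 2000)
```

Chaining one such filter per backstepping step keeps the controller's complexity linear in the system order instead of combinatorial.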