A silicon retina is an intelligent vision sensor that can execute real-time image preprocessing by using a parallel analog circuit that mimics the structure of the neuronal circuits in the vertebrate retina. To enhance the sensor's robustness to changes in illumination in a practical environment, we have designed and fabricated a silicon retina on the basis of a computational model of brightness constancy. The chip has a wide dynamic range and shows a constant response to changes in illumination intensity. The photosensor in the present chip approximates logarithmic illumination-to-voltage transfer characteristics through a time-modulated reset voltage technique. Two types of image processing, namely, Laplacian-of-Gaussian-like spatial filtering and frame-difference computation, are carried out using resistive networks and sample/hold circuits in the chip. As a result of this processing, the chip exhibits brightness constancy over a wide range of illumination. The chip is fabricated using 0.25-μm complementary metal-oxide semiconductor image sensor technology. The number of pixels is 64 × 64, and the power consumption is 32 mW at a frame rate of 30 fps. We show that our chip not only has a wide dynamic range but also responds consistently to changes in illumination.
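To see why logarithmic compression plus zero-sum spatial filtering yields brightness constancy, note that a multiplicative illumination change becomes an additive offset in the log domain, and a filter whose kernel sums to zero removes that offset. A minimal numerical sketch follows (this is a software illustration, not the chip's analog circuitry; the difference-of-Gaussians stand-in for the Laplacian-of-Gaussian and the sigma values are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def log_photoresponse(illum):
    """Idealized logarithmic illumination-to-voltage transfer."""
    return np.log(illum)

def dog_filter(v, sigma_center=1.0, sigma_surround=3.0):
    """Laplacian-of-Gaussian-like filtering via a difference of Gaussians.
    The effective kernel sums to zero, so uniform offsets are cancelled."""
    return gaussian_filter(v, sigma_center) - gaussian_filter(v, sigma_surround)

rng = np.random.default_rng(0)
scene = rng.uniform(0.1, 1.0, size=(64, 64))   # 64 x 64 pixels, as in the chip

dim    = dog_filter(log_photoresponse(1.0 * scene))
bright = dog_filter(log_photoresponse(100.0 * scene))

# log(100 * I) = log(100) + log(I); the constant log(100) is removed by the
# zero-sum filter, so both responses agree up to floating-point error.
print(np.max(np.abs(dim - bright)))            # tiny (floating-point error only)
```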
{"title":"Wide-dynamic-range APS-based silicon retina with brightness constancy.","authors":"Kazuhiro Shimonomura, Seiji Kameda, Atsushi Iwata, Tetsuya Yagi","doi":"10.1109/TNN.2011.2161591","DOIUrl":"https://doi.org/10.1109/TNN.2011.2161591","url":null,"abstract":"<p><p>A silicon retina is an intelligent vision sensor that can execute real-time image preprocessing by using a parallel analog circuit that mimics the structure of the neuronal circuits in the vertebrate retina. For enhancing the sensor's robustness to changes in illumination in a practical environment, we have designed and fabricated a silicon retina on the basis of a computational model of brightness constancy. The chip has a wide-dynamic-range and shows a constant response against changes in the illumination intensity. The photosensor in the present chip approximates logarithmic illumination-to-voltage transfer characteristics as a result of the application of a time-modulated reset voltage technique. Two types of image processing, namely, Laplacian-Gaussian-like spatial filtering and computing the frame difference, are carried out by using resistive networks and sample/hold circuits in the chip. As a result of these processings, the chip exhibits brightness constancy over a wide range of illumination. The chip is fabricated by using the 0.25- μm complementary metal-oxide semiconductor image sensor technology. The number of pixels is 64 × 64, and the power consumption is 32 mW at the frame rate of 30 fps. We show that our chip not only has a wide-dynamic-range but also shows a constant response to the changes in illumination.</p>","PeriodicalId":13434,"journal":{"name":"IEEE transactions on neural networks","volume":"22 9","pages":"1482-93"},"PeriodicalIF":0.0,"publicationDate":"2011-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TNN.2011.2161591","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29902228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-09-01 | Epub Date: 2011-07-18
Leonardo Rigutini, Tiziano Papini, Marco Maggini, Franco Scarselli
Relevance ranking consists of sorting a set of objects with respect to a given criterion. In personalized retrieval systems, however, the relevance criteria usually vary among users and may not be predefined. In this case, ranking algorithms that adapt their behavior from users' feedback must be devised. Two main approaches have been proposed in the literature for learning to rank: the use of a scoring function, learned from examples, that evaluates a feature-based representation of each object and yields an absolute relevance score; and a pairwise approach, in which a preference function is learned to determine which object in a given pair should be ranked first. In this paper, we present a preference learning method for learning to rank. A neural network, the comparative neural network (CmpNN), is trained from examples to approximate the comparison function for a pair of objects. The CmpNN adopts a particular architecture designed to implement the symmetries naturally present in a preference function. The learned preference function can be embedded as the comparator into a classical sorting algorithm to provide a global ranking of a set of objects. To improve ranking performance, an active-learning procedure is devised that selects the most informative patterns in the training set. The proposed algorithm is evaluated on the LETOR dataset, showing promising performance in comparison with other state-of-the-art algorithms.
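One simple way to realize the swap symmetry that the CmpNN architecture enforces (if x is preferred to y, swapping the inputs must reverse the decision) is to make the learned score antisymmetric by construction and hand it to a standard sorting routine as the comparator. The sketch below uses an untrained toy network; the paper's actual weight-sharing scheme and training procedure are not reproduced here:

```python
import numpy as np
from functools import cmp_to_key

rng = np.random.default_rng(0)
D, H = 4, 8                        # feature dimension, hidden units
W1 = rng.normal(size=(2 * D, H))   # toy weights; in SortNet these are learned
w2 = rng.normal(size=H)

def g(x, y):
    """Unconstrained hidden-layer score for the ordered pair (x, y)."""
    return np.tanh(np.concatenate([x, y]) @ W1) @ w2

def preference(x, y):
    """Antisymmetric by construction: preference(x, y) == -preference(y, x),
    so swapping the inputs always reverses the decision."""
    return g(x, y) - g(y, x)

# Embed the preference function as the comparator in an ordinary sort.
objects = [rng.normal(size=D) for _ in range(6)]
ranked = sorted(objects,
                key=cmp_to_key(lambda a, b: -int(np.sign(preference(a, b)))))
```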
{"title":"SortNet: learning to rank by a neural preference function.","authors":"Leonardo Rigutini, Tiziano Papini, Marco Maggini, Franco Scarselli","doi":"10.1109/TNN.2011.2160875","DOIUrl":"https://doi.org/10.1109/TNN.2011.2160875","url":null,"abstract":"<p><p>Relevance ranking consists in sorting a set of objects with respect to a given criterion. However, in personalized retrieval systems, the relevance criteria may usually vary among different users and may not be predefined. In this case, ranking algorithms that adapt their behavior from users' feedbacks must be devised. Two main approaches are proposed in the literature for learning to rank: the use of a scoring function, learned by examples, that evaluates a feature-based representation of each object yielding an absolute relevance score, a pairwise approach, where a preference function is learned to determine the object that has to be ranked first in a given pair. In this paper, we present a preference learning method for learning to rank. A neural network, the comparative neural network (CmpNN), is trained from examples to approximate the comparison function for a pair of objects. The CmpNN adopts a particular architecture designed to implement the symmetries naturally present in a preference function. The learned preference function can be embedded as the comparator into a classical sorting algorithm to provide a global ranking of a set of objects. To improve the ranking performances, an active-learning procedure is devised, that aims at selecting the most informative patterns in the training set. The proposed algorithm is evaluated on the LETOR dataset showing promising performances in comparison with other state-of-the-art algorithms.</p>","PeriodicalId":13434,"journal":{"name":"IEEE transactions on neural networks","volume":"22 9","pages":"1368-80"},"PeriodicalIF":0.0,"publicationDate":"2011-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TNN.2011.2160875","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"30019803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-09-01 | Epub Date: 2011-07-18
Bo Liu, Wenlian Lu, Tianping Chen
In this brief, we discuss some variants of generalized Halanay inequalities that are useful in the study of dissipativity and stability of delayed neural networks, integro-differential systems, and Volterra functional differential equations. We provide generalizations of the Halanay inequality that are more accurate than existing results. As applications, we discuss invariant sets, dissipative synchronization, and global asymptotic stability for Hopfield neural networks with infinite delays. We also prove that the considered dynamical systems with unbounded time-varying delays are globally asymptotically stable.
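For reference, a standard textbook statement of the classical Halanay inequality that the brief generalizes (this is the well-known scalar form, not the paper's generalized variants):

```latex
% Classical Halanay inequality (scalar form): if v(t) >= 0 satisfies
\[
  D^{+}v(t) \le -a\,v(t) + b \sup_{t-\tau \le s \le t} v(s),
  \qquad t \ge t_0, \quad a > b \ge 0,
\]
% then
\[
  v(t) \le \Big( \sup_{t_0-\tau \le s \le t_0} v(s) \Big)\, e^{-\lambda (t - t_0)},
\]
% where \lambda > 0 is the unique positive root of \lambda = a - b\,e^{\lambda\tau}.
```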
{"title":"Generalized Halanay inequalities and their applications to neural networks with unbounded time-varying delays.","authors":"Bo Liu, Wenlian Lu, Tianping Chen","doi":"10.1109/TNN.2011.2160987","DOIUrl":"https://doi.org/10.1109/TNN.2011.2160987","url":null,"abstract":"<p><p>In this brief, we discuss some variants of generalized Halanay inequalities that are useful in the discussion of dissipativity and stability of delayed neural networks, integro-differential systems, and Volterra functional differential equations. We provide some generalizations of the Halanay inequality, which is more accurate than the existing results. As applications, we discuss invariant set, dissipative synchronization, and global asymptotic stability for the Hopfield neural networks with infinite delays. We also prove that the dynamical systems with unbounded time-varying delays are globally asymptotically stable.</p>","PeriodicalId":13434,"journal":{"name":"IEEE transactions on neural networks","volume":"22 9","pages":"1508-13"},"PeriodicalIF":0.0,"publicationDate":"2011-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TNN.2011.2160987","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"30019801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-09-01 | Epub Date: 2011-07-29
Fei Wang, Bin Zhao, Changshui Zhang
We propose a new dimensionality reduction method called maximum margin projection (MMP), which aims to project data samples into the most discriminative subspace, where clusters are best separated. Specifically, MMP projects input patterns onto the normal of the maximum-margin separating hyperplane. As a result, MMP depends only on the geometry of the optimal decision boundary and not on the distribution of data points lying farther away from this boundary. Technically, MMP is formulated as an integer programming problem, and we propose a column generation algorithm to solve it. Moreover, through a combination of theoretical results and empirical observations, we show that the computation time needed for MMP can be treated as linear in the dataset size. Experimental results on both toy and real-world datasets demonstrate the effectiveness of MMP.
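The geometric picture (project every point onto the normal w of the hyperplane that separates the unknown clusters with maximum margin) can be imitated with a crude alternating heuristic: guess a partition, fit a max-margin hyperplane to it, and project. This is only an illustrative stand-in, not the paper's integer-programming formulation or its column generation solver:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 5)),
               rng.normal(+2.0, 0.5, (50, 5))])

# Heuristic: guess a binary partition, then fit a max-margin hyperplane to it.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
svm = LinearSVC(C=1.0, max_iter=10000).fit(X, labels)

# One-dimensional MMP-style embedding: project onto the hyperplane normal.
w = svm.coef_.ravel()
embedding = X @ (w / np.linalg.norm(w))
```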
{"title":"Unsupervised large margin discriminative projection.","authors":"Fei Wang, Bin Zhao, Changshui Zhang","doi":"10.1109/TNN.2011.2161772","DOIUrl":"https://doi.org/10.1109/TNN.2011.2161772","url":null,"abstract":"<p><p>We propose a new dimensionality reduction method called maximum margin projection (MMP), which aims to project data samples into the most discriminative subspace, where clusters are most well-separated. Specifically, MMP projects input patterns onto the normal of the maximum margin separating hyperplanes. As a result, MMP only depends on the geometry of the optimal decision boundary and not on the distribution of those data points lying further away from this boundary. Technically, MMP is formulated as an integer programming problem and we propose a column generation algorithm to solve it. Moreover, through a combination of theoretical results and empirical observations we show that the computation time needed for MMP can be treated as linear in the dataset size. Experimental results on both toy and real-world datasets demonstrate the effectiveness of MMP.</p>","PeriodicalId":13434,"journal":{"name":"IEEE transactions on neural networks","volume":"22 9","pages":"1446-56"},"PeriodicalIF":0.0,"publicationDate":"2011-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TNN.2011.2161772","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29902226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-09-01 | Epub Date: 2011-07-29
Haiquan Zhao, Xiangping Zeng, Zhengyou He
To reduce the computational complexity of the bilinear recurrent neural network (BLRNN), a novel low-complexity nonlinear adaptive filter with a pipelined bilinear recurrent neural network (PBLRNN) is presented in this paper. The PBLRNN, inheriting the modular architecture of the pipelined RNN proposed by Haykin and Li, comprises a number of BLRNN modules cascaded in a chained form. Each module is implemented by a small-scale BLRNN with internal dynamics. Since the modules of the PBLRNN can operate simultaneously in a pipelined, parallel fashion, computational efficiency improves significantly. Moreover, the nested modules further improve the performance of the PBLRNN. To suit the modular architecture, a modified adaptive-amplitude real-time recurrent learning algorithm is derived based on the gradient descent approach. Extensive simulations are carried out to evaluate the performance of the PBLRNN on nonlinear system identification, nonlinear channel equalization, and chaotic time series prediction. Experimental results show that the PBLRNN provides considerably better performance than the single BLRNN and RNN models.
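A scalar caricature of the idea: each module is a small bilinear recurrence (its output depends on a product of past output and current input as well as on the usual linear terms), and the modules are chained so each one is driven by its predecessor's output. Dimensions, wiring, and coefficients below are toy assumptions, and the paper's modified adaptive-amplitude real-time recurrent learning rule is not reproduced:

```python
import numpy as np

def bilinear_cell(x, y_prev, a=0.3, b=0.5, c=0.2):
    """Scalar bilinear recurrence: linear feedback and feedforward terms
    plus the bilinear cross term y_prev * x."""
    return np.tanh(a * y_prev + b * x + c * y_prev * x)

def pblrnn_step(x_t, states):
    """One time step of a chain of bilinear modules: module 0 is driven by
    the external input, each later module by its predecessor's output."""
    new_states, inp = [], x_t
    for y_prev in states:
        y = bilinear_cell(inp, y_prev)
        new_states.append(y)
        inp = y                         # cascade the output onward
    return new_states

x = np.sin(0.1 * np.arange(200))        # toy input signal
states = [0.0, 0.0, 0.0]                # three small modules
outputs = []
for x_t in x:
    states = pblrnn_step(x_t, states)
    outputs.append(states[-1])          # read out the last module
```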
{"title":"Low-complexity nonlinear adaptive filter based on a pipelined bilinear recurrent neural network.","authors":"Haiquan Zhao, Xiangping Zeng, Zhengyou He","doi":"10.1109/TNN.2011.2161330","DOIUrl":"https://doi.org/10.1109/TNN.2011.2161330","url":null,"abstract":"<p><p>To reduce the computational complexity of the bilinear recurrent neural network (BLRNN), a novel low-complexity nonlinear adaptive filter with a pipelined bilinear recurrent neural network (PBLRNN) is presented in this paper. The PBLRNN, inheriting the modular architectures of the pipelined RNN proposed by Haykin and Li, comprises a number of BLRNN modules that are cascaded in a chained form. Each module is implemented by a small-scale BLRNN with internal dynamics. Since those modules of the PBLRNN can be performed simultaneously in a pipelined parallelism fashion, it would result in a significant improvement of computational efficiency. Moreover, due to nesting module, the performance of the PBLRNN can be further improved. To suit for the modular architectures, a modified adaptive amplitude real-time recurrent learning algorithm is derived on the gradient descent approach. Extensive simulations are carried out to evaluate the performance of the PBLRNN on nonlinear system identification, nonlinear channel equalization, and chaotic time series prediction. Experimental results show that the PBLRNN provides considerably better performance compared to the single BLRNN and RNN models.</p>","PeriodicalId":13434,"journal":{"name":"IEEE transactions on neural networks","volume":"22 9","pages":"1494-507"},"PeriodicalIF":0.0,"publicationDate":"2011-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TNN.2011.2161330","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29902229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-09-01 | Epub Date: 2011-07-29
Sotirios P Chatzis, Yiannis Demiris
Echo state networks (ESNs) constitute a novel approach to recurrent neural network (RNN) training, in which an RNN (the reservoir) is generated randomly and only a readout is trained using a simple, computationally efficient algorithm. ESNs have greatly facilitated the practical application of RNNs, outperforming classical approaches on a number of benchmark tasks. In this paper, we introduce a novel Bayesian approach to ESNs, the echo state Gaussian process (ESGP). The ESGP combines the merits of ESNs and Gaussian processes to provide a more robust alternative to conventional reservoir computing networks while also offering a measure of confidence in the generated predictions (in the form of a predictive distribution). We exhibit the merits of our approach on both benchmark datasets and real-world applications, where our method offers a significant enhancement in the dynamical data modeling capabilities of ESNs. Additionally, we show that our method is orders of magnitude more computationally efficient than existing Gaussian process-based methods for dynamical data modeling, without compromising predictive performance.
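The essential construction (run a fixed random reservoir over the input sequence, then fit a Gaussian-process readout on the collected states, so every prediction comes with a predictive mean and variance) can be sketched as follows. The reservoir size, spectral radius, and kernel are illustrative choices, not the paper's settings:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
N = 100                                          # reservoir size
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def reservoir_states(u):
    """Drive the fixed random reservoir with the scalar sequence u."""
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction on a toy series: GP readout on reservoir states.
u = np.sin(0.2 * np.arange(300))
X, y = reservoir_states(u[:-1]), u[1:]
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X[50:], y[50:])
mean, std = gp.predict(X[-20:], return_std=True)   # predictive distribution
```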
{"title":"Echo state Gaussian process.","authors":"Sotirios P Chatzis, Yiannis Demiris","doi":"10.1109/TNN.2011.2162109","DOIUrl":"https://doi.org/10.1109/TNN.2011.2162109","url":null,"abstract":"Echo state networks (ESNs) constitute a novel approach to recurrent neural network (RNN) training, with an RNN (the reservoir) being generated randomly, and only a readout being trained using a simple computationally efficient algorithm. ESNs have greatly facilitated the practical application of RNNs, outperforming classical approaches on a number of benchmark tasks. In this paper, we introduce a novel Bayesian approach toward ESNs, the echo state Gaussian process (ESGP). The ESGP combines the merits of ESNs and Gaussian processes to provide a more robust alternative to conventional reservoir computing networks while also offering a measure of confidence on the generated predictions (in the form of a predictive distribution). We exhibit the merits of our approach in a number of applications, considering both benchmark datasets and real-world applications, where we show that our method offers a significant enhancement in the dynamical data modeling capabilities of ESNs. Additionally, we also show that our method is orders of magnitude more computationally efficient compared to existing Gaussian process-based methods for dynamical data modeling, without compromises in the obtained predictive performance.","PeriodicalId":13434,"journal":{"name":"IEEE transactions on neural networks","volume":"22 9","pages":"1435-45"},"PeriodicalIF":0.0,"publicationDate":"2011-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TNN.2011.2162109","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29902225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-08-01 | Epub Date: 2011-06-27
Bukhari Che Ujang, Clive Cheong Took, Danilo P Mandic
A class of nonlinear quaternion-valued adaptive filtering algorithms is proposed based on locally analytic nonlinear activation functions. To circumvent the stringent standard analyticity conditions, which are prohibitive to the development of nonlinear adaptive quaternion-valued estimation models, we use the fact that stochastic gradient learning algorithms require only local analyticity at the operating point in the estimation space. It is shown that the quaternion-valued exponential function is locally analytic, and, since local analyticity extends to polynomials, products, and ratios, a class of transcendental nonlinear functions can serve as activation functions in nonlinear and neural adaptive models. This provides a unifying framework for the derivation of gradient-based learning algorithms in the quaternion domain, and the derived algorithms are shown to have the same generic form as their real- and complex-valued counterparts. To make such models second-order optimal for the generality of quaternion signals (both circular and noncircular), we use recent developments in augmented quaternion statistics to introduce widely linear versions of the proposed nonlinear adaptive quaternion-valued filters. This allows full exploitation of the second-order information in the data, contained in both the covariance and the pseudocovariances, catering rigorously for second-order noncircularity (improperness) and the corresponding power mismatch in the signal components. Simulations over a range of circular and noncircular synthetic processes and a real-world 3-D noncircular wind signal support the approach.
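The quaternion exponential mentioned above has a closed form: for q = s + v with real part s and vector part v, exp(q) = e^s (cos|v| + (v/|v|) sin|v|). A minimal numpy sketch of this activation, together with the Hamilton product used to sanity-check it (the adaptive-filtering machinery itself is not reproduced):

```python
import numpy as np

def qexp(q):
    """Exponential of a quaternion q = (s, x, y, z):
    exp(q) = e^s * (cos|v|, (v/|v|) * sin|v|), with v the vector part."""
    s, v = q[0], np.asarray(q[1:], dtype=float)
    nv = np.linalg.norm(v)
    if nv < 1e-12:                      # pure-real quaternion
        return np.array([np.exp(s), 0.0, 0.0, 0.0])
    return np.exp(s) * np.concatenate([[np.cos(nv)], np.sin(nv) * v / nv])

def qmul(p, q):
    """Hamilton product of two quaternions given as (s, x, y, z)."""
    ps, px, py, pz = p
    qs, qx, qy, qz = q
    return np.array([
        ps*qs - px*qx - py*qy - pz*qz,
        ps*qx + px*qs + py*qz - pz*qy,
        ps*qy - px*qz + py*qs + pz*qx,
        ps*qz + px*qy - py*qx + pz*qs,
    ])

q = np.array([0.2, 0.1, -0.3, 0.4])
print(qexp(q))                  # activation value
# Sanity check: exp(q) * exp(-q) should be the identity (1, 0, 0, 0).
print(qmul(qexp(q), qexp(-q)))
```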
{"title":"Quaternion-valued nonlinear adaptive filtering.","authors":"Bukhari Che Ujang, Clive Cheong Took, Danilo P Mandic","doi":"10.1109/TNN.2011.2157358","DOIUrl":"https://doi.org/10.1109/TNN.2011.2157358","url":null,"abstract":"<p><p>A class of nonlinear quaternion-valued adaptive filtering algorithms is proposed based on locally analytic nonlinear activation functions. To circumvent the stringent standard analyticity conditions which are prohibitive to the development of nonlinear adaptive quaternion-valued estimation models, we use the fact that stochastic gradient learning algorithms require only local analyticity at the operating point in the estimation space. It is shown that the quaternion-valued exponential function is locally analytic, and, since local analyticity extends to polynomials, products, and ratios, we show that a class of transcendental nonlinear functions can serve as activation functions in nonlinear and neural adaptive models. This provides a unifying framework for the derivation of gradient-based learning algorithms in the quaternion domain, and the derived algorithms are shown to have the same generic form as their real- and complex-valued counterparts. To make such models second-order optimal for the generality of quaternion signals (both circular and noncircular), we use recent developments in augmented quaternion statistics to introduce widely linear versions of the proposed nonlinear adaptive quaternion valued filters. This allows full exploitation of second-order information in the data, contained both in the covariance and pseudocovariances to cater rigorously for second-order noncircularity (improperness), and the corresponding power mismatch in the signal components. Simulations over a range of circular and noncircular synthetic processes and a real world 3-D noncircular wind signal support the approach.</p>","PeriodicalId":13434,"journal":{"name":"IEEE transactions on neural networks","volume":"22 8","pages":"1193-206"},"PeriodicalIF":0.0,"publicationDate":"2011-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TNN.2011.2157358","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"30273938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-08-01 | Epub Date: 2011-06-23
Niceto R Luque, Jesus A Garrido, Richard R Carrillo, Olivier J-M D Coenen, Eduardo Ros
It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and in accounting for disturbances occurring during movement, for instance, those due to the manipulation of objects that affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit (sensorimotor signals) state input representations complement each other to better handle different models and to allow interpolation between two already stored models. This facilitates accurate corrections during manipulation of new objects by taking advantage of already stored models.
{"title":"Cerebellar input configuration toward object model abstraction in manipulation tasks.","authors":"Niceto R Luque, Jesus A Garrido, Richard R Carrillo, Olivier J-M D Coenen, Eduardo Ros","doi":"10.1109/TNN.2011.2156809","DOIUrl":"https://doi.org/10.1109/TNN.2011.2156809","url":null,"abstract":"<p><p>It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit state input representations (sensorimotor signals) complement each other to better handle different models and allow interpolation between two already stored models. This facilitates accurate corrections during manipulations of new objects taking advantage of already stored models.</p>","PeriodicalId":13434,"journal":{"name":"IEEE transactions on neural networks","volume":"22 8","pages":"1321-8"},"PeriodicalIF":0.0,"publicationDate":"2011-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TNN.2011.2156809","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29966787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-08-01 | Epub Date: 2011-06-30
Wenjun Xiong, Daniel W C Ho, Zidong Wang
In this paper, the consensus problem of multiagent nonlinear directed networks (MNDNs) is discussed for the case in which an MNDN does not have a spanning tree to reach consensus of all nodes. Using Lie algebra theory, a linear node-and-node pinning method is proposed to achieve consensus of an MNDN for all nonlinear functions satisfying a given set of conditions. Based on optimal algorithms, large-size networks are aggregated into small-size ones. Then, by applying principal minor theory to the small-size networks, a sufficient condition is given to reduce the number of controlled nodes. Finally, simulation results are given to illustrate the effectiveness of the developed criteria.
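The pinning principle (feed a control signal into only a few nodes and let the diffusive coupling spread agreement through the rest) is easy to simulate. The random graph, the node nonlinearity, and the gains below are toy assumptions and do not implement the paper's Lie-algebra-based analysis or its aggregation step:

```python
import numpy as np

rng = np.random.default_rng(0)
n, target, k, dt = 6, 1.0, 5.0, 0.01
A = (rng.random((n, n)) < 0.4).astype(float)   # random directed adjacency
np.fill_diagonal(A, 0.0)
pinned = [0, 3]                                # control only these nodes

x = rng.normal(size=n)
for _ in range(5000):
    # Diffusive coupling sum_j a_ij (x_j - x_i), a mild node nonlinearity,
    # and pinning feedback applied only at the pinned nodes.
    coupling = A @ x - A.sum(axis=1) * x
    dx = 0.1 * np.sin(x) + coupling
    dx[pinned] += -k * (x[pinned] - target)
    x = x + dt * dx

print(x)   # states approach the target when the pinned set is well chosen
```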
{"title":"Consensus analysis of multiagent networks via aggregated and pinning approaches.","authors":"Wenjun Xiong, Daniel W C Ho, Zidong Wang","doi":"10.1109/TNN.2011.2157938","DOIUrl":"https://doi.org/10.1109/TNN.2011.2157938","url":null,"abstract":"<p><p>In this paper, the consensus problem of multiagent nonlinear directed networks (MNDNs) is discussed in the case that a MNDN does not have a spanning tree to reach the consensus of all nodes. By using the Lie algebra theory, a linear node-and-node pinning method is proposed to achieve a consensus of a MNDN for all nonlinear functions satisfying a given set of conditions. Based on some optimal algorithms, large-size networks are aggregated to small-size ones. Then, by applying the principle minor theory to the small-size networks, a sufficient condition is given to reduce the number of controlled nodes. Finally, simulation results are given to illustrate the effectiveness of the developed criteria.</p>","PeriodicalId":13434,"journal":{"name":"IEEE transactions on neural networks","volume":"22 8","pages":"1231-40"},"PeriodicalIF":0.0,"publicationDate":"2011-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TNN.2011.2157938","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29979451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-08-01 | Epub Date: 2011-06-30
Yu Zhang, Dit-Yan Yeung
Generalized discriminant analysis (GDA) is a commonly used method for dimensionality reduction. In its general form, it seeks a nonlinear projection that simultaneously maximizes the between-class dissimilarity and minimizes the within-class dissimilarity to increase class separability. In real-world applications where labeled data are scarce, GDA may not work very well. However, unlabeled data are often available in large quantities at very low cost. In this paper, we propose a novel GDA algorithm, abbreviated as semisupervised generalized discriminant analysis (SSGDA). We utilize unlabeled data to maximize an optimality criterion of GDA, formulating this as an optimization problem that is solved using the constrained concave-convex procedure. The optimization procedure leads to an estimate of the class labels for the unlabeled data. We propose a novel confidence measure and a method for selecting those unlabeled data points whose labels are estimated with high confidence. The selected unlabeled data can then be used to augment the original labeled dataset for performing GDA. We also propose a variant of SSGDA, called M-SSGDA, which adopts the manifold assumption to utilize the unlabeled data. Extensive experiments on many benchmark datasets demonstrate the effectiveness of our proposed methods.
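The label-augmentation loop at the heart of the method (estimate labels for the unlabeled points, keep only the high-confidence ones, refit) can be sketched with ordinary LDA standing in for the kernelized GDA criterion; the paper's constrained concave-convex optimization and its specific confidence measure are not reproduced:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 10)), rng.normal(2, 1, (100, 10))])
y = np.array([0] * 100 + [1] * 100)

# Scarce labels: five labeled points per class, the rest unlabeled.
labeled = np.concatenate([rng.choice(100, 5, replace=False),
                          100 + rng.choice(100, 5, replace=False)])
unlabeled = np.setdiff1d(np.arange(200), labeled)

X_l, y_l = X[labeled], y[labeled]
for _ in range(5):                                 # self-training rounds
    lda = LinearDiscriminantAnalysis().fit(X_l, y_l)
    proba = lda.predict_proba(X[unlabeled])
    conf = proba.max(axis=1)
    keep = conf > 0.95                             # high-confidence picks only
    if not keep.any():
        break
    X_l = np.vstack([X_l, X[unlabeled][keep]])
    y_l = np.concatenate([y_l, proba.argmax(axis=1)[keep]])
    unlabeled = unlabeled[~keep]
```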
{"title":"Semisupervised generalized discriminant analysis.","authors":"Yu Zhang, Dit-Yan Yeung","doi":"10.1109/TNN.2011.2156808","DOIUrl":"https://doi.org/10.1109/TNN.2011.2156808","url":null,"abstract":"<p><p>Generalized discriminant analysis (GDA) is a commonly used method for dimensionality reduction. In its general form, it seeks a nonlinear projection that simultaneously maximizes the between-class dissimilarity and minimizes the within-class dissimilarity to increase class separability. In real-world applications where labeled data are scarce, GDA may not work very well. However, unlabeled data are often available in large quantities at very low cost. In this paper, we propose a novel GDA algorithm which is abbreviated as semisupervised generalized discriminant analysis (SSGDA). We utilize unlabeled data to maximize an optimality criterion of GDA and formulate the problem as an optimization problem that is solved using the constrained concave-convex procedure. The optimization procedure leads to estimation of the class labels for the unlabeled data. We propose a novel confidence measure and a method for selecting those unlabeled data points whose labels are estimated with high confidence. The selected unlabeled data can then be used to augment the original labeled dataset for performing GDA. We also propose a variant of SSGDA, called M-SSGDA, which adopts the manifold assumption to utilize the unlabeled data. Extensive experiments on many benchmark datasets demonstrate the effectiveness of our proposed methods.</p>","PeriodicalId":13434,"journal":{"name":"IEEE transactions on neural networks","volume":"22 8","pages":"1207-17"},"PeriodicalIF":0.0,"publicationDate":"2011-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TNN.2011.2156808","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29979361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}