
Latest publications in IEEE Transactions on Neural Networks

Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts.
Pub Date : 2011-10-01 Epub Date: 2011-08-30 DOI: 10.1109/TNN.2011.2164621
Guoxu Zhou, Shengli Xie, Zuyuan Yang, Jun-Mei Yang, Zhaoshui He

Nonnegative matrix factorization (NMF) with a minimum volume constraint (MVC) is exploited in this paper. Our results show that MVC can actually improve the sparseness of the results of NMF. This sparseness is L(0)-norm oriented and can give desirable results even in very weak sparseness situations, thereby significantly enhancing the ability of NMF to learn parts. The close relation between NMF, sparse NMF, and MVC_NMF is discussed first. Two algorithms are then proposed to solve the MVC_NMF model. One, called quadratic programming_MVC_NMF (QP_MVC_NMF), is based on quadratic programming; the other is called negative glow_MVC_NMF (NG_MVC_NMF) because it ingeniously uses multiplicative updates incorporating the natural gradient. The QP_MVC_NMF algorithm is quite efficient for small-scale problems, while the NG_MVC_NMF algorithm is more suitable for large-scale problems. Simulations show the efficiency and validity of the proposed methods in blind source separation and human face image analysis.
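
For orientation, the sketch below runs plain Lee-Seung multiplicative NMF updates and simply monitors the log-det volume term that a minimum-volume penalty would act on; it is not the paper's QP_MVC_NMF or NG_MVC_NMF algorithm, and all constants are illustrative.

```python
# Plain multiplicative NMF with a volume monitor, NOT the paper's updates:
# the log det(W^T W + delta*I) term tracked here is the quantity an MVC
# penalty would drive down to tighten the simplex spanned by W's columns.
import numpy as np

def nmf_with_volume_monitor(V, rank, iters=200, delta=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-12
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # Lee-Seung update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # Lee-Seung update for W
    volume = np.log(np.linalg.det(W.T @ W + delta * np.eye(rank)))
    return W, H, volume

V = np.abs(np.random.default_rng(1).random((50, 40)))
W, H, vol = nmf_with_volume_monitor(V, rank=5)
print("reconstruction error:", np.linalg.norm(V - W @ H), "log-volume:", vol)
```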

Citations: 63
Embedding prior knowledge within compressed sensing by neural networks.
Pub Date : 2011-10-01 Epub Date: 2011-09-06 DOI: 10.1109/TNN.2011.2164810
Dany Merhej, Chaouki Diab, Mohamad Khalil, Rémy Prost

In the compressed sensing framework, different algorithms have been proposed for sparse signal recovery from an incomplete set of linear measurements. The best known fall into two categories: l(1)-norm minimization-based algorithms, and l(0) pseudo-norm minimization with greedy matching pursuit algorithms. In this paper, we propose a modified matching pursuit algorithm based on the orthogonal matching pursuit (OMP). The idea is to replace the correlation step of the OMP with a neural network. Simulation results show that in the case of random sparse signal reconstruction, the proposed method performs as well as the OMP. The complexity overhead of training and then integrating the network into sparse signal recovery is thus not justified in this case. However, if the signal has an added structure, it is learned and incorporated in the proposed new OMP. We consider three structures: first, the sparse signal is positive; second, the positions of the nonzero coefficients of the sparse signal follow a certain spatial probability density function; the third case is a combination of both. Simulation results show that, for these signals of interest, the probability of exact recovery with our modified OMP increases significantly. Comparisons with l(1)-based reconstructions are also performed. We thus present a framework to reconstruct sparse signals with added structure by embedding, through neural network training, additional knowledge into the decoding process in order to achieve better performance in the recovery of sparse signals of interest.
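
For concreteness, here is a minimal sketch of standard OMP in which the correlation step, the one the paper proposes to replace with a trained neural network, is marked; the network itself is omitted, and the toy problem setup is an illustrative assumption.

```python
# Minimal standard OMP; the marked line is the correlation step that the
# proposed method replaces with a neural network mapping residual -> scores.
import numpy as np

def omp(A, y, k):
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # correlation step: the paper's network would produce these scores
        scores = np.abs(A.T @ residual)
        scores[support] = -np.inf                # don't reselect chosen atoms
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)  # LS on current support
        x[:] = 0.0
        x[support] = coef
        residual = y - As @ coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
x_hat = omp(A, A @ x_true, k=5)
print("support recovered:",
      set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
```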

Citations: 22
Incremental learning of concept drift in nonstationary environments.
Pub Date : 2011-10-01 Epub Date: 2011-08-04 DOI: 10.1109/TNN.2011.2160459
Ryan Elwell, Robi Polikar

We introduce an ensemble-of-classifiers-based approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn(++).NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from environments that experience a constant or variable rate of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, like other members of the Learn(++) family of algorithms, that is, without requiring access to previously seen data. Learn(++).NSE trains one new classifier for each batch of data it receives, and combines these classifiers using a dynamically weighted majority vote. The novelty of the approach lies in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This approach allows the algorithm to recognize, and act accordingly on, changes in the underlying data distributions, as well as a possible recurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as on a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn(++).NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison, and benchmarking by interested researchers, we also release the data used in this paper.
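
The sketch below illustrates the time-adjusted weighting idea: each classifier's errors over current and past batches are combined with a sigmoid time-decay and turned into a log-odds voting weight. The constants a and b and the exact bookkeeping are illustrative simplifications, not the paper's published settings.

```python
# Dynamically weighted majority vote in the Learn++.NSE spirit: recent
# batch errors count more than old ones, and the vote weight is
# log((1 - eps)/eps) of the time-averaged error.
import numpy as np

def voting_weights(error_history, a=0.5, b=10.0):
    """error_history[k] lists classifier k's error on every batch it has
    seen, oldest first; a and b shape the sigmoid time-decay."""
    weights = []
    for errs in error_history:
        errs = np.clip(np.asarray(errs, dtype=float), 1e-6, 1 - 1e-6)
        ages = np.arange(len(errs))[::-1]            # 0 = most recent batch
        sig = 1.0 / (1.0 + np.exp(a * (ages - b)))   # newer batches count more
        sig /= sig.sum()
        eps = float(sig @ errs)                      # time-adjusted error
        weights.append(np.log((1 - eps) / eps))      # log-odds voting weight
    return np.array(weights)

# classifier 0 degraded after a drift; classifier 1 is fresh and accurate
history = [[0.05, 0.10, 0.45], [0.08]]
print(voting_weights(history))  # the fresh classifier gets the larger weight
```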

Citations: 765
Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays.
Pub Date : 2011-10-01 Epub Date: 2011-08-12 DOI: 10.1109/TNN.2011.2163203
Zheng-Guang Wu, Peng Shi, Hongye Su, Jian Chu

In this paper, passivity analysis is conducted for discrete-time stochastic neural networks with both Markovian jumping parameters and mixed time delays. The mixed time delays consist of both discrete and distributed delays. The Markov chain in the underlying neural networks is finite piecewise homogeneous. By introducing a Lyapunov functional that accounts for the mixed time delays, a delay-dependent passivity condition is derived in terms of the linear matrix inequality approach. The case of a Markov chain with partially unknown transition probabilities is also considered. All the results presented depend not only upon the discrete delay but also upon the distributed delay. A numerical example is included to demonstrate the effectiveness of the proposed methods.
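
For reference, a generic discrete-time Lyapunov-Krasovskii functional for mixed (discrete plus distributed) delays has the shape below; this is an illustrative template only, since the paper's functional additionally carries mode-dependent matrices for the piecewise-homogeneous Markov chain.

```latex
% Illustrative template; the paper's mode-dependent matrices P(r_k), etc.,
% are omitted here.
V(k) = x^{\top}(k) P x(k)
     + \sum_{i=k-d(k)}^{k-1} x^{\top}(i)\, Q\, x(i)
     + \sum_{j=-d_2+1}^{-d_1} \sum_{i=k+j}^{k-1} x^{\top}(i)\, R\, x(i)
     + \sum_{j=1}^{\infty} \mu_j \sum_{i=k-j}^{k-1} f^{\top}(x(i))\, Z\, f(x(i))
```

Here d(k) ∈ [d_1, d_2] is the time-varying discrete delay, μ_j is a summable kernel modeling the distributed delay, f(·) is the neuron activation, and P, Q, R, Z are positive definite.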

Citations: 364
Analysis of fixed-point and coordinate descent algorithms for regularized kernel methods.
Pub Date : 2011-10-01 Epub Date: 2011-08-18 DOI: 10.1109/TNN.2011.2164096
Francesco Dinuzzo

In this paper, we analyze the convergence of two general classes of optimization algorithms for regularized kernel methods with convex loss function and quadratic norm regularization. The first methodology is a new class of algorithms based on fixed-point iterations that are well suited for a parallel implementation and can be used with any convex loss function. The second methodology is based on coordinate descent, and generalizes some techniques previously proposed for linear support vector machines. It exploits the structure of additively separable loss functions to compute solutions of line searches in closed form. Both methodologies are very easy to implement. In this paper, we also show how to remove non-differentiability of the objective functional by exactly reformulating a convex regularization problem as an unconstrained differentiable stabilization problem.
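
As a minimal instance of the second methodology, the sketch below applies coordinate descent with closed-form line searches to kernel ridge regression (squared loss plus quadratic norm); the paper's algorithms handle general convex losses, so this covers only the simplest separable-loss case, with an illustrative toy dataset.

```python
# Coordinate descent for kernel ridge regression: each coordinate update is
# the exact (closed-form) minimizer of 0.5 c^T A c - y^T c along that axis.
import numpy as np

def kernel_ridge_cd(K, y, lam, sweeps=500):
    n = len(y)
    A = K + lam * np.eye(n)            # SPD, so Gauss-Seidel sweeps converge
    c = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            # closed-form line search along coordinate i
            c[i] = (y[i] - A[i] @ c + A[i, i] * c[i]) / A[i, i]
    return c

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))   # RBF Gram matrix
y = np.sin(X[:, 0])
c = kernel_ridge_cd(K, y, lam=0.1)
c_direct = np.linalg.solve(K + 0.1 * np.eye(30), y)
print("gap to direct solve:", np.linalg.norm(c - c_direct))
```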

Citations: 6
Stability and L2 performance analysis of stochastic delayed neural networks.
Pub Date : 2011-10-01 Epub Date: 2011-08-12 DOI: 10.1109/TNN.2011.2163319
Yun Chen, Wei Xing Zheng

This brief focuses on the robust mean-square exponential stability and L(2) performance analysis for a class of uncertain time-delay neural networks perturbed by both additive and multiplicative stochastic noises. New mean-square exponential stability and L(2) performance criteria are developed based on the delay-partition Lyapunov-Krasovskii functional method and the generalized Finsler lemma, which is applicable to stochastic systems. The analytical results are established without involving any model transformation, estimation of cross terms, additional free-weighting matrices, or tuning parameters. Numerical examples are presented to verify that the proposed approach is both less conservative and less computationally complex than existing ones.
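
For reference, the standard deterministic Finsler lemma reads as follows; the paper employs a generalization adapted to stochastic systems, so this statement is background, not the paper's exact lemma.

```latex
% Standard deterministic Finsler lemma.
\text{Let } x \in \mathbb{R}^{n},\ \Theta = \Theta^{\top} \in \mathbb{R}^{n \times n},\
B \in \mathbb{R}^{m \times n} \text{ with } \operatorname{rank}(B) < n,
\text{ and let the columns of } B^{\perp} \text{ span } \ker(B).
\text{Then the following are equivalent:}
\begin{align*}
&\text{(i)}\;\; x^{\top} \Theta x < 0 \quad \forall\, x \neq 0 \text{ with } Bx = 0; \\
&\text{(ii)}\;\; (B^{\perp})^{\top} \Theta\, B^{\perp} < 0; \\
&\text{(iii)}\;\; \exists\, \mu \in \mathbb{R} : \ \Theta - \mu B^{\top} B < 0; \\
&\text{(iv)}\;\; \exists\, X \in \mathbb{R}^{n \times m} : \ \Theta + X B + B^{\top} X^{\top} < 0.
\end{align*}
```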

Citations: 28
Chaotic simulated annealing by a neural network with a variable delay: design and application.
Pub Date : 2011-10-01 Epub Date: 2011-08-12 DOI: 10.1109/TNN.2011.2163080
Shyan-Shiou Chen

In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model possesses a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we simply present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem.
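
A minimal single-neuron sketch of transiently chaotic dynamics in the Chen-Aihara style, with the self-feedback taken at a randomly varying delay in the spirit of the paper; all constants, and the choice of a random integer delay, are illustrative assumptions rather than the paper's model.

```python
# Transiently chaotic neuron: the self-feedback gain z is annealed toward 0,
# so the dynamics wander chaotically early on and then converge.
import numpy as np

def simulate(T=300, eps=0.05, k=0.9, alpha=0.015, I0=0.65,
             z0=0.08, beta=0.01, tau_max=3, seed=0):
    rng = np.random.default_rng(seed)
    y = np.zeros(T + 1)
    x = np.zeros(T + 1)
    y[0] = 0.283
    x[0] = 1 / (1 + np.exp(-y[0] / eps))
    z = z0
    for t in range(T):
        tau = int(rng.integers(0, tau_max + 1))  # variable delay in {0..tau_max}
        xd = x[max(t - tau, 0)]                  # delayed output feedback
        y[t + 1] = k * y[t] + alpha - z * (xd - I0)
        x[t + 1] = 1 / (1 + np.exp(-y[t + 1] / eps))
        z *= (1 - beta)                          # annealed self-feedback gain
    return x

x = simulate()
print("early (chaotic phase):   ", np.round(x[1:6], 3))
print("late (convergent phase): ", np.round(x[-5:], 3))
```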

Citations: 26
Neural networks-based adaptive control for nonlinear time-varying delays systems with unknown control direction.
Pub Date : 2011-10-01 Epub Date: 2011-08-30 DOI: 10.1109/TNN.2011.2165222
Yuntong Wen, Xuemei Ren

This paper investigates a neural network (NN) state-observer-based adaptive control for a class of nonlinear time-varying delay systems with unknown control direction. An adaptive neural memoryless observer, in which no knowledge of the time delay is used, is designed to estimate the system states. Furthermore, by applying the property of the function tanh(2)(ϑ/ε)/ϑ (the function can be defined at ϑ = 0) and introducing a novel, appropriate Lyapunov-Krasovskii functional, an adaptive output feedback controller is constructed via the backstepping method, which efficiently avoids the problem of controller singularity and compensates for the time delay. It is rigorously proven that the closed-loop controller, designed using the NN basis-function property, a new kind of parameter adaptive law, and the Nussbaum function for detecting the control direction, guarantees the semi-global uniform ultimate boundedness of all signals, and that the tracking error converges to a small neighborhood of zero. A characteristic of the proposed approach is that it relaxes the restrictive Lipschitz-condition assumptions on the unknown nonlinear continuous functions. The proposed scheme is suitable for systems with mismatching conditions and unmeasurable states. Finally, two simulation examples are given to illustrate the effectiveness and applicability of the proposed approach.
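
The singularity avoidance hinges on the cited property; the short check below verifies numerically that tanh(2)(ϑ/ε)/ϑ stays bounded and tends to 0 as ϑ → 0, so defining it as 0 at ϑ = 0 yields a continuous function (the value of ε is an illustrative choice).

```python
# Numerical check: tanh^2(v/eps)/v ~ v/eps^2 near 0, so its continuous
# extension at v = 0 is 0 and the control law has no singularity there.
import numpy as np

def h(v, eps=0.1):
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)                       # defined as 0 at v = 0
    nz = v != 0
    out[nz] = np.tanh(v[nz] / eps) ** 2 / v[nz]  # tanh^2(v/eps)/v elsewhere
    return out

for v in (1.0, 1e-2, 1e-4, 1e-8, 0.0):
    print(f"v={v:>10}: h(v)={h(np.array([v]))[0]:.6e}")
# the values shrink toward 0 as v -> 0, matching the continuous extension
```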

Citations: 86
Efficient revised simplex method for SVM training.
Pub Date : 2011-10-01 Epub Date: 2011-09-06 DOI: 10.1109/TNN.2011.2165081
Christopher Sentelle, Georgios C Anagnostopoulos, Michael Georgiopoulos

Existing active set methods reported in the literature for support vector machine (SVM) training must contend with singularities when solving for the search direction. When a singularity is encountered, an infinite descent direction can be carefully chosen that avoids cycling and allows the algorithm to converge. However, the algorithm implementation is likely to be more complex and less computationally efficient than would otherwise be required for an algorithm that does not have to contend with the singularities. We show that the revised simplex method introduced by Rusin provides a guarantee of nonsingularity when solving for the search direction. This method allows a simpler and more computationally efficient implementation, as it avoids the need to test for rank degeneracies and to modify factorizations or solution methods based upon those rank degeneracies. In our approach, we take advantage of the guarantee of nonsingularity by implementing an efficient method for solving the search direction, and we show that our algorithm is competitive with SVM-QP and that it is particularly effective when the fraction of nonbound support vectors is large. In addition, we show competitive performance of the proposed algorithm against two popular SVM training algorithms, SVMLight and LIBSVM.
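
For context, all of these trainers (active-set, revised simplex, SMO-style) solve the same dual QP: maximize Σ a_i − 0.5 Σ_ij a_i a_j y_i y_j K(x_i, x_j) subject to 0 ≤ a_i ≤ C and Σ a_i y_i = 0. The sketch below is a tiny pairwise SMO-style solver for that QP; it is a stand-in to show the problem structure and is emphatically not the revised simplex method of the paper, with the toy dataset an illustrative assumption.

```python
# Simplified SMO-style pairwise solver for the SVM dual QP (NOT the paper's
# revised simplex method): update a KKT-violating pair (i, j) in closed form.
import numpy as np

def smo_sketch(K, y, C=1.0, passes=50, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    a = np.zeros(n)          # dual variables alpha_i
    b = 0.0                  # bias
    def f(i):                # decision value on training point i
        return (a * y) @ K[:, i] + b
    for _ in range(passes):
        for i in range(n):
            Ei = f(i) - y[i]
            # KKT violation check for point i
            if (y[i] * Ei < -tol and a[i] < C) or (y[i] * Ei > tol and a[i] > 0):
                j = int(rng.integers(n - 1)); j += (j >= i)   # random j != i
                Ej = f(j) - y[j]
                ai_old, aj_old = a[i], a[j]
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L >= H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if eta >= 0:
                    continue
                a[j] = float(np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H))
                a[i] = ai_old + y[i] * y[j] * (aj_old - a[j])
                # keep the bias consistent with the updated pair
                b1 = b - Ei - y[i]*(a[i]-ai_old)*K[i,i] - y[j]*(a[j]-aj_old)*K[i,j]
                b2 = b - Ej - y[i]*(a[i]-ai_old)*K[i,j] - y[j]*(a[j]-aj_old)*K[j,j]
                b = b1 if 0 < a[i] < C else (b2 if 0 < a[j] < C else (b1 + b2) / 2)
    return a, b

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])
alpha, bias = smo_sketch(X @ X.T, y)          # linear kernel
nonbound = (alpha > 1e-8) & (alpha < 1.0 - 1e-8)
print("nonbound support vectors:", int(nonbound.sum()))
```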

Citations: 25
Deep learning regularized Fisher mappings.
Pub Date : 2011-10-01 Epub Date: 2011-08-12 DOI: 10.1109/TNN.2011.2162429
W K Wong, Mingming Sun

For classification tasks, it is always desirable to extract features that are most effective for preserving class separability. In this brief, we propose a new feature extraction method called regularized deep Fisher mapping (RDFM), which learns an explicit mapping from the sample space to the feature space using a deep neural network to enhance the separability of features according to the Fisher criterion. Compared to kernel methods, the deep neural network is a deep and nonlocal learning architecture, and therefore exhibits a more powerful ability to learn the nature of highly variable datasets from fewer samples. To eliminate the side effects of overfitting brought about by the large capacity of powerful learners, regularizers are applied in the learning procedure of RDFM. RDFM is evaluated on various types of datasets, and the results reveal that it is necessary to apply unsupervised regularization in the fine-tuning phase of deep learning. Thus, for very flexible models, the optimal Fisher feature extractor may be a balance between discriminative ability and descriptive ability.
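
A minimal sketch of the core idea, learning a mapping under a Fisher-style criterion that shrinks within-class scatter relative to between-class scatter of the mapped features; the network size, optimizer, synthetic data, and the omission of the paper's unsupervised regularizer are all simplifying assumptions.

```python
# Train a small feedforward net to minimize trace(S_w)/trace(S_b) of its
# output features: a Fisher-criterion loss, without the RDFM regularizers.
import torch
import torch.nn as nn

def fisher_loss(z, labels, eps=1e-6):
    mu = z.mean(0)
    sw, sb = 0.0, 0.0
    for c in labels.unique():
        zc = z[labels == c]
        muc = zc.mean(0)
        sw = sw + ((zc - muc) ** 2).sum()            # within-class scatter
        sb = sb + len(zc) * ((muc - mu) ** 2).sum()  # between-class scatter
    return sw / (sb + eps)

torch.manual_seed(0)
X = torch.randn(200, 10)
labels = (X[:, 0] + X[:, 1] > 0).long()              # two synthetic classes
net = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = fisher_loss(net(X), labels)
    loss.backward()
    opt.step()
print("final Fisher loss:", float(fisher_loss(net(X), labels)))
```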

Citations: 41