
Latest publications in IEEE Transactions on Neural Networks

Wide-dynamic-range APS-based silicon retina with brightness constancy.
Pub Date: 2011-09-01 Epub Date: 2011-07-29 DOI: 10.1109/TNN.2011.2161591
Kazuhiro Shimonomura, Seiji Kameda, Atsushi Iwata, Tetsuya Yagi

A silicon retina is an intelligent vision sensor that executes real-time image preprocessing using a parallel analog circuit that mimics the structure of the neuronal circuits in the vertebrate retina. To enhance the sensor's robustness to changes in illumination in a practical environment, we have designed and fabricated a silicon retina based on a computational model of brightness constancy. The chip has a wide dynamic range and shows a constant response to changes in illumination intensity. The photosensor in the present chip approximates a logarithmic illumination-to-voltage transfer characteristic through the application of a time-modulated reset voltage technique. Two types of image processing, namely Laplacian-of-Gaussian-like spatial filtering and frame differencing, are carried out using resistive networks and sample/hold circuits on the chip. As a result of this processing, the chip exhibits brightness constancy over a wide range of illumination. The chip is fabricated in a 0.25-μm complementary metal-oxide-semiconductor image sensor technology. The number of pixels is 64 × 64, and the power consumption is 32 mW at a frame rate of 30 fps. We show that our chip not only has a wide dynamic range but also responds constantly to changes in illumination.
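The brightness-constancy argument can be checked numerically. A minimal sketch (function names and constants are illustrative, not chip parameters): under a logarithmic transfer, a multiplicative change in illumination becomes the same additive offset at every pixel, so the frame difference of a static scene under changing light is spatially uniform.

```python
import numpy as np

# Simplified numerical model of the two stages described above (the names
# and constants below are illustrative, not chip parameters).
def log_photoresponse(illuminance, v_offset=0.5, v_scale=0.1):
    """Approximate logarithmic illumination-to-voltage transfer."""
    return v_offset + v_scale * np.log(illuminance)

def frame_difference(prev_frame, curr_frame):
    """Frame difference, as computed by sample/hold circuits."""
    return curr_frame - prev_frame

# Scaling the illumination by k shifts every pixel's response by the same
# constant v_scale * log(k), so the frame difference of a static scene
# under changing light is spatially uniform -- the basis of brightness
# constancy here.
scene = np.array([[10.0, 20.0], [40.0, 80.0]])   # static scene radiances
f1 = log_photoresponse(scene)
f2 = log_photoresponse(scene * 2.0)              # illumination doubled
diff = frame_difference(f1, f2)                  # uniform: 0.1 * log(2)
```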

Citations: 18
SortNet: learning to rank by a neural preference function.
Pub Date: 2011-09-01 Epub Date: 2011-07-18 DOI: 10.1109/TNN.2011.2160875
Leonardo Rigutini, Tiziano Papini, Marco Maggini, Franco Scarselli

Relevance ranking consists of sorting a set of objects with respect to a given criterion. In personalized retrieval systems, however, the relevance criteria usually vary across users and may not be predefined, so ranking algorithms that adapt their behavior from users' feedback must be devised. Two main approaches to learning to rank are proposed in the literature: a scoring function, learned from examples, that evaluates a feature-based representation of each object and yields an absolute relevance score; and a pairwise approach, in which a preference function is learned to determine which object in a given pair should be ranked first. In this paper, we present a preference learning method for learning to rank. A neural network, the comparative neural network (CmpNN), is trained from examples to approximate the comparison function for a pair of objects. The CmpNN adopts a particular architecture designed to implement the symmetries naturally present in a preference function. The learned preference function can be embedded as the comparator into a classical sorting algorithm to provide a global ranking of a set of objects. To improve ranking performance, an active-learning procedure is devised that aims at selecting the most informative patterns in the training set. The proposed algorithm is evaluated on the LETOR dataset, showing promising performance in comparison with other state-of-the-art algorithms.
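Embedding a learned preference function as the comparator in a classical sort can be sketched as follows; `preference` here is a hypothetical stand-in for a trained CmpNN, faked with a score heuristic purely for illustration.

```python
from functools import cmp_to_key

# `preference(a, b)` stands in for a trained CmpNN returning the probability
# that object a should be ranked before object b (faked here with scores).
def preference(a, b):
    return 1.0 if a["score"] > b["score"] else 0.0

def cmp_objects(a, b):
    """Adapt the pairwise preference function into a sort comparator."""
    p = preference(a, b)
    if p > 0.5:
        return -1   # a ranked before b
    if p < 0.5:
        return 1    # b ranked before a
    return 0

items = [{"id": 1, "score": 0.2}, {"id": 2, "score": 0.9}, {"id": 3, "score": 0.5}]
ranking = sorted(items, key=cmp_to_key(cmp_objects))   # ids in order 2, 3, 1
```

Since a learned comparator need not be perfectly transitive, the global ranking a comparison sort produces from it is an approximation.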

Citations: 62
Generalized Halanay inequalities and their applications to neural networks with unbounded time-varying delays.
Pub Date: 2011-09-01 Epub Date: 2011-07-18 DOI: 10.1109/TNN.2011.2160987
Bo Liu, Wenlian Lu, Tianping Chen

In this brief, we discuss some variants of generalized Halanay inequalities that are useful in the discussion of dissipativity and stability of delayed neural networks, integro-differential systems, and Volterra functional differential equations. We provide some generalizations of the Halanay inequality that are more accurate than existing results. As applications, we discuss invariant sets, dissipative synchronization, and global asymptotic stability for Hopfield neural networks with infinite delays. We also prove that the dynamical systems with unbounded time-varying delays are globally asymptotically stable.
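For reference, the classical Halanay inequality that these results generalize can be stated as follows (standard textbook form with constant coefficients and bounded delay; the paper's versions relax these assumptions):

```latex
% Classical Halanay inequality (constant coefficients, bounded delay \tau):
\[
  D^{+}v(t) \;\le\; -a\,v(t) \;+\; b \sup_{t-\tau \le s \le t} v(s),
  \qquad t \ge t_0, \quad a > b > 0,
\]
% implies exponential decay
\[
  v(t) \;\le\; \Big( \sup_{t_0-\tau \le s \le t_0} v(s) \Big)\, e^{-\lambda (t - t_0)},
\]
% where \lambda > 0 is the unique positive root of
\[
  \lambda \;=\; a - b\, e^{\lambda \tau}.
\]
```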

Citations: 77
Unsupervised large margin discriminative projection.
Pub Date: 2011-09-01 Epub Date: 2011-07-29 DOI: 10.1109/TNN.2011.2161772
Fei Wang, Bin Zhao, Changshui Zhang

We propose a new dimensionality reduction method called maximum margin projection (MMP), which aims to project data samples into the most discriminative subspace, where clusters are most well-separated. Specifically, MMP projects input patterns onto the normal of the maximum margin separating hyperplanes. As a result, MMP only depends on the geometry of the optimal decision boundary and not on the distribution of those data points lying further away from this boundary. Technically, MMP is formulated as an integer programming problem and we propose a column generation algorithm to solve it. Moreover, through a combination of theoretical results and empirical observations we show that the computation time needed for MMP can be treated as linear in the dataset size. Experimental results on both toy and real-world datasets demonstrate the effectiveness of MMP.

Citations: 17
Low-complexity nonlinear adaptive filter based on a pipelined bilinear recurrent neural network.
Pub Date: 2011-09-01 Epub Date: 2011-07-29 DOI: 10.1109/TNN.2011.2161330
Haiquan Zhao, Xiangping Zeng, Zhengyou He

To reduce the computational complexity of the bilinear recurrent neural network (BLRNN), a novel low-complexity nonlinear adaptive filter with a pipelined bilinear recurrent neural network (PBLRNN) is presented in this paper. The PBLRNN, inheriting the modular architecture of the pipelined RNN proposed by Haykin and Li, comprises a number of BLRNN modules cascaded in a chained form. Each module is implemented by a small-scale BLRNN with internal dynamics. Since the modules of the PBLRNN can operate simultaneously in pipelined parallelism, computational efficiency improves significantly. Moreover, owing to the nested modules, the performance of the PBLRNN can be further improved. To suit the modular architecture, a modified adaptive amplitude real-time recurrent learning algorithm is derived based on the gradient descent approach. Extensive simulations are carried out to evaluate the performance of the PBLRNN on nonlinear system identification, nonlinear channel equalization, and chaotic time series prediction. Experimental results show that the PBLRNN provides considerably better performance compared to the single BLRNN and RNN models.

Citations: 52
Echo state Gaussian process.
Pub Date: 2011-09-01 Epub Date: 2011-07-29 DOI: 10.1109/TNN.2011.2162109
Sotirios P Chatzis, Yiannis Demiris
Echo state networks (ESNs) constitute a novel approach to recurrent neural network (RNN) training, with an RNN (the reservoir) being generated randomly, and only a readout being trained using a simple computationally efficient algorithm. ESNs have greatly facilitated the practical application of RNNs, outperforming classical approaches on a number of benchmark tasks. In this paper, we introduce a novel Bayesian approach toward ESNs, the echo state Gaussian process (ESGP). The ESGP combines the merits of ESNs and Gaussian processes to provide a more robust alternative to conventional reservoir computing networks while also offering a measure of confidence on the generated predictions (in the form of a predictive distribution). We exhibit the merits of our approach in a number of applications, considering both benchmark datasets and real-world applications, where we show that our method offers a significant enhancement in the dynamical data modeling capabilities of ESNs. Additionally, we also show that our method is orders of magnitude more computationally efficient compared to existing Gaussian process-based methods for dynamical data modeling, without compromises in the obtained predictive performance.
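The reservoir-plus-trained-readout structure described above can be sketched as follows. This is a minimal ESN, not the ESGP itself: hyperparameters are arbitrary, and the Gaussian-process readout is replaced by plain ridge regression for brevity.

```python
import numpy as np

# Minimal echo state network: a fixed random reservoir, with only the
# linear readout trained (illustrative hyperparameters).
rng = np.random.default_rng(0)
n_res, n_in = 50, 1

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(u_seq):
    """Drive the fixed reservoir with an input sequence; collect states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave.
t = np.arange(300) * 0.1
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
# Only the readout is trained (ridge regression); the reservoir stays fixed.
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ w_out
mse = np.mean((pred[50:] - y[50:]) ** 2)   # skip the initial washout
```

The ESGP replaces the point-estimate readout `w_out` with a Gaussian-process posterior, which is what yields a predictive distribution rather than a single value.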
Citations: 105
Quaternion-valued nonlinear adaptive filtering.
Pub Date: 2011-08-01 Epub Date: 2011-06-27 DOI: 10.1109/TNN.2011.2157358
Bukhari Che Ujang, Clive Cheong Took, Danilo P Mandic

A class of nonlinear quaternion-valued adaptive filtering algorithms is proposed based on locally analytic nonlinear activation functions. To circumvent the stringent standard analyticity conditions which are prohibitive to the development of nonlinear adaptive quaternion-valued estimation models, we use the fact that stochastic gradient learning algorithms require only local analyticity at the operating point in the estimation space. It is shown that the quaternion-valued exponential function is locally analytic, and, since local analyticity extends to polynomials, products, and ratios, we show that a class of transcendental nonlinear functions can serve as activation functions in nonlinear and neural adaptive models. This provides a unifying framework for the derivation of gradient-based learning algorithms in the quaternion domain, and the derived algorithms are shown to have the same generic form as their real- and complex-valued counterparts. To make such models second-order optimal for the generality of quaternion signals (both circular and noncircular), we use recent developments in augmented quaternion statistics to introduce widely linear versions of the proposed nonlinear adaptive quaternion valued filters. This allows full exploitation of second-order information in the data, contained both in the covariance and pseudocovariances to cater rigorously for second-order noncircularity (improperness), and the corresponding power mismatch in the signal components. Simulations over a range of circular and noncircular synthetic processes and a real world 3-D noncircular wind signal support the approach.
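The quaternion exponential mentioned above can be computed directly from the polar-form identity exp(w + v) = e^w (cos|v| + (v/|v|) sin|v|) for a pure-vector part v. A small sketch, assuming the usual component-array representation of quaternions:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qexp(q):
    """Quaternion exponential: exp(w + v) = e^w (cos|v| + (v/|v|) sin|v|)."""
    w, v = q[0], np.asarray(q[1:], dtype=float)
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return np.array([np.exp(w), 0.0, 0.0, 0.0])
    return np.exp(w) * np.concatenate(([np.cos(nv)], (np.sin(nv) / nv) * v))

# q commutes with itself, so exp(q) * exp(-q) = exp(0) = identity.
q = np.array([0.3, 0.5, -0.2, 0.1])
identity = qmul(qexp(q), qexp(-q))
```

Because local analyticity extends to polynomials, products, and ratios, the same construction underlies the transcendental activation functions the paper builds on.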

Citations: 164
Cerebellar input configuration toward object model abstraction in manipulation tasks.
Pub Date: 2011-08-01 Epub Date: 2011-06-23 DOI: 10.1109/TNN.2011.2156809
Niceto R Luque, Jesus A Garrido, Richard R Carrillo, Olivier J-M D Coenen, Eduardo Ros

It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit state input representations (sensorimotor signals) complement each other to better handle different models and allow interpolation between two already stored models. This facilitates accurate corrections during manipulations of new objects taking advantage of already stored models.

Citations: 38
Consensus analysis of multiagent networks via aggregated and pinning approaches.
Pub Date: 2011-08-01 Epub Date: 2011-06-30 DOI: 10.1109/TNN.2011.2157938
Wenjun Xiong, Daniel W C Ho, Zidong Wang

In this paper, the consensus problem of multiagent nonlinear directed networks (MNDNs) is discussed in the case that a MNDN does not have a spanning tree to reach the consensus of all nodes. By using Lie algebra theory, a linear node-and-node pinning method is proposed to achieve consensus of a MNDN for all nonlinear functions satisfying a given set of conditions. Based on some optimal algorithms, large-size networks are aggregated into small-size ones. Then, by applying principal minor theory to the small-size networks, a sufficient condition is given to reduce the number of controlled nodes. Finally, simulation results are given to illustrate the effectiveness of the developed criteria.
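The pinning idea, attaching direct control inputs at selected nodes so that a network with no spanning tree still reaches agreement, can be sketched with a toy simulation. This is a linear protocol on a tiny disconnected digraph, purely illustrative of the mechanism; the paper's setting is nonlinear with Lie-algebra conditions.

```python
import numpy as np

# Toy pinned-consensus simulation (topology, gains, and the linear protocol
# are illustrative only).  A[i, j] = 1 means agent i listens to agent j.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)  # components {0, 1} and {2}: no spanning tree
target = 1.0
pin_gain = np.array([2.0, 0.0, 2.0])    # pin one node in each component

x = np.array([0.0, 3.0, -2.0])          # initial agent states
dt = 0.05
for _ in range(2000):
    coupling = A @ x - A.sum(axis=1) * x               # diffusive consensus term
    x = x + dt * (coupling + pin_gain * (target - x))  # pinning control term
# All agents converge to the pinned target despite the disconnected graph.
```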

Citations: 40
Semisupervised generalized discriminant analysis.
Pub Date : 2011-08-01 Epub Date: 2011-06-30 DOI: 10.1109/TNN.2011.2156808
Yu Zhang, Dit-Yan Yeung

Generalized discriminant analysis (GDA) is a commonly used method for dimensionality reduction. In its general form, it seeks a nonlinear projection that simultaneously maximizes the between-class dissimilarity and minimizes the within-class dissimilarity to increase class separability. In real-world applications where labeled data are scarce, GDA may not work very well. However, unlabeled data are often available in large quantities at very low cost. In this paper, we propose a novel GDA algorithm which is abbreviated as semisupervised generalized discriminant analysis (SSGDA). We utilize unlabeled data to maximize an optimality criterion of GDA and formulate the problem as an optimization problem that is solved using the constrained concave-convex procedure. The optimization procedure leads to estimation of the class labels for the unlabeled data. We propose a novel confidence measure and a method for selecting those unlabeled data points whose labels are estimated with high confidence. The selected unlabeled data can then be used to augment the original labeled dataset for performing GDA. We also propose a variant of SSGDA, called M-SSGDA, which adopts the manifold assumption to utilize the unlabeled data. Extensive experiments on many benchmark datasets demonstrate the effectiveness of our proposed methods.
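The confidence-based augmentation loop described in the abstract can be sketched as follows. This is a hedged illustration, not the paper's SSGDA: a nearest-class-mean classifier stands in for the GDA projection, the gap between the two nearest class means stands in for the paper's confidence measure, and `self_label`, `margin`, and the toy data are invented for the example.

```python
import math

def class_means(X, y):
    """Per-class mean of the labeled points (stand-in for the discriminant step)."""
    means = {}
    for label in set(y):
        pts = [x for x, l in zip(X, y) if l == label]
        means[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return means

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def self_label(X_lab, y_lab, X_unlab, margin=1.0):
    """One round of confidence-based augmentation: label each unlabeled point by
    its nearest class mean, and keep it only when the gap to the second-nearest
    mean exceeds `margin` (a stand-in confidence score)."""
    means = class_means(X_lab, y_lab)
    X_aug, y_aug = list(X_lab), list(y_lab)
    for x in X_unlab:
        d = sorted((dist(x, m), label) for label, m in means.items())
        if len(d) > 1 and d[1][0] - d[0][0] >= margin:
            X_aug.append(x)
            y_aug.append(d[0][1])           # accept the high-confidence label
    return X_aug, y_aug

# Two well-separated clusters with one labeled point each; the last unlabeled
# point sits exactly between them and should be rejected as low-confidence.
X_lab = [[0.0, 0.0], [5.0, 5.0]]
y_lab = [0, 1]
X_unlab = [[0.2, -0.1], [4.8, 5.1], [2.5, 2.5]]
X_aug, y_aug = self_label(X_lab, y_lab, X_unlab)
```

The two confident points are absorbed into the labeled set (which would then be fed back into the discriminant fit), while the ambiguous midpoint is left unlabeled.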

Citations: 33