
The 2011 International Joint Conference on Neural Networks: Latest Publications

Reinforcement active learning hierarchical loops
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033617
Goren Gordon, E. Ahissar
A curious agent, be it a robot, animal or human, acts so as to learn as much as possible about itself and its environment. Such an agent can also learn without external supervision, instead actively probing its surroundings and autonomously inducing the relations between its actions' effects on the environment and the resulting sensory input. We present a model of hierarchical motor-sensory loops for such an autonomous active-learning agent, i.e. a model that selects the appropriate action in order to optimize the agent's learning. Furthermore, learning one motor-sensory mapping enables the learning of other mappings, thus increasing the extent and diversity of knowledge and skills, usually in a hierarchical manner. Each such loop attempts to optimally learn a specific correlation between the agent's available internal information, e.g. sensory signals and motor efference copies, by finding the action that optimizes that learning. We demonstrate this architecture on the well-studied vibrissae system, and show how sensory-motor loops are actively learnt from the bottom up, starting with the forward and inverse models of whisker motion and then extending them to object localization. The model predicts a transition from free-air whisking, which optimally learns the self-generated motor-sensory mapping, to touch-induced palpation, which optimizes object localization; both are observed in naturally behaving rats.
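The action-selection principle in this abstract, acting where one's own prediction is worst, can be sketched in a few lines. Everything below (the toy world, the learning rate, the error bookkeeping) is illustrative and not taken from the paper:

```python
import random

# Toy world (hypothetical, not from the paper): each action produces a
# noisy sensory reading around an unknown mean the agent must learn.
true_effect = {0: 1.0, 1: 3.0, 2: -2.0}

estimates = {a: 0.0 for a in true_effect}        # learned forward model
errors = {a: float("inf") for a in true_effect}  # last prediction error per action

random.seed(0)
for _ in range(300):
    # curiosity-driven selection: probe where the model's prediction is worst
    action = max(errors, key=errors.get)
    sensed = true_effect[action] + random.gauss(0, 0.1)
    errors[action] = abs(sensed - estimates[action])         # record surprise
    estimates[action] += 0.2 * (sensed - estimates[action])  # update the model
```

Here a single loop learns one motor-sensory mapping; the paper stacks such loops so that a learned mapping (e.g. whisker motion) becomes the substrate on which the next one (object localization) is learned.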
Citations: 16
Lag selection for time series forecasting using Particle Swarm Optimization
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033535
Gustavo H. T. Ribeiro, P. S. D. M. Neto, George D. C. Cavalcanti, Ing Ren Tsang
Time series forecasting is useful in many areas of knowledge, such as biology, economics and climatology, among others. A very important step in time series prediction is the correct selection of the past observations (lags). This paper applies a new particle-swarm-based algorithm, Frankenstein's Particle Swarm Optimization (FPSO), to feature selection on time series. Many kinds of filters and wrappers have been proposed for feature selection, but these approaches are limited by properties of the data set, such as its size and whether the relationships are linear. Optimization algorithms such as FPSO make no assumptions about the data and converge faster. Hence, FPSO may find a good set of lags for time series forecasting and produce more accurate forecasts. Two prediction models were used: a Multilayer Perceptron neural network (MLP) and Support Vector Regression (SVR). The results show that the approach improved on previous results and that forecasting with SVR produced the best results; moreover, feature selection with FPSO outperformed feature selection with the original Particle Swarm Optimization.
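The lag-selection step can be made concrete with a toy stand-in: score candidate binary lag masks by held-out one-step forecast error and keep the best mask. FPSO would steer this search; plain random search and a least-squares AR model (rather than the paper's MLP/SVR) are used here only to keep the sketch short, and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series that truly depends on lags 1 and 3
n = 1500
x = np.zeros(n)
noise = rng.normal(0, 0.1, n)
for t in range(3, n):
    x[t] = 0.6 * x[t - 1] + 0.3 * x[t - 3] + noise[t]

MAX_LAG = 6
split = int(0.8 * (n - MAX_LAG))   # train/validation boundary

def score(mask):
    """Held-out one-step forecast MSE of an AR model on the selected lags."""
    lags = [i + 1 for i, b in enumerate(mask) if b]
    if not lags:
        return np.inf
    X = np.column_stack([x[MAX_LAG - l:n - l] for l in lags])
    y = x[MAX_LAG:]
    coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    return float(np.mean((y[split:] - X[split:] @ coef) ** 2))

best_mask, best_err = None, np.inf
for _ in range(200):               # FPSO would guide this search intelligently
    mask = (rng.random(MAX_LAG) < 0.5).tolist()
    e = score(mask)
    if e < best_err:
        best_mask, best_err = mask, e

selected = [i + 1 for i, b in enumerate(best_mask) if b]
```

The winning mask should retain the truly informative lags, since dropping them sharply increases the validation error.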
Citations: 30
Solving Traveling Salesman Problem by a hybrid combination of PSO and Extremal Optimization
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033402
Saeed Khakmardan, H. Poostchi, M. Akbarzadeh-T.
Particle Swarm Optimization (PSO) has received great attention in recent years as a successful global search algorithm, due to its simple implementation and inexpensive computational overhead. However, PSO still suffers from early convergence to locally optimal solutions. Extremal Optimization (EO) is a local search algorithm that has been able to solve NP-hard optimization problems. Combining PSO with EO benefits from the exploration ability of PSO and the exploitation ability of EO, and reduces the probability of early trapping in local optima. In other words, thanks to EO's strong local search capability, PSO can focus on its global search through a new mutation operator that prevents loss of variety among the particles; this operator is applied when a particle's parameters exceed the problem constraints. The resulting hybrid algorithm, Mutated PSO-EO (MPSO-EO), is then applied to the Traveling Salesman Problem (TSP) as an NP-hard multimodal optimization problem. The performance of the proposed approach is compared with several other metaheuristic methods on 3 well-known TSP databases and 10 unimodal and multimodal benchmark functions.
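The division of labour described here, a global explorer feeding starting points to a strong local searcher, can be illustrated with a deliberately simplified stand-in: random tour proposals play the PSO role and 2-opt improvement plays the EO role. This shows the structure of the hybrid, not the authors' MPSO-EO operators:

```python
import itertools
import random

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(9)]  # toy instance

def length(tour):
    """Total Euclidean length of a closed tour."""
    return sum(((pts[tour[i]][0] - pts[tour[i - 1]][0]) ** 2 +
                (pts[tour[i]][1] - pts[tour[i - 1]][1]) ** 2) ** 0.5
               for i in range(len(tour)))

def two_opt(tour):
    """Local search (EO's role here): reverse segments while that helps."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if length(cand) < length(tour):
                    tour, improved = cand, True
    return tour

best = None
for _ in range(20):                        # global exploration (PSO's role here)
    start = list(range(len(pts)))
    random.shuffle(start)
    cand = two_opt(start)                  # local exploitation
    if best is None or length(cand) < length(best):
        best = cand

# brute-force optimum on this small instance, for reference
opt = min(([0] + list(p) for p in itertools.permutations(range(1, len(pts)))),
          key=length)
```

On 9 cities the hybrid's best tour lands at or very near the brute-force optimum; the point is that neither random restarts nor 2-opt alone is as reliable as their combination.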
Citations: 4
Fast AdaBoost training using weighted novelty selection
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033366
Mojtaba Seyedhosseini, António R. C. Paiva, T. Tasdizen
In this paper, a new AdaBoost learning framework, called WNS-AdaBoost, is proposed for training discriminative models. The proposed approach significantly speeds up the learning process of adaptive boosting (AdaBoost) by reducing the number of data points. For this purpose, we introduce the weighted novelty selection (WNS) sampling strategy and combine it with AdaBoost to obtain an efficient and fast learning algorithm. WNS selects a representative subset of the data, thereby reducing the number of data points to which AdaBoost is applied. In addition, WNS associates a weight with each selected data point such that the weighted subset approximates the distribution of all the training data. This ensures that AdaBoost can be trained efficiently and with minimal loss of accuracy. The performance of WNS-AdaBoost is first demonstrated on a classification task. Then, WNS is employed in a probabilistic boosting-tree (PBT) structure for image segmentation. Results in these two applications show that the training time using WNS-AdaBoost is greatly reduced, at the cost of only a few percent in accuracy.
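The core idea, replace the training set by weighted representatives and boost on those, can be sketched as follows. The grid-binning below is my simple stand-in for the paper's novelty criterion, and the data and constants are illustrative:

```python
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(0)

# Two overlapping 1-D classes (hypothetical data, not from the paper)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
y = np.concatenate([-np.ones(500), np.ones(500)])

# -- stand-in for WNS: one representative per 0.5-wide bin (and class),
#    weighted by how many training points it stands for
bins = defaultdict(list)
for xi, yi in zip(x, y):
    bins[(round(xi * 2) / 2, yi)].append(xi)
reps = np.array([np.mean(v) for v in bins.values()])
rep_y = np.array([k[1] for k in bins])
w = np.array([len(v) for v in bins.values()], dtype=float)
w /= w.sum()                      # weighted subset approximates the data

# -- AdaBoost with decision stumps, run on the weighted representatives only
ensemble = []
for _ in range(10):
    best = (np.inf, 0.0, 1)
    for thr in reps:              # search the best weighted stump
        for sign in (1, -1):
            pred = sign * np.where(reps >= thr, 1.0, -1.0)
            err = w[pred != rep_y].sum()
            if err < best[0]:
                best = (err, thr, sign)
    err, thr, sign = best
    err = min(max(err, 1e-12), 1 - 1e-12)
    alpha = 0.5 * np.log((1 - err) / err)
    pred = sign * np.where(reps >= thr, 1.0, -1.0)
    ensemble.append((alpha, thr, sign))
    w *= np.exp(-alpha * rep_y * pred)    # standard AdaBoost reweighting
    w /= w.sum()

# evaluate the boosted classifier on the full, unreduced training set
score = sum(a * s * np.where(x >= t, 1.0, -1.0) for a, t, s in ensemble)
accuracy = float(np.mean(np.sign(score) == y))
```

Boosting touches only the few dozen representatives instead of all 1000 points, which is where the training-time saving comes from, while accuracy on the full set stays close to what full-data training would give.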
Citations: 23
Modularity adaptation in cooperative coevolution of feedforward neural networks
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033287
Rohitash Chandra, Marcus Frean, Mengjie Zhang
In this paper, an adaptive modularity cooperative coevolutionary framework is presented for training feedforward neural networks. The modularity adaptation framework is composed of different neural network encoding schemes which transform from one level to another based on the network error. The proposed framework is compared with canonical cooperative coevolutionary methods. The results show that the proposal outperforms its counterparts in terms of training time, success rate and scalability.
Citations: 11
A reversibility analysis of encoding methods for spiking neural networks
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033443
Cameron Johnson, Sinchan Roychowdhury, G. Venayagamoorthy
There is much excitement surrounding the idea of using spiking neural networks (SNNs) as the next generation of function-approximating neural networks. However, with the unique mechanism of communication (neural spikes) between neurons comes the challenge of transferring real-world data into the network to process. Many different encoding methods have been developed for SNNs, most temporal and some spatial. This paper analyzes three of them (Poisson rate encoding, Gaussian receptor fields, and a dual-neuron n-bit representation) and tests to see if the information is fully transformed into the spiking patterns. An oft-neglected consideration in encoding for SNNs is whether or not the real-world data is even truly being introduced to the network. By testing the reversibility of the encoding methods in this paper, the completeness of the information's presence in the pattern of spikes to serve as an input to an SNN is determined.
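The reversibility test is easy to state for the rate-coding case: encode a scalar as a spike train, decode it back, and check the round trip. The sketch below applies this to a Bernoulli-per-bin approximation of Poisson rate encoding; bin count and values are illustrative, and the paper's Gaussian receptor fields and dual-neuron n-bit codes would each need their own decoder:

```python
import random

random.seed(0)

def encode(value, bins=2000):
    """Poisson-style rate code: one Bernoulli spike per time bin, p = value."""
    return [1 if random.random() < value else 0 for _ in range(bins)]

def decode(spikes):
    """Inverse read-out: the empirical firing rate."""
    return sum(spikes) / len(spikes)

# round-trip a few values in [0, 1]; with enough bins the code is
# (statistically) reversible, i.e. the information really is in the spikes
recovered = [round(decode(encode(v)), 2) for v in (0.1, 0.5, 0.9)]
```

If the decoded values did not match the originals, the encoding would be discarding information before the SNN ever sees it, which is exactly the failure mode the paper's analysis screens for.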
Citations: 9
Generation of composed musical structures through recurrent neural networks based on chaotic inspiration
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033648
Andres Eduardo Coca Salazar, R. Romero, Liang Zhao
In this work, an Elman recurrent neural network is used for automatic composition of musical structures, based on the style of music previously learned during the training phase. Furthermore, a small fragment of a chaotic melody is added to the input layer of the neural network as an inspiration source, to attain greater variability of the melodies. The neural network is trained with the BPTT (back-propagation through time) algorithm. Several melodic measures are also presented for characterizing the melodies produced by the neural network and for analyzing the effect of inserting chaotic inspiration relative to the original melody's characteristics. Specifically, a melodic similarity measure is used to contrast the variability obtained between the learned melody and each of the composed melodies for different quantities of inspiration notes.
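The chaotic-inspiration ingredient can be sketched independently of the network: iterate a chaotic map and quantise the trajectory to a scale. The logistic map and the pentatonic scale below are my stand-ins (the abstract does not say which map or scale the authors use), and the Elman network that would consume this fragment is omitted:

```python
# Chaotic "inspiration" fragment via the logistic map, quantised to a scale
r, x = 3.99, 0.3                      # r near 4 puts the map in chaos
scale = [60, 62, 64, 67, 69]          # MIDI notes, C major pentatonic (illustrative)
fragment = []
for _ in range(16):
    x = r * x * (1 - x)               # chaotic update; x stays in (0, 1)
    fragment.append(scale[int(x * len(scale))])
```

Because the map is deterministic but sensitive to its seed, tiny changes to the initial `x` yield entirely different fragments, which is what makes it useful as a variability source for the composer network.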
Citations: 21
Models of Clifford recurrent neural networks and their dynamics
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033336
Y. Kuroe
Recently, models of neural networks in the real domain have been extended into high-dimensional domains such as the complex and quaternion domains, and several high-dimensional models have been proposed. These extensions are generalized by introducing Clifford algebra (geometric algebra). In this paper we extend conventional real-valued models of recurrent neural networks into the domain defined by Clifford algebra and discuss their dynamics. Since the geometric product is non-commutative, several different models can be considered. We propose three models of fully connected recurrent neural networks, which are extensions of the real-valued Hopfield-type neural networks to the domain defined by Clifford algebra. We also study the dynamics of the proposed models from the point of view of the existence conditions of an energy function. We discuss existence conditions of an energy function for two classes of Hopfield-type Clifford neural networks.
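The non-commutativity that forces the choice between distinct (e.g. left- and right-multiplication) network models is easy to see in the quaternion case, one instance of a Clifford algebra:

```python
def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)
# ij = k but ji = -k: the geometric product is non-commutative, so a
# weight multiplying a state from the left defines a different network
# than the same weight multiplying from the right.
```

This is why the paper can propose three distinct fully connected recurrent models from one real-valued template: each placement of the weights relative to the states is a genuinely different map.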
Citations: 43
A batch self-organizing maps algorithm based on adaptive distances
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033515
L. Pacífico, F. D. Carvalho
Clustering methods aim to organize a set of items into clusters such that items within a given cluster are highly similar, while items belonging to different clusters are highly dissimilar. The self-organizing map (SOM) introduced by Kohonen is an unsupervised competitive-learning neural network method that has both clustering and visualization properties, using a lateral neighborhood interaction function to discover the topological structure hidden in the data set. In this paper, we introduce a batch self-organizing map algorithm based on adaptive distances. Experimental results on real benchmark datasets show the effectiveness of our approach in comparison with traditional batch self-organizing map algorithms.
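For context, one pass of the standard Kohonen batch rule that the paper builds on can be sketched as follows: assign every point to its best-matching unit, then move each prototype to the neighbourhood-weighted mean of the data. The adaptive-distance contribution of the paper is omitted here; the data, map size and schedule are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# two well-separated 2-D blobs (toy data)
data = np.vstack([rng.normal(-3.0, 0.3, (40, 2)),
                  rng.normal(3.0, 0.3, (40, 2))])

n_units = 4
proto = rng.normal(0, 1, (n_units, 2))   # prototype vectors
grid = np.arange(n_units)                # unit positions on a 1-D map

for sigma in (2.0, 1.0, 0.5, 0.2):       # shrinking neighbourhood radius
    # batch step 1: assign every point to its best-matching unit (BMU)
    d = ((data[:, None, :] - proto[None, :, :]) ** 2).sum(-1)
    bmu = d.argmin(1)
    # batch step 2: each prototype becomes the neighbourhood-weighted mean
    h = np.exp(-(grid[None, :] - grid[bmu][:, None]) ** 2 / (2 * sigma ** 2))
    proto = (h[:, :, None] * data[:, None, :]).sum(0) / h.sum(0)[:, None]
```

Unlike the online SOM, each pass uses all the data at once, so there is no learning-rate schedule to tune; the paper's adaptive distances would replace the fixed squared-Euclidean metric in step 1.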
Citations: 2
Visually-guided adaptive robot (ViGuAR)
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033608
Gennady Livitz, Heather Ames, Ben Chandler, A. Gorchetchnikov, Jasmin Léveillé, Zlatko Vasilkoski, Massimiliano Versace, E. Mingolla, G. Snider, R. Amerson, Dick Carter, H. Abdalla, M. Qureshi
A neural modeling platform known as Cog ex Machina (Cog), developed in the context of the DARPA SyNAPSE program, offers a computational environment that promises, in the foreseeable future, the creation of adaptive whole-brain systems subserving complex behavioral functions in virtual and robotic agents. Cog is designed to operate on low-powered, extremely storage-dense memristive hardware that would support massively parallel, scalable computations. We report an adaptive robotic agent, ViGuAR, that we developed as a neural model implemented on the Cog platform. The neuromorphic architecture of the ViGuAR brain is designed to support visually-guided navigation and learning, which, in combination with the path-planning, memory-driven navigation agent MoNETA, also developed at the Neuromorphics Lab at Boston University, should effectively account for a wide range of key features in rodents' navigational behavior.
Citations: 2