
Latest publications from The 2011 International Joint Conference on Neural Networks

Self-Organizing Neural Population Coding for improving robotic visuomotor coordination
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033393
Tao Zhou, P. Dudek, Bertram E. Shi
We present an extension of Kohonen's Self-Organizing Map (SOM) algorithm called the Self-Organizing Neural Population Coding (SONPC) algorithm. The algorithm adapts the neural population encoding of a robot's sensory and motor coordinates online, according to the underlying data distribution. By allocating more neurons to areas of the sensory or motor space that are visited more frequently, this representation improves the accuracy of a robot system on a visually guided reaching task. We also suggest a Mean Reflection method to solve the notorious border effect problem encountered with SOMs, for the special case where the latent space and the data space have the same dimension.
Citations: 7
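The density-matched allocation described in the abstract is the hallmark of Kohonen-style learning. As a minimal sketch (a standard 1-D online SOM with invented parameters, not the authors' SONPC), units trained on a non-uniformly sampled 1-D sensory variable end up concentrated in the frequently visited region:

```python
import numpy as np

def som_update(weights, x, t, n_iter, lr0=0.5, sigma0=2.0):
    """One online Kohonen SOM step: move the winning unit and its
    latent-space neighbours toward the sample x."""
    lr = lr0 * (1.0 - t / n_iter)                    # decaying learning rate
    sigma = max(sigma0 * (1.0 - t / n_iter), 0.5)    # decaying neighbourhood width
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    idx = np.arange(len(weights))
    h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))  # neighbourhood kernel
    weights += lr * h[:, None] * (x - weights)
    return weights

rng = np.random.default_rng(0)
# 1-D latent chain of 20 units encoding a 1-D sensory variable; 80% of the
# samples fall in [0, 0.3], so more units should migrate into that region.
w = rng.uniform(0, 1, size=(20, 1))
n = 2000
for t in range(n):
    x = rng.uniform(0, 0.3) if rng.random() < 0.8 else rng.uniform(0.3, 1.0)
    som_update(w, np.array([x]), t, n)
print((w < 0.3).mean())  # fraction of units allocated to the dense region
```

Under these assumptions the fraction of units in the dense region clearly exceeds the 0.3 that a uniform allocation would give.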
Natural language generation using automatically constructed lexical resources
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033329
Naho Ito, M. Hagiwara
One of the practical goals of neural network research is to enable machines to converse with humans. This paper proposes a novel natural language generation method that uses automatically constructed lexical resources. The proposed method employs two such resources: Kyoto University's case frame data and Google N-gram data. Word frequencies in the case frames can be regarded as obtained by Hebb's learning rule, and the co-occurrence frequencies in the Google N-gram data as acquired by an associative memory. The method takes words as input and generates a sentence from case frames, using the Google N-gram data to account for co-occurrence frequencies between words. Because only automatically constructed lexical resources are used, the proposed method achieves higher coverage than methods based on manually constructed templates. Experiments examining the quality of the generated sentences yielded satisfactory results.
Citations: 9
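The core selection step, choosing a slot filler by co-occurrence frequency, can be illustrated with a toy sketch; the case frame and bigram counts below are invented stand-ins for the Kyoto case frame and Google N-gram resources:

```python
# Toy sketch with invented data: fill a case-frame slot by choosing the
# candidate word with the highest bigram co-occurrence count.
bigram_counts = {("drink", "water"): 900, ("drink", "bread"): 3,
                 ("eat", "water"): 5, ("eat", "bread"): 700}
case_frame = {"verb": "drink", "object_candidates": ["water", "bread"]}

def fill_object(frame, counts):
    """Pick the object candidate that co-occurs most often with the verb."""
    verb = frame["verb"]
    return max(frame["object_candidates"],
               key=lambda w: counts.get((verb, w), 0))

print(fill_object(case_frame, bigram_counts))  # -> water
```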
A new algorithm for graph mining
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033330
B. Chandra, Shalini Bhaskar
Mining frequent substructures has gained importance in the recent past, and a number of algorithms have been proposed for mining undirected graphs. This paper focuses on mining frequent substructures in directed labeled graphs, which has a variety of applications in areas such as biology and web mining. A novel approach based on the equivalence class principle is proposed to reduce the size of the graph database that must be processed when searching for frequent substructures. Candidate substructures are generated using a combination of L-R join operations with serial and mixed extensions. This avoids missing any candidate substructures while generating those with a high probability of becoming frequent.
Citations: 1
Adaptive self-protective motion based on reflex control
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033596
T. Shimizu, R. Saegusa, Shuhei Ikemoto, H. Ishiguro, G. Metta
This paper describes a self-protective whole-body control method for humanoid robots. A set of postural reactions is used to create whole-body movements; the reactions are merged to cope with an arbitrary falling direction while allowing the upper limbs to make safe contact with obstacles. Collision detection is achieved by force sensing. We verified in simulation that our method generates self-protective motions in real time and reduces the impact energy in multiple situations, and we also verified that the system works adequately on a real robot.
Citations: 3
Finding dependent and independent components from two related data sets
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033257
J. Karhunen, T. Hao
Independent component analysis (ICA) and blind source separation (BSS) are usually applied to a single data set. Both techniques are now well understood, and several good methods, based on somewhat varying assumptions about the data, are available. In this paper, we consider an extension of ICA and BSS for separating mutually dependent and independent components from two different but related data sets. This problem is important in practice, because such data sets are common in real-world applications. We propose a new method that first uses canonical correlation analysis (CCA) to detect the subspaces of dependent and independent components; standard ICA and BSS methods can then be used for the final separation of these components. The proposed method performs excellently on synthetic data sets for which the assumed data model holds exactly, and provides meaningful results for real-world robot grasping data. The method has a sound theoretical basis, is straightforward to implement, and is computationally undemanding. Moreover, it has a very important by-product: it clearly improves the separation results provided by the FastICA and UniBSS methods used in our experiments. Not only are the signal-to-noise ratios of the separated sources often clearly higher, but CCA preprocessing also helps FastICA separate sources that it alone cannot separate.
Citations: 4
Conditional multi-output regression
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033220
Chao Yuan
In multi-output regression, the goal is to establish a mapping from inputs to multivariate outputs, all of which are usually assumed unknown. In practice, however, some outputs may become available. How can we use this extra information to improve our prediction of the remaining outputs? For example, can we use the job data released today to better predict the house sales data to be released tomorrow? Most previous approaches use a single generative model for the joint predictive distribution of all outputs, from which the unknown outputs are inferred conditionally on the known ones. However, learning such a joint distribution over all outputs is very challenging, and unnecessary if our goal is only to predict each unknown output. We propose a conditional model that directly models the conditional probability of a target output given both the inputs and all other outputs. A simple generative model is used to infer the other outputs when they are unknown. Both models consist only of standard regression predictors, for example Gaussian processes, which can be easily learned.
Citations: 2
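The idea of conditioning the target on a known output can be sketched with scikit-learn Gaussian process regressors on invented synthetic data (an illustration of the conditional-model idea, not the authors' exact model):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=(300, 1))
y2 = np.sin(3 * x[:, 0]) + 0.2 * rng.normal(size=300)  # output that later becomes known
y1 = y2 + 0.3 * x[:, 0] + 0.05 * rng.normal(size=300)  # target output to predict

kernel = RBF() + WhiteKernel()
# Conditional model: regress the target on the inputs AND the other output.
cond = GaussianProcessRegressor(kernel=kernel).fit(
    np.column_stack([x[:250, 0], y2[:250]]), y1[:250])
# Baseline: regress the target on the inputs alone.
base = GaussianProcessRegressor(kernel=kernel).fit(x[:250], y1[:250])

err_cond = np.mean((cond.predict(np.column_stack([x[250:, 0], y2[250:]])) - y1[250:]) ** 2)
err_base = np.mean((base.predict(x[250:]) - y1[250:]) ** 2)
print(err_cond, err_base)  # conditioning on y2 should reduce the error
```

Because y1 inherits y2's noise, the baseline cannot do better than that noise floor, while the conditional model observes y2 directly and achieves a lower test error.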
Sparse analog associative memory via L1-regularization and thresholding
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033470
R. Chalasani, J. Príncipe
The CA3 region of the hippocampus acts as an auto-associative memory and is responsible for the consolidation of episodic memory. Two important characteristics of such a network are the sparsity of the stored patterns and the nonsaturating firing-rate dynamics. To construct such a network, we use a maximum a posteriori cost function, regularized with the L1 norm, to change the internal state of the neurons; a linear thresholding function is then used to obtain the desired output firing rate. We show how such a model leads to a more biologically reasonable dynamic model that produces sparse output and recalls stored patterns with good accuracy when the network is presented with a corrupted input.
Citations: 0
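The MAP-with-L1 step can be sketched with plain ISTA (iterative soft-thresholding) followed by a linear threshold on the internal state; the random dictionary, stored pattern, and parameters below are invented for illustration and are not the authors' hippocampal model:

```python
import numpy as np

def soft(v, lam):
    """Soft-thresholding operator, the proximal map of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(W, y, lam=0.1, step=None, n_iter=200):
    """Minimise 0.5 * ||y - W s||^2 + lam * ||s||_1 by iterative
    soft-thresholding; s plays the role of the neurons' internal state."""
    if step is None:
        step = 1.0 / np.linalg.norm(W, 2) ** 2  # 1/L, L = Lipschitz constant
    s = np.zeros(W.shape[1])
    for _ in range(n_iter):
        s = soft(s + step * W.T @ (y - W @ s), step * lam)
    return s

rng = np.random.default_rng(0)
W = rng.normal(size=(60, 100)) / np.sqrt(60)      # random dictionary
s_true = np.zeros(100)
s_true[rng.choice(100, 5, replace=False)] = 1.0   # sparse stored pattern
y = W @ s_true + 0.01 * rng.normal(size=60)       # corrupted input
s = ista(W, y)
rate = np.maximum(s - 0.2, 0.0)  # linear threshold -> nonnegative firing rate
print(np.flatnonzero(rate))      # active units after recall
```

In this regime (5 active units out of 100, 60 noisy measurements) the thresholded state recovers the support of the stored pattern.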
A neuromorphic architecture from single transistor neurons with organic bistable devices for weights
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033256
Robert A. Nawrocki, S. Shaheen, R. Voyles
Artificial intelligence (AI) has made tremendous progress since it was first postulated in the 1950s. However, AI systems are primarily emulated on serial machine hardware, which results in high power consumption, especially when compared to their biological counterparts. Recent interest in neuromorphic architectures aims to emulate biological information processing more directly, to achieve substantially lower power consumption on appropriate information-processing tasks. We propose a novel way of realizing a neuromorphic architecture, termed the Synthetic Neural Network (SNN), that is modeled after conventional artificial neural networks and incorporates organic bistable devices as circuit elements resembling the basic operation of a binary synapse. Via computer simulation we demonstrate how a single synthetic neuron, created with only a single transistor, one bistable device per input, and two resistors, exhibits the behavior of an artificial neuron and approximates the sigmoidal activation function. We also show that, by increasing the number of bistable devices per input, a single neuron can be trained to behave like a Boolean AND or OR gate. To validate the efficacy of our design, we present two simulations in which the SNN is used as a pattern classifier of complicated, nonlinear relationships based on real-world problems. In the first, our SNN performs the trained task of directional propulsion due to the water hammer effect with an average error of about 7.2%; in the second, robotic wall following, the SNN error is approximately 9.6%. Our simulations and analysis are based on the performance of organic electronic elements created in our laboratory.
Citations: 11
Application of Cover's theorem to the evaluation of the performance of CI observers
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033334
F. Samuelson, David G. Brown
For any N points arbitrarily located in a d-dimensional space, Thomas Cover popularized and augmented a theorem that gives an expression for how many of the 2^N possible two-class dichotomies of those points are separable by a hyperplane. Since separating two-class dichotomies in d dimensions is a common problem addressed by computational intelligence (CI) decision functions or "observers," Cover's theorem provides a benchmark against which CI observer performance can be measured. We demonstrate that the performance of a simple perceptron approaches this ideal, and show how a single-layer MLP and an SVM fare in comparison. We show how Cover's theorem can be used to develop a procedure for CI parameter optimization and to serve as a descriptor of CI complexity. Both simulated and microarray genomic data are used.
Citations: 4
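Cover's counting function itself is short enough to compute directly. The sketch below uses the standard form for hyperplanes through the origin in R^d; the affine-hyperplane count in R^d (the case the abstract describes) is obtained as C(N, d+1):

```python
from math import comb

def cover_count(N, d):
    """Cover's function-counting theorem: the number of the 2**N dichotomies
    of N points in general position in R**d that are realisable by a
    hyperplane through the origin: C(N, d) = 2 * sum_{k=0}^{d-1} C(N-1, k)."""
    return 2 * sum(comb(N - 1, k) for k in range(d))

# Affine separation in the plane corresponds to C(N, d+1) with d = 2:
print(cover_count(4, 3))  # -> 14: of the 16 dichotomies of 4 points in general
                          # position in the plane, all but XOR's two are
                          # linearly separable
```

For N <= d the count equals 2**N, i.e. every dichotomy is separable, which is the capacity benchmark the paper measures CI observers against.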
Metamodeling for large-scale optimization tasks based on object networks
Pub Date : 2011-10-03 DOI: 10.1109/IJCNN.2011.6033602
L. Werbos, R. Kozma, Rodrigo Silva-Lugo, G. E. Pazienza, P. Werbos
Optimization in large-scale networks, such as large logistical networks and electric power grids involving many thousands of variables, is a very challenging task. In this paper, we present the theoretical basis and related experiments involving the development and use of visualization tools and improvements to existing best practices for managing optimization software, in preparation for the use of "metamodeling": the insertion of complex neural networks or other universal nonlinear function approximators into key parts of these complicated and expensive computations. This novel approach has been developed by the new Center for Large-Scale Integrated Optimization and Networks (CLION) at the University of Memphis, TN.
Citations: 3