Latest publications: Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)
VLSI implementation of a fully parallel stochastic neural network
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374527
J. Quero, J. G. Ortega, C. Janer, L. Franquelo
Presents a purely digital stochastic implementation of multilayer neural networks. The authors have developed this implementation using an architecture that permits the addition of a very large number of synaptic connections, provided that the neuron's transfer function is the hard-limiting function. The expression that relates the design parameter (the maximum pulse density) to the accuracy of the operations is used as the design criterion. The resulting circuit is easily configurable and expandable.
Citations: 5
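The pulse-density arithmetic behind such stochastic implementations can be illustrated in a few lines: a value in [0, 1] is encoded as the density of a random pulse stream, and an AND gate multiplies two independent streams. This is a software sketch of the general technique, not the authors' circuit; the function names are hypothetical.

```python
import random

def pulse_stream(p, n, rng):
    """Generate a Bernoulli pulse stream whose pulse density encodes p."""
    return [rng.random() < p for _ in range(n)]

def stochastic_multiply(p, q, n=100_000, seed=0):
    """Multiply two values in [0, 1] by AND-ing independent pulse streams.

    The fraction of coincident pulses estimates p * q; longer streams
    (larger n, i.e. higher maximum pulse density) give better accuracy,
    which mirrors the accuracy/design-parameter trade-off in the paper.
    """
    rng = random.Random(seed)
    a = pulse_stream(p, n, rng)
    b = pulse_stream(q, n, rng)
    return sum(x and y for x, y in zip(a, b)) / n
```

For example, `stochastic_multiply(0.5, 0.4)` converges to 0.2 as the stream length grows.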
The analysis of continuous temporal sequences by a map of sequential leaky integrators
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374733
C. Privitera, P. Morasso
The problem of detecting and recognizing the occurrence of specific events in a continually evolving environment is particularly important in many fields, starting with motor planning. In this paper, the authors propose a two-dimensional map whose processing elements correspond to specific instances of leaky integrators whose parameters (or tops) are learned in a self-organizing manner: in this way the map becomes a topological representation of temporal sequences, whose presence in a continuous temporal data flow can be detected by means of the activation level of the corresponding neurons.
Citations: 12
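A leaky integrator, the processing element this map is built from, is a one-line recurrence: the state decays by a fixed factor each step while accumulating the current input. A minimal sketch (the parameter name `leak` is an assumption, not the paper's notation):

```python
def leaky_integrator(inputs, leak=0.9):
    """Run a leaky integrator over an input sequence.

    state(t) = leak * state(t-1) + x(t): recent inputs dominate,
    older ones fade geometrically, so the activation level reflects
    how recently (and strongly) a pattern occurred in the stream.
    """
    state = 0.0
    trace = []
    for x in inputs:
        state = leak * state + x
        trace.append(state)
    return trace
```

With `leak=0.5`, an isolated unit pulse decays as 1.0, 0.5, 0.25, …, which is the fading memory that makes temporal sequences detectable from activation levels.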
A decomposition approach to forecasting electric power system commercial load using an artificial neural network
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.375040
G. Mbamalu, M. El-Hawary
We use a multilayer neural network with the backpropagation algorithm to forecast the commercial-sector portion of the load obtained by decomposing the system load of the Nova Scotia Power Inc. system. To minimize the effect of weather on the forecast, the commercial load is further decomposed into four autonomous sections of six-hour duration. The optimal input for a training set is determined from the sum of squared residuals of the predicted loads. The input patterns consist of the loads of the immediately preceding four or five hours, and the output is the fifth or sixth hour's load. The results obtained with the proposed approach provide evidence that, in the absence of influential variables such as temperature, careful selection of training patterns enhances the performance of an artificial neural network in predicting power system load.
Citations: 2
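The data preparation described above — a sliding window of the previous k hourly loads as input, the next hour as target, with the day split into four six-hour sections — can be sketched as follows. The helper names are hypothetical; the actual network training is not shown.

```python
def make_patterns(load, k=4):
    """Build training pairs: input = previous k hourly loads,
    target = the (k+1)-th hour's load."""
    X = [load[i:i + k] for i in range(len(load) - k)]
    y = [load[i + k] for i in range(len(load) - k)]
    return X, y

def split_sections(day_load):
    """Decompose a 24-hour load profile into four autonomous
    six-hour sections, each forecast by its own model."""
    return [day_load[i:i + 6] for i in range(0, 24, 6)]
```

Each six-hour section would then get its own set of `make_patterns` pairs, keeping weather-driven variation within a section roughly homogeneous.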
Sparse adaptive memory and handwritten digit recognition
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374336
B. Flachs, M. Flynn
Pattern recognition is a budding field with many possible approaches. This article describes sparse adaptive memory (SAM), an associative memory that builds upon the strengths of Parzen classifiers, nearest-neighbor classifiers, and feedforward neural networks, and is related to learning vector quantization. A key feature of this learning architecture is its ability to adaptively change its prototype patterns in addition to its output mapping. As SAM changes the prototype patterns in the list, it isolates modes in the density functions to produce a classifier that is in some sense optimal. Some very important interactions of gradient-descent learning are exposed, providing conditions under which gradient descent converges to an admissible solution in an associative memory structure. A layer of learning heuristics can be built upon the basic gradient-descent algorithm to improve memory efficiency in terms of error rate, and therefore hardware requirements. A simulation study examines the effect of one such heuristic in the context of handwritten digit recognition.
Citations: 0
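The nearest-prototype classification that SAM and LVQ-style methods share can be reduced to a few lines: assign an input to the label of its closest stored prototype. This is only the inference step under a squared-Euclidean assumption; SAM's adaptive updating of the prototype list is not shown.

```python
def classify(x, prototypes, labels):
    """Nearest-prototype classification.

    Returns the label of the prototype with the smallest squared
    Euclidean distance to x.
    """
    d = [sum((a - b) ** 2 for a, b in zip(x, p)) for p in prototypes]
    return labels[d.index(min(d))]
```

For handwritten digits, `prototypes` would be learned exemplar images and `labels` the digits 0–9; the paper's contribution is how those prototypes are adapted, not this lookup.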
An architecture for learning to behave
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374286
A. M. Aitken
The SAM architecture is a novel neural network architecture, based on the cerebral neocortex, for combining unsupervised learning modules. When used as part of the control system for an agent, the architecture enables the agent to learn the functional semantics of its motor outputs and sensory inputs, and to acquire behavioral sequences by imitating other agents (learning by 'watching'). This involves attempting to recreate the sensory sequences the agent has been exposed to. The architecture scales well to multiple motor and sensory modalities, and to more complex behavioral requirements. The SAM architecture may also hint at an explanation of several features of the operation of the cerebral neocortex.
Citations: 9
A modified backpropagation algorithm
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374289
B. K. Verma, J. Mulawka
A long and uncertain training process is one of the most important problems for a multilayer neural network using the backpropagation algorithm. In this paper, a modified backpropagation algorithm with a reliable and fast training process is presented. The modification is based on solving the weight matrix of the output layer using the theory of equations and least-squares techniques.
Citations: 16
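The core idea — replacing iterative gradient updates of the output layer with a direct least-squares solve — can be sketched as follows. This is a generic linear least-squares step, assuming a linear output layer; it is not the authors' exact formulation.

```python
import numpy as np

def solve_output_weights(H, T):
    """Solve the output-layer weights in one shot.

    H: (n_patterns, n_hidden) matrix of hidden-layer activations.
    T: (n_patterns, n_outputs) matrix of targets.
    Returns W minimizing ||H @ W - T||^2, instead of iterating
    gradient steps on the output layer.
    """
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W
```

When `H` has full column rank and the targets are realizable, the solve is exact, which is what makes the training of that layer "certain" rather than open-ended.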
VLSI implementation of the hippocampus on nonlinear system model
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374521
O. Chen, T. Berger, B. Sheu
A nonlinear model of the functional properties of the hippocampal formation has been developed. The architecture of the proposed hardware implementation has a topology highly similar to the anatomical structure of the hippocampus, and the dynamical properties of its components are based on experimental characterization of individual hippocampal neurons. The design scheme of an analog cellular neural network has been extensively applied. Using a 1-μm CMOS technology, a 5×5 neuron array with several test modules has been designed for fabrication. According to the SPICE-3 circuit simulator, the response time of each neuron memorizing 4 time units is around 0.5 μs.
Citations: 1
A multi-level backpropagation network for pattern recognition systems
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374724
C.Y. Chen, C. Hwang
The backpropagation network (BPN) is now widely used in the field of pattern recognition because this artificial neural network can classify complex patterns and perform nontrivial mapping functions. In this paper, we propose a multi-level backpropagation network (MLBPN) model as a classifier for practical pattern recognition systems. The described model retains the benefits of the BPN, and the extra benefit of the MLBPN is twofold: (1) the MLBPN reduces the complexity of the BPN, and (2) the recognition process is sped up. The experimental results verify these characteristics and show that the MLBPN model is a practical classifier for pattern recognition systems.
Citations: 1
Diminishing the number of nodes in multi-layered neural networks
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374981
P. Nocera, R. Quélavoine
In this paper we propose two ways of reducing the size of a multilayered neural network trained to recognise French vowels. The first deals with the hidden layers: studying the variation of the outputs of each node gives us information on its discriminative power and thus allows us to reduce the size of the network. The second involves the input nodes: by examining the connection weights between the input nodes and the following hidden layer, we can determine which features are actually relevant to our classification problem and eliminate the useless ones. Through the problem of recognising the French vowel /a/, we show that we can obtain a reduced structure that can still learn.
Citations: 6
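The first pruning criterion — a hidden node whose output barely varies across patterns carries little discriminative power — can be sketched as a variance screen. The threshold and function name are assumptions for illustration; the paper's precise criterion may differ.

```python
import statistics

def low_variance_nodes(activations, threshold=1e-3):
    """Flag hidden nodes that are candidates for removal.

    activations: list of per-pattern activation vectors, one vector
    per training pattern. A node whose output is nearly constant
    over all patterns cannot help discriminate between classes.
    """
    per_node = list(zip(*activations))  # transpose: one tuple per node
    return [i for i, outs in enumerate(per_node)
            if statistics.pvariance(outs) < threshold]
```

The second criterion (examining input-to-hidden weights) would be an analogous screen on the magnitude of each input node's outgoing weights.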
An adaptive VLSI neural network chip
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374523
R. Zaman, D. Wunsch
Presents an adaptive neural network that uses multiplying digital-to-analog converters (MDACs) as synaptic weights. The chip takes advantage of digital processing to learn weights but retains the parallel asynchronous behavior of analog systems, since part of the neuron functions are analog. The authors use 6-bit MDAC units for this chip. Hebbian learning is employed, which is very attractive for electronic neural networks since it uses only local information in adapting weights.
Citations: 4
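The locality that makes Hebbian learning attractive for hardware is visible in the rule itself: each weight update depends only on that synapse's own input and the neuron's output, with no global error signal to route across the chip. A minimal sketch of the classic rule (the 6-bit quantization of the MDACs is not modeled here):

```python
def hebbian_update(w, x, y, lr=0.1):
    """One Hebbian step: delta_w_i = lr * x_i * y.

    Purely local — each weight change uses only its own input x_i
    and the postsynaptic output y, which is why the rule maps so
    directly onto per-synapse hardware like an MDAC array.
    """
    return [wi + lr * xi * y for wi, xi in zip(w, x)]
```

In a real chip each updated weight would additionally be rounded to the nearest of the 64 levels a 6-bit MDAC can represent.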