
Latest publications: Proceedings of Fifth International Conference on Microelectronics for Neural Networks

On-line arithmetic-based reprogrammable hardware implementation of multilayer perceptron back-propagation
B. Girau, Arnaud Tisserand
A digital hardware implementation of complete neural-network learning is described. It uses on-line arithmetic on FPGAs. The modularity of our solution avoids the development problems that occur with more conventional hardware circuits. A precise analysis of the computations required by the back-propagation algorithm allows us to maximize the parallelism of our implementation.
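The paper targets digit-serial (on-line) arithmetic on FPGAs; as a plain software reference for the computation whose parallelism is being analyzed, a minimal NumPy back-propagation step for a two-layer perceptron might look like the following (network sizes, learning rate, and data are illustrative, not taken from the paper):

```python
import numpy as np

# Minimal two-layer perceptron with one back-propagation step.
# Illustrative software reference only; the paper implements this
# computation with on-line (digit-serial) arithmetic on FPGAs.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))            # 8 samples, 4 inputs
t = rng.random(size=(8, 1))            # targets in [0, 1]
W1 = rng.normal(scale=0.5, size=(4, 5))
W2 = rng.normal(scale=0.5, size=(5, 1))
lr = 0.1

def loss(W1, W2):
    y = sigmoid(sigmoid(X @ W1) @ W2)
    return float(np.mean((y - t) ** 2))

before = loss(W1, W2)
# Forward pass
h = sigmoid(X @ W1)
y = sigmoid(h @ W2)
# Backward pass: the per-neuron deltas within a layer are mutually
# independent, which is the parallelism a hardware design can exploit.
d_y = (y - t) * y * (1 - y)
d_h = (d_y @ W2.T) * h * (1 - h)
W2 -= lr * h.T @ d_y / len(X)
W1 -= lr * X.T @ d_h / len(X)
after = loss(W1, W2)
```

A single gradient step with a small learning rate should reduce the mean-squared error on this toy data.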
DOI: 10.1109/MNNFS.1996.493788 · Published 1996-02-12
Citations: 22
An efficient handwritten digit recognition method on a flexible parallel architecture
A.P. Maubant, Y. Autret, G. Leonhard, G. Ouvradou, A. Thépaut
This paper presents neural and hybrid (symbolic and subsymbolic) applications downloaded onto the distributed computer architecture ArMenX. The machine is articulated around a ring of FPGAs acting as routing resources as well as fine-grain computing resources, giving it great flexibility. Coarser-grain computing resources (Transputers and DSPs), tightly coupled via the FPGAs, give the machine a large application spectrum, making it possible to efficiently implement heterogeneous algorithms involving both low-level (computing-intensive) and high-level (control-intensive) tasks. We first introduce the ArMenX project and the main architecture features. Then, after detailing the computation of propagation and back-propagation for the multi-layer perceptron on ArMenX, we focus on a handwritten digit recognition application, the digits drawn from a zip-code database. An original and efficient method involving three neural networks is developed. The first two neural networks handle the 'reading process', and the last neural network, which learned to write, helps decide between the first two networks' outputs when they are not confident. Before concluding, the paper presents work on integrating ArMenX into a high-level programming environment designed to make it easier to exploit the architecture's flexibility.
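The three-network decision scheme can be sketched abstractly. The sketch below stubs the two 'reader' classifiers with fixed score vectors and uses a simple averaging arbiter; the confidence threshold and the arbiter itself are illustrative assumptions, not details from the paper:

```python
# Two "reader" classifiers vote; when their confidence is low or they
# disagree, a third arbiter network resolves. Classifier internals are
# stubbed with score vectors -- the combination logic is the point.

def classify(scores):
    digit = max(range(10), key=lambda d: scores[d])
    return digit, scores[digit]

def decide(scores_a, scores_b, arbiter, threshold=0.8):
    da, ca = classify(scores_a)
    db, cb = classify(scores_b)
    if da == db and min(ca, cb) >= threshold:
        return da                       # confident agreement
    return arbiter(scores_a, scores_b)  # fall back to third network

# Toy arbiter: average the two score vectors and re-classify.
def avg_arbiter(sa, sb):
    return classify([(a + b) / 2 for a, b in zip(sa, sb)])[0]

scores_a = [0.0] * 10; scores_a[3] = 0.9
scores_b = [0.0] * 10; scores_b[3] = 0.85
assert decide(scores_a, scores_b, avg_arbiter) == 3
```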
DOI: 10.1109/MNNFS.1996.493815 · Published 1996-02-12
Citations: 1
Neuron-MOS-based association hardware for real-time event recognition
T. Shibata, M. Konda, Y. Yamashita, T. Nakai, T. Ohmi
The neuron MOS transistor (υMOS), which mimics the fundamental behavior of neurons at a very primitive device level, has been applied to construct real-time event-recognition hardware. A neuron-MOS associator searches its memory of past events for the one most similar to the current event, using Manhattan-distance calculation and a minimum-distance search performed by winner-take-all (WTA) circuitry in a fully parallel architecture. A unique floating-gate analog EEPROM technology has been developed to build a vast memory system storing past events. Test circuits of key subsystems were fabricated in a double-polysilicon CMOS process, and their operation was verified by measurement as well as by simulation. As a simple application of the basic architecture, motion-vector-search hardware was designed and fabricated. The circuit can find the two-dimensional motion vector in about 150 ns with very simple circuitry.
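Serialized in software, the associator's core operation (Manhattan distance to every stored event, then a winner-take-all pick of the minimum) looks like this; the chip performs the same search fully in parallel in analog circuitry:

```python
# Manhattan (L1) distance from the current event to every stored
# event, followed by a winner-take-all selection of the minimum.
# Serial loop here; fully parallel analog hardware on the chip.

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def associate(memory, event):
    # winner-take-all: index of the stored event at minimum distance
    return min(range(len(memory)), key=lambda i: manhattan(memory[i], event))

memory = [(0, 0, 0), (5, 5, 5), (9, 1, 4)]
assert associate(memory, (6, 4, 5)) == 1  # distance 2, the closest match
```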
DOI: 10.1109/MNNFS.1996.493777 · Published 1996-02-12
Citations: 4
Direct synthesis of neural networks
Valeriu Beiu, J.G. Taylor
The paper overviews recent developments of a VLSI-friendly constructive algorithm and details two extensions. The problem is to construct a neural network when m examples of n inputs are given (a classification problem). The two extensions discussed are: (i) the use of analog comparators; and (ii) digital as well as analog solutions to XOR-like problems. For a simple example (the two spirals), we are able to show that the algorithm performs a very "efficient" encoding of a given problem into the neural network it "builds", when compared to the entropy of the given problem and to other learning algorithms. We are also able to estimate the number of bits needed to solve any classification problem in the general case. Since we are interested in the VLSI implementation of such networks, the optimality criteria are not only the classical size and depth but also the connectivity and the number of bits for representing the weights, as such measures are closer estimates of the area and lead to better approximations of AT².
DOI: 10.1109/MNNFS.1996.493800 · Published 1996-02-12
Citations: 14
A modified RBF neural network for efficient current-mode VLSI implementation
R. Dogaru, A. Murgan, S. Ortmann, M. Glesner
A modified RBF neural network model is proposed that allows efficient VLSI implementation in either analog or digital technology. The model is based essentially on replacing the standard Gaussian basis function with a piecewise-linear one and on using a fast unit-allocation learning algorithm to determine the unit centers. The modified RBF optimally approximates Gaussians over the whole range of parameters (radius and distance). The learning algorithm is fully on-line and easy to implement in VLSI using the proposed neural structures for on-line signal-processing tasks. Applying the standard test problem of chaotic time-series prediction, the functional performance of different RBF networks was compared. Experimental results show that the proposed architecture outperforms standard RBF networks, the main advantages being low hardware requirements and fast learning, while the learning algorithm can also be efficiently embedded in silicon. A suggestion for a current-mode implementation is presented, together with considerations regarding the computational requirements of the proposed model for digital implementations.
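The hardware motivation for a piecewise-linear basis can be seen by comparing it with the Gaussian it replaces. The triangle shape and the width factor k below are assumptions for illustration; the paper's exact basis function may differ:

```python
import math

# The modified RBF replaces the Gaussian basis exp(-(d/r)^2) with a
# piecewise-linear one. A common hardware-friendly choice (assumed
# here, not taken from the paper) is a triangle: max(0, 1 - d/(k*r)).

def gaussian(d, r):
    return math.exp(-(d / r) ** 2)

def triangle(d, r, k=2.0):
    return max(0.0, 1.0 - d / (k * r))

# Both equal 1 at the center and fall off with distance; the linear
# version needs only a subtraction, a scaling and a clamp in silicon.
```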
DOI: 10.1109/MNNFS.1996.493801 · Published 1996-02-12
Citations: 40
Analog VLSI circuits for covert attentional shifts
T. Morris, S. DeWeerth
In this paper we present analog very large-scale integrated (aVLSI) circuits that facilitate the selection process for initiating and mediating attentive visual processing. We demonstrate the performance of these circuits within a system that implements covert attentional shifts based on an input array that represents saliency across the visual field. The selection process, which enables the transition from preattentive to attentive processing, uses knowledge of previous selections and appropriate duration of selections to perform its task. The circuitry uses local feedback to create a hysteretic effect in the switching from one location of attention to the next. We also include an inhibition-of-return mechanism to facilitate shifting the location of attention even when the input array remains constant. We present test data from a one-dimensional version of the system.
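The selection dynamics described above (winner-take-all with hysteresis and inhibition of return) can be sketched in a few lines; the bonus and inhibition values are illustrative, not circuit parameters from the paper:

```python
# Winner-take-all over a saliency array, with hysteresis (the current
# winner gets a small bonus, resisting switching) and inhibition of
# return (past winners are suppressed, so attention keeps moving even
# when the input stays constant).

def attend(saliency, steps, bonus=0.2, inhibition=1.0):
    suppress = [0.0] * len(saliency)
    winner, trace = None, []
    for _ in range(steps):
        eff = [s - sup + (bonus if i == winner else 0.0)
               for i, (s, sup) in enumerate(zip(saliency, suppress))]
        winner = max(range(len(eff)), key=lambda i: eff[i])
        suppress[winner] += inhibition   # inhibition of return
        trace.append(winner)
    return trace
```

With a constant input such as `[0.5, 0.9, 0.7]`, the focus visits the most salient location first and then shifts away instead of locking on.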
DOI: 10.1109/MNNFS.1996.493769 · Published 1996-02-12
Citations: 22
System implementations of analog VLSI velocity sensors
G. Indiveri, J. Kramer, C. Koch
We present three different architectures that make use of analog VLSI velocity sensors for detecting the focus of expansion, time to contact and motion discontinuities respectively. For each of the architectures proposed we describe the functionality of their component modules and their principles of operation. Data measurements obtained from the VLSI chips developed demonstrate their correct performance and their limits of operation.
DOI: 10.1109/MNNFS.1996.493767 · Published 1996-02-12
Citations: 39
Using the GREMLIN for digital FIR networks
M. Diepenhorst, W. Jansen, J. Nijhuis, M. Schreiner, L. Spaanenburg, A. Ypma
Time-delay neural networks are well suited for prediction purposes. A particular implementation is the Finite Impulse Response (FIR) neural net. The GREMLIN architecture is introduced to accommodate such networks. It can be micropipelined to achieve an 85 MCPS performance on a conventional connection-serial structure, and its Logic-Enhanced Memory nature allows an easily parametrized design. A typical design for biomedical applications can be trained in a cascade fashion and subsequently mapped.
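An FIR synapse replaces a scalar weight with a short filter over the input's recent history, which is what makes time-delay networks suited to prediction. A minimal software sketch (tap count and weights are illustrative):

```python
# A Finite Impulse Response "synapse": the neuron's pre-activation at
# time t is a weighted sum over a sliding window of recent inputs,
# with weights[0] applied to the most recent sample.

def fir_neuron(signal, weights):
    n = len(weights)
    out = []
    for t in range(n - 1, len(signal)):
        window = signal[t - n + 1:t + 1]
        out.append(sum(w * x for w, x in zip(weights, reversed(window))))
    return out

# A 3-tap filter over a short integer signal:
assert fir_neuron([3, 6, 9, 12], [1, 2, 3]) == [30, 48]
```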
DOI: 10.1109/MNNFS.1996.493813 · Published 1996-02-12
Citations: 2
Implementation of a biologically inspired neuron-model in FPGA
M. Rossmann, B. Hesse, K. Goser, A. Buhlmeier, G. Manteuffel
This paper presents the implementation of a biologically inspired neuron model. Learning is performed on-line in special synapses based on the biologically established Hebbian learning algorithm. The algorithm is implemented on-chip, allowing an architecture of autonomous neural units. The algorithm is transparent, so connections between the neurons can easily be engineered. Thanks to their functionality and flexibility, only a few neurons are needed to fulfil basic tasks. A parallel and a serial concept for an implementation in an FPGA (Field-Programmable Gate Array) are discussed. A prototype of the serial approach has been developed in a XILINX series-3090 FPGA. This solution has one excitatory synapse, one inhibitory synapse, two Hebbian synapses, and one output operating at 8-bit resolution. The internal computation is performed at higher resolution to eliminate errors due to overflow. The Hebbian weights are stored at a precision of 19 bits for multiplication. The prototype works at a clock frequency of 5 MHz, leading to an update rate of 333 kCUPS.
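The Hebbian rule at the heart of the on-chip learning can be stated in one line: a weight grows when pre- and post-synaptic activity coincide. A minimal discrete sketch (the learning rate is illustrative, and the chip's fixed-point quantization and any decay terms are omitted):

```python
# Minimal discrete Hebbian update: delta_w = lr * pre * post.
# The weight only changes when pre- and post-synaptic activity
# occur together; coincidence drives potentiation.

def hebbian_step(w, pre, post, lr=0.1):
    return w + lr * pre * post

w = 0.0
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 1)]:
    w = hebbian_step(w, pre, post)
# Only the two coincident events change the weight.
```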
DOI: 10.1109/MNNFS.1996.493810 · Published 1996-02-12
Citations: 16
A low-power Neuro-Fuzzy pulse stream system
M. Chiaberge, E. Miranda Sologuren, L. Reyneri
This paper describes the design of a VLSI device for low-power neuro-fuzzy computation, based on coherent pulse-width modulation. The device can implement multi-layer perceptrons, radial basis functions, or fuzzy paradigms. In all cases, weights are stored as a voltage on a pair of capacitors, which are sequentially refreshed by a built-in self-refresh circuit.
DOI: 10.1109/MNNFS.1996.493791 · Published 1996-02-12
Citations: 2