
Proceedings of Fifth International Conference on Microelectronics for Neural Networks: Latest Publications

Hardware-friendly learning algorithms for neural networks: an overview
E. Fiesler (IDIAP), P. Moerland
The hardware implementation of artificial neural networks and their learning algorithms is a fascinating area of research with far-reaching applications. However, the mapping from an ideal mathematical model to compact and reliable hardware is far from evident. This paper presents an overview of various methods that simplify the hardware implementation of neural network models. Adaptations that are proper to specific learning rules or network architectures are discussed. These range from the use of perturbation in multilayer feedforward networks and local learning algorithms to quantization effects in self-organizing feature maps. Moreover, in more general terms, the problems of inaccuracy, limited precision, and robustness are treated.
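
As a concrete illustration of the limited-precision effects surveyed here, the sketch below quantizes the weights of a toy feedforward layer to a few bit widths and measures how far its output drifts from the full-precision result. The layer sizes, the uniform quantization scheme, and the `quantize` helper are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not from the paper): uniform weight quantization,
# one of the limited-precision effects surveyed in hardware-friendly learning.
import numpy as np

def quantize(w, bits, w_max=1.0):
    """Uniformly quantize values to `bits` bits over [-w_max, w_max]."""
    levels = 2 ** bits - 1
    step = 2.0 * w_max / levels
    return np.clip(np.round(w / step) * step, -w_max, w_max)

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=(16, 8))   # toy weight matrix (an assumption)
x = rng.uniform(-1.0, 1.0, size=16)        # toy input vector

y_ref = np.tanh(x @ w)                     # full-precision layer output
for bits in (8, 6, 4, 2):
    y_q = np.tanh(x @ quantize(w, bits))
    print(f"{bits}-bit weights: max output deviation {np.max(np.abs(y_q - y_ref)):.4f}")
```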
{"title":"Hardware-friendly learning algorithms for neural networks: an overview","authors":"E. FieslerIDIAPCP, P. Moerland","doi":"10.1109/MNNFS.1996.493781","DOIUrl":"https://doi.org/10.1109/MNNFS.1996.493781","url":null,"abstract":"The hardware implementation of artificial neural networks and their learning algorithms is a fascinating area of research with far-reaching applications. However, the mapping from an ideal mathematical model to compact and reliable hardware is far from evident. This paper presents an overview of various methods that simplify the hardware implementation of neural network models. Adaptations that are proper to specific learning rules or network architectures are discussed. These range from the use of perturbation in multilayer feedforward networks and local learning algorithms to quantization effects in self-organizing feature maps. Moreover, in more general terms, the problems of inaccuracy, limited precision, and robustness are treated.","PeriodicalId":151891,"journal":{"name":"Proceedings of Fifth International Conference on Microelectronics for Neural Networks","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124578457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
A low-power high-precision tunable WINNER-TAKE-ALL network
R. Canegallo, M. Chinosi, A. Kramer
This paper describes a low-power CMOS circuit for selecting the greatest of n analog voltages within a tunable selection range. An increasing-speed, decreasing-precision law is used to determine the amplitude of the selection range. A resolution of 16 mV to 4 mV, over a 2 V to 4 V dynamic input range, can be obtained by reducing the speed from 2 MHz to 500 kHz. A 1 µA quiescent current, a 2 µA AC current for the selected cells, and the small size make this circuit suitable for VLSI implementations of massively parallel analog computational circuits.
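
The behavioral sketch below mimics the selection function described above: the largest of n analog voltages is flagged, and a finite resolution window stands in for the speed/precision trade-off (16 mV at 2 MHz down to 4 mV at 500 kHz). The `winner_take_all` helper and the sample voltages are assumptions for illustration, not a model of the actual CMOS circuit.

```python
# Behavioral sketch (an assumption, not the circuit): a winner-take-all that
# flags the largest of n input voltages, with a finite resolution window
# standing in for the speed/precision trade-off described above.
import numpy as np

def winner_take_all(v_in, resolution):
    """Return a binary vector marking cells within `resolution` of the maximum."""
    v_in = np.asarray(v_in, dtype=float)
    return (v_in >= v_in.max() - resolution).astype(int)

voltages = [2.31, 3.97, 3.985, 2.80]        # volts, within the 2 V - 4 V input range
print(winner_take_all(voltages, 0.016))     # 16 mV resolution: two cells tie
print(winner_take_all(voltages, 0.004))     # 4 mV resolution: a single winner
```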
{"title":"A low-power high-precision tunable WINNER-TAKE-ALL network","authors":"R. Canegallo, M. Chinosi, A. Kramer","doi":"10.1109/MNNFS.1996.493805","DOIUrl":"https://doi.org/10.1109/MNNFS.1996.493805","url":null,"abstract":"This paper describes a low power CMOS circuit for selecting the greatest of n analog voltages within a tunable selection range. An increasing speed-decreasing precision law is used to determine the amplitude of the selection range. 16 mV to 4 mV resolution, over a 2 V to 4 V dynamic input range, can be obtained by reducing the speed from 2 MHz to 500 kHz. 1 /spl mu/A quiescent current, 2 /spl mu/A AC current for the selected cells and small size make this circuit available for VLSI implementations of massively parallel analog computational circuits.","PeriodicalId":151891,"journal":{"name":"Proceedings of Fifth International Conference on Microelectronics for Neural Networks","volume":"109 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130660544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A current mode CMOS multi-layer perceptron chip
G. M. Bo, D. Caviglia, M. Valle
An analog VLSI neural network integrated circuit is presented. It consists of a feedforward multi-layer perceptron (MLP) network with 64 inputs, 64 hidden neurons and 10 outputs. The computational cells have been designed using the current-mode approach and weak-inversion-biased MOS transistors to reduce the occupied area and power consumption. The processing delay is less than 2 µs and the total average power consumption is around 200 mW. This is equivalent to a computational power of about 2.5×10⁹ connections per second. The chip can be employed in a chip-in-the-loop neural architecture.
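
The quoted throughput follows directly from the stated topology and delay; a quick back-of-the-envelope check, assuming one connection per weight of the 64-64-10 network and the 2 µs worst-case delay:

```python
# Back-of-the-envelope check of the quoted throughput (assumes the 64-64-10
# topology and the 2 µs processing delay stated in the abstract).
connections = 64 * 64 + 64 * 10   # input-to-hidden plus hidden-to-output weights
delay_s = 2e-6                    # stated worst-case processing delay
cps = connections / delay_s
print(f"{connections} connections / {delay_s} s = {cps:.2e} connections per second")
# prints 4736 connections / 2e-06 s = 2.37e+09, consistent with the ~2.5e9 figure above
```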
{"title":"A current mode CMOS multi-layer perceptron chip","authors":"G. M. Bo, D. Caviglia, M. Valle","doi":"10.1109/MNNFS.1996.493778","DOIUrl":"https://doi.org/10.1109/MNNFS.1996.493778","url":null,"abstract":"An analog VLSI neural network integrated circuit is presented. It consist of a feedforward multi layer perceptron (MLP) network with 64 inputs, 64 hidden neurons and 10 outputs. The computational cells have been designed by using the current mode approach and weak inversion biased MOS transistors to reduce the occupied area and power consumption. The processing delay is less than 2 /spl mu/s and the total average power consumption is around 200 mW. This is equivalent to a computational power of about 2.5/spl times/10/sup 9/ connections per second. The chip can be employed in a chip-in-the-loop neural architecture.","PeriodicalId":151891,"journal":{"name":"Proceedings of Fifth International Conference on Microelectronics for Neural Networks","volume":"43 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132492861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Analog VLSI circuits for visual motion-based adaptation of post-saccadic drift
T. Horiuchi, C. Koch
Using the analog VLSI-based saccadic eye movement system developed previously, we investigate the use of biologically realistic error signals to calibrate the system in a manner similar to the primate oculomotor system. In this paper we introduce two new circuit components used to perform this task: a resettable-integrator model of the burst generator with a floating-gate structure that provides on-chip storage of analog parameters, and a directionally selective motion detector for detecting post-saccadic drift.
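
A minimal behavioral sketch of a resettable integrator of the kind named above, with a stored gain standing in for the analog parameter held on a floating gate. The class, its parameters, and the drive values are hypothetical and only illustrate the integrate-then-reset behavior, not the silicon design.

```python
# Minimal behavioral sketch (an assumption, not the silicon): a resettable
# integrator used as a burst-generator model, with a stored gain parameter.
class ResettableIntegrator:
    def __init__(self, gain=1.0, dt=1e-3):
        self.gain = gain      # stands in for the analog parameter stored on a floating gate
        self.dt = dt
        self.state = 0.0

    def step(self, drive):
        """Integrate the burst drive for one time step."""
        self.state += self.gain * drive * self.dt
        return self.state

    def reset(self):
        """Discharge the integrator between saccades."""
        self.state = 0.0

integ = ResettableIntegrator(gain=0.9)
for _ in range(50):                        # a 50 ms burst of constant drive
    position = integ.step(drive=10.0)
print(f"commanded displacement after burst: {position:.3f}")
integ.reset()                              # ready for the next saccade
```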
{"title":"Analog VLSI circuits for visual motion-based adaptation of post-saccadic drift","authors":"T. Horiuchi, C. Koch","doi":"10.1109/MNNFS.1996.493773","DOIUrl":"https://doi.org/10.1109/MNNFS.1996.493773","url":null,"abstract":"Using the analog VLSI-based saccadic eye movement system previously developed we investigate the use of biologically realistic error signals to calibrate the system in a manner similar to the primate oculomotor system. In this paper we introduce two new circuit components which are used to perform this task, a resettable-integrator model of the burst generator with a floating-gate structure to provide on-chip storage of analog parameters and a directionally-selective motion detector for detecting post-saccadic drift.","PeriodicalId":151891,"journal":{"name":"Proceedings of Fifth International Conference on Microelectronics for Neural Networks","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125494681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Computational image sensors for on-sensor-compression
T. Hamamoto, Y. Egi, M. Hatori, K. Aizawa, T. Okubo, H. Maruyama, E. Fossum
In this paper, we propose novel image sensors which compress the image signal. By making use of very fast analog processing on the imager plane, the compression sensor can significantly reduce the amount of pixel data output from the sensor. The proposed sensor is intended to overcome the communication bottleneck of high-pixel-rate imaging such as high-frame-rate imaging and high-resolution imaging. The compression sensor consists of three parts: transducer, memory and processor. Two architectures for on-sensor compression are discussed in this paper: a pixel-parallel architecture and a column-parallel architecture. In the former, the three parts are put together in each pixel, and processing is pixel-parallel. In the latter, the transducer, processor and memory areas are separated, and processing is column-parallel. We also describe a prototype chip of the pixel-parallel-type sensor with 32×32 pixels which has been fabricated using 2 µm CMOS technology. Some evaluation results are shown in this paper.
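
To make the two schedules concrete, the sketch below contrasts a pixel-parallel update (all 32×32 sites conceptually at once, vectorized here) with a column-parallel one (one processor per column scanning its rows). The per-pixel `process` step, a simple change-detection threshold, is a placeholder assumption; the paper does not specify the compression operation.

```python
# Rough behavioral contrast (an illustrative assumption, not the chip's algorithm)
# between pixel-parallel and column-parallel on-sensor processing schedules.
import numpy as np

def process(pixel, previous):
    """Placeholder per-pixel step: keep a pixel only if it changed by more than 8 grey levels."""
    return pixel if abs(int(pixel) - int(previous)) > 8 else 0

frame_now = np.random.default_rng(1).integers(0, 256, size=(32, 32), dtype=np.uint8)
frame_prev = np.random.default_rng(2).integers(0, 256, size=(32, 32), dtype=np.uint8)

# Pixel-parallel: conceptually, all 32x32 sites operate at the same time (vectorized here).
pixel_parallel = np.where(np.abs(frame_now.astype(int) - frame_prev.astype(int)) > 8,
                          frame_now, 0).astype(np.uint8)

# Column-parallel: 32 column processors, each scanning its own column row by row.
column_parallel = np.zeros_like(frame_now)
for col in range(frame_now.shape[1]):
    for row in range(frame_now.shape[0]):
        column_parallel[row, col] = process(frame_now[row, col], frame_prev[row, col])

assert np.array_equal(pixel_parallel, column_parallel)   # same result, different schedule
print(f"pixels kept: {np.count_nonzero(pixel_parallel)} of {pixel_parallel.size}")
```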
{"title":"Computational image sensors for on-sensor-compression","authors":"T. Hamamoto, Y. Egi, M. Hatori, K. Aizawa, T. Okubo, H. Maruyama, E. Fossum","doi":"10.1109/MNNFS.1996.493806","DOIUrl":"https://doi.org/10.1109/MNNFS.1996.493806","url":null,"abstract":"In this paper, we propose novel image sensors which compress image signal. By making use of very fast analog processing on the imager plane, the compression sensor can significantly reduce the amount of pixel data output from the sensor. The proposed sensor is intended to overcome the communication bottle neck for high pixel rate imaging such as high frame rate imaging and high resolution imaging. The compression sensor consists of three parts; transducer, memory and processor. Two architectures for on-sensor-compression are discussed in this paper that are pixel parallel architecture and column parallel architecture. In the former architecture, the three parts are put together in each pixel, and processing is pixel parallel. In the latter architecture, transducer, processor and memory areas are separated, and processing is column parallel. We also describe a prototype chip of pixel-parallel-type sensor with 32/spl times/32 pixels which has been fabricated using 2 /spl mu/m CMOS technology. Some results of examinations are shown in this paper.","PeriodicalId":151891,"journal":{"name":"Proceedings of Fifth International Conference on Microelectronics for Neural Networks","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132544630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Implementation of time-multiplexed CNN building block cell
K. K. Lai, P. Leong
We have proposed an area-efficient implementation of a Cellular Neural Network (CNN) using a time-multiplexed method. This paper describes the underlying theory, the method, and the circuit architecture of a VLSI implementation. SPICE simulation results have been obtained to illustrate the circuit operation. A building-block cell of a time-multiplexed cellular neural network has been completed and is currently being fabricated.
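
A software sketch of the time-multiplexing idea, assuming the standard Chua-Yang cell equation: one cell-update routine is reused sequentially over every grid position instead of instantiating a cell per pixel. The templates, grid size, and Euler step are illustrative choices, not taken from the paper.

```python
# Software sketch (an assumption for illustration): a single cell-update routine
# visits every virtual cell in turn, the essence of a time-multiplexed CNN.
import numpy as np

def cell_update(x, y, u, i, j, A, B, I, dt=0.1):
    """One Euler step of the standard Chua-Yang cell equation at grid site (i, j)."""
    ys = y[i-1:i+2, j-1:j+2]        # 3x3 neighbourhood of cell outputs
    us = u[i-1:i+2, j-1:j+2]        # 3x3 neighbourhood of inputs
    dx = -x[i, j] + np.sum(A * ys) + np.sum(B * us) + I
    return x[i, j] + dt * dx

# Toy 3x3 templates (illustrative values, not from the paper).
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
I = -0.5

u = np.zeros((10, 10)); u[3:7, 3:7] = 1.0       # input image with a bright square
x = np.zeros_like(u)
for _ in range(30):                              # run the network transient
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))    # cell output nonlinearity
    x_next = x.copy()
    for i in range(1, 9):                        # the single physical cell circuit
        for j in range(1, 9):                    # services each virtual cell in turn
            x_next[i, j] = cell_update(x, y, u, i, j, A, B, I)
    x = x_next
print((0.5 * (np.abs(x + 1) - np.abs(x - 1)) > 0).astype(int))
```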
{"title":"Implementation of time-multiplexed CNN building block cell","authors":"K. K. Lai, P. Leong","doi":"10.1109/MNNFS.1996.493775","DOIUrl":"https://doi.org/10.1109/MNNFS.1996.493775","url":null,"abstract":"We have proposed an area efficient implementation of Cellular Neural Network by using the time-multiplexed method. This paper describes the underlying theory, method, and the circuit architecture of a VLSI implementation. Spice simulation results have been obtained to illustrate the circuit operation. A building block cell of a time-multiplexed cellular neural network has been completed and is currently being fabricated.","PeriodicalId":151891,"journal":{"name":"Proceedings of Fifth International Conference on Microelectronics for Neural Networks","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116608160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
On-chip backpropagation training using parallel stochastic bit streams
Kuno Kollmann, K. Riemschneider, Hans Christoph
It is proposed to use stochastic arithmetic for all arithmetic operations involved in training and processing backpropagation nets. In this way it is possible to design simple processing elements which fulfil all the requirements of information processing using values coded as independent stochastic bit streams. By combining such processing elements, silicon-saving and fully parallel neural networks of variable structure and capacity become available, supporting the complete implementation of the error backpropagation algorithm in hardware. A sign-considering coding method is proposed which allows a homogeneous implementation of the net without separating it into an inhibitory and an excitatory part. Furthermore, parameterizable nonlinearities based on stochastic automata are used. Comparable to the momentum (pulse) term and improving the training of the net, a sequential arrangement of adaptive and integrative elements influencing the weights is also implemented stochastically. Experimental hardware implementations based on PLDs/FPGAs and a first silicon prototype have been realized.
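
As a reminder of the principle behind stochastic arithmetic (not the authors' exact sign-considering coding), the sketch below codes two values in [0, 1] as the '1'-density of independent random bit streams, so that a single AND gate per bit position performs their multiplication.

```python
# Illustrative sketch of basic stochastic arithmetic: unipolar-coded values
# are multiplied by AND-ing independent stochastic bit streams.
import numpy as np

def to_stream(value, length, rng):
    """Encode a value in [0, 1] as a stochastic bit stream of the given length."""
    return (rng.random(length) < value).astype(np.uint8)

rng = np.random.default_rng(42)
a, b = 0.6, 0.25
n = 100_000

stream_a = to_stream(a, n, rng)
stream_b = to_stream(b, n, rng)
product_stream = stream_a & stream_b        # one AND gate per bit position

print(product_stream.mean())                # ≈ a * b = 0.15, up to stochastic noise
```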
{"title":"On-chip backpropagation training using parallel stochastic bit streams","authors":"Kuno Kollmann, K. Riemschneider, Hans Christoph","doi":"10.1109/MNNFS.1996.493785","DOIUrl":"https://doi.org/10.1109/MNNFS.1996.493785","url":null,"abstract":"It is proposed to use stochastic arithmetic computing for all arithmetic operations of training and processing backpropagation nets. In this way it is possible to design simple processing elements which fulfil all the requirements of information processing using values coded as independent stochastic bit streams. Combining such processing elements silicon saving and full parallel neural networks of variable structure and capacity are available supporting the complete implementation of the error backpropagation algorithm in hardware. A sign considering method of coding as proposed which allows a homogeneous implementation of the net without separating it into an inhibitoric and an excitatoric part. Furthermore, parameterizable nonlinearities based on stochastic automata are used. Comparable to the momentum (pulse term) and improving the training of a net there is a sequential arrangement of adaptive and integrative elements influencing the weights and implemented stochastically, too. Experimental hardware implementations based on PLD's/FPGA's and a first silicon prototype are realized.","PeriodicalId":151891,"journal":{"name":"Proceedings of Fifth International Conference on Microelectronics for Neural Networks","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132259811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
On-line hand-printing recognition with neural networks
R. Lyon, L. Yaeger
The need for fast and accurate text entry on small handheld computers has led to a resurgence of interest in on-line word recognition using artificial neural networks. Classical methods have been combined and improved to produce robust recognition of hand-printed English text. The central concept of a neural net as a character classifier provides a good base for a recognition system; long-standing issues relating to training generalization, segmentation, probabilistic formalisms, etc., need to be resolved, however, to get adequate performance. A number of innovations in how to use a neural net as a classifier in a word recognizer are presented: negative training, stroke warping, balancing, normalized output error, error emphasis, multiple representations, quantized weights, and integrated word segmentation all contribute to efficient and robust performance.
{"title":"On-line hand-printing recognition with neural networks","authors":"R. Lyon, L. Yaeger","doi":"10.1109/MNNFS.1996.493792","DOIUrl":"https://doi.org/10.1109/MNNFS.1996.493792","url":null,"abstract":"The need for fast and accurate text entry on small handheld computers has led to a resurgence of interest in on-line word recognition using artificial neural networks. Classical methods have been combined and improved to produce robust recognition of hand-printed English text. The central concept of a neural net as a character classifier provides a good base for a recognition system; long-standing issues relative to training generalization, segmentation, probabilistic formalisms, etc., need to resolved, however, to get adequate performance. A number of innovations in how to use a neural net as a classifier in a word recognizer are presented: negative training, stroke warping, balancing, normalized output error, error emphasis, multiple representations, quantized weights, and integrated word segmentation all contribute to efficient and robust performance.","PeriodicalId":151891,"journal":{"name":"Proceedings of Fifth International Conference on Microelectronics for Neural Networks","volume":"36 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133708374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
A variable-precision systolic architecture for ANN computation
Amine Bermak, D. Martinez
When Artificial Neural Networks (ANNs) are implemented in VLSI with fixed precision arithmetic, the accumulation of numerical errors may lead to results which are completely inaccurate. To avoid this, we propose a variable-precision arithmetic in which the precision of the computation is specified by the user at each layer in the network. This paper presents a top-down approach for designing an efficient bit-level systolic architecture for variable precision neural computation.
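
A minimal functional sketch of what user-specified per-layer precision means, assuming simple uniform fixed-point rounding; the `fixed_point` helper, layer sizes, and bit widths are illustrative and do not describe the bit-level systolic architecture itself.

```python
# Minimal sketch (an assumption, not the systolic design): each layer's
# multiply-accumulate runs on values rounded to that layer's chosen bit width.
import numpy as np

def fixed_point(x, bits, scale=1.0):
    """Round to a uniform fixed-point grid with `bits` bits over [-scale, scale]."""
    step = 2.0 * scale / (2 ** bits - 1)
    return np.clip(np.round(x / step) * step, -scale, scale)

def layer(x, w, bits):
    """One fully connected layer evaluated at the requested precision."""
    return np.tanh(fixed_point(x, bits) @ fixed_point(w, bits))

rng = np.random.default_rng(3)
w1 = rng.uniform(-1, 1, (8, 6))
w2 = rng.uniform(-1, 1, (6, 3))
x = rng.uniform(-1, 1, 8)

precisions = {"hidden": 4, "output": 8}      # per-layer bit widths chosen by the user
h = layer(x, w1, precisions["hidden"])
y = layer(h, w2, precisions["output"])
print(y)
```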
{"title":"A variable-precision systolic architecture for ANN computation","authors":"Amine Bermak, D. Martinez","doi":"10.1109/MNNFS.1996.493814","DOIUrl":"https://doi.org/10.1109/MNNFS.1996.493814","url":null,"abstract":"When Artificial Neural Networks (ANNs) are implemented in VLSI with fixed precision arithmetic, the accumulation of numerical errors may lead to results which are completely inaccurate. To avoid this, we propose a variable-precision arithmetic in which the precision of the computation is specified by the user at each layer in the network. This paper presents a top-down approach for designing an efficient bit-level systolic architecture for variable precision neural computation.","PeriodicalId":151891,"journal":{"name":"Proceedings of Fifth International Conference on Microelectronics for Neural Networks","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133825770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Single electron tunneling technology for neural networks
M. Goossens, C. Verhoeven, A. V. van Roermund
A new neural network hardware concept based on single electron tunneling is presented. Single electron tunneling transistors have some advantageous properties which make them very attractive for building neural networks, among which are their very small size, extremely low power consumption and potentially high speed. After a brief description of the technology, the relevant properties of SET transistors are described. Simulations have been performed on some small circuits of SET transistors that exhibit functional properties similar to those required for neural networks. Finally, interconnecting the building blocks to form a neural network is analyzed.
{"title":"Single electron tunneling technology for neural networks","authors":"M. Goossens, C. Verhoeven, A. V. van Roermund","doi":"10.1109/MNNFS.1996.493782","DOIUrl":"https://doi.org/10.1109/MNNFS.1996.493782","url":null,"abstract":"A new neural network hardware concept based on single electron tunneling is presented. Single electron tunneling transistors have some advantageous properties which make them very attractive to make neural networks, among which their very small size, extremely low power consumption and potentially high speed. After a brief description of the technology, the relevant properties of SET transistors are described. Simulations have been performed on some small circuits of SET transistors that exhibit functional properties similar to those required for neural networks. Finally, interconnecting the building blocks to form a neural network is analyzed.","PeriodicalId":151891,"journal":{"name":"Proceedings of Fifth International Conference on Microelectronics for Neural Networks","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121387382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21