Hardware-friendly learning algorithms for neural networks: an overview
E. Fiesler (IDIAP), P. Moerland
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493781
The hardware implementation of artificial neural networks and their learning algorithms is a fascinating area of research with far-reaching applications. However, the mapping from an ideal mathematical model to compact and reliable hardware is far from straightforward. This paper presents an overview of methods that simplify the hardware implementation of neural network models. Adaptations specific to particular learning rules or network architectures are discussed, ranging from the use of perturbation in multilayer feedforward networks and local learning algorithms to quantization effects in self-organizing feature maps. In more general terms, the problems of inaccuracy, limited precision, and robustness are also treated.
A low-power high-precision tunable WINNER-TAKE-ALL network
R. Canegallo, M. Chinosi, A. Kramer
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493805
This paper describes a low-power CMOS circuit for selecting the greatest of n analog voltages within a tunable selection range. An increasing-speed, decreasing-precision law is used to determine the amplitude of the selection range: resolution from 16 mV down to 4 mV, over a 2 V to 4 V dynamic input range, can be obtained by reducing the speed from 2 MHz to 500 kHz. A 1 μA quiescent current, a 2 μA AC current for the selected cells, and a small area make this circuit suitable for VLSI implementations of massively parallel analog computational circuits.
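The tunable selection behaviour can be mimicked in software. The sketch below is our own illustrative analogue of the circuit's function (all names are ours, and the continuous analog dynamics are of course not modeled): widening the selection range admits near-ties as co-winners, while narrowing it isolates the single largest input.

```python
def winner_take_all(voltages, resolution):
    """Software analogue of a tunable WTA: mark as 'selected' every
    input within `resolution` volts of the maximum."""
    v_max = max(voltages)
    return [v >= v_max - resolution for v in voltages]

# With a 16 mV selection range the two near-tied inputs are both
# selected; tightening to 4 mV singles out the true winner.
inputs = [2.500, 2.510, 3.100, 3.088]
print(winner_take_all(inputs, 0.016))  # [False, False, True, True]
print(winner_take_all(inputs, 0.004))  # [False, False, True, False]
```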
A current mode CMOS multi-layer perceptron chip
G. M. Bo, D. Caviglia, M. Valle
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493778
An analog VLSI neural network integrated circuit is presented. It consists of a feedforward multilayer perceptron (MLP) network with 64 inputs, 64 hidden neurons, and 10 outputs. The computational cells are designed using a current-mode approach and MOS transistors biased in weak inversion, reducing both occupied area and power consumption. The processing delay is less than 2 μs and the total average power consumption is around 200 mW, equivalent to a computational power of about 2.5×10⁹ connections per second. The chip can be employed in a chip-in-the-loop neural architecture.
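As a quick sanity check on the stated throughput (our own back-of-the-envelope arithmetic, counting weighted connections only and ignoring biases):

```python
# A 64-64-10 MLP has 64*64 + 64*10 = 4736 weighted connections
# (biases ignored). Evaluating all of them within the stated 2 us
# processing delay gives on the order of 2.4e9 connections per
# second, consistent with the quoted ~2.5e9.
connections = 64 * 64 + 64 * 10
delay_s = 2e-6
cps = connections / delay_s
print(f"{connections} connections -> {cps:.2e} connections/s")
```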
Analog VLSI circuits for visual motion-based adaptation of post-saccadic drift
T. Horiuchi, C. Koch
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493773
Using our previously developed analog VLSI-based saccadic eye movement system, we investigate the use of biologically realistic error signals to calibrate the system in a manner similar to the primate oculomotor system. In this paper we introduce two new circuit components used to perform this task: a resettable-integrator model of the burst generator with a floating-gate structure for on-chip storage of analog parameters, and a directionally selective motion detector for detecting post-saccadic drift.
Computational image sensors for on-sensor-compression
T. Hamamoto, Y. Egi, M. Hatori, K. Aizawa, T. Okubo, H. Maruyama, E. Fossum
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493806
In this paper, we propose novel image sensors that compress the image signal. By exploiting very fast analog processing on the imager plane, the compression sensor can significantly reduce the amount of pixel data output from the sensor. The proposed sensor is intended to overcome the communication bottleneck in high-pixel-rate imaging, such as high-frame-rate and high-resolution imaging. The compression sensor consists of three parts: transducer, memory, and processor. Two architectures for on-sensor compression are discussed: a pixel-parallel architecture, in which the three parts are combined in each pixel and processing is pixel-parallel, and a column-parallel architecture, in which the transducer, processor, and memory areas are separated and processing is column-parallel. We also describe a prototype pixel-parallel sensor with 32×32 pixels, fabricated in 2 μm CMOS technology, and present experimental results.
Implementation of time-multiplexed CNN building block cell
K. K. Lai, P. Leong
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493775
We have proposed an area-efficient implementation of a cellular neural network (CNN) using a time-multiplexing method. This paper describes the underlying theory, the method, and the circuit architecture of a VLSI implementation. SPICE simulation results illustrate the circuit operation. A building block cell of a time-multiplexed cellular neural network has been completed and is currently being fabricated.
On-chip backpropagation training using parallel stochastic bit streams
Kuno Kollmann, K. Riemschneider, Hans Christoph
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493785
We propose using stochastic arithmetic for all arithmetic operations in training and running backpropagation networks. This makes it possible to design simple processing elements that fulfil all information-processing requirements using values coded as independent stochastic bit streams. Combining such processing elements yields silicon-saving, fully parallel neural networks of variable structure and capacity, supporting a complete hardware implementation of the error backpropagation algorithm. A sign-considering coding method is proposed that allows a homogeneous implementation of the net without separating it into inhibitory and excitatory parts. Furthermore, parameterizable nonlinearities based on stochastic automata are used. Comparable to the momentum (pulse) term, and improving the training of a net, a sequential arrangement of adaptive and integrative elements influences the weights and is likewise implemented stochastically. Experimental hardware implementations based on PLDs/FPGAs and a first silicon prototype have been realized.
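The core trick of stochastic arithmetic can be sketched in a few lines. The example below uses the standard unipolar scheme, in which a value in [0, 1] is the density of 1s in a bit stream and a bitwise AND of two independent streams multiplies the encoded values; this is a generic illustration of the representation, not the paper's sign-considering coding, and all names are ours.

```python
import random

def to_stream(p, n, rng):
    """Encode probability p in [0, 1] as an n-bit stochastic stream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stream_mul(a, b):
    """Multiply two independent unipolar streams with a bitwise AND:
    P(a_i & b_i = 1) = P(a_i = 1) * P(b_i = 1)."""
    return [x & y for x, y in zip(a, b)]

def decode(stream):
    """Recover the encoded value as the density of 1s."""
    return sum(stream) / len(stream)

rng = random.Random(0)
n = 10_000
a, b = to_stream(0.6, n, rng), to_stream(0.5, n, rng)
print(decode(stream_mul(a, b)))  # close to 0.6 * 0.5 = 0.30
```

Accuracy improves only as the inverse square root of the stream length, which is why hardware implementations trade long bit streams for extremely cheap single-gate arithmetic.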
On-line hand-printing recognition with neural networks
R. Lyon, L. Yaeger
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493792
The need for fast and accurate text entry on small handheld computers has led to a resurgence of interest in on-line word recognition using artificial neural networks. Classical methods have been combined and improved to produce robust recognition of hand-printed English text. The central concept of a neural net as a character classifier provides a good base for a recognition system; however, long-standing issues relating to training generalization, segmentation, probabilistic formalisms, etc., need to be resolved to achieve adequate performance. A number of innovations in using a neural net as the classifier in a word recognizer are presented: negative training, stroke warping, balancing, normalized output error, error emphasis, multiple representations, quantized weights, and integrated word segmentation all contribute to efficient and robust performance.
A variable-precision systolic architecture for ANN computation
Amine Bermak, D. Martinez
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493814
When artificial neural networks (ANNs) are implemented in VLSI with fixed-precision arithmetic, the accumulation of numerical errors may lead to completely inaccurate results. To avoid this, we propose a variable-precision arithmetic in which the precision of the computation is specified by the user at each layer of the network. This paper presents a top-down approach for designing an efficient bit-level systolic architecture for variable-precision neural computation.
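The precision problem being addressed can be illustrated numerically. The toy below (our own sketch, unrelated to the paper's systolic design) accumulates a dot product while rounding every intermediate result to a fixed number of fractional bits, mimicking a fixed-precision datapath; each extra bit halves the worst-case per-step rounding error.

```python
import random

def quantize(x, bits):
    """Round x onto a fixed-point grid with `bits` fractional bits."""
    scale = 1 << bits
    return round(x * scale) / scale

def fixed_point_dot(w, x, bits):
    """Dot product that quantizes after every multiply-accumulate
    step, as a fixed-precision hardware datapath would."""
    acc = 0.0
    for wi, xi in zip(w, x):
        acc = quantize(acc + quantize(wi * xi, bits), bits)
    return acc

rng = random.Random(1)
w = [rng.uniform(-0.1, 0.1) for _ in range(200)]
x = [rng.uniform(0.0, 1.0) for _ in range(200)]
exact = sum(wi * xi for wi, xi in zip(w, x))
for bits in (6, 10, 14):
    err = abs(fixed_point_dot(w, x, bits) - exact)
    print(f"{bits:2d} fractional bits: |error| = {err:.6f}")
```

Letting the per-layer precision vary, as the paper proposes, lets a designer spend bits only where the accumulated error would otherwise grow too large.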
Single electron tunneling technology for neural networks
M. Goossens, C. Verhoeven, A. V. van Roermund
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493782
A new neural network hardware concept based on single electron tunneling (SET) is presented. SET transistors have several advantageous properties that make them very attractive for building neural networks, among them their very small size, extremely low power consumption, and potentially high speed. After a brief description of the technology, the relevant properties of SET transistors are described. Simulations have been performed on small SET-transistor circuits that exhibit functional properties similar to those required for neural networks. Finally, interconnecting the building blocks to form a neural network is analyzed.