Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493780
Title: Low power, low voltage conductance-mode CMOS analog neuron
Authors: V. Fabbrizio, F. Raynal, X. Mariaud, A. Kramer, G. Colli
Published in: Proceedings of Fifth International Conference on Microelectronics for Neural Networks
Analog implementations of neural networks have been used for a wide variety of tasks, especially in the area of image processing. Typically, implementations of analog neural networks have been based on the use of either current or charge as the variable of computation. This work introduces a new class of analog neural network circuits based on the concept of conductance-mode computation. In this class of circuits, accumulated weighted inputs are represented as conductances, and a conductance-mode neuron is used to apply a nonlinearity and produce an output. The advantages of this class of circuits are twofold. First, conductance-mode computation is fast: we have developed circuits based on these principles that compute at 5-10 MHz. Second, because conductance-mode computation requires only the minimum charge necessary to compare two conductances, its energy consumption is self-scaling with the difficulty of the decision to be made: we have a working prototype that consumes 166 fJ per connection. The computing precision of these circuits is high: test results on a small test structure indicate an intrinsic precision of 8-9 bits. We have developed a larger test circuit able to perform computation with 1056 binary-valued inputs. Initial measurements on this larger test structure indicate a more limited computing precision of 6+ to 8+ bits, depending on the common mode of the input signal.
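The decision mechanism the abstract describes can be summarized behaviorally: binary inputs gate unit conductances onto two rails, and the neuron outputs the sign of the conductance difference. The sketch below is an illustrative model of that behavior only, not the authors' circuit; the function name and conductance value are assumptions.

```python
import random

def conductance_mode_neuron(inputs, weights, g_unit=1e-6):
    """Behavioral sketch of a conductance-mode decision (illustrative).

    inputs  -- sequence of 0/1 input values
    weights -- sequence of +1/-1 binary weights
    g_unit  -- unit conductance in siemens (assumed value)
    """
    # Active inputs accumulate unit conductances on one of two rails.
    g_pos = sum(g_unit for x, w in zip(inputs, weights) if x and w > 0)
    g_neg = sum(g_unit for x, w in zip(inputs, weights) if x and w < 0)
    # The neuron's nonlinearity is the comparison of the two rails.
    return 1 if g_pos > g_neg else 0

# 1056 binary-valued inputs, matching the paper's large test circuit
random.seed(0)
xs = [random.randint(0, 1) for _ in range(1056)]
ws = [random.choice([-1, 1]) for _ in range(1056)]
y = conductance_mode_neuron(xs, ws)
```

In this model, a close race between the two rails corresponds to the "difficult decision" case in which the real circuit spends more charge before resolving.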
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493789
Title: Pulsed VLSI for RBF neural networks
Authors: D. Mayes, A. Murray, H. Reekie
This paper presents simulation and hardware results from cascadable circuits for pulsed Radial Basis Function (RBF) neural network chips. The functionality of each circuit is clearly demonstrated by the hardware results, and consideration is also given to the practical issues affecting the development of a pulsed RBF demonstrator chip.
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493796
Title: A SIMD/dataflow architecture for a neurocomputer for spike-processing neural networks (NESPINN)
Authors: A. Jahnke, U. Roth, H. Klar
We present the architecture of a neurocomputer for the simulation of spike-processing biological neural networks (NESPINN). It consists mainly of a neuron state memory, two connectivity units, a spike-event list, a sector unit, and the NESPINN chip with a control unit and eight PEs, each with 2 kB of local on-chip memory. To increase performance, features such as a mixed SIMD/dataflow mode are included. The neurocomputer allows the simulation of up to 512k neurons with a speed-up of ca. 600 over a Sparc-10. It thus allows tackling difficult low-level vision problems (e.g. scene segmentation) or simulating the detailed spiking behaviour of large cortical networks.
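The spike-event list is the key to the speed-up: in a spike-processing network only a small fraction of neurons is active at a time, so only spike recipients need updating. The sketch below illustrates that event-driven principle in software; it is not the NESPINN microarchitecture, and all names and the threshold model are assumptions.

```python
from collections import defaultdict

def simulate(connections, weights, initial_spikes, threshold, steps):
    """Event-list style simulation sketch (illustrative).

    connections -- dict: source neuron -> list of target neurons
    weights     -- dict: (src, dst) -> synaptic weight
    Returns the set of neurons that spiked on the final step.
    """
    spikes = set(initial_spikes)
    potential = defaultdict(float)
    for _ in range(steps):
        # Gather this step's events: only spiking sources do any work.
        events = defaultdict(float)
        for src in spikes:
            for dst in connections.get(src, []):
                events[dst] += weights[(src, dst)]
        # Update only the neurons that actually received a spike.
        spikes = set()
        for dst, dv in events.items():
            potential[dst] += dv
            if potential[dst] >= threshold:  # fire and reset
                spikes.add(dst)
                potential[dst] = 0.0
    return spikes

# A spike propagating along a chain 0 -> 1 -> 2 over two steps
final = simulate({0: [1], 1: [2]}, {(0, 1): 1.0, (1, 2): 1.0}, {0}, 1.0, 2)
```

Per step, the work scales with the number of spike events rather than the total neuron count, which is what makes simulating up to 512k neurons tractable.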
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493790
Title: Teaching pulsed integrated neural systems: a psychobiological approach
Authors: T. Lehmann
In this paper, we present a continuous-time version of a differential Hebbian learning algorithm for pulsed neural systems with non-linear synapses. We argue that future analogue integrated implementations of artificial neural networks with on-chip learning must take the basic properties of the technology as their starting point; in particular, asynchronous, inherently offset-free, simple circuit structures must be used. We argue that unsupervised learning schemes are the most natural for analogue implementations, and we seek inspiration from psychobiology to derive a learning scheme suitable for adaptive pulsed VLSI neural networks. We present simulations of this new learning scheme and show that it behaves like the original drive-reinforcement algorithm while being compatible with the technology. Finally, we show how the important weight-change circuit is implemented in CMOS.
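The drive-reinforcement algorithm referenced above (Klopf's rule) is differential Hebbian: the weight change correlates the current change in output with earlier changes in input, scaled by the weight magnitude. The discrete-time sketch below shows that general form; the learning rate, delay window, and onset-only convention are illustrative assumptions, not the paper's continuous-time version.

```python
def drive_reinforcement_step(w, x_hist, y_hist, lr=0.1, tau=3):
    """One simplified drive-reinforcement update (illustrative).

    w      -- current synaptic weight
    x_hist -- recent presynaptic samples, newest last (len >= tau + 2)
    y_hist -- recent postsynaptic samples, newest last (len >= 2)
    """
    dy = y_hist[-1] - y_hist[-2]  # current change in output
    dw = 0.0
    for j in range(1, tau + 1):
        # Change in input j steps earlier; only onsets contribute.
        dx = x_hist[-1 - j] - x_hist[-2 - j]
        dw += lr * dy * abs(w) * max(dx, 0.0)
    return w + dw

# Input stepped from 0 to 1 three samples ago; output just rose.
w_new = drive_reinforcement_step(0.5, [0, 0, 1, 1, 1, 1], [0, 1])
```

Correlating *changes* rather than levels is what suits an asynchronous, offset-free pulsed implementation: a constant offset on either signal contributes nothing to the update.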
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493768
Title: Motion detection silicon retina based on event correlations
Authors: Pierre-François Ruedi
This article reports on a functional two-dimensional analog silicon retina performing motion detection along three directions at 120 degrees to one another, with two speed channels per direction. The output of each channel is binary; however, integrating this information over time yields an analog value. Motion detection is performed by correlating events, namely the disappearance of edges. A retina of 23 by 23 pixels with a hexagonal pixel layout was integrated in a 2 μm CMOS technology and was shown to perform well. Pixel size is 223 μm × 215 μm and consumption is around 20 μW per pixel.
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493803
Title: Topological mapping formation in a neural network with variations of device characteristics
Authors: K. Tsuji, H. Yonezu, Jae-Kyun Shin
The neural network of the human brain performs higher-order information processing that cannot be achieved by von Neumann-type computers. To perform such processing, it is necessary to fabricate artificial neural systems that can form a topological mapping through learning. A new learning algorithm and a new network model have been proposed for fabrication by means of CMOS analog circuits with variations in device characteristics. The functions of those circuits were confirmed by SPICE simulations, and the functions of the PDM (pulse density modulator) were confirmed experimentally. Learning simulations of a network consisting of these circuits have also been carried out. The results show that the topological mapping is almost completely formed even when variations in device characteristics exist in the neural network. The results also reveal that taking, as the output of each neuron, the weighted sum of the neuron's own potential and the potentials of its surrounding neurons, and adding a proper number of redundant neurons to the output layer, are effective mechanisms for a network with variations in device characteristics.
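The neighbourhood-averaging mechanism described in the abstract can be sketched abstractly: each neuron's output blends its own potential with those of adjacent neurons, so a single deviant device is averaged away. The code below is an illustrative model under assumed weights and a ring topology, not the paper's circuit or network model.

```python
def smoothed_outputs(potentials, self_w=0.5, neigh_w=0.25):
    """Output of each neuron as a weighted sum of its own potential and
    its two neighbours' potentials (ring topology; weights are assumed)."""
    n = len(potentials)
    out = []
    for i in range(n):
        left = potentials[(i - 1) % n]
        right = potentials[(i + 1) % n]
        out.append(self_w * potentials[i] + neigh_w * (left + right))
    return out

# A single active neuron's potential is spread over its neighbourhood.
ys = smoothed_outputs([1.0, 0.0, 0.0])
```

Because each output depends on several devices, a mismatch in any one of them perturbs the map less than it would in a network of isolated neurons.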
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493802
Title: An analog floating-gate memory in a standard digital technology
Authors: T. Lande, H. Ranjbar, M. Ismail, Y. Berg
In this paper we present a simple CMOS analog memory structure using the floating gate of a MOS transistor. The structure is based on a special but simple layout which allows significant tunneling at relatively low voltage levels. Programming of the memory is achieved using standard Fowler-Nordheim tunneling, and the memory is implemented in a standard digital CMOS process with only one polysilicon layer. A simple on-chip memory driver circuit is also presented. Experimental results from test chips fabricated in a standard 2-micron CMOS process show six orders of magnitude of dynamic range in current for subthreshold operation.
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493797
Title: Spectral analysis and synthesis of three-layered feed-forward neural networks for function approximation
Authors: A. Pelagotti, V. Piuri
The universal approximation capability exhibited by one-hidden-layer neural networks is exploited to create a new synthesis method for minimized architectures suited to VLSI implementation. The development is based on spectral analysis of the network, which focuses on its capability of combining single-neuron spectra to obtain the spectrum of the function to be approximated. In this paper, we propose a new spectrum-based technique to synthesize 1-N-1 networks which approximate functions y=f(x), with x, y ∈ R.
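The 1-N-1 architecture targeted by the synthesis method is a network with one input, N hidden units, and one output; its response is a weighted sum of shifted and scaled neuron activations. The sketch below shows that forward pass only, under the common assumption of tanh hidden units and a linear output; the spectral synthesis procedure itself is not reproduced, and the example parameters are illustrative.

```python
import math

def one_n_one(x, hidden):
    """Forward pass of a 1-N-1 network (illustrative).

    hidden -- list of (a, b, c) triples, one per hidden neuron:
              output = sum(c * tanh(a * x + b))
    """
    return sum(c * math.tanh(a * x + b) for a, b, c in hidden)

# Two hidden units whose spectra combine into a crude bump around x = 0:
# a positive step rising near x = -0.5 plus a negative step near x = +0.5.
net = [(4.0, 2.0, 0.5), (4.0, -2.0, -0.5)]
peak = one_n_one(0.0, net)
```

The synthesis problem is choosing the triples (a, b, c) so that the combined spectrum of these terms matches the spectrum of the target f; the minimal N found this way bounds the hardware cost of a VLSI realization.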
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493811
Title: VIP: an FPGA-based processor for image processing and neural networks
Authors: Jocelyn Cloutier, Eric Cosatto, Steven Pigeon, R. Boyer, Patrice Y. Simard
We present in this paper the architecture and implementation of the Virtual Image Processor (VIP), an SIMD multiprocessor built with large FPGAs. The SIMD architecture, together with a 2D torus connection topology, is well suited to image processing, pattern recognition, and neural network algorithms. The VIP board can be programmed on-line at the logic level, allowing optimal hardware dedication to any given algorithm.
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493783
Title: Simulated annealing of binary fields using an optoelectronic circuit
Authors: A. Dupret, J. Rodier, D. Prévost, E. Belhaire, P. Lalanne, P. Chavel, P. Garda
A new approach to the VLSI implementation of stochastic cellular networks is demonstrated. Arrays of high-throughput Gaussian noise sources are obtained by transducing random patterns imaged onto an opto-electronic analog-digital circuit. A 4×4-cell prototype chip was implemented in a 1 μm CMOS technology; it was successfully tested and operated at 100 kHz. This led us to the design of a 24×24 prototype.