Title: A CMOS implementation of fuzzy controllers based on adaptive membership function ranges
Authors: I. Rojas, F. Pelayo, O. Ortega, A. Prieto
Venue: Proceedings of Fifth International Conference on Microelectronics for Neural Networks
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493809
Abstract: This paper presents a compact current-mode CMOS design for the implementation of fuzzy controllers, using membership functions with variable output ranges. The design avoids the division operation normally required to obtain the final crisp output. A feedback block is included whose complexity does not depend on the number of rules of the fuzzy controller, so the circuit can be applied to very complex systems.
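The division the authors avoid arises in the standard weighted-average defuzzification step. A minimal numeric sketch of that step, with illustrative rule activations and output singletons (all values hypothetical, not from the paper):

```python
# Weighted-average defuzzification: crisp = sum(w_i * c_i) / sum(w_i).
activations = [0.2, 0.7, 0.1]     # firing strength of each rule
centers     = [1.0, 2.0, 3.0]     # output singleton of each rule

crisp = sum(w * c for w, c in zip(activations, centers)) / sum(activations)
print(crisp)  # 1.9

# If the activations are normalized so they always sum to 1 (conceptually
# the job of a feedback block with variable membership output ranges),
# the division disappears and the output is a plain weighted sum:
norm = [w / sum(activations) for w in activations]
crisp_no_div = sum(w * c for w, c in zip(norm, centers))
print(crisp_no_div)  # 1.9
```

The point of the second half is that normalization can be done once, independently of the number of rules, which is why the feedback block's complexity does not grow with the rule base.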
Title: Low-power analog fuzzy rule implementation based on a linear MOS transistor network
Authors: O. Landolt
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493776
Abstract: An analog fuzzy rule circuit is proposed, based on a network of MOS transistors exploited as linear resistive elements. Only a few transistors are needed for each rule circuit, because the same devices combine several processing steps of the computation. Another property of the circuit is that the power consumed by a given rule is nearly zero when the weight of that rule is zero. This property enables an efficient use of power in integrated circuits containing fuzzy rule arrays, since normally only a few rules are active simultaneously. In addition, the proposed circuit features an analog center-of-gravity defuzzification circuit which can process digitally stored parameters without local D/A conversion. A completely functional research prototype with 80 rules was fabricated in a 2 μm CMOS technology. The chip core area is 1.32 mm², the power consumption is 850 nW with a 1.8 V supply, and the 90% settling time in response to an input step is less than 400 μs.
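The claim that only a few rules are active at once follows from overlapping membership functions: with triangular input sets arranged so that at most two overlap at any point, at most 2ⁿ rules fire in an n-input rule array. A small sketch (the grid and shapes are illustrative, not from the paper):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Five triangular sets per input, 50% overlap: at most two are nonzero.
sets = [(i - 1.0, i, i + 1.0) for i in range(5)]  # peaks at 0, 1, 2, 3, 4

def memberships(x):
    return [tri(x, *s) for s in sets]

x1, x2 = 1.3, 2.6
m1, m2 = memberships(x1), memberships(x2)

# Rule activation (min operator) for every pair of input sets.
active = sum(1 for a in m1 for b in m2 if min(a, b) > 0)
print(active, "of", len(m1) * len(m2), "rules active")  # 4 of 25
```

With two inputs, at most 4 of the 25 rules draw power at any time, which is why rule circuits that consume nothing at zero weight pay off.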
Title: A BiCMOS implementation of the Hodgkin-Huxley formalism
Authors: D. Dupeyron, S. Le Masson, Y. Deval, G. Le Masson, J. Dom
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493808
Abstract: This paper presents an analog design of a biologically inspired neuron model: the conductance-based Hodgkin-Huxley formalism. After a description of the model's set of equations, the corresponding subcircuits are detailed. ASICs were fabricated in a 2 μm BiCMOS technology and have a block structure allowing the construction of complex cells or small networks. As an application, numerical and analog computations of the action potentials are compared, and the effects of modifying some model parameters are shown.
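For reference, the conductance-based formalism the chip implements can be sketched in software with a forward-Euler integration of the classic squid-axon equations (standard textbook parameters, not the ASIC's):

```python
import math

def hh_step(V, m, h, n, I, dt):
    """One forward-Euler step of the Hodgkin-Huxley equations (mV, ms, uA/cm^2)."""
    am = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    bm = 4 * math.exp(-(V + 65) / 18)
    ah = 0.07 * math.exp(-(V + 65) / 20)
    bh = 1 / (1 + math.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    bn = 0.125 * math.exp(-(V + 65) / 80)
    INa = 120.0 * m**3 * h * (V - 50.0)   # sodium current
    IK  = 36.0 * n**4 * (V + 77.0)        # potassium current
    IL  = 0.3 * (V + 54.387)              # leak current
    V += dt * (I - INa - IK - IL)         # membrane capacitance 1 uF/cm^2
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    return V, m, h, n

V, m, h, n = -65.0, 0.053, 0.596, 0.317   # approximate resting state
trace = []
for _ in range(5000):                     # 50 ms at dt = 0.01 ms
    V, m, h, n = hh_step(V, m, h, n, I=10.0, dt=0.01)
    trace.append(V)
print(max(trace))  # spike peak, roughly +40 mV
```

The analog ASIC evaluates these same conductance products continuously; the numerical integration above is what the paper compares its analog traces against.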
Title: A scalable architecture for binary couplings attractor neural networks
Authors: N. Hendrich
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493793
Abstract: This paper presents a digital architecture with on-chip learning for Hopfield attractor neural networks with binary weights. A new learning rule for the binary-weights network is proposed that allows pattern storage up to capacity α = 0.4 and incurs very low hardware overhead. Due to the use of binary couplings, the network has minimal storage requirements. A flexible communication structure allows cascading of multiple chips in order to build fully connected, block-connected, or feed-forward networks. System performance and communication bandwidth scale linearly with the number of chips. A prototype chip has been fabricated and is fully functional. A pattern recognition application shows the performance of the binary couplings network.
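The binary-coupling idea can be sketched in software: clip the usual Hebbian sum to ±1 so each coupling needs a single bit of storage, then run the standard asynchronous sign-update dynamics. This is a generic clipped-Hebb sketch, not the paper's new learning rule:

```python
import random

random.seed(0)
N = 32
patterns = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(3)]

def sign(v):
    return 1 if v >= 0 else -1

# Clipped Hebbian weights: one bit per coupling, zero diagonal.
W = [[0 if i == j else sign(sum(p[i] * p[j] for p in patterns))
      for j in range(N)] for i in range(N)]

def energy(x):
    return -0.5 * sum(W[i][j] * x[i] * x[j]
                      for i in range(N) for j in range(N))

# Asynchronous sign updates never increase the energy, so the state
# settles into an attractor even with 1-bit couplings.
x = [-s for s in patterns[0][:4]] + patterns[0][4:]   # corrupt 4 bits
e0 = energy(x)
for _ in range(3):
    for i in range(N):
        x[i] = sign(sum(W[i][j] * x[j] for j in range(N)))
print(energy(x) <= e0)  # True
```

Storing three ±1 patterns makes every off-diagonal coupling an odd sum, so clipping loses no sign information here; the paper's contribution is a rule that keeps the capacity high (α = 0.4) under exactly this 1-bit constraint.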
Title: A Hindmarsh and Rose-based electronic burster
Authors: L. Merlat, N. Silvestre, J. Mercklé
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493770
Abstract: The design of an electronic oscillator based on the Hindmarsh and Rose model of bursting neurons is presented. Because of hardware area requirements, the original model is reduced to a system of two coupled differential equations by means of a hysteresis function. The phase-plane analysis of the Hindmarsh and Rose model emphasizes the dynamical properties underlying burst generation. These fundamental properties have guided the analogue design of the electronic burster. SPICE simulations show great similarities between the behavior of the original model and this bio-inspired circuit.
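For context, the original (unreduced) Hindmarsh and Rose model is a three-variable system; the paper's contribution is collapsing it to two equations plus a hysteresis element. A forward-Euler sketch of the standard three-variable form (textbook parameters, not the reduced circuit):

```python
# dx/dt = y - x^3 + 3x^2 - z + I   (membrane potential)
# dy/dt = 1 - 5x^2 - y             (fast recovery variable)
# dz/dt = r*(s*(x - x_r) - z)      (slow adaptation driving the bursts)
r, s, x_r, I = 0.006, 4.0, -1.6, 3.0
x, y, z = -1.6, 0.0, 2.0
dt = 0.005
xs = []
for _ in range(400_000):          # 2000 time units
    dx = y - x**3 + 3 * x**2 - z + I
    dy = 1 - 5 * x**2 - y
    dz = r * (s * (x - x_r) - z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    xs.append(x)
print(max(xs), min(xs))   # spikes rise above 1; quiescent phases sit near -1.6
```

The separation of time scales (r ≪ 1) is what produces bursts: the slow z variable sweeps the fast (x, y) subsystem back and forth across its spiking regime, which is the behavior the hysteresis function reproduces in the two-equation circuit.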
Title: Current mode implementation of a neural algorithm for image preprocessing
Authors: J. Schlussler, J. Werner, J. Dohndorf, I. Koren, U. Ramacher, Chang-Han Yi, H. Klar
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493779
Abstract: This article presents first activities on the circuit implementation of analog neural network hardware. These circuits are intended to be used as sensory and preprocessing components of a digital VLSI high-level image processing system. The approach described here is based on the implementation of the McCulloch-Pitts neuron model in a current-mode circuit technique. A test chip with reduced resolution is being prepared. Simulation results, obtained by solving the system of differential equations numerically, show some features of this type of neural network.
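The McCulloch-Pitts neuron mentioned here is simply a weighted sum followed by a hard threshold; in a current-mode circuit the summation comes for free by joining currents on a wire. A minimal behavioral sketch:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Classic demonstration: logic gates as threshold units.
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], 2)
OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], 1)
print(AND(1, 1), AND(1, 0), OR(1, 0), OR(0, 0))  # 1 0 1 0
```

For image preprocessing, arrays of such units receive pixel intensities as input currents and the threshold nonlinearity implements the neuron's activation.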
Title: PANNE: a parallel computing engine for connectionist simulation
Authors: I.Z. Milosavlevich, B. Flower, M. Jabri
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493816
Abstract: PANNE (Parallel Artificial Neural Network Engine) is a parallel computing engine aimed at delivering super-computing power to numerical applications such as connectionist simulation and signal processing. The PANNE system exploits the features of the TMS320C40 DSP chip which make it suitable for building parallel computing systems. PANNE has been built with flexibility in mind; it is expandable in terms of hardware resources and supports both shared and distributed memory programming paradigms. We estimate that a system of 16 DSPs would be capable of delivering up to 80×10⁶ connection updates per second.
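The 80×10⁶ figure implies 5×10⁶ connection updates per second per DSP, assuming the estimate scales linearly across the 16 processors (a back-of-the-envelope reading, not a breakdown given in the paper):

```python
# Per-DSP throughput implied by the paper's aggregate estimate,
# under the assumption of linear scaling across processors.
total_cups = 80e6   # connection updates per second, 16-DSP system
n_dsps = 16
per_dsp = total_cups / n_dsps
print(per_dsp)      # 5000000.0 connection updates per second per DSP
```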
Title: A CMAC-type neural memory for control applications
Authors: W. S. Mischo
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493787
Abstract: CMAC is one of the first neural networks successfully applied to real-world control problems. Its ability to locally "generalize" an input/output behaviour, based on non-linear input point processing and a linear algorithm for modifying internal states, provides fast convergence to an implicit model. In this paper CMAC is shown in its basic functionality. Guidelines for a CMAC hardware realization are discussed, as they were used for the implementation of an ASIC, which is now available in a first version.
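CMAC's local generalization comes from overlapping tilings: each input activates one tile per tiling (the non-linear input point processing), the output is the sum of the active weights, and an LMS rule spreads the error over those weights (the linear state modification). A minimal 1-D sketch (tile width, offsets, and learning rate are illustrative):

```python
import math
from collections import defaultdict

N_TILINGS, WIDTH = 8, 1.0
weights = defaultdict(float)           # sparse weight table

def active_tiles(x):
    """One tile index per tiling; tilings are offset by WIDTH/N_TILINGS."""
    return [(t, math.floor((x + t * WIDTH / N_TILINGS) / WIDTH))
            for t in range(N_TILINGS)]

def predict(x):
    return sum(weights[tile] for tile in active_tiles(x))

def train(x, target, lr=0.5):
    err = target - predict(x)
    for tile in active_tiles(x):       # spread the correction over tilings
        weights[tile] += lr * err / N_TILINGS

for _ in range(50):                    # LMS converges fast on a trained point
    train(2.0, 10.0)
print(predict(2.0))                    # ~10.0
print(predict(2.5))                    # ~5.0: this input shares 4 of 8 tiles
```

The second prediction shows the local generalization: an input half a tile away shares half the active tiles of the trained point, so it inherits half the learned response, while distant inputs are unaffected.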
Title: Design of a low-cost and high-speed neurocomputer system
Authors: N. Avellana, A. Strey, R. Holgado, J. A. Fernandes, R. Capillas, E. Valderrama
Pub Date: 1996-02-12 | DOI: 10.1109/MNNFS.1996.493794
Abstract: This paper presents a new parallel computer architecture for high-speed emulation of any neural network model. The system is based on a new ASIC (Application-Specific Integrated Circuit) that performs all required arithmetical operations. The essential feature of this ASIC is its ability to adapt its internal parallelism dynamically to the data precision, achieving an optimal utilization of the available hardware resources. Four ASICs are installed on one board of the neurocomputer system and emulate a neural network in parallel in a synchronous operation mode (SIMD architecture). Additional boards increase both the system performance and the size of the neural networks that can be emulated. The main advantage of the system architecture is the simplicity of the design, allowing the construction of low-cost neurocomputer systems with high performance. The achieved performance depends on the data precision and the number of installed boards. In the case of 16-bit weights and only one board, a performance of 480 MCPs and 120 MCUPs (using backpropagation) can be obtained.
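The idea of trading precision for parallelism can be illustrated in software with a SIMD-within-a-register trick: one wide addition processes several narrow operands at once, so halving the operand width doubles the number of parallel operations. This is a generic illustration of the principle, not the ASIC's actual datapath:

```python
# Pack two unsigned 16-bit lanes into one 32-bit word and add both lanes
# with a single addition. Operands are kept below 2**15 here so no carry
# can spill from the low lane into the high lane.
def pack2(hi, lo):
    return (hi << 16) | lo

def unpack2(w):
    return (w >> 16) & 0xFFFF, w & 0xFFFF

a = pack2(1000, 2000)
b = pack2(3000, 4000)
s = a + b                       # one add, two 16-bit results
print(unpack2(s))               # (4000, 6000)
```

An ALU built this way delivers twice the multiply-accumulate throughput at 8-bit precision as at 16-bit, which is the kind of dynamic adaptation the abstract describes.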
Title: SPERT-II: a vector microprocessor system and its application to large problems in backpropagation training
Authors: J. Wawrzynek, K. Asanović, Brian Kingsbury, J. Beck, David Johnson, N. Morgan
Pub Date: 1995-11-27 | DOI: 10.1109/MNNFS.1996.493795
Abstract: We report on the development of a high-performance system for neural network and other signal processing applications. We have designed and implemented a vector microprocessor and packaged it as an attached processor for a conventional workstation. We present performance comparisons with workstations on neural network backpropagation training. The SPERT-II system demonstrates roughly 15 times the performance of a mid-range workstation and five times the performance of a high-end workstation, even with extensive hand-optimization of both workstation versions.