{"title":"Energy-Efficient Programable Analog Computing: Analog computing in a standard CMOS process","authors":"Jennifer Hasler","doi":"10.1109/MSSC.2024.3462388","DOIUrl":null,"url":null,"abstract":"When I started working on analog computing for neural network systems in the 1980s, the question everyone feared to be asked at the end of their presentation was “couldn’t this be done on a DSP processor?” Carver Mead provided a theoretical and intuitive answer in 1990 that analog computing should be thousands of times more efficient than digital computation. Three decades later the “couldn’t this be done on a DSP?” question, or its related question “couldn’t this be done on a digital computing IC?”, has been definitively addressed with a resounding No for several implementations. This paper addresses how end-to-end analog computing system demonstrates this increased efficiency over digital computing systems. These computations are possible entirely in a standard CMOS process using Floating-Gate (FG) devices available in any CMOS IC process. The efficiency is both at the computational circuit level, as well as at the architecture level. The first crossbar and single-transistor synapse concept appeared 30 years ago. The definitive demonstration of Mead’s energy efficiency hypothesis appeared 20 years ago, and before that demonstration was the start of Compute in Memory (CiM) as well as the start of a large-scale FPAA with FG-enabled connections. Analog computing opportunities in Standard CMOS IC processes can provide many opportunities for the wider commercial market.","PeriodicalId":100636,"journal":{"name":"IEEE Solid-State Circuits Magazine","volume":"16 4","pages":"32-40"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Solid-State Circuits Magazine","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10752792/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
When I started working on analog computing for neural network systems in the 1980s, the question everyone feared to be asked at the end of their presentation was “couldn’t this be done on a DSP processor?” Carver Mead provided a theoretical and intuitive answer in 1990: analog computing should be thousands of times more efficient than digital computation. Three decades later, the “couldn’t this be done on a DSP?” question, and its related question “couldn’t this be done on a digital computing IC?”, have been definitively answered with a resounding no for several implementations. This paper addresses how an end-to-end analog computing system demonstrates this increased efficiency over digital computing systems. These computations are possible entirely in a standard CMOS process, using Floating-Gate (FG) devices available in any CMOS IC process. The efficiency gains appear both at the computational circuit level and at the architecture level. The first crossbar and single-transistor synapse concept appeared 30 years ago. The definitive demonstration of Mead’s energy-efficiency hypothesis appeared 20 years ago, and before that demonstration came the start of Compute in Memory (CiM) as well as the start of a large-scale FPAA with FG-enabled connections. Analog computing in standard CMOS IC processes can open many opportunities for the wider commercial market.
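
To make Mead’s efficiency argument concrete, the sketch below models the kind of vector-matrix multiply an FG crossbar performs (each single-transistor synapse contributes a conductance-times-voltage current, and column currents sum by Kirchhoff’s current law) and adds a back-of-envelope energy comparison against a digital MAC baseline. This is a minimal illustration, not the article’s implementation; the per-operation energies `energy_digital_per_mac` and `energy_analog_per_mac` are assumed, order-of-magnitude placeholders chosen only to reflect the “thousands of times” claim.

```python
import numpy as np

def crossbar_vmm(conductances, input_voltages):
    """Idealized FG crossbar vector-matrix multiply.

    Each single-transistor synapse sources a current proportional to its
    programmed conductance times the applied input voltage; the currents
    on each output wire sum to form one result (Kirchhoff's current law).
    """
    # conductances: (n_outputs, n_inputs) in siemens; input_voltages: (n_inputs,) in volts
    return conductances @ input_voltages  # output currents in amperes

# Illustrative 100x100 crossbar with nanosiemens-scale programmed weights.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1e-9, size=(100, 100))   # programmed FG conductances
v = rng.uniform(0.0, 0.5, size=100)           # input voltages
i_out = crossbar_vmm(G, v)

# Back-of-envelope energy comparison (assumed, order-of-magnitude numbers only).
n_macs = G.size
energy_digital_per_mac = 1e-12   # ~1 pJ/MAC, assumed digital baseline
energy_analog_per_mac = 1e-15    # ~1 fJ/MAC, assumed analog crossbar figure
ratio = energy_digital_per_mac / energy_analog_per_mac
print(f"{n_macs} MACs per vector; assumed efficiency ratio ~{ratio:.0f}x in favor of analog")
```

Because every synapse both stores its weight (on the floating gate) and performs its multiply in place, the crossbar is itself a compute-in-memory structure; the energy ratio above scales with the number of MACs performed per input vector.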