Title: Low‐Power Computing with Neuromorphic Engineering
Authors: Dingbang Liu, Hao Yu, Y. Chai
Journal: Advanced Intelligent Systems
DOI: 10.1002/aisy.202000150 (https://doi.org/10.1002/aisy.202000150)
Publication date: 2020-12-07 (Journal Article)
Citations: 28
Abstract
The increasing power consumption of existing computing architectures presents grand challenges for the performance and reliability of very‐large‐scale integrated circuits. Inspired by the human brain's ability to process complicated tasks at low power, neuromorphic computing is being intensively investigated to decrease power consumption and enrich computation functions. Hardware implementations of neuromorphic computing with emerging devices substantially reduce power density, down to a few mW cm−2, compared with central processing units based on conventional Si complementary metal–oxide–semiconductor (CMOS) technologies (50–100 W cm−2). Herein, a brief introduction to the characteristics of neuromorphic computing is provided. Then, emerging devices for low‐power neuromorphic computing are overviewed, e.g., resistive random access memory with low energy consumption (<pJ) per synaptic event. A few computation models for artificial neural networks (NNs), including the spiking neural network (SNN) and the deep neural network (DNN), which boost power efficiency by simplifying the computing procedure and minimizing memory access, are discussed. A few system‐level demonstrations are described, such as mixed synchronous–asynchronous and reconfigurable convolutional neural network (CNN)–recurrent NN (RNN) designs for low‐power computing.
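The power efficiency of SNNs mentioned in the abstract comes largely from their event-driven operation: a neuron performs work only when sparse binary spikes occur, rather than on every clock cycle. A minimal sketch of a leaky integrate-and-fire (LIF) neuron, the most common SNN neuron model, illustrates this; the parameter values below are illustrative assumptions, not taken from the article.

```python
def lif_neuron(input_current, v_thresh=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over a sequence of
    input currents and return its binary spike train.

    The membrane potential decays by a constant leak factor each step,
    accumulates the input, and resets to zero after crossing the
    threshold. Downstream neurons need to do work only for the sparse
    time steps where a spike (1) occurs.
    """
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input
        if v >= v_thresh:         # threshold crossing -> spike event
            spikes.append(1)
            v = 0.0               # reset membrane potential after firing
        else:
            spikes.append(0)      # sub-threshold: no event, no output work
    return spikes

# A constant sub-threshold input accumulates over several steps before
# the neuron fires, then the potential resets and the cycle repeats,
# yielding a sparse spike train.
train = lif_neuron([0.4] * 10)
```

Because the output is mostly zeros, hardware implementing such a model can gate off computation and memory access between spike events, which is the mechanism behind the low per-synaptic-event energies (sub-pJ) discussed in the article.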