Pub Date: 2020-01-01  DOI: 10.1587/transele.2020ecp5034
Room temperature atomic layer deposition of nano crystalline ZnO and its application for flexible electronics
Kazuki Yoshida, K. Saito, Keito Sogai, M. Miura, K. Kanomata, B. Ahmmad, S. Kubota, F. Hirose
IEICE Transactions on Electronics, Journal Article
Pub Date: 2020-01-01  DOI: 10.1587/transele.2020cdp0003
Low-Power Implementation Techniques for Convolutional Neural Networks using Precise and Active Skipping Methods
Akira Kitayama, G. Ono, Kishimoto Tadashi, Hiroaki Ito, Naohiro Kohmu
IEICE Transactions on Electronics, Journal Article

Reducing power consumption is crucial for edge devices that run convolutional neural networks (CNNs). Zero skipping is a widely known CNN processing technique offering relatively low power consumption and high speed: it stops a multiply-accumulate (MAC) operation when the product of the input data and weight is zero. However, this technique requires large logic circuits with around 5% overhead, and the average rate of MAC stopping is only approximately 30%. In this paper, we propose a precise zero-skipping method that uses the input data and simple logic circuits to stop multipliers and accumulators precisely. We also propose an active data-skipping method that further reduces power consumption at the cost of a slight degradation in recognition accuracy; in this method, a multiplier and accumulator are also stopped when the input is a small value (e.g., 1 or 2). We implemented the single-shot multi-box detector 500 (SSD500) network model on a Xilinx ZU9 and applied the proposed techniques. We verified that operations were stopped at a rate of 49.1%, recognition accuracy was degraded by only 0.29%, power consumption was reduced from 9.2 W to 4.4 W (−52.3%), and circuit overhead was reduced from 5.1% to 2.7% (−45.9%). The proposed techniques were determined to be effective for lowering the power consumption of CNN-based edge devices such as FPGAs.

key words: convolutional neural network (CNN), SSD500 network, deep neural network (DNN) implementation, low power consumption, embedded AI technique
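The skipping ideas in the abstract can be illustrated with a minimal software sketch (not the authors' hardware implementation — the function name and threshold parameter are assumptions for demonstration). A threshold of 0 models plain zero-skipping, where a MAC is stopped when the input operand is zero; a small positive threshold (e.g., 2) models active data-skipping, which also drops small-valued inputs in exchange for a slight accuracy loss.

```python
def mac_with_skipping(inputs, weights, skip_threshold=0):
    """Multiply-accumulate over (input, weight) pairs, skipping any pair
    whose input magnitude is <= skip_threshold.

    skip_threshold=0 models zero-skipping (skip only exact zeros);
    a small positive value models active data-skipping.
    Returns (accumulated sum, number of skipped operations).
    """
    acc = 0
    skipped = 0
    for x, w in zip(inputs, weights):
        if abs(x) <= skip_threshold:
            # Multiplier and accumulator are "stopped" for this operand.
            skipped += 1
            continue
        acc += x * w
    return acc, skipped
```

For example, with inputs `[0, 1, 2, 5, 0, 3]` and all weights equal to 4, zero-skipping stops 2 of the 6 MACs with no change to the result, while a threshold of 2 stops 4 of them at the cost of an approximate sum — mirroring the paper's trade-off between stop rate and recognition accuracy.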
Pub Date: 2020-01-01  DOI: 10.1587/transele.2020cdp0007
Design Method of Variable-Latency Circuit with Tunable Approximate Completion-Detection Mechanism
Ukon Yuta, Shimpei Sato, A. Takahashi
IEICE Transactions on Electronics, Journal Article