Performance Assessment of an Extremely Energy-Efficient Binary Neural Network Using Adiabatic Superconductor Devices
O. Chen, Z. Li, Tomoharu Yamauchi, Yanzhi Wang, N. Yoshikawa
2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), published June 11, 2023. DOI: 10.1109/AICAS57966.2023.10168607
Abstract
Binary Neural Networks (BNNs) are gaining popularity as a way to bring Deep Neural Networks (DNNs) to real-world problems such as image recognition and natural language processing. BNNs constrain weights and activations to binary precision, reducing memory usage by a factor of 32 compared with conventional networks that use 32-bit floating-point precision. Among the various BNN implementations, those based on the Adiabatic Quantum-Flux-Parametron (AQFP), a superconducting logic family that exploits magnetic-flux quantization and quantum interference in Josephson-junction-based superconductor loops, are promising candidates for energy-efficient computing. This paper presents a performance assessment of a novel AQFP-based BNN architecture, highlighting scalability issues caused by increased inductance in the analog accumulation circuit. We also discuss potential optimization approaches to address these issues and improve scalability.
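The binarization and factor-of-32 memory claims can be made concrete with a short sketch. The NumPy example below is illustrative only and is not taken from the paper: the helper names (binarize, binary_linear) are hypothetical. It binarizes weights and activations with the sign function, performs the integer accumulation that the paper's AQFP design instead carries out in an analog superconducting circuit, and checks the 32x storage saving of bit-packed 1-bit weights against 32-bit floats.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} via the sign function (0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_linear(activations, weights):
    """Linear layer with binarized operands.

    With {-1, +1} values, each multiply reduces to an XNOR on {0, 1}
    encodings and the sum to a popcount, so BNN hardware can replace
    costly multiply-accumulate units. The AQFP architecture assessed in
    the paper performs this accumulation in the analog domain, which is
    where the inductance-related scalability issue arises.
    """
    a = binarize(activations).astype(np.int32)
    w = binarize(weights).astype(np.int32)
    return w @ a  # integer accumulation, one output per neuron

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # 256 input activations
W = rng.standard_normal((128, 256))   # 128 neurons x 256 weights

y = binary_linear(x, W)
print(y.shape)                        # (128,)

# Memory comparison: bit-packed binary weights vs. 32-bit floats.
packed = np.packbits(binarize(W) > 0)  # 1 bit per weight
print(W.astype(np.float32).nbytes, "B as float32;",
      packed.nbytes, "B packed;",
      f"{W.astype(np.float32).nbytes / packed.nbytes:.0f}x reduction")
```

For the 128x256 weight matrix above, the packed representation takes 4096 bytes versus 131072 bytes for float32, exactly the 32x reduction the abstract cites; the energy argument for AQFP hardware is separate and rests on the adiabatic switching of the superconductor logic itself.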