F. F. Athena, Omobayode Fagbohungbe, Nanbo Gong, M. Rasch, Jimmy Penaloza, SoonCheon Seo, Arthur R Gasasira, P. Solomon, Valeria Bragaglia, S. Consiglio, H. Higuchi, Chanro Park, K. Brew, Paul Jamison, C. Catano, Iqbal Saraf, Claire Silvestre, Xuefeng Liu, Babar Khan, Nikhil Jain, Steven McDermott, Rick Johnson, I. Estrada-Raygoza, Juntao Li, T. Gokmen, Ning Li, Ruturaj Pujari, Fabio Carta, H. Miyazoe, Martin M. Frank, Antonio La Porta, D. Koty, Qingyun Yang, R. Clark, K. Tapily, C. Wajda, A. Mosden, Jeff Shearer, Andrew Metz, S. Teehan, N. Saulnier, B. Offrein, T. Tsunomura, G. Leusink, Vijay Narayanan, Takashi Ando
Title: Demonstration of transfer learning using 14 nm technology analog ReRAM array
DOI: 10.3389/felec.2023.1331280 (https://doi.org/10.3389/felec.2023.1331280)
Published: 2024-01-15 (Journal Article)
Citations: 0
Abstract
Analog memory presents a promising solution in the face of the growing demand for energy-efficient artificial intelligence (AI) at the edge. In this study, we demonstrate efficient deep neural network transfer learning utilizing hardware and algorithm co-optimization in an analog resistive random-access memory (ReRAM) array. For the first time, we illustrate that in open-loop deep neural network (DNN) transfer learning for image classification tasks, convergence rates can be accelerated by approximately 3.5 times through the utilization of co-optimized analog ReRAM hardware and the hardware-aware Tiki-Taka v2 (TTv2) algorithm. A simulation based on statistical 14 nm CMOS ReRAM array data provides insights into the performance of transfer learning on larger network workloads, exhibiting notable improvement over conventional training with random initialization. This study shows that analog DNN transfer learning using an optimized ReRAM array can achieve faster convergence with a smaller dataset compared to training from scratch, thus augmenting AI capability at the edge.
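The abstract's central claim is that transfer learning — starting from pretrained weights rather than a random initialization — converges faster on a smaller target dataset. The sketch below illustrates that idea in plain numpy with a toy logistic-regression head: the "transferred" initialization stands in for weights learned on a related source task, while the random initialization corresponds to training from scratch. This is purely illustrative; it does not reproduce the paper's analog ReRAM hardware or the Tiki-Taka v2 algorithm, and all names and data here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "target task": two Gaussian clusters in a 20-D feature space,
# standing in for features from a frozen pretrained backbone.
n, d = 200, 20
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(1, 1, (n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

def train_head(w0, steps=50, lr=0.1):
    """Train a logistic-regression head with full-batch gradient descent.

    Returns the per-step cross-entropy losses so convergence can be compared.
    """
    w = w0.copy()
    losses = []
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))  # sigmoid predictions
        losses.append(-np.mean(y * np.log(p + 1e-9)
                               + (1 - y) * np.log(1 - p + 1e-9)))
        w -= lr * X.T @ (p - y) / n       # logistic-regression gradient step
    return losses

# Training "from scratch": random initialization.
loss_scratch = train_head(rng.normal(0, 0.5, d))

# "Transfer" initialization: the class-mean difference, a stand-in for
# weights already adapted on a related source task.
w_transfer = 0.25 * (X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
loss_transfer = train_head(w_transfer)
```

With the transferred initialization the head starts near a good solution, so its loss curve begins far lower and reaches a given loss threshold in fewer steps — the same qualitative effect the paper reports (roughly 3.5x faster convergence) for its co-optimized ReRAM hardware and TTv2 training.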