Capacitor-based Cross-point Array for Analog Neural Network with Record Symmetry and Linearity

Y. Li, S. Kim, X. Sun, P. Solomon, T. Gokmen, H. Tsai, S. Koswatta, Z. Ren, R. Mo, C. Yeh, W. Haensch, E. Leobandung

2018 IEEE Symposium on VLSI Technology, pp. 25-26, June 2018. DOI: 10.1109/VLSIT.2018.8510648
We report a capacitor-based cross-point array, fabricated with trench capacitors in 14nm technology, that can be used to train analog deep neural networks (DNNs). The fundamental DNN operations of multiply-accumulate and weight update are demonstrated. We also demonstrate the best symmetry and linearity reported to date for an analog cross-point array system. For DNN training, capacitor leakage does not degrade learning accuracy even without any refresh cycle, because the weights are continuously updated during training. This makes the capacitor an ideal candidate element for neural network training. We also discuss the scalability of this array using optimized low-leakage DRAM technology.
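To make the abstract's claims concrete, here is a minimal Python sketch of the training dynamics it describes: the stored weight matrix performs multiply-accumulate in the forward pass, updates are applied as a rank-1 outer product (the form a cross-point array can realize with coincident row/column pulses), and an exponential decay term stands in for capacitor leakage between updates. All names and constants (`mac_forward`, `outer_product_update`, `apply_leakage`, `LEAK_TAU`, `LR`, the array dimensions) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper).
N_IN, N_OUT = 64, 10   # array dimensions (rows x columns)
LEAK_TAU = 5000.0      # leakage time constant, in update steps
LR = 0.01              # learning rate

# A plain matrix stands in for the analog charge state
# stored on each cross-point capacitor.
W = rng.normal(scale=0.1, size=(N_OUT, N_IN))

def mac_forward(W, x):
    """Multiply-accumulate: each output line sums the per-cell
    products of stored weight and input, which the physical
    array computes in parallel."""
    return W @ x

def outer_product_update(W, delta, x, lr=LR):
    """Weight update as a rank-1 outer product of the error
    signal and the input activations."""
    W += lr * np.outer(delta, x)
    return W

def apply_leakage(W, tau=LEAK_TAU):
    """Exponential charge decay between updates; with continuous
    training updates this acts like a mild weight decay rather
    than destroying the learned state."""
    W *= np.exp(-1.0 / tau)
    return W

# Toy training loop: regress random inputs onto a fixed linear target.
W_target = rng.normal(size=(N_OUT, N_IN))
for step in range(2000):
    x = rng.normal(size=N_IN)
    y = mac_forward(W, x)          # forward MAC on the array
    delta = (W_target @ x) - y     # error signal
    W = outer_product_update(W, delta, x)
    W = apply_leakage(W)           # leakage between updates

print("relative residual:", np.linalg.norm(W - W_target) / np.linalg.norm(W_target))
```

Running this, the loop converges despite the per-step decay: each update re-writes the leaking charge faster than it drains, which is the intuition behind the paper's claim that training needs no refresh cycle.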