{"title":"探讨量化在短时学习中的应用","authors":"Meiqi Wang, Ruixin Xue, Jun Lin, Zhongfeng Wang","doi":"10.1109/NEWCAS49341.2020.9159767","DOIUrl":null,"url":null,"abstract":"Training the neural networks on chip, which enables the local privacy data to be stored and processed at edge platforms, is earning vital importance with the explosive growth of Internet of Things (IoT). Although the on-chip training has been widely investigated in previous arts, there are few works related to the on-chip learning of Few-Shot Learning (FSL), an emerging topic which explores effective learning with only a small number of samples. In this paper, we explore the effectiveness of quantization, a mainstream compression method that helps reduce the memory footprint and computational resource requirements of a full-precision neural network to enable the on-chip deployment of FSL. We first perform extensive experiments on quantization of three mainstream meta-learning-based FSL networks, MAML, Meta-SGD, and Reptile, for both training and testing stages. Experimental results show that the 16-bit quantized training and testing models can be achieved with negligible losses on MAML and Meta-SGD. Then a comprehensive analysis is presented which demonstrates that a most favorable trade-off between accuracy, computational complexity, and model size can be achieved using the Meta-SGD model. This paves the way for the deployment of FSL system on the resource-constrained platforms.","PeriodicalId":135163,"journal":{"name":"2020 18th IEEE International New Circuits and Systems Conference (NEWCAS)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Exploring Quantization in Few-Shot Learning\",\"authors\":\"Meiqi Wang, Ruixin Xue, Jun Lin, Zhongfeng Wang\",\"doi\":\"10.1109/NEWCAS49341.2020.9159767\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Training the neural networks on chip, which enables the local privacy data to be stored and processed at edge platforms, is earning vital importance with the explosive growth of Internet of Things (IoT). Although the on-chip training has been widely investigated in previous arts, there are few works related to the on-chip learning of Few-Shot Learning (FSL), an emerging topic which explores effective learning with only a small number of samples. In this paper, we explore the effectiveness of quantization, a mainstream compression method that helps reduce the memory footprint and computational resource requirements of a full-precision neural network to enable the on-chip deployment of FSL. We first perform extensive experiments on quantization of three mainstream meta-learning-based FSL networks, MAML, Meta-SGD, and Reptile, for both training and testing stages. Experimental results show that the 16-bit quantized training and testing models can be achieved with negligible losses on MAML and Meta-SGD. Then a comprehensive analysis is presented which demonstrates that a most favorable trade-off between accuracy, computational complexity, and model size can be achieved using the Meta-SGD model. 
This paves the way for the deployment of FSL system on the resource-constrained platforms.\",\"PeriodicalId\":135163,\"journal\":{\"name\":\"2020 18th IEEE International New Circuits and Systems Conference (NEWCAS)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 18th IEEE International New Circuits and Systems Conference (NEWCAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NEWCAS49341.2020.9159767\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 18th IEEE International New Circuits and Systems Conference (NEWCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NEWCAS49341.2020.9159767","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Training neural networks on-chip, which allows private local data to be stored and processed on edge platforms, is gaining importance with the explosive growth of the Internet of Things (IoT). Although on-chip training has been widely investigated in prior work, few studies address on-chip Few-Shot Learning (FSL), an emerging topic that explores effective learning from only a small number of samples. In this paper, we explore the effectiveness of quantization, a mainstream compression method that reduces the memory footprint and computational resource requirements of a full-precision neural network, to enable the on-chip deployment of FSL. We first perform extensive quantization experiments on three mainstream meta-learning-based FSL networks, MAML, Meta-SGD, and Reptile, covering both the training and testing stages. Experimental results show that 16-bit quantized training and testing incur negligible accuracy loss on MAML and Meta-SGD. A comprehensive analysis then demonstrates that the most favorable trade-off among accuracy, computational complexity, and model size is achieved with the Meta-SGD model. This paves the way for deploying FSL systems on resource-constrained platforms.
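The abstract reports results for 16-bit quantization but does not specify the exact scheme. As a minimal sketch of what symmetric uniform quantization to 16 bits looks like, the NumPy snippet below fake-quantizes a weight tensor and measures the rounding error; the function name quantize_uniform, the symmetric per-tensor scaling, and the error check are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def quantize_uniform(x: np.ndarray, num_bits: int = 16):
    """Symmetric uniform quantization: map floats onto num_bits signed
    integer levels, then dequantize so the rounding error is visible."""
    qmax = 2 ** (num_bits - 1) - 1            # 32767 for 16 bits
    scale = float(np.max(np.abs(x))) / qmax   # per-tensor scale factor (assumed scheme)
    if scale == 0.0:                          # all-zero tensor edge case
        scale = 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return (q * scale).astype(x.dtype), scale

# Quantize a random weight tensor and check the worst-case error,
# which is bounded by scale / 2 under symmetric rounding.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
w_q, scale = quantize_uniform(w, num_bits=16)
print(f"scale = {scale:.3e}, max abs error = {np.max(np.abs(w - w_q)):.3e}")
```

On the meta-learning side, Meta-SGD learns a per-parameter inner-loop learning rate alongside the initialization, whereas MAML learns only the initialization; the paper's analysis is the authoritative account of why Meta-SGD yields the best accuracy/complexity/size trade-off under quantization.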