{"title":"微小深度学习环境下卷积神经网络的迁移学习","authors":"E. Fragkou, Vasileios Lygnos, Dimitrios Katsaros","doi":"10.1145/3575879.3575984","DOIUrl":null,"url":null,"abstract":"Tiny Machine Learning (TinyML) and Transfer Learning (TL) are two widespread methods of successfully deploying ML models to resource-starving devices. Tiny ML provides compact models, that can run on resource-constrained environments, while TL contributes to the performance of the model by using pre-existing knowledge. So, in this work we propose a simple but efficient TL method, applied to three types of Convolutional Neural Networks (CNN), by retraining more than the last fully connected layer of a CNN in the target device, and specifically one or more of the last convolutional layers. Our results shown that our proposed method (FxC1) achieves about increase in accuracy and increase in convergence speed, while it incurs a bit larger energy consumption overhead, compared to two baseline techniques, namely one that retrains the last fully connected layer, and another that retrains the whole network.","PeriodicalId":164036,"journal":{"name":"Proceedings of the 26th Pan-Hellenic Conference on Informatics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Transfer Learning for Convolutional Neural Networks in Tiny Deep Learning Environments\",\"authors\":\"E. Fragkou, Vasileios Lygnos, Dimitrios Katsaros\",\"doi\":\"10.1145/3575879.3575984\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Tiny Machine Learning (TinyML) and Transfer Learning (TL) are two widespread methods of successfully deploying ML models to resource-starving devices. Tiny ML provides compact models, that can run on resource-constrained environments, while TL contributes to the performance of the model by using pre-existing knowledge. So, in this work we propose a simple but efficient TL method, applied to three types of Convolutional Neural Networks (CNN), by retraining more than the last fully connected layer of a CNN in the target device, and specifically one or more of the last convolutional layers. 
Our results shown that our proposed method (FxC1) achieves about increase in accuracy and increase in convergence speed, while it incurs a bit larger energy consumption overhead, compared to two baseline techniques, namely one that retrains the last fully connected layer, and another that retrains the whole network.\",\"PeriodicalId\":164036,\"journal\":{\"name\":\"Proceedings of the 26th Pan-Hellenic Conference on Informatics\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 26th Pan-Hellenic Conference on Informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3575879.3575984\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 26th Pan-Hellenic Conference on Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3575879.3575984","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Tiny Machine Learning (TinyML) and Transfer Learning (TL) are two widespread methods for successfully deploying ML models on resource-starved devices. TinyML provides compact models that can run in resource-constrained environments, while TL improves a model's performance by exploiting pre-existing knowledge. In this work we propose a simple but efficient TL method, applied to three types of Convolutional Neural Networks (CNNs), that retrains more than just the last fully connected layer of a CNN on the target device: specifically, one or more of the last convolutional layers as well. Our results show that our proposed method (FxC1) increases both accuracy and convergence speed, while incurring a somewhat larger energy consumption overhead, compared to two baseline techniques: one that retrains only the last fully connected layer, and another that retrains the whole network.
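
The core idea is easy to express in code. Below is a minimal PyTorch sketch of an FxC1-style setup: freeze a pre-trained network, then unfreeze the last convolutional layer together with the final fully connected layer before retraining on the target device. The SmallCNN architecture, layer names, and hyperparameters are illustrative assumptions, not the models or settings evaluated in the paper.

    # Sketch of FxC1-style transfer learning: retrain the last fully
    # connected layer AND the last convolutional layer on the target
    # device, keeping earlier layers frozen. SmallCNN and all settings
    # below are illustrative assumptions, not the paper's architecture.
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
            self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
            self.conv3 = nn.Conv2d(32, 64, 3, padding=1)  # last conv layer
            self.pool = nn.MaxPool2d(2)
            # 32x32 input pooled three times -> 4x4 feature maps
            self.fc = nn.Linear(64 * 4 * 4, num_classes)  # last FC layer

        def forward(self, x):
            x = self.pool(torch.relu(self.conv1(x)))
            x = self.pool(torch.relu(self.conv2(x)))
            x = self.pool(torch.relu(self.conv3(x)))
            return self.fc(x.flatten(1))

    model = SmallCNN()
    # model.load_state_dict(torch.load("pretrained_source.pt"))  # source-task weights

    # Freeze everything, then unfreeze only the layers retrained on-device.
    for p in model.parameters():
        p.requires_grad = False
    for layer in (model.conv3, model.fc):  # FxC1-style: last conv + last FC
        for p in layer.parameters():
            p.requires_grad = True

    # The optimizer sees only the unfrozen parameters, so backpropagation
    # and weight updates stay cheap enough for a constrained target device.
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad), lr=1e-2
    )

Under this scheme the two baselines differ only in which parameters are unfrozen: retraining the last fully connected layer corresponds to unfreezing model.fc alone, while retraining the whole network corresponds to leaving every parameter trainable.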