Marcus Rüb, Philipp Tuchel, Axel Sikora, Daniel Mueller-Gritschneder
{"title":"利用数据集蒸馏和模型大小调整实现 TinyML 设备上训练的持续增量学习方法","authors":"Marcus Rüb, Philipp Tuchel, Axel Sikora, Daniel Mueller-Gritschneder","doi":"arxiv-2409.07114","DOIUrl":null,"url":null,"abstract":"A new algorithm for incremental learning in the context of Tiny Machine\nlearning (TinyML) is presented, which is optimized for low-performance and\nenergy efficient embedded devices. TinyML is an emerging field that deploys\nmachine learning models on resource-constrained devices such as\nmicrocontrollers, enabling intelligent applications like voice recognition,\nanomaly detection, predictive maintenance, and sensor data processing in\nenvironments where traditional machine learning models are not feasible. The\nalgorithm solve the challenge of catastrophic forgetting through the use of\nknowledge distillation to create a small, distilled dataset. The novelty of the\nmethod is that the size of the model can be adjusted dynamically, so that the\ncomplexity of the model can be adapted to the requirements of the task. This\noffers a solution for incremental learning in resource-constrained\nenvironments, where both model size and computational efficiency are critical\nfactors. Results show that the proposed algorithm offers a promising approach\nfor TinyML incremental learning on embedded devices. The algorithm was tested\non five datasets including: CIFAR10, MNIST, CORE50, HAR, Speech Commands. The\nfindings indicated that, despite using only 43% of Floating Point Operations\n(FLOPs) compared to a larger fixed model, the algorithm experienced a\nnegligible accuracy loss of just 1%. In addition, the presented method is\nmemory efficient. While state-of-the-art incremental learning is usually very\nmemory intensive, the method requires only 1% of the original data set.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Continual and Incremental Learning Approach for TinyML On-device Training Using Dataset Distillation and Model Size Adaption\",\"authors\":\"Marcus Rüb, Philipp Tuchel, Axel Sikora, Daniel Mueller-Gritschneder\",\"doi\":\"arxiv-2409.07114\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A new algorithm for incremental learning in the context of Tiny Machine\\nlearning (TinyML) is presented, which is optimized for low-performance and\\nenergy efficient embedded devices. TinyML is an emerging field that deploys\\nmachine learning models on resource-constrained devices such as\\nmicrocontrollers, enabling intelligent applications like voice recognition,\\nanomaly detection, predictive maintenance, and sensor data processing in\\nenvironments where traditional machine learning models are not feasible. The\\nalgorithm solve the challenge of catastrophic forgetting through the use of\\nknowledge distillation to create a small, distilled dataset. The novelty of the\\nmethod is that the size of the model can be adjusted dynamically, so that the\\ncomplexity of the model can be adapted to the requirements of the task. This\\noffers a solution for incremental learning in resource-constrained\\nenvironments, where both model size and computational efficiency are critical\\nfactors. Results show that the proposed algorithm offers a promising approach\\nfor TinyML incremental learning on embedded devices. 
The algorithm was tested\\non five datasets including: CIFAR10, MNIST, CORE50, HAR, Speech Commands. The\\nfindings indicated that, despite using only 43% of Floating Point Operations\\n(FLOPs) compared to a larger fixed model, the algorithm experienced a\\nnegligible accuracy loss of just 1%. In addition, the presented method is\\nmemory efficient. While state-of-the-art incremental learning is usually very\\nmemory intensive, the method requires only 1% of the original data set.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07114\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07114","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Continual and Incremental Learning Approach for TinyML On-device Training Using Dataset Distillation and Model Size Adaption
A new algorithm for incremental learning in the context of Tiny Machine Learning (TinyML) is presented, optimized for low-performance, energy-efficient embedded devices. TinyML is an emerging field that deploys machine learning models on resource-constrained devices such as microcontrollers, enabling intelligent applications like voice recognition, anomaly detection, predictive maintenance, and sensor data processing in environments where traditional machine learning models are not feasible. The algorithm addresses the challenge of catastrophic forgetting through the use of knowledge distillation to create a small, distilled dataset.
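The abstract does not give implementation details, but the core rehearsal idea can be pictured with the minimal PyTorch sketch below. The helper names (distill_class, incremental_step) and the pixel-mean distillation objective are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch only (not the paper's implementation): keep a tiny
# distilled buffer of synthetic examples per learned class and replay it
# during every incremental update so earlier classes are not forgotten.
import torch
import torch.nn.functional as F


def distill_class(images, labels, n_synthetic=10, steps=200, lr=0.1):
    """Condense one class into a few synthetic images.

    Matching the class mean in pixel space is a deliberately simple
    stand-in for real dataset-distillation objectives.
    """
    synthetic = torch.randn(n_synthetic, *images.shape[1:], requires_grad=True)
    target = images.mean(dim=0)
    opt = torch.optim.SGD([synthetic], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.mse_loss(synthetic.mean(dim=0), target).backward()
        opt.step()
    return synthetic.detach(), labels[:n_synthetic].clone()


def incremental_step(model, optimizer, new_x, new_y, buffer):
    """One update on new-task data mixed with distilled rehearsal data."""
    x, y = new_x, new_y
    if buffer:  # buffer: list of (synthetic_images, labels) per old class
        x = torch.cat([new_x] + [bx for bx, _ in buffer])
        y = torch.cat([new_y] + [by for _, by in buffer])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full system the buffer would be built with a proper dataset-distillation objective and kept small enough to fit in on-device memory; per the abstract, roughly 1% of the original dataset suffices.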
The novelty of the method is that the size of the model can be adjusted dynamically, so that the complexity of the model can be adapted to the requirements of the task. This offers a solution for incremental learning in resource-constrained environments, where both model size and computational efficiency are critical factors. Results show that the proposed algorithm offers a promising approach for TinyML incremental learning on embedded devices.
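The abstract likewise does not describe how the model size is adjusted. One simple way to picture the idea is a per-task width multiplier, with overlapping weights copied over when the network is rebuilt; the sketch below is an assumption-laden illustration (build_model, resize_model, and the plain MLP are hypothetical), not the paper's mechanism:

```python
# Illustrative sketch only (not the paper's mechanism): adapt model capacity
# per task via a width multiplier, and reuse overlapping weights when the
# network is rebuilt so previously learned parameters are partially preserved.
import torch
import torch.nn as nn


def build_model(in_features=64, num_classes=10, width_multiplier=1.0):
    hidden = max(8, int(128 * width_multiplier))  # scale the hidden width
    return nn.Sequential(
        nn.Linear(in_features, hidden),
        nn.ReLU(),
        nn.Linear(hidden, num_classes),
    )


def resize_model(old_model, width_multiplier, in_features=64, num_classes=10):
    """Build a wider/narrower model and copy the weights that still fit."""
    new_model = build_model(in_features, num_classes, width_multiplier)
    with torch.no_grad():
        for old_p, new_p in zip(old_model.parameters(), new_model.parameters()):
            idx = tuple(slice(0, min(o, n)) for o, n in zip(old_p.shape, new_p.shape))
            new_p[idx].copy_(old_p[idx])
    return new_model


# Example: start with a small model, grow it only when a harder task arrives.
model = build_model(width_multiplier=0.5)
model = resize_model(model, width_multiplier=1.0)
```

Keeping the smaller configuration whenever the task allows it illustrates how a size-adaptive model can execute far fewer operations than a single large fixed model, which is the kind of trade-off behind the FLOPs figure reported below.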
The algorithm was tested on five datasets: CIFAR10, MNIST, CORE50, HAR, and Speech Commands. The findings indicated that, despite using only 43% of the floating-point operations (FLOPs) of a larger fixed model, the algorithm experienced a negligible accuracy loss of just 1%. In addition, the presented method is memory-efficient: while state-of-the-art incremental learning is usually very memory-intensive, the method requires only 1% of the original dataset.