{"title":"用蒸馏图像进行课堂增量学习","authors":"Abel S. Zacarias, L. Alexandre","doi":"10.1109/ISCMI56532.2022.10068463","DOIUrl":null,"url":null,"abstract":"Lifelong learning aims to develop machine learning systems that can learn new tasks while preserving the performance on previously learned tasks. Learning new tasks in most proposals, implies to keeping examples of previously learned tasks to retrain the model when learning new tasks, which has an impact in terms of storage capacity. In this paper, we present a method that adds new capabilities, in an incrementally way, to an existing model keeping examples from previously learned classes but avoiding the problem of running out of storage by using distilled images to condensate sets of images into a single image. The experimental results on four data sets confirmed the effectiveness of CILDI to learn new classes incrementally across different tasks and obtaining a performance close to the state-of-the-art algorithms for class incremental learning using only one distilled image per learned class and beating the state-of-the-art on the four data sets when using 10 distilled images per learned class, while using a smaller memory footprint than the competing approaches.","PeriodicalId":340397,"journal":{"name":"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CILDI: Class Incremental Learning with Distilled Images\",\"authors\":\"Abel S. Zacarias, L. Alexandre\",\"doi\":\"10.1109/ISCMI56532.2022.10068463\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Lifelong learning aims to develop machine learning systems that can learn new tasks while preserving the performance on previously learned tasks. Learning new tasks in most proposals, implies to keeping examples of previously learned tasks to retrain the model when learning new tasks, which has an impact in terms of storage capacity. In this paper, we present a method that adds new capabilities, in an incrementally way, to an existing model keeping examples from previously learned classes but avoiding the problem of running out of storage by using distilled images to condensate sets of images into a single image. 
The experimental results on four data sets confirmed the effectiveness of CILDI to learn new classes incrementally across different tasks and obtaining a performance close to the state-of-the-art algorithms for class incremental learning using only one distilled image per learned class and beating the state-of-the-art on the four data sets when using 10 distilled images per learned class, while using a smaller memory footprint than the competing approaches.\",\"PeriodicalId\":340397,\"journal\":{\"name\":\"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCMI56532.2022.10068463\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCMI56532.2022.10068463","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
CILDI: Class Incremental Learning with Distilled Images
Lifelong learning aims to develop machine learning systems that can learn new tasks while preserving performance on previously learned tasks. In most proposals, learning new tasks implies keeping examples of previously learned tasks so the model can be retrained on them, which has an impact in terms of storage capacity. In this paper, we present a method that incrementally adds new capabilities to an existing model while keeping examples from previously learned classes, avoiding the problem of running out of storage by using distilled images to condense sets of images into a single image. Experimental results on four data sets confirm the effectiveness of CILDI at learning new classes incrementally across different tasks: it obtains performance close to state-of-the-art class-incremental learning algorithms using only one distilled image per learned class, and beats the state of the art on all four data sets when using 10 distilled images per learned class, while using a smaller memory footprint than the competing approaches.
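The abstract describes a rehearsal-style recipe in which the stored exemplars of old classes are replaced by a few distilled images per class. The sketch below is only a minimal illustration of that general idea, not the authors' CILDI algorithm: it assumes a PyTorch setup, keeps one synthetic image per class, and stubs out the actual dataset-distillation step (here replaced by a per-class mean image) so the example stays self-contained.

```python
# Hypothetical sketch: class-incremental training with a tiny replay memory of
# one "distilled" image per previously learned class. The distillation step is
# a placeholder (per-class mean), not the method from the paper.
import torch
import torch.nn.functional as F


class DistilledMemory:
    """Stores one synthetic (distilled) image per previously learned class."""

    def __init__(self):
        self.images = {}  # class id -> tensor of shape (C, H, W)

    def add_class(self, class_id: int, class_images: torch.Tensor):
        # Placeholder for a real dataset-distillation procedure: the per-class
        # mean keeps the sketch self-contained and the memory cost at one
        # image per class.
        self.images[class_id] = class_images.mean(dim=0)

    def replay_batch(self):
        # Return all stored distilled images and their labels, or (None, None)
        # before any class has been learned.
        if not self.images:
            return None, None
        xs = torch.stack(list(self.images.values()))
        ys = torch.tensor(list(self.images.keys()))
        return xs, ys


def train_task(model, optimizer, task_loader, memory: DistilledMemory):
    """One epoch over a new task, mixing in the distilled-image replay batch."""
    model.train()
    for x_new, y_new in task_loader:
        x_old, y_old = memory.replay_batch()
        if x_old is not None:
            x = torch.cat([x_new, x_old])
            y = torch.cat([y_new, y_old])
        else:
            x, y = x_new, y_new
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In such a setup, after finishing a task one would call `memory.add_class` for each newly learned class before moving to the next task, so the replay memory grows by only one (or a handful of) images per class rather than by full exemplar sets.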