Online continual streaming learning for embedded space applications

IF 2.9 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Journal of Real-Time Image Processing | Pub Date: 2024-04-02 | DOI: 10.1007/s11554-024-01438-4
Alaa Eddine Mazouz, Van-Tam Nguyen
{"title":"用于嵌入式空间应用的在线持续流式学习","authors":"Alaa Eddine Mazouz, Van-Tam Nguyen","doi":"10.1007/s11554-024-01438-4","DOIUrl":null,"url":null,"abstract":"<p>This paper proposes an online continual learning (OCL) methodology tested on hardware and validated for space applications using an object detection close-proximity operations task. The proposed OCL algorithm simulates a streaming scenario and uses experience replay to enable the model to update its knowledge without suffering catastrophic forgetting by saving past inputs in an onboard reservoir that will be sampled during updates. A stream buffer is introduced to enable online training, i.e., the ability to update the model as data is streamed, one sample at a time, rather than being available in batches. Hyperparameters such as buffer sizes, update rate, batch size, batch concatenation parameters and number of iterations per batch are all investigated to find an optimized approach for the incremental domain and streaming learning task. The algorithm is tested on a customized dataset for space applications simulating changes in visual environments that significantly impact the deployed model’s performance. Our OCL methodology uses Weighted Sampling, a novel approach which allows the system to analytically choose more useful input samples during training, the results show that a model can be updated online achieving up to 60% Average Learning while Average Forgetting can be as low as 13% all with a Model Size Efficiency of 1, meaning the model size does not increase. An additional contribution is an implementation of On-Device Continual Training for embedded applications, a hardware experiment is carried out on the Zynq 7100 FPGA where a pre-trained CNN model is updated online using our FPGA backpropagation pipeline and OCL methodology to take into account new data and satisfactorily complete the planned task in less than 5 min achieving 90 FPS.</p>","PeriodicalId":51224,"journal":{"name":"Journal of Real-Time Image Processing","volume":"47 1","pages":""},"PeriodicalIF":2.9000,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Online continual streaming learning for embedded space applications\",\"authors\":\"Alaa Eddine Mazouz, Van-Tam Nguyen\",\"doi\":\"10.1007/s11554-024-01438-4\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>This paper proposes an online continual learning (OCL) methodology tested on hardware and validated for space applications using an object detection close-proximity operations task. The proposed OCL algorithm simulates a streaming scenario and uses experience replay to enable the model to update its knowledge without suffering catastrophic forgetting by saving past inputs in an onboard reservoir that will be sampled during updates. A stream buffer is introduced to enable online training, i.e., the ability to update the model as data is streamed, one sample at a time, rather than being available in batches. Hyperparameters such as buffer sizes, update rate, batch size, batch concatenation parameters and number of iterations per batch are all investigated to find an optimized approach for the incremental domain and streaming learning task. The algorithm is tested on a customized dataset for space applications simulating changes in visual environments that significantly impact the deployed model’s performance. 
Our OCL methodology uses Weighted Sampling, a novel approach which allows the system to analytically choose more useful input samples during training, the results show that a model can be updated online achieving up to 60% Average Learning while Average Forgetting can be as low as 13% all with a Model Size Efficiency of 1, meaning the model size does not increase. An additional contribution is an implementation of On-Device Continual Training for embedded applications, a hardware experiment is carried out on the Zynq 7100 FPGA where a pre-trained CNN model is updated online using our FPGA backpropagation pipeline and OCL methodology to take into account new data and satisfactorily complete the planned task in less than 5 min achieving 90 FPS.</p>\",\"PeriodicalId\":51224,\"journal\":{\"name\":\"Journal of Real-Time Image Processing\",\"volume\":\"47 1\",\"pages\":\"\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-04-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Real-Time Image Processing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s11554-024-01438-4\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Real-Time Image Processing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11554-024-01438-4","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


This paper proposes an online continual learning (OCL) methodology tested on hardware and validated for space applications using an object-detection task for close-proximity operations. The proposed OCL algorithm simulates a streaming scenario and uses experience replay to let the model update its knowledge without catastrophic forgetting, saving past inputs in an onboard reservoir that is sampled during updates. A stream buffer enables online training, i.e., the ability to update the model as data arrives one sample at a time rather than in batches. Hyperparameters such as buffer sizes, update rate, batch size, batch concatenation parameters, and the number of iterations per batch are investigated to find an optimized configuration for the incremental-domain, streaming learning task. The algorithm is tested on a customized dataset for space applications that simulates changes in visual environments which significantly impact the deployed model's performance. Our OCL methodology uses Weighted Sampling, a novel approach that lets the system analytically choose more useful input samples during training. The results show that a model can be updated online with up to 60% Average Learning while Average Forgetting can be as low as 13%, all with a Model Size Efficiency of 1, meaning the model size does not increase. An additional contribution is an implementation of on-device continual training for embedded applications: in a hardware experiment on the Zynq 7100 FPGA, a pre-trained CNN model is updated online using our FPGA backpropagation pipeline and OCL methodology to account for new data, satisfactorily completing the planned task in less than 5 min at 90 FPS.
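To make the replay-based update loop described in the abstract concrete, below is a minimal Python/PyTorch sketch of one way such a pipeline could be organized: a fixed-size reservoir keeps past inputs via reservoir sampling, a small stream buffer accumulates incoming samples one at a time, and each update step trains on the concatenation of fresh and replayed samples. All names and default values here (ReservoirBuffer, stream_buffer_size, replay_batch, iters_per_batch) are illustrative assumptions rather than the authors' implementation, and the uniform sample() call stands in for the paper's Weighted Sampling, which would instead score stored samples by their expected usefulness.

# A hedged sketch of online continual learning with experience replay.
# Assumes each stream element is (image_tensor, label_tensor); names are illustrative.
import random

import torch
import torch.nn.functional as F


class ReservoirBuffer:
    """Fixed-size memory of past (input, label) pairs filled by reservoir sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # stored (x, y) tuples
        self.seen = 0    # total number of samples observed so far

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Classic reservoir sampling: every sample ever seen ends up
            # in the buffer with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        # Uniform draw; a weighted-sampling variant would rank stored
        # samples (e.g., by a stored per-sample loss) instead.
        return random.sample(self.data, min(k, len(self.data)))


def online_update(model, optimizer, stream, reservoir,
                  stream_buffer_size=8, replay_batch=8, iters_per_batch=1):
    """Consume a stream one sample at a time; update when the stream buffer fills."""
    stream_buffer = []
    for x, y in stream:                    # samples arrive one at a time
        stream_buffer.append((x, y))
        reservoir.add(x, y)
        if len(stream_buffer) < stream_buffer_size:
            continue
        # Concatenate fresh stream samples with replayed past samples.
        batch = stream_buffer + reservoir.sample(replay_batch)
        xs = torch.stack([b[0] for b in batch])
        ys = torch.stack([b[1] for b in batch])
        for _ in range(iters_per_batch):   # several gradient steps per mini-batch
            optimizer.zero_grad()
            loss = F.cross_entropy(model(xs), ys)
            loss.backward()
            optimizer.step()
        stream_buffer.clear()

The knobs exposed in online_update correspond to the hyperparameters the abstract reports tuning (buffer sizes, batch and concatenation sizes, iterations per batch); the update rate is controlled implicitly by how often the stream buffer fills.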

Source journal

Journal of Real-Time Image Processing
Categories: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
CiteScore: 6.80
Self-citation rate: 6.70%
Publication volume: 68 articles
Review time: 6 months
About the journal: Due to rapid advancements in integrated circuit technology, the rich theoretical results that have been developed by the image and video processing research community are now being increasingly applied in practical systems to solve real-world image and video processing problems. Such systems involve constraints placed not only on their size, cost, and power consumption, but also on the timeliness of the image data processed. Examples of such systems are mobile phones, digital still/video/cell-phone cameras, portable media players, personal digital assistants, high-definition television, video surveillance systems, industrial visual inspection systems, medical imaging devices, vision-guided autonomous robots, spectral imaging systems, and many other real-time embedded systems. In these real-time systems, strict timing requirements demand that results are available within a certain interval of time as imposed by the application. It is often the case that an image processing algorithm is developed and proven theoretically sound, presumably with a specific application in mind, but its practical applications and the detailed steps, methodology, and trade-off analysis required to achieve its real-time performance are not fully explored, leaving these critical and usually non-trivial issues for those wishing to employ the algorithm in a real-time system. The Journal of Real-Time Image Processing is intended to bridge the gap between the theory and practice of image processing, serving the greater community of researchers, practicing engineers, and industrial professionals who deal with designing, implementing or utilizing image processing systems which must satisfy real-time design constraints.
Latest articles in this journal

High-precision real-time autonomous driving target detection based on YOLOv8
GMS-YOLO: an enhanced algorithm for water meter reading recognition in complex environments
Fast rough mode decision algorithm and hardware architecture design for AV1 encoder
AdaptoMixNet: detection of foreign objects on power transmission lines under severe weather conditions
Mfdd: Multi-scale attention fatigue and distracted driving detector based on facial features