Incremental data modeling based on neural ordinary differential equations

IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Complex & Intelligent Systems · Pub Date: 2025-02-17 · DOI: 10.1007/s40747-025-01793-0
Zhang Chen, Hanlin Bian, Wei Zhu
{"title":"Incremental data modeling based on neural ordinary differential equations","authors":"Zhang Chen, Hanlin Bian, Wei Zhu","doi":"10.1007/s40747-025-01793-0","DOIUrl":null,"url":null,"abstract":"<p>With the development of data acquisition technology, a large amount of time-series data can be collected. However, handling too much data often leads to a waste of social resources. It becomes significant to determine the minimum data size required for training. In this paper, a framework for neural ordinary differential equations based on incremental learning is discussed, which can enhance learning ability and determine the minimum data size required in data modeling compared to neural ordinary differential equations. This framework continuously updates the neural ordinary differential equations with newly added data while avoiding the addition of extra parameters. Once the preset accuracy is reached, the minimum data size needed for training can be determined. Furthermore, the minimum data size required for five classic models under various sampling rates is discussed. By incorporating new data, it enhances accuracy instead of increasing the depth and width of the neural network. The close integration of data generation and training can significantly reduce the total time required. Theoretical analysis confirms convergence, while numerical results demonstrate that the framework offers superior predictive ability and reduced computation time compared to traditional neural differential equations.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"1 1","pages":""},"PeriodicalIF":4.6000,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-025-01793-0","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

With the development of data acquisition technology, large amounts of time-series data can now be collected. However, processing more data than necessary wastes resources, so determining the minimum data size required for training becomes important. This paper discusses an incremental-learning framework for neural ordinary differential equations (neural ODEs) that, compared with standard neural ODEs, enhances learning ability and can determine the minimum data size required for data modeling. The framework continuously updates the neural ODE with newly added data while avoiding any extra parameters; once a preset accuracy is reached, the minimum data size needed for training is determined. Furthermore, the minimum data size required by five classic models under various sampling rates is examined. By incorporating new data, the framework improves accuracy without increasing the depth or width of the neural network, and the close integration of data generation and training significantly reduces the total time required. Theoretical analysis confirms convergence, while numerical results demonstrate that the framework offers superior predictive ability and lower computation time than traditional neural ODEs.
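
The incremental loop described in the abstract admits a compact illustration. The sketch below is a minimal reading of that description, not the authors' implementation: it assumes PyTorch with the torchdiffeq solver, a small fixed-size MLP as the vector field, mean-squared error as the accuracy criterion, and a simple schedule that appends a fixed chunk of samples per round. The names `ODEFunc` and `incremental_fit` and the parameters `tol` and `chunk` are hypothetical, introduced here for illustration only.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed solver library: pip install torchdiffeq


class ODEFunc(nn.Module):
    """Learned vector field f_theta(y); its parameter count never grows."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, t, y):
        return self.net(y)


def incremental_fit(t_all, y_all, tol=1e-3, chunk=20, epochs=200, lr=1e-3):
    """Enlarge the training window chunk by chunk, warm-starting the same
    model, until the preset accuracy `tol` is met. The window size at that
    point plays the role of the minimum data size from the abstract."""
    func = ODEFunc(y_all.shape[-1])
    opt = torch.optim.Adam(func.parameters(), lr=lr)
    n = chunk
    while n <= len(t_all):
        t, y = t_all[:n], y_all[:n]
        for _ in range(epochs):            # parameters carry over between rounds
            opt.zero_grad()
            pred = odeint(func, y[0], t)   # integrate from the first observation
            loss = ((pred - y) ** 2).mean()
            loss.backward()
            opt.step()
        if loss.item() < tol:              # preset accuracy reached: stop adding data
            return func, n
        n += chunk                         # otherwise append the next data chunk
    return func, len(t_all)
```

Under these assumptions, calling `incremental_fit` on a fully observed trajectory returns both the fitted vector field and the number of samples actually consumed; because the same parameters are reused across rounds, no extra capacity is added as data arrives.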

Source Journal

Complex & Intelligent Systems (Computer Science, Artificial Intelligence)

CiteScore: 9.60
Self-citation rate: 10.30%
Articles per year: 297
Journal Description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.
Latest Articles in This Journal

Harmonizing fusion modeling for accurate liver cancer diagnosis using explainable artificial intelligence: a step toward trustworthy medical AI
A novel ensemble neural network for classification and detection of fire and smoke
Integrated optimization of forest fire task scheduling and emergency resource delivery under uncertain environments
StrokeFuse-AttnNet: a hybrid feature fusion and self-attention model for stroke detection using neuroimages
A robust methodology for multi-criteria group decision-making: intuitionistic fuzzy N-bipolar soft expert sets in cybersecurity risk assessment for financial institutions