{"title":"Incremental data modeling based on neural ordinary differential equations","authors":"Zhang Chen, Hanlin Bian, Wei Zhu","doi":"10.1007/s40747-025-01793-0","DOIUrl":null,"url":null,"abstract":"<p>With the development of data acquisition technology, a large amount of time-series data can be collected. However, handling too much data often leads to a waste of social resources. It becomes significant to determine the minimum data size required for training. In this paper, a framework for neural ordinary differential equations based on incremental learning is discussed, which can enhance learning ability and determine the minimum data size required in data modeling compared to neural ordinary differential equations. This framework continuously updates the neural ordinary differential equations with newly added data while avoiding the addition of extra parameters. Once the preset accuracy is reached, the minimum data size needed for training can be determined. Furthermore, the minimum data size required for five classic models under various sampling rates is discussed. By incorporating new data, it enhances accuracy instead of increasing the depth and width of the neural network. The close integration of data generation and training can significantly reduce the total time required. Theoretical analysis confirms convergence, while numerical results demonstrate that the framework offers superior predictive ability and reduced computation time compared to traditional neural differential equations.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"1 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-025-01793-0","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
With the development of data acquisition technology, large amounts of time-series data can now be collected. However, processing excessive data often wastes resources, so determining the minimum data size required for training becomes significant. In this paper, a framework for neural ordinary differential equations (neural ODEs) based on incremental learning is discussed; compared with standard neural ODEs, it enhances learning ability and determines the minimum data size required for data modeling. The framework continuously updates the neural ODE with newly added data while avoiding the addition of extra parameters. Once a preset accuracy is reached, the minimum data size needed for training is determined. Furthermore, the minimum data size required for five classic models under various sampling rates is discussed. By incorporating new data, the framework improves accuracy without increasing the depth or width of the neural network, and the close integration of data generation and training significantly reduces the total time required. Theoretical analysis confirms convergence, while numerical results demonstrate that the framework offers superior predictive ability and reduced computation time compared with traditional neural ODEs.
Journal Introduction
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.