Temporal convolution derived multi-layered reservoir computing
Johannes Viehweg, Dominik Walther, Patrick Mäder
Neurocomputing, Volume 617, Article 128938 · DOI: 10.1016/j.neucom.2024.128938 · Published 2024-11-19
The prediction of time series is a challenging task relevant to such diverse applications as analyzing financial data, forecasting flow dynamics, or understanding biological processes. Chaotic time series that depend on a long history pose an exceptionally difficult problem. While machine learning has been shown to be a promising approach for predicting such time series, deep Recurrent Neural Networks demand long training times and large amounts of training data, whereas Reservoir Computing approaches come with high uncertainty, typically requiring a large number of random initializations and extensive hyper-parameter tuning. In this paper, we focus on the Reservoir Computing approach and propose a new mapping of input data into the reservoir's state space. Furthermore, we incorporate this method into two novel network architectures that increase the parallelizability, depth, and predictive capabilities of the neural network while reducing the dependence on randomness. For the evaluation, we approximate a set of time series from the Mackey–Glass equation, exhibiting non-chaotic as well as chaotic behavior, together with the Santa Fe laser dataset, and compare the predictive capabilities of our approaches to Echo State Networks, Autoencoder-connected Echo State Networks, and Gated Recurrent Units. For the chaotic time series, we observe an error reduction of up to 85.45% compared to Echo State Networks and 90.72% compared to Gated Recurrent Units. Furthermore, we observe tremendous improvements for non-chaotic time series of up to 99.99% in contrast to the existing approaches.
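For context, the Reservoir Computing baseline the abstract compares against is the Echo State Network. The following is a minimal sketch of a textbook Echo State Network, not the paper's proposed architecture; the reservoir size, input scaling, spectral radius, and ridge strength are illustrative assumptions.

```python
import numpy as np

# Minimal textbook Echo State Network sketch -- illustrative only.
# Reservoir size, input scaling, spectral radius and ridge strength
# are assumptions, not the settings used in the paper.
rng = np.random.default_rng(42)
n_inputs, n_reservoir = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))   # random input weights
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))   # random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))          # rescale to spectral radius 0.9

def run_reservoir(u):
    """Drive the reservoir with an input sequence u of shape (T, n_inputs)."""
    x = np.zeros(n_reservoir)
    states = np.empty((len(u), n_reservoir))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in @ u_t + W @ x)  # standard ESN state update
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Closed-form ridge regression for the linear readout W_out."""
    A = states.T @ states + ridge * np.eye(n_reservoir)
    return np.linalg.solve(A, states.T @ targets).T
```

Only the linear readout is trained; the random W_in and W stay fixed. This is what makes the method fast to train, but also sensitive to initialization and hyper-parameters, which is exactly the dependence on randomness the paper aims to reduce.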
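The Mackey–Glass benchmark used in the evaluation can be reproduced with a simple Euler integration of the underlying delay differential equation. The step size and parameters below are the standard benchmark choice, not necessarily the paper's exact settings.

```python
import numpy as np

# Euler integration of the Mackey-Glass delay differential equation
#   dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
# beta=0.2, gamma=0.1, n=10 are the standard benchmark parameters;
# tau = 17 yields the classic mildly chaotic regime.
def mackey_glass(T, tau=17.0, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    hist = int(tau / dt)                 # number of delayed steps
    x = [x0] * (hist + 1)                # constant initial history
    for _ in range(T):
        x_tau = x[-hist - 1]             # delayed state x(t - tau)
        x.append(x[-1] + dt * (beta * x_tau / (1 + x_tau ** n) - gamma * x[-1]))
    return np.array(x[hist + 1:])        # discard the initial history

series = mackey_glass(5000)              # e.g. 5000 samples for training/testing
```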
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.