Authors: Zhiwei Shi, Min Han, Jianhui Xi
DOI: 10.1109/ICNSC.2005.1461230
Published in: Proceedings. 2005 IEEE Networking, Sensing and Control, 2005.
Publication date: 2005-03-19
Exploring the neural state space learning from one-dimension chaotic time series
Because chaotic systems are sensitive to initial conditions, it is difficult to choose a proper initial state for a recurrent neural network that models an observed one-dimensional chaotic time series. In this paper, a recurrent neural network whose feedback is composed of internal states is introduced to model one-dimensional chaotic time series. The network output is a nonlinear combination of the internal state variables. To model a chaotic time series successfully, this paper proves that the recurrent neural network with internal states can start from an arbitrary initial state. In the simulation, the neural systems perform multi-step-ahead prediction; in addition, the reconstructed neural state space is compared with the original state space, and the largest Lyapunov exponents (LEs) of the two systems are calculated and compared to check whether the two systems share similar chaotic invariants.
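The largest-LE comparison mentioned in the abstract can be illustrated with a minimal sketch (not taken from the paper). For a benchmark one-dimensional chaotic series such as the logistic map x' = r·x·(1−x), the largest Lyapunov exponent is the orbit average of ln|f'(x)| = ln|r(1−2x)|, which for r = 4 is known analytically to converge to ln 2 ≈ 0.693. A learned neural state-space model would be judged to reproduce the chaotic invariant if its estimated largest LE is close to this value:

```python
import math

def logistic_series(x0, r=4.0, n=10000):
    """Generate a one-dimensional chaotic time series from the logistic map."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def largest_le_logistic(xs, r=4.0, burn_in=100):
    """Largest Lyapunov exponent as the orbit average of ln|f'(x)|,
    where f'(x) = r*(1 - 2x) for the logistic map (exact derivative,
    so no embedding or neighbor search is needed for this toy case)."""
    vals = [math.log(abs(r * (1.0 - 2.0 * x))) for x in xs[burn_in:]]
    return sum(vals) / len(vals)

xs = logistic_series(0.1)
le = largest_le_logistic(xs)
print(le)  # converges toward ln 2 ≈ 0.693 for r = 4
```

For a model-free estimate from observed data alone (as the paper's reconstructed neural state space would require), a divergence-tracking method such as Rosenstein's algorithm would replace the analytic derivative here.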