{"title":"基于 GA-LSTM 速度预测的增程车辆 DDQN 能源管理","authors":"Laiwei Lu, Hong Zhao, Fuliang Xv, Yong Luo, Junjie Chen, Xiaoyun Ding","doi":"10.1016/j.egyai.2024.100367","DOIUrl":null,"url":null,"abstract":"<div><p>In this paper, a dual deep Q-network (DDQN) energy management model based on long-short memory neural network (LSTM) speed prediction is proposed under the model predictive control (MPC) framework. The initial learning rate and neuron dropout probability of the LSTM speed prediction model are optimized by the genetic algorithm (GA). The prediction results show that the root-mean-square error of the GA-LSTM speed prediction method is smaller than the SVR method in different speed prediction horizons. The predicted demand power, the state of charge (SOC), and the demand power at the current moment are used as the state input of the agent, and the real-time control of the control strategy is realized by the MPC method. The simulation results show that the proposed control strategy reduces the equivalent fuel consumption by 0.0354 kg compared with DDQN, 0.8439 kg compared with ECMS, and 0.742 kg compared with the power-following control strategy. The difference between the proposed control strategy and the dynamic planning control strategy is only 0.0048 kg, 0.193%, while the SOC of the power battery remains stable. 
Finally, the hardware-in-the-loop simulation verifies that the proposed control strategy has good real-time performance.</p></div>","PeriodicalId":34138,"journal":{"name":"Energy and AI","volume":"17 ","pages":"Article 100367"},"PeriodicalIF":9.6000,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666546824000338/pdfft?md5=765201343f5b2525062ac683ecde4d5d&pid=1-s2.0-S2666546824000338-main.pdf","citationCount":"0","resultStr":"{\"title\":\"GA-LSTM speed prediction-based DDQN energy management for extended-range vehicles\",\"authors\":\"Laiwei Lu, Hong Zhao, Fuliang Xv, Yong Luo, Junjie Chen, Xiaoyun Ding\",\"doi\":\"10.1016/j.egyai.2024.100367\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In this paper, a dual deep Q-network (DDQN) energy management model based on long-short memory neural network (LSTM) speed prediction is proposed under the model predictive control (MPC) framework. The initial learning rate and neuron dropout probability of the LSTM speed prediction model are optimized by the genetic algorithm (GA). The prediction results show that the root-mean-square error of the GA-LSTM speed prediction method is smaller than the SVR method in different speed prediction horizons. The predicted demand power, the state of charge (SOC), and the demand power at the current moment are used as the state input of the agent, and the real-time control of the control strategy is realized by the MPC method. The simulation results show that the proposed control strategy reduces the equivalent fuel consumption by 0.0354 kg compared with DDQN, 0.8439 kg compared with ECMS, and 0.742 kg compared with the power-following control strategy. The difference between the proposed control strategy and the dynamic planning control strategy is only 0.0048 kg, 0.193%, while the SOC of the power battery remains stable. 
Finally, the hardware-in-the-loop simulation verifies that the proposed control strategy has good real-time performance.</p></div>\",\"PeriodicalId\":34138,\"journal\":{\"name\":\"Energy and AI\",\"volume\":\"17 \",\"pages\":\"Article 100367\"},\"PeriodicalIF\":9.6000,\"publicationDate\":\"2024-04-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2666546824000338/pdfft?md5=765201343f5b2525062ac683ecde4d5d&pid=1-s2.0-S2666546824000338-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Energy and AI\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666546824000338\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Energy and AI","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666546824000338","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
GA-LSTM speed prediction-based DDQN energy management for extended-range vehicles
In this paper, a double deep Q-network (DDQN) energy management model based on long short-term memory (LSTM) neural-network speed prediction is proposed under the model predictive control (MPC) framework. The initial learning rate and neuron dropout probability of the LSTM speed prediction model are optimized by a genetic algorithm (GA). The prediction results show that the root-mean-square error of the GA-LSTM speed prediction method is smaller than that of the support vector regression (SVR) method over different prediction horizons. The predicted demand power, the battery state of charge (SOC), and the demand power at the current moment form the state input of the agent, and real-time operation of the control strategy is achieved through the MPC method. Simulation results show that the proposed strategy reduces equivalent fuel consumption by 0.0354 kg relative to DDQN alone, 0.8439 kg relative to the equivalent consumption minimization strategy (ECMS), and 0.742 kg relative to the power-following control strategy. Its gap to the dynamic programming control strategy is only 0.0048 kg (0.193 %), while the power-battery SOC remains stable. Finally, hardware-in-the-loop simulation verifies that the proposed control strategy has good real-time performance.
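The GA tuning step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the search ranges, GA settings, and the stand-in objective (a toy surface replacing the validation RMSE of an actually trained LSTM) are all assumptions.

```python
import math
import random

random.seed(0)

# Stand-in for the validation RMSE of a trained LSTM speed predictor.
# A real run would train the LSTM with these hyperparameters and return
# its RMSE; this toy surface is minimized at lr = 1e-3, dropout = 0.3
# (illustrative values, not taken from the paper).
def validation_rmse(lr, dropout):
    return (math.log10(lr) + 3.0) ** 2 + (dropout - 0.3) ** 2

LR_RANGE = (1e-5, 1e-1)   # initial-learning-rate search range (assumed)
DROP_RANGE = (0.0, 0.8)   # dropout-probability search range (assumed)

def random_individual():
    # Sample the learning rate on a log scale, dropout uniformly.
    lr = 10 ** random.uniform(math.log10(LR_RANGE[0]), math.log10(LR_RANGE[1]))
    return (lr, random.uniform(*DROP_RANGE))

def crossover(a, b):
    w = random.random()
    return (a[0] ** w * b[0] ** (1 - w),   # geometric blend for log-scaled lr
            w * a[1] + (1 - w) * b[1])     # arithmetic blend for dropout

def mutate(ind, rate=0.2):
    lr, drop = ind
    if random.random() < rate:
        lr = min(max(lr * 10 ** random.gauss(0, 0.3), LR_RANGE[0]), LR_RANGE[1])
    if random.random() < rate:
        drop = min(max(drop + random.gauss(0, 0.05), DROP_RANGE[0]), DROP_RANGE[1])
    return (lr, drop)

def ga_search(pop_size=30, generations=40):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: validation_rmse(*ind))
        elite = pop[: pop_size // 5]   # keep the best 20 %
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=lambda ind: validation_rmse(*ind))

best_lr, best_drop = ga_search()
print(f"best lr={best_lr:.2e}, dropout={best_drop:.2f}")
```

Because the learning rate spans several orders of magnitude, it is handled on a log scale throughout (sampling, crossover, mutation), which is a common choice rather than anything mandated by the paper.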
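The defining feature of DDQN referenced above is that the online network selects the next action while the target network evaluates it, which curbs the max-operator overestimation of vanilla DQN. A minimal sketch of that target computation, with linear toy networks standing in for the real Q-networks and an illustrative 3-dimensional state (predicted demand power, SOC, current demand power) as named in the abstract; action count, weights, and reward are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the online and target Q-networks: linear maps from the
# 3-dim state to Q-values over a discretized set of engine power commands
# (shapes and discretization are assumed for illustration).
N_ACTIONS = 5
W_online = rng.normal(size=(N_ACTIONS, 3))
W_target = W_online.copy()   # target net starts as a copy of the online net

def q_values(W, state):
    return W @ state

def ddqn_target(reward, next_state, gamma=0.99):
    """Double-DQN bootstrap target: the online net picks the greedy next
    action, the target net evaluates it."""
    a_star = int(np.argmax(q_values(W_online, next_state)))          # select
    return reward + gamma * q_values(W_target, next_state)[a_star]   # evaluate

# Illustrative transition: [predicted demand kW, SOC, current demand kW].
state = np.array([25.0, 0.6, 22.0])
y = ddqn_target(reward=-0.8, next_state=state)
```

The online network would then be regressed toward `y` for the taken action, and the target weights refreshed periodically; those training-loop details are omitted here.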