Driving Cycle-Based Energy Management Strategy Development for Range-Extended Electric Vehicles
Abdulehad Ozdemir, Ilker Murat Koç, Bilsay Sümer, Ayhan Kural, Alaeddin Arpaci
DOI: 10.4271/14-13-01-0007
Environmental concerns and technological progress are driving the development and market penetration of electric vehicles (EVs) and hybrid electric vehicles (HEVs). At the same time, transportation systems are becoming more efficient through improved communication within vehicles and between vehicles and infrastructure. In this study, a driving cycle-based energy management strategy is developed for range-extended electric vehicles (REEVs) to increase system efficiency and equivalent vehicle range. A validated vehicle model is built from critical subsystem testing, and a comparative study is conducted to assess the developed strategy. The results show that the optimized strategy reduces CO2 emissions by 6.21%, 1.77%, and 0.58% in heavy, moderate, and light traffic, respectively. Furthermore, efficient use of the range extender (REx), guided by traffic data, extends the vehicle range, especially in heavy traffic conditions.
{"title":"Driving Cycle-Based Energy Management Strategy Development for Range-Extended Electric Vehicles","authors":"Abdulehad Ozdemir, Ilker Murat Koç, Bilsay Sümer, Ayhan Kural, Alaeddin Arpaci","doi":"10.4271/14-13-01-0007","DOIUrl":"https://doi.org/10.4271/14-13-01-0007","url":null,"abstract":"<div>Environmental concerns and technological progress push the development and market penetration of electric vehicles (EVs) and hybrid electric vehicles (HEVs). On the other hand, transportation systems are becoming more efficient by improved communication systems within vehicles and between vehicles and infrastructure. In this study, a driving cycle-based energy management strategy is developed for range-extended electric vehicles (REEVs) to increase system efficiency and equivalent vehicle range. A validated vehicle model is developed by critical subsystem testing and a comparative study is conducted to assess the developed strategy. The results showed that the optimized strategy can save CO<sub>2</sub> emission by 6.21%, 1.77%, and 0.58% for heavy, moderate, and light traffic, respectively. Furthermore, the efficient use of a range extender (REx), guided by traffic data, extends the vehicle range, especially in heavy traffic conditions.</div>","PeriodicalId":36261,"journal":{"name":"SAE International Journal of Electrified Vehicles","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136344719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reviewers","authors":"Simona Onori","doi":"10.4271/14-12-03-0024","DOIUrl":"https://doi.org/10.4271/14-12-03-0024","url":null,"abstract":"<div>Reviewers</div>","PeriodicalId":36261,"journal":{"name":"SAE International Journal of Electrified Vehicles","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135826778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Expert Knowledge-Based Deep Reinforcement Learning Warm Start via Decision Tree for Hybrid Electric Vehicle Energy Management
Hanchen Wang, Ziba Arjmandzadeh, Yiming Ye, Jiangfeng Zhang, Bin Xu
DOI: 10.4271/14-13-01-0006
Deep reinforcement learning has made significant progress in areas such as robotics, games, and autonomous vehicles. However, reaching an optimal result requires extensive training, which is time-consuming and difficult to apply to real-time vehicle energy management. This study uses expert knowledge to warm-start deep reinforcement learning for the energy management of a hybrid electric vehicle, thereby reducing the learning time. Expert domain knowledge is encoded directly into a set of rules represented by a decision tree, and the agent begins learning effective policies quickly after initialization because the logical rules of the decision tree are transferred directly into the neural network's weights and biases. The results show that the expert knowledge-based warm-start agent achieves a higher initial learning reward than a cold start, and that a warm start with more expert knowledge outperforms one with less expert knowledge in the initial learning stage. The proposed warm-start method requires 76.7% less time to reach convergence than the cold start. Compared with a conventional rule-based method and an equivalent consumption minimization strategy, it reduces energy consumption by 8.62% and 3.62%, respectively. These results can facilitate expert knowledge-based warm starts for deep reinforcement learning in hybrid electric vehicle energy management problems.
{"title":"Automated Expert Knowledge-Based Deep Reinforcement Learning Warm\u0000 Start via Decision Tree for Hybrid Electric Vehicle Energy\u0000 Management","authors":"Hanchen Wang, Ziba Arjmandzadeh, Yiming Ye, Jiangfeng Zhang, Bin Xu","doi":"10.4271/14-13-01-0006","DOIUrl":"https://doi.org/10.4271/14-13-01-0006","url":null,"abstract":"Deep reinforcement learning has been utilized in different areas with significant\u0000 progress, such as robotics, games, and autonomous vehicles. However, the optimal\u0000 result from deep reinforcement learning is based on multiple sufficient training\u0000 processes, which are time-consuming and hard to be applied in real-time vehicle\u0000 energy management. This study aims to use expert knowledge to warm start the\u0000 deep reinforcement learning for the energy management of a hybrid electric\u0000 vehicle, thus reducing the learning time. In this study, expert domain knowledge\u0000 is directly encoded to a set of rules, which can be represented by a decision\u0000 tree. The agent can quickly start learning effective policies after\u0000 initialization by directly transferring the logical rules from the decision tree\u0000 into neural network weights and biases. The results show that the expert\u0000 knowledge-based warm start agent has a higher initial learning reward in the\u0000 training process than the cold start. With more expert knowledge, the warm start\u0000 shows improved performance in the initial learning stage compared to the warm\u0000 start method with less expert knowledge. The results indicate that the proposed\u0000 warm start method requires 76.7% less time to achieve convergence than the cold\u0000 start. The proposed warm start method is also compared with the conventional\u0000 rule-based method and equivalent consumption minimization strategy. The proposed\u0000 warm start method reduces energy consumption by 8.62% and 3.62% compared with\u0000 the two baseline methods, respectively. The results of this work can facilitate\u0000 the expert knowledge-based deep reinforcement learning warm start in hybrid\u0000 electric vehicle energy management problems.","PeriodicalId":36261,"journal":{"name":"SAE International Journal of Electrified Vehicles","volume":"24 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76221656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}