Mario Picerno; Lucas Koch; Kevin Badalian; Marius Wegener; Joschka Schaub; Charles R. Koch; Jakob Andert

Title: Transfer of Reinforcement Learning-Based Powertrain Controllers From Model- to Hardware-in-the-Loop
Journal: IEEE Transactions on Vehicular Technology, vol. 74, no. 7, pp. 10332-10343
DOI: 10.1109/TVT.2025.3546717
Published: 2025-02-28
Citations: 0
Abstract
Developing powertrain control functions is time-consuming and resource-intensive, often leading to sub-optimal solutions. Reinforcement Learning (RL) allows agents to perform complex control tasks with minimal human involvement, but is often confined to simulations due to testing costs and safety concerns. To effectively apply RL in embedded powertrain control, agents must be able to handle real-world scenarios, particularly through direct interaction with real actuators and control systems. Therefore, this research applies Transfer Learning (TL) and X-in-the-Loop (XiL) simulations to develop agents that can seamlessly transition and perform robustly in real-world environments. For transient exhaust gas recirculation control of an internal combustion engine, the process begins with a computationally inexpensive Model-in-the-Loop (MiL) simulation to select a suitable algorithm, fine-tune hyperparameters, and conduct preliminary training. In the next step, pre-trained agents are transferred to an advanced Hardware-in-the-Loop (HiL) system with real hardware using TL for further training. Compared to agents trained entirely on HiL systems, transferred agents required significantly less real-world training time (up to 5.9 times shorter) while outperforming the series production Engine Control Unit (ECU). The results highlight that, for real-world effectiveness, integrating actual hardware into training is essential, that reward fine-tuning plays a critical role in optimizing these interactions, and that the maturity of the policy significantly influences both training duration and overall performance.
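The two-stage workflow in the abstract (pre-train cheaply on a MiL model, then fine-tune briefly on the real plant) can be illustrated with a deliberately minimal sketch. Everything below is a hypothetical stand-in for the paper's actual setup: the first-order plant, the proportional-gain "policy", and the hill-climbing "training" loop are toy constructs, and the plant-gain mismatch (1.0 vs. 0.7) is an assumed model-to-hardware gap. The sketch only shows why a warm-started policy makes better use of a short training budget on the mismatched plant:

```python
import random

def rollout(plant_gain, k, steps=50):
    """Track a setpoint of 1.0 with proportional gain k on a first-order
    plant; returns the accumulated squared tracking error (lower is better)."""
    x, cost = 0.0, 0.0
    for _ in range(steps):
        u = k * (1.0 - x)          # proportional "policy"
        x += plant_gain * u * 0.1  # toy plant dynamics, dt = 0.1
        cost += (1.0 - x) ** 2
    return cost

def train(plant_gain, k0, iters, lr=0.5, seed=0):
    """Stand-in for RL training: hill-climb on k, keeping random
    perturbations that reduce the rollout cost."""
    rng = random.Random(seed)
    k, best = k0, rollout(plant_gain, k0)
    for _ in range(iters):
        cand = k + lr * rng.uniform(-1.0, 1.0)
        c = rollout(plant_gain, cand)
        if c < best:
            k, best = cand, c
    return k, best

# Stage 1: long, cheap pre-training on the MiL model (assumed gain 1.0).
k_mil, _ = train(plant_gain=1.0, k0=0.0, iters=200)

# Stage 2: short fine-tuning on the "HiL" plant, whose gain (0.7) is
# deliberately mismatched with the model, warm-started from k_mil.
k_tl, cost_tl = train(plant_gain=0.7, k0=k_mil, iters=50)

# Baseline: the same short budget on the HiL plant, trained from scratch.
k_scratch, cost_scratch = train(plant_gain=0.7, k0=0.0, iters=50)

print(f"fine-tune cost warm-started: {cost_tl:.3f}, from scratch: {cost_scratch:.3f}")
```

The warm-started controller starts fine-tuning already close to a good gain, so the same number of real-plant iterations yields a lower tracking cost than training from scratch, which mirrors the abstract's finding that transferred agents need far less real-world training time.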
Journal Description:
The scope of the Transactions is threefold (approved by the IEEE Periodicals Committee in 1967) and is published on the journal website as follows:

Communications: The use of mobile radio on land, sea, and air, including cellular radio, two-way radio, and one-way radio, with applications to dispatch and control vehicles, mobile radiotelephone, radio paging, and status monitoring and reporting. Related areas include spectrum usage, component radio equipment such as cavities and antennas, computer control for radio systems, digital modulation and transmission techniques, mobile radio circuit design, radio propagation for vehicular communications, effects of ignition noise and radio frequency interference, and consideration of the vehicle as part of the radio operating environment.

Transportation Systems: The use of electronic technology for the control of ground transportation systems including, but not limited to, traffic aid systems; traffic control systems; automatic vehicle identification, location, and monitoring systems; automated transport systems, with single and multiple vehicle control; and moving walkways or people-movers.

Vehicular Electronics: The use of electronic or electrical components and systems for control, propulsion, or auxiliary functions, including, but not limited to, electronic controls for engine, drive train, convenience, safety, and other vehicle systems; sensors, actuators, and microprocessors for onboard use; electronic fuel control systems; vehicle electrical components and systems; collision avoidance systems; electromagnetic compatibility in the vehicle environment; and electric vehicles and controls.