{"title":"Adaptive Control and Intersections with Reinforcement Learning","authors":"A. Annaswamy","doi":"10.1146/annurev-control-062922-090153","DOIUrl":null,"url":null,"abstract":"This article provides an exposition of the field of adaptive control and its intersections with reinforcement learning. Adaptive control and reinforcement learning are two different methods that are both commonly employed for the control of uncertain systems. Historically, adaptive control has excelled at real-time control of systems with specific model structures through adaptive rules that learn the underlying parameters while providing strict guarantees on stability, asymptotic performance, and learning. Reinforcement learning methods are applicable to a broad class of systems and are able to produce near-optimal policies for highly complex control tasks. This is often enabled by significant offline training via simulation or the collection of large input-state datasets. This article attempts to compare adaptive control and reinforcement learning using a common framework. The problem statement in each field and highlights of their results are outlined. Two specific examples of dynamic systems are used to illustrate the details of the two methods, their advantages, and their deficiencies. The need for real-time control methods that leverage tools from both approaches is motivated through the lens of this common framework. Expected final online publication date for the Annual Review of Control, Robotics, and Autonomous Systems, Volume 14 is May 2023. 
Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.","PeriodicalId":29961,"journal":{"name":"Annual Review of Control Robotics and Autonomous Systems","volume":"128 1","pages":""},"PeriodicalIF":11.2000,"publicationDate":"2023-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annual Review of Control Robotics and Autonomous Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1146/annurev-control-062922-090153","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 5
Abstract
This article provides an exposition of the field of adaptive control and its intersections with reinforcement learning. Adaptive control and reinforcement learning are two different methods that are both commonly employed for the control of uncertain systems. Historically, adaptive control has excelled at real-time control of systems with specific model structures through adaptive rules that learn the underlying parameters while providing strict guarantees on stability, asymptotic performance, and learning. Reinforcement learning methods are applicable to a broad class of systems and are able to produce near-optimal policies for highly complex control tasks. This is often enabled by significant offline training via simulation or the collection of large input-state datasets. This article attempts to compare adaptive control and reinforcement learning using a common framework. The problem statement in each field and highlights of their results are outlined. Two specific examples of dynamic systems are used to illustrate the details of the two methods, their advantages, and their deficiencies. The need for real-time control methods that leverage tools from both approaches is motivated through the lens of this common framework. Expected final online publication date for the Annual Review of Control, Robotics, and Autonomous Systems, Volume 14 is May 2023. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
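The abstract notes that adaptive control learns a plant's underlying parameters in real time through adaptive rules with stability guarantees. As an illustration of that idea (not taken from the article itself), the following is a minimal sketch of model reference adaptive control (MRAC) for a scalar plant with an unknown pole, using a Lyapunov-based gradient update; all system values and gains here are hypothetical choices for demonstration.

```python
import numpy as np

def simulate_mrac(a_p=2.0, a_m=-3.0, gamma=5.0, dt=1e-3, T=10.0):
    """Scalar MRAC sketch (illustrative values, not from the article).

    Plant:            x' = a_p * x + u        (a_p unknown to the controller)
    Reference model:  xm' = a_m * xm + r      (a_m < 0, i.e., stable)
    Control law:      u = theta * x + r       (ideal gain theta* = a_m - a_p)
    Adaptive law:     theta' = -gamma * e * x, with tracking error e = x - xm
    """
    steps = int(T / dt)
    x, xm, theta = 0.5, 0.0, 0.0
    for k in range(steps):
        r = np.sin(0.5 * k * dt)            # persistently exciting reference
        e = x - xm                          # tracking error
        u = theta * x + r                   # certainty-equivalence control
        # Forward-Euler integration of plant, model, and adaptive law
        x += dt * (a_p * x + u)
        xm += dt * (a_m * xm + r)
        theta += dt * (-gamma * e * x)      # parameter update drives e -> 0
    return e, theta

final_error, final_theta = simulate_mrac()
```

With a persistently exciting reference, the tracking error decays and the adapted gain approaches the ideal value theta* = a_m - a_p, illustrating the "learning while guaranteeing stability" behavior the abstract attributes to adaptive control.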
Journal description:
The Annual Review of Control, Robotics, and Autonomous Systems offers comprehensive reviews on theoretical and applied developments influencing autonomous and semiautonomous systems engineering. Major areas covered include control, robotics, mechanics, optimization, communication, information theory, machine learning, computing, and signal processing. The journal extends its reach beyond engineering to intersect with fields like biology, neuroscience, and human behavioral sciences. The current volume has transitioned to open access through the Subscribe to Open program, with all articles published under a CC BY license.