{"title":"Adaptive Gain Scheduling using Reinforcement Learning for Quadcopter Control","authors":"Mike Timmerman, Aryan Patel, Tim Reinhart","doi":"arxiv-2403.07216","DOIUrl":null,"url":null,"abstract":"The paper presents a technique using reinforcement learning (RL) to adapt the\ncontrol gains of a quadcopter controller. Specifically, we employed Proximal\nPolicy Optimization (PPO) to train a policy which adapts the gains of a\ncascaded feedback controller in-flight. The primary goal of this controller is\nto minimize tracking error while following a specified trajectory. The paper's\nkey objective is to analyze the effectiveness of the adaptive gain policy and\ncompare it to the performance of a static gain control algorithm, where the\nIntegral Squared Error and Integral Time Squared Error are used as metrics. The\nresults show that the adaptive gain scheme achieves over 40$\\%$ decrease in\ntracking error as compared to the static gain controller.","PeriodicalId":501062,"journal":{"name":"arXiv - CS - Systems and Control","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Systems and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2403.07216","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The paper presents a technique that uses reinforcement learning (RL) to adapt the control gains of a quadcopter controller. Specifically, we employ Proximal Policy Optimization (PPO) to train a policy that adapts the gains of a cascaded feedback controller in flight. The primary goal of this controller is to minimize tracking error while following a specified trajectory. The paper's key objective is to analyze the effectiveness of the adaptive gain policy and compare it to the performance of a static gain controller, using the Integral Squared Error and Integral Time Squared Error as metrics. The results show that the adaptive gain scheme achieves a decrease of over 40% in tracking error compared with the static gain controller.
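For illustration only, the sketch below (not the authors' code) shows discrete approximations of the two metrics named in the abstract, the Integral Squared Error (ISE) and the Integral Time Squared Error (ITSE), together with a toy mapping from a normalized RL action to controller gains. The function names, the gain bounds, and the synthetic error signal are assumptions made for this example.

```python
# Minimal sketch: ISE/ITSE tracking-error metrics and an assumed action-to-gain mapping.
import numpy as np


def integral_squared_error(error, dt):
    """Discrete ISE: sum of e[k]^2 times the sample period dt."""
    error = np.asarray(error, dtype=float)
    return float(np.sum(error**2) * dt)


def integral_time_squared_error(error, dt):
    """Discrete ITSE: time-weighted squared error, sum of t[k] * e[k]^2 times dt."""
    error = np.asarray(error, dtype=float)
    t = np.arange(error.size) * dt
    return float(np.sum(t * error**2) * dt)


def action_to_gains(action, gain_low, gain_high):
    """Map a policy action in [-1, 1]^n to gains within assumed bounds (illustrative only)."""
    action = np.clip(np.asarray(action, dtype=float), -1.0, 1.0)
    return gain_low + 0.5 * (action + 1.0) * (gain_high - gain_low)


if __name__ == "__main__":
    dt = 0.01
    t = np.arange(0.0, 5.0, dt)
    error = np.exp(-t) * np.sin(2 * np.pi * t)  # synthetic tracking-error signal
    print("ISE :", integral_squared_error(error, dt))
    print("ITSE:", integral_time_squared_error(error, dt))

    # Hypothetical example: rescale a PPO action to (Kp, Ki, Kd) for one loop of the cascade.
    gains = action_to_gains([0.2, -0.5, 0.9],
                            gain_low=np.array([0.0, 0.0, 0.0]),
                            gain_high=np.array([10.0, 5.0, 1.0]))
    print("gains:", gains)
```

In a setup like the one described, the trained policy would output such an action at each decision step, and the resulting gains would be written into the cascaded feedback controller; the ISE and ITSE of the tracking error then serve as the comparison metrics between the adaptive and static gain schemes.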