{"title":"Controlled gradient descent: A control theoretical perspective for optimization","authors":"Revati Gunjal, Syed Shadab Nayyer, S.R. Wagh, N.M. Singh","doi":"10.1016/j.rico.2024.100417","DOIUrl":null,"url":null,"abstract":"<div><p>The Gradient Descent (GD) paradigm is a foundational principle of modern optimization algorithms. The GD algorithm and its variants, including accelerated optimization algorithms, geodesic optimization, natural gradient, and contraction-based optimization, to name a few, are used in machine learning and the system and control domain. Here, we proposed a new algorithm based on the control theoretical perspective, labeled as the Controlled Gradient Descent (CGD). Specifically, this approach overcomes the challenges of the abovementioned algorithms, which rely on the choice of a suitable geometric structure, particularly in machine learning. The proposed CGD approach visualizes the optimization as a Manifold Stabilization Problem (MSP) through the notion of an invariant manifold and its attractivity. The CGD approach leads to an exponential contraction of trajectories under the influence of a pseudo-Riemannian metric generated through the control procedure as an additional outcome. The efficacy of the CGD is demonstrated with various test objective functions like the benchmark Rosenbrock function, objective function with a lack of flatness, and semi-contracting objective functions often encountered in machine learning applications.</p></div>","PeriodicalId":34733,"journal":{"name":"Results in Control and Optimization","volume":"15 ","pages":"Article 100417"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266672072400047X/pdfft?md5=6d3e8563b7dd084183d4e190beae7445&pid=1-s2.0-S266672072400047X-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Results in Control and Optimization","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S266672072400047X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Mathematics","Score":null,"Total":0}
Citations: 0
Abstract
The Gradient Descent (GD) paradigm is a foundational principle of modern optimization algorithms. The GD algorithm and its variants, including accelerated optimization algorithms, geodesic optimization, natural gradient, and contraction-based optimization, to name a few, are used in machine learning and in the systems and control domain. Here, we propose a new algorithm based on a control theoretical perspective, labeled the Controlled Gradient Descent (CGD). Specifically, this approach overcomes a key challenge of the aforementioned algorithms, which rely on the choice of a suitable geometric structure, particularly in machine learning. The proposed CGD approach views optimization as a Manifold Stabilization Problem (MSP) through the notion of an invariant manifold and its attractivity. As an additional outcome, the CGD approach leads to exponential contraction of trajectories under a pseudo-Riemannian metric generated through the control procedure. The efficacy of the CGD is demonstrated on various test objective functions, such as the benchmark Rosenbrock function, objective functions lacking flatness, and semi-contracting objective functions often encountered in machine learning applications.
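The abstract does not specify the CGD update rule itself, so the following is only a minimal sketch of the plain GD baseline that CGD builds upon, applied to the Rosenbrock benchmark mentioned above. The function and parameter names (rosenbrock, gradient_descent, the step size, and iteration count) are illustrative assumptions, not the authors' implementation.

```python
# Plain gradient descent x_{k+1} = x_k - eta * grad f(x_k) on the 2-D Rosenbrock
# function, the benchmark cited in the abstract. This is the baseline that the
# proposed CGD augments with a control-theoretic stabilization term (not shown here).
import numpy as np

def rosenbrock(x, a=1.0, b=100.0):
    """Standard 2-D Rosenbrock function f(x, y) = (a - x)^2 + b (y - x^2)^2."""
    return (a - x[0])**2 + b * (x[1] - x[0]**2)**2

def rosenbrock_grad(x, a=1.0, b=100.0):
    """Analytic gradient of the 2-D Rosenbrock function."""
    dfdx = -2.0 * (a - x[0]) - 4.0 * b * x[0] * (x[1] - x[0]**2)
    dfdy = 2.0 * b * (x[1] - x[0]**2)
    return np.array([dfdx, dfdy])

def gradient_descent(x0, step=1e-3, iters=20000):
    """Plain GD; its slow progress along Rosenbrock's curved valley is the kind
    of behavior that motivates geometry-aware variants such as the proposed CGD."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * rosenbrock_grad(x)
    return x

if __name__ == "__main__":
    x_final = gradient_descent([-1.2, 1.0])
    print("GD iterate:", x_final, "f =", rosenbrock(x_final))
```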