Optimization with the Mixed Coordination Method

Jianxin Tang, P. Luh, T. Chang
{"title":"混合协调优化方法","authors":"Jianxin Tang, P. Luh, T. Chang","doi":"10.23919/ACC.1988.4790021","DOIUrl":null,"url":null,"abstract":"This paper studies static optimization with equality constraints by using the mixed coordination method. The idea is to relax equality constraints via Lagrange multipliers, and creat a hierarchy where the Lagrange multipliers and part of the decision variables are selected as high level variables. The method was proposed about ten years ago with a simple high level updating scheme. In this paper we show that this simple updating scheme has a linear convergence rate under appropriate conditions. To obtain faster convergence, the Modified Newton's Method is adopted at the high level. There are two difficulties associated with the Modified Newton's Method. One is how to obtain the Hessian matrix in determining the Newton direction, as second order derivatives of the objective function with respect to all high level variables are needed. The second is when to stop in performing a line search along the Newton direction, as the high level problem is a maxmini problem looking for a saddle point. In this paper, the Hessian matrix is obtained by using a kind of sensitivity analysis. The line search stopping criterion, on the other hand, is based on the norm of the gradient vector. Extensive numerical testing results are provided in the paper. Since the low level is a set of independent subproblems, the method is well suited for parallel processing. Furthermore, since convexification terms can be added while maintaining separability of the original problem, the method is promising for nonconvex problems.","PeriodicalId":6395,"journal":{"name":"1988 American Control Conference","volume":"52 1","pages":"1811-1816"},"PeriodicalIF":0.0000,"publicationDate":"1988-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Optimization with the Mixed Coordination Method\",\"authors\":\"Jianxin Tang, P. Luh, T. Chang\",\"doi\":\"10.23919/ACC.1988.4790021\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper studies static optimization with equality constraints by using the mixed coordination method. The idea is to relax equality constraints via Lagrange multipliers, and creat a hierarchy where the Lagrange multipliers and part of the decision variables are selected as high level variables. The method was proposed about ten years ago with a simple high level updating scheme. In this paper we show that this simple updating scheme has a linear convergence rate under appropriate conditions. To obtain faster convergence, the Modified Newton's Method is adopted at the high level. There are two difficulties associated with the Modified Newton's Method. One is how to obtain the Hessian matrix in determining the Newton direction, as second order derivatives of the objective function with respect to all high level variables are needed. The second is when to stop in performing a line search along the Newton direction, as the high level problem is a maxmini problem looking for a saddle point. In this paper, the Hessian matrix is obtained by using a kind of sensitivity analysis. The line search stopping criterion, on the other hand, is based on the norm of the gradient vector. Extensive numerical testing results are provided in the paper. Since the low level is a set of independent subproblems, the method is well suited for parallel processing. 
Furthermore, since convexification terms can be added while maintaining separability of the original problem, the method is promising for nonconvex problems.\",\"PeriodicalId\":6395,\"journal\":{\"name\":\"1988 American Control Conference\",\"volume\":\"52 1\",\"pages\":\"1811-1816\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1988-06-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"1988 American Control Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/ACC.1988.4790021\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"1988 American Control Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ACC.1988.4790021","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

This paper studies static optimization with equality constraints using the mixed coordination method. The idea is to relax the equality constraints via Lagrange multipliers and create a hierarchy in which the Lagrange multipliers and part of the decision variables are selected as high-level variables. The method was proposed about ten years ago with a simple high-level updating scheme. In this paper we show that this simple updating scheme has a linear convergence rate under appropriate conditions. To obtain faster convergence, the Modified Newton's Method is adopted at the high level. Two difficulties arise with the Modified Newton's Method. One is how to obtain the Hessian matrix needed to determine the Newton direction, since second-order derivatives of the objective function with respect to all high-level variables are required. The other is when to stop the line search along the Newton direction, since the high-level problem is a max-min problem seeking a saddle point. In this paper, the Hessian matrix is obtained through a form of sensitivity analysis, and the line-search stopping criterion is based on the norm of the gradient vector. Extensive numerical testing results are provided. Since the low level consists of a set of independent subproblems, the method is well suited for parallel processing. Furthermore, since convexification terms can be added while maintaining separability of the original problem, the method is promising for nonconvex problems.
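
To make the structure concrete, the hierarchy described in the abstract can be sketched in generic notation; the partition x = (y, z) and the symbols below are illustrative choices made here, not the paper's own notation:

```latex
% Equality-constrained problem and its Lagrangian relaxation
\begin{align*}
  \min_{x}\; f(x) \quad \text{s.t.}\quad g(x) = 0,
  \qquad L(x,\lambda) = f(x) + \lambda^{\top} g(x)
\end{align*}
% Split the decision variables x = (y, z): the multipliers \lambda and the
% block y form the high-level variables; z stays at the low level.
\begin{align*}
  \text{low level:}\quad  & \phi(y,\lambda) = \min_{z}\, L(y, z, \lambda)
      && \text{(independent subproblems for fixed $y,\lambda$)} \\
  \text{high level:}\quad & \max_{\lambda}\, \min_{y}\; \phi(y,\lambda)
      && \text{(a max-min problem seeking a saddle point)}
\end{align*}
```

Under this decomposition, a natural simple updating scheme moves λ and y along the gradient of φ (the constraint violation for λ, the partial Lagrangian gradient for y); the paper shows such a simple scheme converges linearly under appropriate conditions, and its refinement replaces these steps with a Modified Newton direction at the high level, with the Hessian obtained through sensitivity analysis and the line search stopped according to the norm of the gradient.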
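As a concrete toy instance of this scheme, the minimal Python sketch below applies the decomposition to a small quadratic problem with a single equality constraint; the problem data, the variable partition, the step size, and the tolerance are all invented here for illustration and are not taken from the paper:

```python
# Toy illustration of the mixed coordination idea described in the abstract.
# The problem, the variable partition, the step size and the tolerance are
# invented for illustration; none of them come from the paper.
#
#   minimize    sum_i 0.5*(x_i - a_i)^2
#   subject to  sum_i x_i = c
#
# The single equality constraint is relaxed with a multiplier lam.  The
# first variable y = x_0 and lam are kept at the high level; the remaining
# variables are low-level and, once lam is fixed, split into independent
# one-dimensional subproblems (hence the suitability for parallel solution).

import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])   # targets a_i (hypothetical data)
c = 8.0                              # right-hand side of the constraint
alpha = 0.2                          # high-level step size (simple scheme)

def low_level(lam):
    """Independent subproblems: min_z 0.5*(z_i - a_i)^2 + lam*z_i."""
    return a[1:] - lam               # closed-form minimizers

y, lam = 0.0, 0.0                    # high-level variables
for k in range(200):
    z = low_level(lam)               # low level, solvable in parallel
    # Gradients of phi(y, lam) = min_z L(y, z, lam), via the envelope theorem:
    grad_y = (y - a[0]) + lam        # partial Lagrangian gradient in y
    grad_lam = y + z.sum() - c       # constraint violation
    if np.hypot(grad_y, grad_lam) < 1e-10:
        break
    # "Simple updating scheme": gradient descent in y, ascent in lam.
    # The paper's refinement would take a Modified Newton step here, with
    # the Hessian of phi obtained by sensitivity analysis of z(lam).
    y -= alpha * grad_y
    lam += alpha * grad_lam

lam_star = (a.sum() - c) / len(a)    # closed-form multiplier for this toy
print(f"iterations={k}  lam={lam:.6f} (exact {lam_star:.6f})  x0={y:.6f}")
```

For this strongly convex-concave toy the simple gradient scheme converges linearly to the saddle point, mirroring the convergence-rate result stated above; a Newton-type update at the high level would additionally need the Hessian of φ, which for a general problem is what the paper obtains through sensitivity analysis.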