Solving Least-Squares Problems via a Double-Optimal Algorithm and a Variant of the Karush–Kuhn–Tucker Equation for Over-Determined Systems

Algorithms · IF 1.8 · Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-05-14 · DOI: 10.3390/a17050211
Chein-Shan Liu, C. Kuo, Chih-Wen Chang
{"title":"Solving Least-Squares Problems via a Double-Optimal Algorithm and a Variant of the Karush–Kuhn–Tucker Equation for Over-Determined Systems","authors":"Chein-Shan Liu, C. Kuo, Chih-Wen Chang","doi":"10.3390/a17050211","DOIUrl":null,"url":null,"abstract":"A double optimal solution (DOS) of a least-squares problem Ax=b,A∈Rq×n with q≠n is derived in an m+1-dimensional varying affine Krylov subspace (VAKS); two minimization techniques exactly determine the m+1 expansion coefficients of the solution x in the VAKS. The minimal-norm solution can be obtained automatically regardless of whether the linear system is consistent or inconsistent. A new double optimal algorithm (DOA) is created; it is sufficiently time saving by inverting an m×m positive definite matrix at each iteration step, where m≪min(n,q). The properties of the DOA are investigated and the estimation of residual error is provided. The residual norms are proven to be strictly decreasing in the iterations; hence, the DOA is absolutely convergent. Numerical tests reveal the efficiency of the DOA for solving least-squares problems. The DOA is applicable to least-squares problems regardless of whether qn. The Moore–Penrose inverse matrix is also addressed by adopting the DOA; the accuracy and efficiency of the proposed method are proven. The m+1-dimensional VAKS is different from the traditional m-dimensional affine Krylov subspace used in the conjugate gradient (CG)-type iterative algorithms CGNR (or CGLS) and CGRE (or Craig method) for solving least-squares problems with q>n. We propose a variant of the Karush–Kuhn–Tucker equation, and then we apply the partial pivoting Gaussian elimination method to solve the variant, which is better than the original Karush–Kuhn–Tucker equation, the CGNR and the CGNE for solving over-determined linear systems. Our main contribution is developing a double-optimization-based iterative algorithm in a varying affine Krylov subspace for effectively and accurately solving least-squares problems, even for a dense and ill-conditioned matrix A with q≪n or q≫n.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":null,"pages":null},"PeriodicalIF":1.8000,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Algorithms","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/a17050211","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

A double optimal solution (DOS) of a least-squares problem Ax = b, A ∈ ℝ^(q×n) with q ≠ n, is derived in an (m+1)-dimensional varying affine Krylov subspace (VAKS); two minimization techniques exactly determine the m+1 expansion coefficients of the solution x in the VAKS. The minimal-norm solution is obtained automatically, regardless of whether the linear system is consistent or inconsistent. A new double optimal algorithm (DOA) is created; it saves computation time by inverting only an m×m positive definite matrix at each iteration step, where m ≪ min(n, q). The properties of the DOA are investigated and an estimate of the residual error is provided. The residual norms are proven to be strictly decreasing over the iterations; hence, the DOA is absolutely convergent. Numerical tests reveal the efficiency of the DOA for solving least-squares problems. The DOA is applicable to least-squares problems regardless of whether q > n or q < n. The Moore–Penrose inverse matrix is also computed by adopting the DOA; the accuracy and efficiency of the proposed method are demonstrated. The (m+1)-dimensional VAKS differs from the traditional m-dimensional affine Krylov subspace used in the conjugate gradient (CG)-type iterative algorithms CGNR (or CGLS) and CGNE (or Craig's method) for solving least-squares problems with q > n. We propose a variant of the Karush–Kuhn–Tucker (KKT) equation and apply Gaussian elimination with partial pivoting to solve it; for over-determined linear systems, the variant outperforms the original KKT equation, CGNR, and CGNE. Our main contribution is a double-optimization-based iterative algorithm in a varying affine Krylov subspace that effectively and accurately solves least-squares problems, even for a dense, ill-conditioned matrix A with q ≪ n or q ≫ n.
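To make the subspace idea concrete, below is a minimal sketch of a restarted Krylov-projection iteration for min ‖Ax − b‖₂, in which each sweep builds a small basis from the current residual and inverts only an m×m positive definite matrix, as the abstract describes. This is a generic illustration, not the authors' double-optimal algorithm: the varying affine Krylov subspace and the two exact minimizations of the DOA are not reproduced here, and the name `krylov_lsq` and its parameters are invented for the example.

```python
import numpy as np

def krylov_lsq(A, b, m=5, maxit=200, tol=1e-10):
    """Generic restarted Krylov-projection sketch for min ||A x - b||_2.

    Each sweep builds the basis [g, Cg, ..., C^(m-1) g] with C = A^T A and
    g = A^T r from the current residual r = b - A x, then solves the
    projected m x m normal equations; this is NOT the authors' exact DOA.
    """
    q, n = A.shape
    x = np.zeros(n)
    for _ in range(maxit):
        r = b - A @ x
        g = A.T @ r                      # steepest-descent direction
        if np.linalg.norm(g) < tol:
            break
        U = np.empty((n, m))
        v = g
        for j in range(m):               # raw Krylov vectors of A^T A
            U[:, j] = v
            v = A.T @ (A @ v)
        U, _ = np.linalg.qr(U)           # orthonormalize for stability
        W = A @ U                        # image of the basis, q x m
        # Correction x += U y minimizing ||r - W y||: an m x m SPD solve,
        # assuming A U has full column rank.
        y = np.linalg.solve(W.T @ W, W.T @ r)
        x = x + U @ y
    return x

# Usage: a small over-determined test problem (q > n).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))
b = rng.standard_normal(50)
x = krylov_lsq(A, b, m=4)
print(np.linalg.norm(A.T @ (b - A @ x)))  # near zero: normal equations hold
```

Because each sweep minimizes ‖r − Wy‖ over a subspace containing the steepest-descent direction g, the residual norm cannot increase between sweeps, mirroring the monotone-decrease property the abstract proves for the DOA.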
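For reference, the classical Karush–Kuhn–Tucker (augmented) formulation that the proposed variant improves on can be written down and solved directly. The sketch below assembles the standard (q+n)×(q+n) KKT system [[I, A], [Aᵀ, 0]][r; x] = [b; 0] for an over-determined system and solves it with LU factorization with partial pivoting (what `numpy.linalg.solve` performs via LAPACK). The paper's variant of this equation is not reproduced here, and `lstsq_via_kkt` is an illustrative name.

```python
import numpy as np

def lstsq_via_kkt(A, b):
    """Solve min ||A x - b||_2 (q > n) via the classical augmented KKT system.

    [ I   A ] [ r ]   [ b ]
    [ A^T 0 ] [ x ] = [ 0 ],  so r = b - A x and A^T r = 0 (normal equations).

    This is the baseline formulation; the paper's variant is not shown.
    """
    q, n = A.shape
    K = np.block([
        [np.eye(q), A],
        [A.T, np.zeros((n, n))],
    ])
    rhs = np.concatenate([b, np.zeros(n)])
    # numpy.linalg.solve applies Gaussian elimination with partial
    # pivoting (LAPACK gesv); K is nonsingular when A has full column rank.
    sol = np.linalg.solve(K, rhs)
    return sol[q:], sol[:q]              # x, residual r

# Usage: agreement with the library least-squares solver.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 6))
b = rng.standard_normal(20)
x, r = lstsq_via_kkt(A, b)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```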
Source journal: Algorithms (Mathematics-Numerical Analysis)
CiteScore: 4.10 · Self-citation rate: 4.30% · Annual articles: 394 · Review time: 11 weeks