Pub Date: 2026-02-01 | Epub Date: 2025-09-19 | DOI: 10.1016/j.apnum.2025.09.008
Mahmoud A. Zaky
In this paper, we construct and analyze a linearized L1–Galerkin Legendre spectral method for solving two-dimensional time-fractional diffusion equations with delay. The approach combines the L1 temporal discretization of the Caputo derivative with a Legendre spectral approximation in space, while nonlinear source terms are efficiently handled through a linearization strategy. To further enhance computational performance, we employ a matrix diagonalization approach to solve the resulting algebraic systems in the numerical implementation. Rigorous stability and convergence analyses are carried out using discrete fractional Grönwall and fractional Halanay inequalities, establishing unconditional stability and spectral error estimates. Numerical experiments confirm the theoretical predictions, demonstrating spectral accuracy in space and (2−ϑ)-order accuracy in time, as well as validating the robustness and efficiency of the proposed method across different fractional orders and delay parameters.
Title: A linearized two-dimensional Galerkin-L1 spectral method with diagonalization for time-fractional diffusion equations with delay. Applied Numerical Mathematics, vol. 220, pp. 1–12.
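The L1 discretization of the Caputo derivative used above has a standard closed form; the sketch below is a minimal standalone version (the time-stepping piece only, not the paper's full Galerkin scheme), with `l1_caputo` a name chosen here for illustration.

```python
import math

def l1_caputo(u, tau, theta):
    """Classical L1 approximation of the Caputo derivative of order
    theta in (0,1) at t_n = n*tau, from samples u[0..n] on a uniform grid.
    Exact for piecewise-linear u; truncation error O(tau^(2-theta))."""
    n = len(u) - 1
    # weights b_j = (j+1)^{1-theta} - j^{1-theta}
    b = [(j + 1) ** (1 - theta) - j ** (1 - theta) for j in range(n)]
    s = sum(b[j] * (u[n - j] - u[n - j - 1]) for j in range(n))
    return s * tau ** (-theta) / math.gamma(2 - theta)
```

The O(τ^(2−ϑ)) truncation error of this formula is the source of the (2−ϑ)-order temporal accuracy reported in the abstract.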
Pub Date: 2026-02-01 | Epub Date: 2025-10-10 | DOI: 10.1016/j.apnum.2025.10.005
Congpei An , Mou Cai
This paper explores the incorporation of Tikhonov regularization into the least squares approximation scheme using trigonometric polynomials on the unit circle. This approach encompasses interpolation and hyperinterpolation as specific cases. With the aid of the de la Vallée-Poussin approximation, we derive a uniform error bound and a concrete L2 error bound. These error estimates demonstrate the effectiveness of Tikhonov regularization in the denoising process. A new regularity condition for the selection of regularization parameters is proposed. By explicitly combining these error bounds for the approximating trigonometric polynomial, we investigate three strategies for choosing the regularization parameter: Morozov’s discrepancy principle, the L-curve, and generalized cross-validation. We show that Morozov’s discrepancy principle satisfies the proposed regularity condition, while the other two methods do not. Finally, numerical examples are provided to illustrate how the aforementioned methodologies, when applied with well-chosen parameters, can significantly improve the quality of approximation.
Title: Parameter choice strategies for regularized least squares approximation of noisy continuous functions on the unit circle. Applied Numerical Mathematics, vol. 220, pp. 84–103.
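The regularized least squares scheme can be illustrated with a small sketch: a generic ridge (Tikhonov) formulation on a real trigonometric basis, assuming nothing about the paper's specific weights or quadrature; `reg_trig_ls` is a name chosen here.

```python
import numpy as np

def reg_trig_ls(theta, y, degree, lam):
    """Tikhonov-regularized least squares fit of samples (theta, y) by a
    real trigonometric polynomial of the given degree; lam = 0 recovers
    plain least squares."""
    cols = [np.ones_like(theta)]
    for k in range(1, degree + 1):
        cols += [np.cos(k * theta), np.sin(k * theta)]
    A = np.column_stack(cols)
    # regularized normal equations: (A^T A + lam I) c = A^T y
    c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return A @ c
```

With λ = 0 and noise-free data the fit reproduces a low-degree trigonometric signal exactly; a positive λ shrinks the coefficients, which is the denoising effect the error bounds quantify.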
Pub Date: 2026-02-01 | Epub Date: 2025-10-10 | DOI: 10.1016/j.apnum.2025.10.002
Jon Henshaw , Aviv Gibali , Thomas Humphries
The superiorization methodology (SM) is an optimization heuristic in which an iterative algorithm, which aims to solve a particular problem, is “superiorized” to promote solutions that are improved with respect to some secondary criterion. This superiorization is achieved by perturbing iterates of the algorithm in nonascending directions of a prescribed function that penalizes undesirable characteristics in the solution; the solution produced by the superiorized algorithm should therefore be improved with respect to the value of this function. In this paper, we broaden the SM to allow for the perturbations to be introduced by an arbitrary procedure instead, using a plug-and-play approach. This allows operations such as image denoisers or deep neural networks, which have applications to a broad class of problems, to be incorporated within the superiorization methodology. As proof of concept, we perform numerical simulations involving low-dose and sparse-view computed tomography image reconstruction, comparing the plug-and-play approach to two conventionally superiorized algorithms, as well as a post-processing approach. The plug-and-play approach provides comparable or better image quality in most cases, while also providing advantages in terms of computing time and data fidelity of the solutions.
Title: Plug-and-play superiorization. Applied Numerical Mathematics, vol. 220, pp. 29–43.
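The interleaving of basic iterations and plug-and-play perturbations can be sketched as below. `basic_step` and `perturb` are hypothetical placeholders for the feasibility-seeking algorithm and the arbitrary perturbation procedure (e.g. a denoiser), and the geometric step-size decay is one common way to keep the perturbations summable, not the paper's exact schedule.

```python
import numpy as np

def pnp_superiorize(x0, basic_step, perturb, n_iter=50, alpha=1.0, kappa=0.5):
    """Plug-and-play superiorization sketch: perturb each iterate toward the
    output of an arbitrary procedure, then apply one step of the basic
    algorithm; step sizes shrink geometrically so perturbations vanish."""
    x = x0
    step = alpha
    for _ in range(n_iter):
        # perturbation toward the secondary criterion (the plug-and-play slot)
        x = x + step * (perturb(x) - x)
        step *= kappa
        # one iteration of the basic (problem-solving) algorithm
        x = basic_step(x)
    return x
```

Because the perturbations are summable, the superiorized iteration inherits the convergence behavior of the basic algorithm while being steered toward outputs the perturbation procedure favors.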
Pub Date: 2026-02-01 | Epub Date: 2025-10-22 | DOI: 10.1016/j.apnum.2025.10.011
Akbar Shirilord, Mehdi Dehghan
In this article, we introduce a preconditioned minimal residual (PMR) algorithm designed to address a wide range of matrix equations and linear systems. We illustrate the efficacy of this algorithm through several numerical examples, including the solution of matrix equations. Notably, we tackle various significant problems such as the minimization of Frobenius norms, least squares optimization, and the computation of the Moore-Penrose pseudo-inverse. Convergence analysis shows that the algorithm converges without any constraints and for any initial guess, although it is more efficient when the matrices are sparse. To validate the effectiveness of our proposed iterative algorithm, we offer various numerical examples with large matrices. As an application of the matrix equation, we explore a method for encrypting and decrypting color images.
Title: A unified preconditioned minimal residual (PMR) algorithm for matrix problems: Linear systems, multiple right-hand sides linear systems, least squares problems, inversion and pseudo-inversion with application to color image encryption. Applied Numerical Mathematics, vol. 220, pp. 216–245.
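A preconditioned minimal residual step for a single linear system can be sketched as below. This is the generic textbook iteration (minimize the residual norm along the preconditioned residual direction), not the paper's unified PMR algorithm; `pmr` and its arguments are names chosen here.

```python
import numpy as np

def pmr(A, b, M_inv, x0=None, tol=1e-10, max_iter=500):
    """Sketch of a preconditioned minimal residual iteration: each step
    minimizes ||b - A x|| along the preconditioned residual z = M^{-1} r.
    Converges when the symmetric part of A M^{-1} is positive definite."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r                     # preconditioning step
        Az = A @ z
        alpha = (r @ Az) / (Az @ Az)      # exact line search for ||r||
        x = x + alpha * z
    return x
```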
Pub Date: 2026-02-01 | Epub Date: 2025-11-03 | DOI: 10.1016/j.apnum.2025.10.018
Stefan R. Panic
We propose a novel quasi-Newton method for solving systems of nonlinear equations that leverages directional approximations of both the gradient and curvature via Legendre-Gauss quadrature. The method reformulates the root-finding problem F(x) = 0 as the minimization of a scalar merit function G(x) = (1/2)‖F(x)‖², and approximates second-order information using three-node orthogonal polynomial integration along search directions. A rank-1 approximation of the Jacobian action is constructed without requiring explicit derivative information. The resulting scheme features a scalar curvature parameter γ_k that dynamically controls the step size, enabling stable updates through an inexact Armijo-type line search. The method remains numerically stable across problems without requiring explicit Jacobian evaluations or storage. We establish global convergence under mild assumptions and explore quasi-Newton properties under additional curvature conditions. Extensive numerical experiments demonstrate competitive accuracy, robustness, and reduced iteration counts compared to existing diagonal quasi-Newton methods.
Title: Directional gradient and curvature approximation via Legendre quadrature in unconstrained optimization. Applied Numerical Mathematics, vol. 220, pp. 294–309.
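The merit-function reformulation can be illustrated with a small derivative-free sketch: evaluate G along the search direction at the three Legendre-Gauss nodes 0 and ±h√(3/5) and read the directional gradient and curvature off the interpolating quadratic. This is an illustration under simplified assumptions, not the paper's exact construction.

```python
import math

def merit(F, x):
    """Scalar merit function G(x) = 0.5 * ||F(x)||^2."""
    fx = F(x)
    return 0.5 * sum(v * v for v in fx)

def directional_derivatives(F, x, d, h=1e-3):
    """Estimate g'(0) and g''(0) for g(t) = G(x + t d) from merit values at
    the three-node Legendre-Gauss abscissae t = 0, +-h*sqrt(3/5), via the
    interpolating quadratic (derivative-free)."""
    t = h * math.sqrt(3.0 / 5.0)
    g_m = merit(F, [xi - t * di for xi, di in zip(x, d)])
    g_0 = merit(F, x)
    g_p = merit(F, [xi + t * di for xi, di in zip(x, d)])
    grad = (g_p - g_m) / (2 * t)           # g'(0) of the quadratic fit
    curv = (g_p - 2 * g_0 + g_m) / t ** 2  # g''(0) of the quadratic fit
    return grad, curv
```

A negative estimated g'(0) identifies a descent direction for the merit function, and the curvature estimate plays the role of the scalar parameter controlling the step size.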
Pub Date: 2026-02-01 | Epub Date: 2025-11-08 | DOI: 10.1016/j.apnum.2025.11.001
Changlun Ye , Hai Bi , Liangkun Xu , Xianbing Luo
In this paper, for the Cahn-Hilliard equation with dynamic boundary conditions, we establish a variational time stepping numerical scheme integrated with finite element methods. This scheme is structure-preserving: it maintains the inherent physical properties of the continuous model, including mass conservation and energy dissipation. We demonstrate the existence of discrete solutions without restrictions on the discretization parameters, and establish uniqueness under mild conditions. Finally, we present ample numerical results which validate our theoretical findings and demonstrate that our numerical scheme can achieve second-order convergence in time. We also apply our scheme to the KLS (proposed by P. Knopf, K.F. Lam, and J. Stange) and KLLM (proposed by P. Knopf, K.F. Lam, C. Liu, and S. Metzger) models, two other Cahn-Hilliard models with dynamic boundary conditions, and verify numerically that the solutions of the KLS model converge to those of the KLLM model.
Title: A structure-preserving variational time stepping scheme for Cahn-Hilliard equation with dynamic boundary conditions. Applied Numerical Mathematics, vol. 220, pp. 346–372.
Pub Date: 2026-02-01 | Epub Date: 2025-10-14 | DOI: 10.1016/j.apnum.2025.10.008
Xuewei Liu , Zhenyu Wang , Xiaohua Ding , Shao-Liang Zhang
The stochastic two-dimensional KdV equation arises as a mathematical model for shallow water wave dynamics in physical systems. To efficiently handle the equation’s high-order spatial derivatives and stochastic terms, a local discontinuous Galerkin method is proposed. The method is proved to be L²-stable and to achieve the optimal mean-square convergence rate of order N + 1 when degree-N polynomials are used. For temporal discretization, the implicit midpoint method is applied, and the restarted Generalized Minimum Residual method is employed to solve the resulting linear systems in two-dimensional simulations. Numerical experiments demonstrate optimal convergence rates and confirm both the theoretical analysis and the effectiveness of the method.
Title: Optimal error estimates and stability of a local discontinuous Galerkin method for the stochastic two-dimensional KdV equation. Applied Numerical Mathematics, vol. 220, pp. 310–328.
Pub Date: 2026-02-01 | Epub Date: 2025-10-10 | DOI: 10.1016/j.apnum.2025.10.004
Behzad Kafash
In this study, a modified Rayleigh-Ritz method is presented for the solution of optimal control problems governed by time-delayed dynamical systems, considering both constrained and unconstrained control and state variables. In this approach, the control or state variables are approximated using shifted Chebyshev polynomials with unknown coefficients. The proposed modified Rayleigh-Ritz method transforms the constrained optimal control problems governed by time-delayed dynamical systems into an optimization problem with constraints. Furthermore, a computational algorithm is developed for implementing the proposed method, and its convergence is proven analytically. To evaluate the efficiency and accuracy of the proposed algorithm, several numerical examples are presented. These include the single-input/single-output system as a case study with control and final state constraints, and an optimal control problem of the harmonic oscillator under different scenarios, which involve constraints on state and control variables.
Title: Numerical approximation of constrained optimal control problems in delayed systems using an enhanced Rayleigh-Ritz algorithm. Applied Numerical Mathematics, vol. 220, pp. 104–122.
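The shifted Chebyshev parametrization of the control or state variables can be sketched as below, assuming the standard shifted polynomials T*_k(t) = T_k(2t − 1) on [0, 1] built from the three-term recurrence; the function name is chosen here for illustration.

```python
import numpy as np

def shifted_chebyshev_basis(t, degree):
    """Evaluate the shifted Chebyshev polynomials T*_0..T*_degree on [0, 1]
    at the points t, via T_{k+1}(s) = 2 s T_k(s) - T_{k-1}(s) with s = 2t - 1.
    Returns an array with one column per basis polynomial."""
    s = 2.0 * t - 1.0
    T = [np.ones_like(t), s.copy()]
    for _ in range(2, degree + 1):
        T.append(2.0 * s * T[-1] - T[-2])
    return np.column_stack(T[: degree + 1])
```

A candidate control u(t) ≈ Σ c_k T*_k(t) is then a linear combination of these columns, and the unknown coefficients c_k become the decision variables of the resulting constrained optimization problem.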
Pub Date: 2026-02-01 | Epub Date: 2025-10-09 | DOI: 10.1016/j.apnum.2025.10.006
Feng Shao , Hu Shao , Bin Wu , Haijun Wang , Pengjie Liu , Meixing Liu
In this paper, we aim to develop a general conjugate gradient (CG) algorithmic framework for solving unconstrained optimization problems. Additionally, we employ different hybrid techniques to derive two hybrid conjugate parameters, which are then integrated into the algorithmic framework to develop two effective hybrid CG methods. Under common assumptions, we establish the global convergence of the proposed framework without any convexity assumption. Furthermore, we reveal the convergence rate of the framework under the uniformly convex condition. Preliminary numerical results, including applications to unconstrained optimization and image restoration problems, are presented to explicitly illustrate the performance of the proposed methods in comparison with several existing methods.
Title: A conjugate gradient algorithmic framework for unconstrained optimization with applications: Convergence and rate analyses. Applied Numerical Mathematics, vol. 220, pp. 13–28.
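A hybrid nonlinear CG iteration can be sketched as below, using the well-known hybrid parameter β = max(0, min(β_PRP, β_FR)) as a stand-in (it is not one of the two parameters derived in the paper) together with a backtracking Armijo line search.

```python
import numpy as np

def hybrid_cg(f, grad, x0, tol=1e-8, max_iter=200):
    """Nonlinear conjugate gradient with an illustrative hybrid parameter
    beta = max(0, min(beta_PRP, beta_FR)) and Armijo backtracking."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # backtracking Armijo line search along the descent direction d
        t, slope = 1.0, g @ d
        while f(x + t * d) > f(x) + 1e-4 * t * slope:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        # hybrid conjugate parameter: clip PRP by FR and at zero
        beta_fr = (g_new @ g_new) / (g @ g)
        beta_prp = (g_new @ (g_new - g)) / (g @ g)
        beta = max(0.0, min(beta_prp, beta_fr))
        d = -g_new + beta * d
        if g_new @ d >= 0.0:   # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

Clipping β at zero gives the automatic-restart behavior that underlies global convergence proofs for hybrid CG methods without convexity assumptions.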
Pub Date: 2026-02-01 | Epub Date: 2025-10-22 | DOI: 10.1016/j.apnum.2025.10.011
Guillermo Albuja , Andrés I. Ávila , Miguel Murillo
Richards’ equation is a nonlinear degenerate parabolic differential equation whose numerical solution depends on the linearization method used to deal with the degeneracy. These methods have two main properties: convergence (global vs. local) and order (linear vs. quadratic). Among the main methods, Newton’s method, the modified Picard method, and the L-scheme each have one good property but not the other. Mixed schemes get the best of both, starting with a global linear method and then switching to a quadratic local scheme, although without a clear rule for when to switch from the global method to the local one.
In this work, we use two different approaches to define new global superlinear and quadratic schemes. First, we use an error-correction convex combination of classical linearization methods, a global linear method, and a quadratic local method, selecting the parameter λ_k^n via an error-correction approach to obtain fixed-point convergent sequences. We build an error-correction type-Secant scheme (ECtS) without derivatives to get a superlinear global scheme. Next, we build the convex combination of the L-scheme with three global schemes: the type-Secant scheme (ECLtS), the modified Picard scheme (ECLP), and Newton’s scheme (ECLN), to obtain globally superlinearly convergent schemes. Second, we use a parameter τ to adapt the time step in the general Newton-Raphson method, applied to the three classical linearizations and the three new error-correction linearizations. For the new schemes, we first apply the τ-adaptation to the classical methods (τ-Newton’s, τ-L-scheme, and τ-modified Picard). Next, we apply it to the error-correction schemes (τ-AtS, τ-ALtS, τ-ALP, τ-ALN). Finally, we consider a combination of the L-scheme and the τ-adaptive Newton’s method, mixing both methods (τ-LAN).
We test the twelve new schemes on five examples from the literature, showing that they are robust and fast, including cases where Newton’s scheme does not converge. Moreover, we include an example that uses the Gardner exponential nonlinearities, showing that L- and L2-schemes are as slow as linearization techniques. Several new schemes show high performance across the examples. The τ-LAN scheme has advantages, using fewer iterations in most examples.
Title: Global superlinear linearization schemes based on adaptive strategies for solving Richards’ equation. Applied Numerical Mathematics, vol. 220, pp. 189–215.
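The globally convergent but linear behavior of the L-scheme can be seen already on a scalar model problem. The sketch below solves θ(u) + a·u = c (a stand-in for one implicit time step of a degenerate parabolic problem), and is purely illustrative, far simpler than the paper's Richards' solver.

```python
def l_scheme(theta, c, a, L, u0=0.0, tol=1e-12, max_iter=200):
    """L-scheme fixed-point iteration for the scalar model problem
    theta(u) + a*u = c.  Converges globally, but only linearly, whenever
    the stabilization constant L bounds sup theta' from above."""
    u = u0
    for _ in range(max_iter):
        res = c - theta(u) - a * u
        if abs(res) < tol:
            break
        # replace the nonlinear slope theta'(u) by the constant L
        u = u + res / (L + a)
    return u
```

Replacing θ'(u) by the fixed constant L is what trades Newton's quadratic rate for unconditional global convergence; the error-correction and τ-adaptive schemes above aim to recover the fast rate without losing that robustness.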