Integrability of weak mixed first-order derivatives and convergence rates of scrambled digital nets
Pub Date: 2025-03-04 | DOI: 10.1016/j.jco.2025.101935 | Journal of Complexity 89, Article 101935
Yang Liu
We consider the L^p integrability of weak mixed first-order derivatives of the integrand and study convergence rates of scrambled digital nets. We show that the generalized Vitali variation with parameter α ∈ [1/2, 1] from [Dick and Pillichshammer, 2010] is bounded above by the L^p norm of the weak mixed first-order derivative, where p = 2/(3 − 2α). Consequently, when the weak mixed first-order derivative belongs to L^p for 1 ≤ p ≤ 2, the variance of the scrambled digital nets estimator converges at a rate of O(N^{−4+2/p} log^{s−1} N). Numerical experiments further validate the theoretical results.
{"title":"Integrability of weak mixed first-order derivatives and convergence rates of scrambled digital nets","authors":"Yang Liu","doi":"10.1016/j.jco.2025.101935","DOIUrl":"10.1016/j.jco.2025.101935","url":null,"abstract":"<div><div>We consider the <span><math><msup><mrow><mi>L</mi></mrow><mrow><mi>p</mi></mrow></msup></math></span> integrability of weak mixed first-order derivatives of the integrand and study convergence rates of scrambled digital nets. We show that the generalized Vitali variation with parameter <span><math><mi>α</mi><mo>∈</mo><mo>[</mo><mfrac><mrow><mn>1</mn></mrow><mrow><mn>2</mn></mrow></mfrac><mo>,</mo><mn>1</mn><mo>]</mo></math></span> from [Dick and Pillichshammer, 2010] is bounded above by the <span><math><msup><mrow><mi>L</mi></mrow><mrow><mi>p</mi></mrow></msup></math></span> norm of the weak mixed first-order derivative, where <span><math><mi>p</mi><mo>=</mo><mfrac><mrow><mn>2</mn></mrow><mrow><mn>3</mn><mo>−</mo><mn>2</mn><mi>α</mi></mrow></mfrac></math></span>. Consequently, when the weak mixed first-order derivative belongs to <span><math><msup><mrow><mi>L</mi></mrow><mrow><mi>p</mi></mrow></msup></math></span> for <span><math><mn>1</mn><mo>≤</mo><mi>p</mi><mo>≤</mo><mn>2</mn></math></span>, the variance of the scrambled digital nets estimator convergences at a rate of <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>N</mi></mrow><mrow><mo>−</mo><mn>4</mn><mo>+</mo><mfrac><mrow><mn>2</mn></mrow><mrow><mi>p</mi></mrow></mfrac></mrow></msup><msup><mrow><mi>log</mi></mrow><mrow><mi>s</mi><mo>−</mo><mn>1</mn></mrow></msup><mo></mo><mi>N</mi><mo>)</mo></math></span>. Numerical experiments further validate the theoretical results.</div></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"89 ","pages":"Article 101935"},"PeriodicalIF":1.8,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143552355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weighted mesh algorithms for general Markov decision processes: Convergence and tractability
Pub Date: 2025-02-26 | DOI: 10.1016/j.jco.2025.101932 | Journal of Complexity 88, Article 101932
Denis Belomestny, John Schoenmakers, Veronika Zorina
We introduce a mesh-type approach for tackling discrete-time, finite-horizon Markov Decision Processes (MDPs) characterized by state and action spaces that are general, encompassing both finite and infinite (yet suitably regular) subsets of Euclidean space. In particular, for bounded state and action spaces, our algorithm achieves a computational complexity that is tractable in the sense of Novak & Woźniakowski [12], and is polynomial in the time horizon. For an unbounded state space the algorithm is “semi-tractable” in the sense that the complexity is proportional to ε^{−c} with some dimension-independent c ≥ 2, to achieve precision ε, and polynomial in the time horizon with linear degree in the underlying dimension. As such, the proposed approach has some flavor of the randomization method by Rust [14], which uses uniform sampling in a compact state space. However, the present approach is essentially different due to the inhomogeneous finite-horizon setting, which involves general transition distributions over a possibly non-compact state space. To demonstrate the effectiveness of our algorithm, we provide illustrations based on Linear-Quadratic Gaussian (LQG) control problems.
{"title":"Weighted mesh algorithms for general Markov decision processes: Convergence and tractability","authors":"Denis Belomestny , John Schoenmakers , Veronika Zorina","doi":"10.1016/j.jco.2025.101932","DOIUrl":"10.1016/j.jco.2025.101932","url":null,"abstract":"<div><div>We introduce a mesh-type approach for tackling discrete-time, finite-horizon Markov Decision Processes (MDPs) characterized by state and action spaces that are general, encompassing both finite and infinite (yet suitably regular) subsets of Euclidean space. In particular, for bounded state and action spaces, our algorithm achieves a computational complexity that is tractable in the sense of Novak & Woźniakowski <span><span>[12]</span></span>, and is polynomial in the time horizon. For an unbounded state space the algorithm is “semi-tractable” in the sense that the complexity is proportional to <span><math><msup><mrow><mi>ε</mi></mrow><mrow><mo>−</mo><mi>c</mi></mrow></msup></math></span> with some dimension independent <span><math><mi>c</mi><mo>≥</mo><mn>2</mn></math></span>, to achieve precision <em>ε</em>, and polynomial in the time horizon with linear degree in the underlying dimension. As such, the proposed approach has some flavor of the randomization method by Rust <span><span>[14]</span></span> which uses uniform sampling in compact state space. However, the present approach is essentially different due to the inhomogeneous finite horizon setting, which involves general transition distributions over a possibly non-compact state space. To demonstrate the effectiveness of our algorithm, we provide illustrations based on Linear-Quadratic Gaussian (LQG) control problems.</div></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"88 ","pages":"Article 101932"},"PeriodicalIF":1.8,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143520997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Factoring sparse polynomials fast
Pub Date: 2025-02-25 | DOI: 10.1016/j.jco.2025.101934 | Journal of Complexity 88, Article 101934
Alexander Demin, Joris van der Hoeven
Consider a sparse polynomial in several variables given explicitly as a sum of non-zero terms with coefficients in an effective field. In this paper, we present several algorithms for factoring such polynomials and related tasks (such as gcd computation, square-free factorization, content-free factorization, and root extraction). Our methods are all based on sparse interpolation, but follow two main lines of attack: iteration on the number of variables and more direct reductions to the univariate or bivariate case. We present detailed probabilistic complexity bounds in terms of the complexity of sparse interpolation and evaluation.
{"title":"Factoring sparse polynomials fast","authors":"Alexander Demin , Joris van der Hoeven","doi":"10.1016/j.jco.2025.101934","DOIUrl":"10.1016/j.jco.2025.101934","url":null,"abstract":"<div><div>Consider a sparse polynomial in several variables given explicitly as a sum of non-zero terms with coefficients in an effective field. In this paper, we present several algorithms for factoring such polynomials and related tasks (such as gcd computation, square-free factorization, content-free factorization, and root extraction). Our methods are all based on sparse interpolation, but follow two main lines of attack: iteration on the number of variables and more direct reductions to the univariate or bivariate case. We present detailed probabilistic complexity bounds in terms of the complexity of sparse interpolation and evaluation.</div></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"88 ","pages":"Article 101934"},"PeriodicalIF":1.8,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143529702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal approximation of infinite-dimensional holomorphic functions II: Recovery from i.i.d. pointwise samples
Pub Date: 2025-02-20 | DOI: 10.1016/j.jco.2025.101933 | Journal of Complexity 89, Article 101933
Ben Adcock, Nick Dexter, Sebastian Moraga
Infinite-dimensional, holomorphic functions have been studied in detail over the last several decades, due to their relevance to parametric differential equations and computational uncertainty quantification. The approximation of such functions from finitely many samples is of particular interest, due to the practical importance of constructing surrogate models for complex mathematical models of physical processes. In a previous work [5], we studied the approximation of so-called Banach-valued, (b, ε)-holomorphic functions on the infinite-dimensional hypercube [−1, 1]^ℕ from m (potentially adaptive) samples. In particular, we derived lower bounds for the adaptive m-widths for classes of such functions, which showed that certain algebraic rates of the form m^{1/2−1/p} are the best possible regardless of the sampling-recovery pair. In this work, we continue this investigation by focusing on the practical case where the samples are pointwise evaluations drawn identically and independently from the underlying probability measure for the problem. Specifically, for Hilbert-valued (b, ε)-holomorphic functions, we show that the same rates can be achieved (up to a small polylogarithmic or algebraic factor) for tensor-product Jacobi measures. Our reconstruction maps are based on least squares and compressed sensing procedures using the corresponding orthonormal Jacobi polynomials. In doing so, we strengthen and generalize past work that has derived weaker nonuniform guarantees for the uniform and Chebyshev measures (and corresponding polynomials) only. We also extend various best s-term polynomial approximation error bounds to arbitrary Jacobi polynomial expansions. Overall, we demonstrate that i.i.d. pointwise samples drawn from an underlying probability measure are near-optimal for the recovery of infinite-dimensional, holomorphic functions.
{"title":"Optimal approximation of infinite-dimensional holomorphic functions II: Recovery from i.i.d. pointwise samples","authors":"Ben Adcock , Nick Dexter , Sebastian Moraga","doi":"10.1016/j.jco.2025.101933","DOIUrl":"10.1016/j.jco.2025.101933","url":null,"abstract":"<div><div>Infinite-dimensional, holomorphic functions have been studied in detail over the last several decades, due to their relevance to parametric differential equations and computational uncertainty quantification. The approximation of such functions from finitely-many samples is of particular interest, due to the practical importance of constructing surrogate models to complex mathematical models of physical processes. In a previous work, <span><span>[5]</span></span> we studied the approximation of so-called Banach-valued, <span><math><mo>(</mo><mi>b</mi><mo>,</mo><mi>ε</mi><mo>)</mo></math></span>-holomorphic functions on the infinite-dimensional hypercube <span><math><msup><mrow><mo>[</mo><mo>−</mo><mn>1</mn><mo>,</mo><mn>1</mn><mo>]</mo></mrow><mrow><mi>N</mi></mrow></msup></math></span> from <em>m</em> (potentially adaptive) samples. In particular, we derived lower bounds for the adaptive <em>m</em>-widths for classes of such functions, which showed that certain algebraic rates of the form <span><math><msup><mrow><mi>m</mi></mrow><mrow><mn>1</mn><mo>/</mo><mn>2</mn><mo>−</mo><mn>1</mn><mo>/</mo><mi>p</mi></mrow></msup></math></span> are the best possible regardless of the sampling-recovery pair. In this work, we continue this investigation by focusing on the practical case where the samples are pointwise evaluations drawn identically and independently from the underlying probability measure for the problem. Specifically, for Hilbert-valued <span><math><mo>(</mo><mi>b</mi><mo>,</mo><mi>ε</mi><mo>)</mo></math></span>-holomorphic functions, we show that the same rates can be achieved (up to a small polylogarithmic or algebraic factor) for tensor-product Jacobi measures. Our reconstruction maps are based on least squares and compressed sensing procedures using the corresponding orthonormal Jacobi polynomials. In doing so, we strengthen and generalize past work that has derived weaker nonuniform guarantees for the uniform and Chebyshev measures (and corresponding polynomials) only. We also extend various best <em>s</em>-term polynomial approximation error bounds to arbitrary Jacobi polynomial expansions. Overall, we demonstrate that i.i.d. pointwise samples drawn from an underlying probability measure are near-optimal for the recovery of infinite-dimensional, holomorphic functions.</div></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"89 ","pages":"Article 101933"},"PeriodicalIF":1.8,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143552356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online outcome weighted learning with general loss functions
Pub Date: 2025-02-19 | DOI: 10.1016/j.jco.2025.101931 | Journal of Complexity 88, Article 101931
Aoli Yang, Jun Fan, Dao-Hong Xiang
The pursuit of individualized treatment rules in precision medicine has generated significant interest due to its potential to optimize clinical outcomes for patients with diverse treatment responses. One approach that has gained attention is outcome weighted learning, which is tailored to estimate optimal individualized treatment rules by leveraging each patient's unique characteristics under a weighted classification framework. However, traditional offline learning algorithms, which process all available data at once, face limitations when applied to high-dimensional electronic health records data due to its sheer volume. Additionally, the dynamic nature of precision medicine requires that learning algorithms can effectively handle streaming data that arrives in a sequential manner. To overcome these challenges, we present a novel framework that combines outcome weighted learning with online gradient descent algorithms, aiming to enhance precision medicine practices. Our framework provides a comprehensive analysis of the learning theory associated with online outcome weighted learning algorithms, taking into account general classification loss functions. We establish the convergence of these algorithms for the first time, providing explicit convergence rates while assuming polynomially decaying step sizes, with (or without) a regularization term. Our findings present a non-trivial extension of online classification to online outcome weighted learning, contributing to the theoretical foundations of learning algorithms tailored for processing streaming input-output-reward type data.
{"title":"Online outcome weighted learning with general loss functions","authors":"Aoli Yang , Jun Fan , Dao-Hong Xiang","doi":"10.1016/j.jco.2025.101931","DOIUrl":"10.1016/j.jco.2025.101931","url":null,"abstract":"<div><div>The pursuit of individualized treatment rules in precision medicine has generated significant interest due to its potential to optimize clinical outcomes for patients with diverse treatment responses. One approach that has gained attention is outcome weighted learning, which is tailored to estimate optimal individualized treatment rules by leveraging each patient's unique characteristics under a weighted classification framework. However, traditional offline learning algorithms, which process all available data at once, face limitations when applied to high-dimensional electronic health records data due to its sheer volume. Additionally, the dynamic nature of precision medicine requires that learning algorithms can effectively handle streaming data that arrives in a sequential manner. To overcome these challenges, we present a novel framework that combines outcome weighted learning with online gradient descent algorithms, aiming to enhance precision medicine practices. Our framework provides a comprehensive analysis of the learning theory associated with online outcome weighted learning algorithms, taking into account general classification loss functions. We establish the convergence of these algorithms for the first time, providing explicit convergence rates while assuming polynomially decaying step sizes, with (or without) a regularization term. Our findings present a non-trivial extension of online classification to online outcome weighted learning, contributing to the theoretical foundations of learning algorithms tailored for processing streaming input-output-reward type data.</div></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"88 ","pages":"Article 101931"},"PeriodicalIF":1.8,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143474840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computing approximate roots of monotone functions
Pub Date: 2025-01-28 | DOI: 10.1016/j.jco.2025.101930 | Journal of Complexity 88, Article 101930
Alexandros Hollender, Chester Lawrence, Erel Segal-Halevi
We are given a value-oracle for a d-dimensional function f that satisfies the conditions of Miranda's theorem, and therefore has a root. Our goal is to compute an approximate root using a number of evaluations that is polynomial in the number of accuracy digits. For d = 1 this is always possible using the bisection method, but for d ≥ 2 this is impossible in general.
We show that, if d = 2 and f satisfies a single monotonicity condition, then the number of required evaluations is polynomial in the accuracy. The same holds if d ≥ 3 and f satisfies some particular d^2 − d monotonicity conditions. In contrast, if even two of these monotonicity conditions are missing, then the required number of evaluations might be exponential.
As an example application, we show that approximate roots of monotone functions can be used for approximate envy-free cake-cutting.
{"title":"Computing approximate roots of monotone functions","authors":"Alexandros Hollender , Chester Lawrence, Erel Segal-Halevi","doi":"10.1016/j.jco.2025.101930","DOIUrl":"10.1016/j.jco.2025.101930","url":null,"abstract":"<div><div>We are given a value-oracle for a <em>d</em>-dimensional function <em>f</em> that satisfies the conditions of Miranda's theorem, and therefore has a root. Our goal is to compute an approximate root using a number of evaluations that is polynomial in the number of accuracy digits. For <span><math><mi>d</mi><mo>=</mo><mn>1</mn></math></span> this is always possible using the bisection method, but for <span><math><mi>d</mi><mo>≥</mo><mn>2</mn></math></span> this is impossible in general.</div><div>We show that, if <span><math><mi>d</mi><mo>=</mo><mn>2</mn></math></span> and <em>f</em> satisfies a single monotonicity condition, then the number of required evaluations is polynomial in the accuracy. The same holds if <span><math><mi>d</mi><mo>≥</mo><mn>3</mn></math></span> and <em>f</em> satisfies some particular <span><math><msup><mrow><mi>d</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>−</mo><mi>d</mi></math></span> monotonicity conditions. We show that, if <span><math><mi>d</mi><mo>=</mo><mn>2</mn></math></span> and <em>f</em> satisfies a single monotonicity condition, then the number of required evaluations is polynomial in the accuracy. The same holds if <span><math><mi>d</mi><mo>≥</mo><mn>3</mn></math></span> and <em>f</em> satisfies some particular <span><math><msup><mrow><mi>d</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>−</mo><mi>d</mi></math></span> monotonicity conditions. In contrast, if even two of these monotonicity conditions are missing, then the required number of evaluations might be exponential.</div><div>As an example application, we show that approximate roots of monotone functions can be used for approximate envy-free cake-cutting.</div></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"88 ","pages":"Article 101930"},"PeriodicalIF":1.8,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143157479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the complexity of orbit word problems
Pub Date: 2025-01-16 | DOI: 10.1016/j.jco.2025.101929 | Journal of Complexity 88, Article 101929
Michael Maller
In previous work we defined a computational saddle transition problem which arises in the dynamics of diffeomorphisms of the 2-dimensional torus, and proved this problem is in Oracle NP, working in a model of computation appropriate for Turing machine computations on problems defined over the real numbers. In this note we report further work on these problems, studying orbit descriptions represented as finite words in periodic points. We show these Orbit Word Problems are again in Oracle NP, in our model. Our methods also reveal structures in the set of realized orbit words, suggesting further applications in complexity.
{"title":"On the complexity of orbit word problems","authors":"Michael Maller","doi":"10.1016/j.jco.2025.101929","DOIUrl":"10.1016/j.jco.2025.101929","url":null,"abstract":"<div><div>In previous work we defined a computational saddle transition problem which arises in the dynamics of diffeomorphisms of the 2−dimensional torus, and proved this problem is in Oracle <strong>NP</strong>, working in a model of computation appropriate for Turing machine computations on problems defined over the real numbers. In this note we report further work on these problems, studying orbit descriptions represented as finite words in periodic points. We show these Orbit Word Problems are again in Oracle <strong>NP</strong>, in our model. Our methods also reveal structures in the set of realized orbit words, suggesting further applications in complexity.</div></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"88 ","pages":"Article 101929"},"PeriodicalIF":1.8,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143157480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast interpolation of multivariate polynomials with sparse exponents
Pub Date: 2024-12-16 | DOI: 10.1016/j.jco.2024.101922 | Journal of Complexity 87, Article 101922
Joris van der Hoeven, Grégoire Lecerf
Consider a sparse multivariate polynomial f with integer coefficients. Assume that f is represented as a “modular black box polynomial”, e.g. via an algorithm to evaluate f at arbitrary integer points, modulo arbitrary positive integers. The problem of sparse interpolation is to recover f in its usual sparse representation, as a sum of coefficients times monomials. For the first time we present a quasi-optimal algorithm for this task in terms of the product of the number of terms of f and the maximum bit-size of the terms of f.
{"title":"Fast interpolation of multivariate polynomials with sparse exponents","authors":"Joris van der Hoeven, Grégoire Lecerf","doi":"10.1016/j.jco.2024.101922","DOIUrl":"10.1016/j.jco.2024.101922","url":null,"abstract":"<div><div>Consider a sparse multivariate polynomial <em>f</em> with integer coefficients. Assume that <em>f</em> is represented as a “modular black box polynomial”, e.g. via an algorithm to evaluate <em>f</em> at arbitrary integer points, modulo arbitrary positive integers. The problem of sparse interpolation is to recover <em>f</em> in its usual sparse representation, as a sum of coefficients times monomials. For the first time we present a quasi-optimal algorithm for this task in term of the product of the number of terms of <em>f</em> by the maximum of the bit-size of the terms of <em>f</em>.</div></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"87 ","pages":"Article 101922"},"PeriodicalIF":1.8,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143180614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A procedure for increasing the convergence order of iterative methods from p to 5p for solving nonlinear system
Pub Date: 2024-12-06 | DOI: 10.1016/j.jco.2024.101921 | Journal of Complexity 87, Article 101921
Santhosh George, Muniyasamy M, Manjusree Gopal, Chandhini G, Ioannis K. Argyros
In this paper, we propose a procedure to obtain an iterative method that increases its convergence order from p to 5p for solving nonlinear systems. Our analysis is given in more general Banach space settings and uses assumptions on the derivative of the involved operator only up to order max{k, 2}. Here, k is the order of the highest derivative used in the convergence analysis of the iterative method with convergence order p. A particular case of our analysis includes an existing fifth-order method and improves its applicability to more problems than those covered by the method's analysis in an earlier study.
{"title":"A procedure for increasing the convergence order of iterative methods from p to 5p for solving nonlinear system","authors":"Santhosh George , Muniyasamy M , Manjusree Gopal , Chandhini G , Ioannis K. Argyros","doi":"10.1016/j.jco.2024.101921","DOIUrl":"10.1016/j.jco.2024.101921","url":null,"abstract":"<div><div>In this paper, we propose a procedure to obtain an iterative method that increases its convergence order from <em>p</em> to 5<em>p</em> for solving nonlinear systems. Our analysis is given in more general Banach space settings and uses assumptions on the derivative of the involved operator only up to order <span><math><mi>max</mi><mo></mo><mo>{</mo><mi>k</mi><mo>,</mo><mn>2</mn><mo>}</mo></math></span>. Here, <em>k</em> is the order of the highest derivative used in the convergence analysis of the iterative method with convergence order <em>p</em>. A particular case of our analysis includes an existing fifth-order method and improves its applicability to more problems than the problems covered by the method's analysis in earlier study.</div></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"87 ","pages":"Article 101921"},"PeriodicalIF":1.8,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143180612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A revisit on Nesterov acceleration for linear ill-posed problems
Pub Date: 2024-12-04 | DOI: 10.1016/j.jco.2024.101920 | Journal of Complexity 87, Article 101920
Duo Liu, Qin Huang, Qinian Jin
In recent years, Nesterov acceleration has been introduced to enhance the efficiency of Landweber iteration for solving ill-posed problems. For linear ill-posed problems in Hilbert spaces, Nesterov acceleration has been analyzed with a discrepancy principle proposed to terminate the iterations. However, the existing approach requires computing residuals along two distinct iterative sequences, resulting in increased computational costs. In this paper, we propose an alternative discrepancy principle for Nesterov acceleration that eliminates the need to compute the residuals for one of the iterative sequences, thereby reducing computational time by approximately one-third per iteration. We provide a convergence analysis of the proposed method, establishing both its convergence and convergence rates. The effectiveness of our approach is demonstrated through numerical simulations.
{"title":"A revisit on Nesterov acceleration for linear ill-posed problems","authors":"Duo Liu , Qin Huang , Qinian Jin","doi":"10.1016/j.jco.2024.101920","DOIUrl":"10.1016/j.jco.2024.101920","url":null,"abstract":"<div><div>In recent years, Nesterov acceleration has been introduced to enhance the efficiency of Landweber iteration for solving ill-posed problems. For linear ill-posed problems in Hilbert spaces, Nesterov acceleration has been analyzed with a discrepancy principle proposed to terminate the iterations. However, the existing approach requires computing residuals along two distinct iterative sequences, resulting in increased computational costs. In this paper, we propose an alternative discrepancy principle for Nesterov acceleration that eliminates the need to compute the residuals for one of the iterative sequences, thereby reducing computational time by approximately one-third per iteration. We provide a convergence analysis of the proposed method, establishing both its convergence and convergence rates. The effectiveness of our approach is demonstrated through numerical simulations.</div></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"87 ","pages":"Article 101920"},"PeriodicalIF":1.8,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143180615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}