In Banach spaces, the convergence analysis of iteratively regularized Landweber iteration (IRLI) has recently been studied via conditional stability estimates. However, the existing formulation of IRLI does not include general non-smooth convex penalty functionals, which are essential for capturing special characteristics of the sought solution. In this paper, we formulate a generalized form of IRLI whose formulation includes general non-smooth uniformly convex penalty functionals. We carry out the convergence analysis and derive convergence rates of the generalized method solely via conditional stability estimates in Banach spaces, for both perturbed and unperturbed data. We also discuss a few examples of inverse problems to which our method is applicable.
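For orientation, the classical (non-regularized) Landweber iteration for a linear problem Ax = y in Hilbert space takes gradient-descent steps on the residual. The sketch below is this textbook special case only, not the generalized IRLI scheme with non-smooth penalties analyzed in the paper; the matrix and data are arbitrary illustrations.

```python
import numpy as np

def landweber(A, y, omega=0.2, iters=500):
    """Classical Landweber iteration x_{k+1} = x_k - omega * A^T (A x_k - y),
    the Hilbert-space special case underlying IRLI-type methods."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - omega * A.T @ (A @ x - y)   # gradient step on 0.5 * ||Ax - y||^2
    return x

# Recover x = (1, 1) from exact data y = A x.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
x_rec = landweber(A, A @ np.array([1.0, 1.0]))
```

Convergence requires the step size omega to satisfy 0 < omega < 2 / ||A||^2; for noisy data the iteration must additionally be stopped early or regularized, which is exactly where IRLI-type modifications enter.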
Many practical problems, such as the Malthusian population growth model, eigenvalue computations for matrices, and solving the van der Waals equation of state, inherently involve nonlinearities. This paper first introduces a two-parameter iterative scheme with convergence order two. Building on this, a three-parameter scheme with convergence order four is proposed. We then extend these schemes into higher-order schemes with memory using Newton's interpolation, achieving an upper bound for the efficiency index of . Finally, we validate the new schemes by solving various numerical and practical examples, demonstrating their superior efficiency in terms of computational cost, CPU time, and accuracy compared to existing methods.
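The paper's specific two- and three-parameter schemes are not reproduced here; as a point of comparison, the sketch below implements Ostrowski's classical two-step method, a well-known fourth-order scheme of the same general family (one Newton predictor plus one corrector per iteration).

```python
def ostrowski(f, df, x0, tol=1e-12, max_iter=50):
    """Ostrowski's two-step method, a classical fourth-order scheme,
    shown only to illustrate this family of root finders."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        y = x - fx / df(x)                            # Newton predictor
        fy = f(y)
        x = y - fy * fx / ((fx - 2.0 * fy) * df(x))   # Ostrowski corrector
    return x

# Find the real root of x^3 - 2 starting from x0 = 1.
root = ostrowski(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.0)
```

With two function evaluations and one derivative evaluation per step, Ostrowski's method attains efficiency index 4^(1/3) ≈ 1.587; schemes with memory push this index higher by reusing past iterates, as the paper does via Newton's interpolation.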
Given a Hilbert space and a finite measure space Ω, the approximation of a vector-valued function f by a k-dimensional subspace plays an important role in dimension reduction techniques, such as reduced basis methods for solving parameter-dependent partial differential equations. For functions in the Lebesgue–Bochner space , the best possible subspace approximation error is characterized by the singular values of f. However, for practical reasons, the subspace is often restricted to be spanned by point samples of f. We show that this restriction has only a mild impact on the attainable error; there always exist k samples such that the resulting error is not larger than . Our work extends existing results by Binev et al. (2011) [3] on approximation in the supremum norm and by Deshpande et al. (2006) [8] on column subset selection for matrices.
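In the discrete analogue, the best rank-k approximation error of a snapshot matrix is given exactly by the tail singular values (Eckart–Young theorem). A small numerical check, with an arbitrary random matrix standing in for point samples of f:

```python
import numpy as np

rng = np.random.default_rng(0)
# Snapshot matrix: each column is a point sample f(omega_i) in R^m.
F = rng.standard_normal((20, 50))
U, s, Vt = np.linalg.svd(F, full_matrices=False)

k = 5
Fk = U[:, :k] * s[:k] @ Vt[:k]        # best rank-k approximation (Eckart-Young)
err = np.linalg.norm(F - Fk)          # Frobenius-norm error
tail = np.sqrt(np.sum(s[k:] ** 2))    # sqrt of the sum of squared tail singular values
```

Restricting the approximating subspace to the span of k actual columns of F (column subset selection) can only increase `err`; the paper's contribution is a bound on how much worse this restricted error can be in the function-space setting.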
In this paper, we study the high-probability convergence of AdaGrad-Norm for constrained, non-smooth, weakly convex optimization under both bounded-noise and sub-Gaussian-noise assumptions. We also investigate a more general accelerated gradient descent (AGD) template (Ghadimi and Lan, 2016) that encompasses AdaGrad-Norm, Nesterov's accelerated gradient descent, and RSAG (Ghadimi and Lan, 2016) under different parameter choices. We provide a high-probability convergence rate without requiring knowledge of the weak convexity parameter or the gradient bound to tune the step sizes.
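AdaGrad-Norm itself is simple to state: one scalar step size, divided by the square root of the accumulated squared gradient norms. A minimal unconstrained, noiseless sketch (the paper's constrained weakly convex setting and the AGD template are not reproduced; the quadratic objective is an arbitrary illustration):

```python
import numpy as np

def adagrad_norm(grad, x0, eta=2.0, b0=1e-8, iters=2000):
    """AdaGrad-Norm: a single adaptive scalar step size eta / sqrt(b0^2 + sum ||g_t||^2),
    requiring no knowledge of smoothness or gradient-bound constants."""
    x = np.asarray(x0, dtype=float)
    acc = b0 ** 2
    for _ in range(iters):
        g = grad(x)
        acc += float(np.dot(g, g))          # accumulate squared gradient norms
        x = x - (eta / np.sqrt(acc)) * g    # scalar adaptive step
    return x

# Minimize f(x) = 0.5 * ||x - 1||^2, whose gradient is x - 1.
x_star = adagrad_norm(lambda x: x - 1.0, np.zeros(3))
```

Note that only gradient norms are accumulated, not per-coordinate squares as in full AdaGrad; this is what makes the method amenable to norm-based high-probability analyses.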
Fourier phase retrieval, which aims to reconstruct a signal from its Fourier magnitude, is of fundamental importance in the fields of engineering and science. In this paper, we provide a theoretical understanding of algorithms for the one-dimensional Fourier phase retrieval problem. Specifically, we demonstrate that if an algorithm exists which can reconstruct an arbitrary signal in time to reach ϵ-precision from the magnitude of its discrete Fourier transform and its initial value , then . This partially elucidates the phenomenon that, although almost all signals are uniquely determined by their Fourier magnitude and the absolute value of their initial value , no algorithm with theoretical guarantees has been proposed in the last few decades. Our proofs employ the result from computational complexity theory that the Product Partition problem is NP-complete in the strong sense.
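The hardness result stands in contrast to how easy it is to exhibit distinct signals sharing a Fourier magnitude: the classical time-reversal ambiguity, for instance, can be checked in a few lines (the sample signal is arbitrary).

```python
import numpy as np

x = np.array([1.0, 2.0, -0.5, 3.0])
y = np.roll(x[::-1], 1)            # y[n] = x[(-n) mod N]: circular time reversal
# For real x, the DFT of y is the complex conjugate of the DFT of x,
# so the two distinct signals have identical Fourier magnitudes.
mx = np.abs(np.fft.fft(x))
my = np.abs(np.fft.fft(y))
```

Uniqueness results for phase retrieval are therefore always stated modulo such trivial ambiguities (or with side information such as the initial value), which is what makes the gap between uniqueness and efficient reconstruction so striking.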
The usual univariate interpolation problem of finding a monic polynomial f of degree n that interpolates n given values is well understood. This paper studies a variant where f is required to be composite, say, a composition of two polynomials of degrees d and e, respectively, with , and with given values. Some special cases are easy to solve, and for the general case, we construct a homotopy between it and a special case. We compute a geometric solution of the algebraic curve representing this homotopy, which also answers the interpolation task. The computing time is polynomial in the geometric data of this curve, such as its degree. A consequence is that, for almost all inputs, a decomposable interpolation polynomial exists.
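The basic object, a polynomial that is a composition g∘h of components of degrees d and e, can be formed mechanically by Horner's rule over the outer coefficients; a small sketch (the example polynomials are arbitrary, not from the paper):

```python
import numpy as np
from numpy.polynomial import Polynomial

g = Polynomial([1.0, 0.0, 1.0])        # g(x) = x^2 + 1,   degree d = 2
h = Polynomial([0.0, 2.0, 0.0, 1.0])   # h(x) = x^3 + 2x,  degree e = 3

# Compose f = g∘h via Horner's rule over g's coefficients.
f = Polynomial([g.coef[-1]])
for c in g.coef[-2::-1]:
    f = f * h + c

# deg f = d * e, and f interpolates the values g(h(t)) at any points t.
```

The interpolation variant runs in the opposite direction: given the values of f at prescribed points, find components g and h of the prescribed degrees, which is what the homotopy construction solves.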

