Pub Date : 2024-03-18 | DOI: 10.1016/j.jco.2024.101851
Robert J. Kunsch
We study the numerical integration of functions from isotropic Sobolev spaces W_p^s([0,1]^d) using finitely many function evaluations within randomized algorithms, aiming for the smallest possible probabilistic error guarantee ε > 0 at confidence level 1 − δ ∈ (0, 1). For spaces consisting of continuous functions, non-linear Monte Carlo methods with optimal confidence properties were already known, in a few cases even linear methods that succeed in that respect. In this paper we promote a method called stratified control variates (SCV) and use it to show that linear methods already achieve optimal probabilistic error rates in the high smoothness regime, without the need to adjust algorithmic parameters to the uncertainty δ. We also analyse a version of SCV in the low smoothness regime, where W_p^s([0,1]^d) may contain functions with singularities. Here, we observe a polynomial dependence of the error on δ^{-1}, in contrast to the logarithmic dependence in the high smoothness regime.
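As a concrete illustration of the SCV template, here is a minimal one-dimensional Python sketch (not the paper's construction, which matches the control variate to the smoothness s and works on [0,1]^d): a simple local interpolant is integrated exactly in each stratum, and the random samples are spent only on the residual.

```python
import numpy as np

def scv_quadrature(f, m, rng=None):
    """Stratified control variates on [0,1] (illustrative sketch).

    [0,1] is split into m equal strata. In each stratum the linear
    interpolant of f at the stratum endpoints serves as control variate:
    its integral is known exactly (trapezoidal value), and only the
    residual f - interpolant is sampled, once per stratum.
    f must accept NumPy arrays.
    """
    rng = np.random.default_rng(rng)
    edges = np.linspace(0.0, 1.0, m + 1)
    a, b = edges[:-1], edges[1:]
    h = 1.0 / m
    fa, fb = f(a), f(b)
    exact_part = 0.5 * (fa + fb) * h             # exact integrals of the controls
    u = a + h * rng.random(m)                    # one uniform sample per stratum
    control_at_u = fa + (fb - fa) * (u - a) / h  # interpolant evaluated at u
    residual_part = h * (f(u) - control_at_u)    # unbiased Monte Carlo correction
    return float(np.sum(exact_part + residual_part))

print(scv_quadrature(np.sin, m=64))  # close to 1 - cos(1) ≈ 0.4597
```

Since the residual f minus its interpolant is small for smooth f, the variance of the estimator, and hence the error at any fixed confidence level, decays much faster than for plain Monte Carlo with the same number of evaluations.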
{"title":"Linear Monte Carlo quadrature with optimal confidence intervals","authors":"Robert J. Kunsch","doi":"10.1016/j.jco.2024.101851","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101851","url":null,"abstract":"<div><p>We study the numerical integration of functions from isotropic Sobolev spaces <span><math><msubsup><mrow><mi>W</mi></mrow><mrow><mi>p</mi></mrow><mrow><mi>s</mi></mrow></msubsup><mo>(</mo><msup><mrow><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>]</mo></mrow><mrow><mi>d</mi></mrow></msup><mo>)</mo></math></span> using finitely many function evaluations within randomized algorithms, aiming for the smallest possible probabilistic error guarantee <span><math><mi>ε</mi><mo>></mo><mn>0</mn></math></span> at confidence level <span><math><mn>1</mn><mo>−</mo><mi>δ</mi><mo>∈</mo><mo>(</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>)</mo></math></span>. For spaces consisting of continuous functions, non-linear Monte Carlo methods with optimal confidence properties have already been known, in few cases even linear methods that succeed in that respect. In this paper we promote a method called <em>stratified control variates</em> (SCV) and by it show that already linear methods achieve optimal probabilistic error rates in the high smoothness regime without the need to adjust algorithmic parameters to the uncertainty <em>δ</em>. We also analyse a version of SCV in the low smoothness regime where <span><math><msubsup><mrow><mi>W</mi></mrow><mrow><mi>p</mi></mrow><mrow><mi>s</mi></mrow></msubsup><mo>(</mo><msup><mrow><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>]</mo></mrow><mrow><mi>d</mi></mrow></msup><mo>)</mo></math></span> may contain functions with singularities. Here, we observe a polynomial dependence of the error on <span><math><msup><mrow><mi>δ</mi></mrow><mrow><mo>−</mo><mn>1</mn></mrow></msup></math></span> in contrast to the logarithmic dependence in the high smoothness regime.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"83 ","pages":"Article 101851"},"PeriodicalIF":1.7,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000281/pdfft?md5=6b29dfcc17f60ddbf6ee19d289e21700&pid=1-s2.0-S0885064X24000281-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140180612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-03-16 | DOI: 10.1016/j.jco.2024.101852
François Clément , Carola Doerr , Luís Paquete
Building upon the exact methods presented in our earlier work (2022) [5], we introduce a heuristic approach for the star discrepancy subset selection problem. The heuristic gradually improves the current-best subset by replacing one of its elements at a time. While it does not necessarily return an optimal solution, we obtain promising results for all tested dimensions. For example, for moderate sizes 30 ≤ n ≤ 240, we obtain point sets in dimension 6 with L∞ star discrepancy up to 35% better than that of the first n points of the Sobol' sequence. Our heuristic works in all dimensions, the main limitation being the precision of the discrepancy calculation algorithms. We provide a comparison with an energy functional introduced by Steinerberger (2019) [31], showing that our heuristic performs better on all tested instances. Finally, our results give further empirical information on inverse star discrepancy conjectures.
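The element-replacement loop can be sketched in a few lines of Python. The sketch below mirrors the one-swap-at-a-time description above but, for brevity, scores subsets with a crude random-test-box proxy instead of the exact L∞ star discrepancy computations used in the paper; all parameters are illustrative.

```python
import numpy as np

def star_discrepancy_proxy(P, boxes):
    """Lower bound on the L_inf star discrepancy of points P in [0,1]^d,
    evaluated only on a fixed family of anchored test boxes [0, b)."""
    vols = boxes.prod(axis=1)
    counts = (P[None, :, :] < boxes[:, None, :]).all(axis=2).sum(axis=1)
    return np.abs(counts / len(P) - vols).max()

def swap_heuristic(points, n, iters=5000, rng=None):
    """Keep a size-n subset of `points`; accept any single-element swap
    with an outside point that lowers the (proxy) discrepancy."""
    rng = np.random.default_rng(rng)
    boxes = rng.random((512, points.shape[1]))   # fixed random test boxes
    idx = rng.choice(len(points), n, replace=False)
    best = star_discrepancy_proxy(points[idx], boxes)
    for _ in range(iters):
        i, j = rng.integers(n), rng.integers(len(points))
        if j in idx:
            continue
        trial = idx.copy()
        trial[i] = j                             # swap one element at a time
        d = star_discrepancy_proxy(points[trial], boxes)
        if d < best:
            idx, best = trial, d
    return points[idx], best
```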
{"title":"Heuristic approaches to obtain low-discrepancy point sets via subset selection","authors":"François Clément , Carola Doerr , Luís Paquete","doi":"10.1016/j.jco.2024.101852","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101852","url":null,"abstract":"<div><p>Building upon the exact methods presented in our earlier work (2022) <span>[5]</span>, we introduce a heuristic approach for the star discrepancy subset selection problem. The heuristic gradually improves the current-best subset by replacing one of its elements at a time. While it does not necessarily return an optimal solution, we obtain promising results for all tested dimensions. For example, for moderate sizes <span><math><mn>30</mn><mo>≤</mo><mi>n</mi><mo>≤</mo><mn>240</mn></math></span>, we obtain point sets in dimension 6 with <span><math><msub><mrow><mi>L</mi></mrow><mrow><mo>∞</mo></mrow></msub></math></span> star discrepancy up to 35% better than that of the first <em>n</em> points of the Sobol' sequence. Our heuristic works in all dimensions, the main limitation being the precision of the discrepancy calculation algorithms. We provide a comparison with an energy functional introduced by Steinerberger (2019) <span>[31]</span>, showing that our heuristic performs better on all tested instances. Finally, our results give further empirical information on inverse star discrepancy conjectures.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"83 ","pages":"Article 101852"},"PeriodicalIF":1.7,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000293/pdfft?md5=026f36f25d20579c91a0fc64a95356e5&pid=1-s2.0-S0885064X24000293-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140190621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-03-13 | DOI: 10.1016/j.jco.2024.101842
Chenxu Pang , Xiaojie Wang , Yue Wu
This article investigates the weak approximation of the invariant measure of semi-linear stochastic differential equations (SDEs) with non-globally Lipschitz coefficients. For this purpose, we propose a linear-theta-projected Euler (LTPE) scheme, which also admits an invariant measure, to handle the potential influence of the linear stiffness. Under certain assumptions, both the SDE and the corresponding LTPE method are shown to converge exponentially to their respective invariant measures. Moreover, with time-independent regularity estimates for the corresponding Kolmogorov equation, the weak error between the numerical invariant measure and the original one can be guaranteed with convergence of order one. In terms of computational complexity, the proposed ergodicity-preserving scheme, with the nonlinearity treated explicitly, has a significant advantage over the ergodicity-preserving implicit Euler method in the literature. Numerical experiments are provided to verify our theoretical findings.
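As a rough illustration of the linearly implicit idea, the step below treats the stiff linear part implicitly (one linear solve) and the nonlinearity explicitly, after projecting the state onto a ball of radius h^{-alpha}. This is a generic sketch; the precise projection, theta and parameter choices of the LTPE scheme are those of the paper, and everything here is a placeholder.

```python
import numpy as np

def ltpe_like_step(x, h, A, f, g, theta, alpha, rng):
    """One step of a generic linearly implicit (theta) projected Euler
    scheme for dX = (A X + f(X)) dt + g(X) dW.  Illustrative only."""
    d = len(x)
    radius = h ** (-alpha)                                  # growing projection ball
    nx = np.linalg.norm(x)
    xp = x if nx <= radius else x * (radius / nx)           # project the state
    dw = np.sqrt(h) * rng.standard_normal(d)
    rhs = xp + h * ((1.0 - theta) * (A @ xp) + f(xp)) + g(xp) * dw
    return np.linalg.solve(np.eye(d) - h * theta * A, rhs)  # implicit linear part
```

Treating only the linear part implicitly keeps the cost per step at one linear solve with the fixed matrix I − hθA, which is the complexity advantage over fully implicit ergodic schemes mentioned above.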
{"title":"Linear implicit approximations of invariant measures of semi-linear SDEs with non-globally Lipschitz coefficients","authors":"Chenxu Pang , Xiaojie Wang , Yue Wu","doi":"10.1016/j.jco.2024.101842","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101842","url":null,"abstract":"<div><p>This article investigates the weak approximation towards the invariant measure of semi-linear stochastic differential equations (SDEs) under non-globally Lipschitz coefficients. For this purpose, we propose a linear-theta-projected Euler (LTPE) scheme, which also admits an invariant measure, to handle the potential influence of the linear stiffness. Under certain assumptions, both the SDE and the corresponding LTPE method are shown to converge exponentially to the underlying invariant measures, respectively. Moreover, with time-independent regularity estimates for the corresponding Kolmogorov equation, the weak error between the numerical invariant measure and the original one can be guaranteed with convergence of order one. In terms of computational complexity, the proposed ergodicity preserving scheme with the nonlinearity explicitly treated has a significant advantage over the ergodicity preserving implicit Euler method in the literature. Numerical experiments are provided to verify our theoretical findings.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"83 ","pages":"Article 101842"},"PeriodicalIF":1.7,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000190/pdfft?md5=196a33f1ce0b753c885d6d05ad1d70a4&pid=1-s2.0-S0885064X24000190-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140188091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-03-13 | DOI: 10.1016/j.jco.2024.101840
David Krieg , Peter Kritzer
We consider linear problems in the worst-case setting. That is, given a linear operator and a pool of admissible linear measurements, we want to approximate the operator uniformly on a convex and balanced set by means of algorithms using at most n such measurements. It is known that, in general, linear algorithms do not yield an optimal approximation. However, as we show here, an optimal approximation can always be obtained with a homogeneous algorithm. This is of interest for two reasons. First, the homogeneity allows us to extend any error bound on the unit ball to the full input space. Second, homogeneous algorithms are better suited to tackle problems on cones, a scenario far less understood than the classical situation of balls. We use the optimality of homogeneous algorithms to prove solvability for a family of problems defined on cones. We illustrate our results by several examples.
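For orientation, homogeneity of an algorithm A_n means A_n(λf) = λA_n(f) for all scalars λ. Assuming the solution operator S is linear and the input set F is the unit ball of a norm ‖·‖, the first point above is the following one-line computation (for f ≠ 0):

```latex
\| S(f) - A_n(f) \|
  = \|f\| \, \Big\| S\big(\tfrac{f}{\|f\|}\big) - A_n\big(\tfrac{f}{\|f\|}\big) \Big\|
  \le \|f\| \sup_{g \in F} \| S(g) - A_n(g) \| .
```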
{"title":"Homogeneous algorithms and solvable problems on cones","authors":"David Krieg , Peter Kritzer","doi":"10.1016/j.jco.2024.101840","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101840","url":null,"abstract":"<div><p>We consider linear problems in the worst-case setting. That is, given a linear operator and a pool of admissible linear measurements, we want to approximate the operator uniformly on a convex and balanced set by means of algorithms using at most <em>n</em> such measurements. It is known that, in general, linear algorithms do not yield an optimal approximation. However, as we show here, an optimal approximation can always be obtained with a homogeneous algorithm. This is of interest for two reasons. First, the homogeneity allows us to extend any error bound on the unit ball to the full input space. Second, homogeneous algorithms are better suited to tackle problems on cones, a scenario far less understood than the classical situation of balls. We use the optimality of homogeneous algorithms to prove solvability for a family of problems defined on cones. We illustrate our results by several examples.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"83 ","pages":"Article 101840"},"PeriodicalIF":1.7,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000177/pdfft?md5=a93de3c8d5e250c4bbebc0c932ec7f46&pid=1-s2.0-S0885064X24000177-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140180611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-03-08 | DOI: 10.1016/j.jco.2024.101841
Simon Foucart , Chunyang Liao
For objects belonging to a known model set and observed through a prescribed linear process, we aim at determining methods to recover linear quantities of these objects that are optimal from a worst-case perspective. Working in a Hilbert setting, we show that, if the model set is the intersection of two hyperellipsoids centered at the origin, then there is an optimal recovery method which is linear. It is specifically given by a constrained regularization procedure whose parameters can be precomputed by semidefinite programming. This general framework can be applied to several scenarios, including the two-space problem and problems involving ℓ2-inaccurate data. It can also be applied to the problem of recovery from ℓ1-inaccurate data. For the latter, we reach the conclusion that there exists an optimal recovery method which is linear, again given by constrained regularization, under a computationally verifiable sufficient condition.
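In finite dimensions, the constrained regularization map has a short linear-algebra sketch. The weights a, b below stand in for the parameters that the paper precomputes by semidefinite programming, and exact data y = Lf is assumed for simplicity, so this illustrates the shape of the recovery map rather than the paper's procedure.

```python
import numpy as np

def constrained_regularization(L, R1, R2, y, a, b):
    """Among all f consistent with the data y = L f, return the minimizer
    of a*||R1 f||^2 + b*||R2 f||^2 (assumes a*R1'R1 + b*R2'R2 is positive
    definite).  Solved via the KKT conditions of the quadratic program."""
    M = a * R1.T @ R1 + b * R2.T @ R2
    Minv_LT = np.linalg.solve(M, L.T)
    return Minv_LT @ np.linalg.solve(L @ Minv_LT, y)
```

A linear quantity Q(f) is then recovered by applying Q to this output, which is linear in y, matching the linearity statement above.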
{"title":"Radius of information for two intersected centered hyperellipsoids and implications in optimal recovery from inaccurate data","authors":"Simon Foucart , Chunyang Liao","doi":"10.1016/j.jco.2024.101841","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101841","url":null,"abstract":"<div><p>For objects belonging to a known model set and observed through a prescribed linear process, we aim at determining methods to recover linear quantities of these objects that are optimal from a worst-case perspective. Working in a Hilbert setting, we show that, if the model set is the intersection of two hyperellipsoids centered at the origin, then there is an optimal recovery method which is linear. It is specifically given by a constrained regularization procedure whose parameters can be precomputed by semidefinite programming. This general framework can be applied to several scenarios, including the two-space problem and problems involving <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>2</mn></mrow></msub></math></span>-inaccurate data. It can also be applied to the problem of recovery from <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span>-inaccurate data. For the latter, we reach the conclusion of existence of an optimal recovery method which is linear, again given by constrained regularization, under a computationally verifiable sufficient condition.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"83 ","pages":"Article 101841"},"PeriodicalIF":1.7,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140123263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-09 | DOI: 10.1016/j.jco.2024.101839
Markus Bachmayr, Manfred Faldum
An adaptive method for parabolic partial differential equations that combines sparse wavelet expansions in time with adaptive low-rank approximations in the spatial variables is constructed and analyzed. The method is shown to converge and to satisfy complexity bounds similar to those of existing adaptive low-rank methods for elliptic problems, establishing its suitability for parabolic problems on high-dimensional spatial domains. The construction also yields computable rigorous a posteriori error bounds for such problems. The results are illustrated by numerical experiments.
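The elementary spatial ingredient, recompressing a (matricized) coefficient tensor to the smallest rank that meets a prescribed tolerance, can be sketched briefly; the paper's method couples such truncations with sparse wavelet expansions in time and a posteriori bounds, none of which this fragment attempts.

```python
import numpy as np

def truncate_rank(U, tol):
    """Return low-rank factors of the best approximation of U whose
    Frobenius-norm error does not exceed tol (via truncated SVD)."""
    u, s, vt = np.linalg.svd(U, full_matrices=False)
    tail = np.sqrt(np.cumsum((s ** 2)[::-1]))[::-1]  # tail[r] = ||s[r:]||_2
    r = int((tail > tol).sum())                      # smallest admissible rank
    return u[:, :r] * s[:r], vt[:r]
```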
{"title":"A space-time adaptive low-rank method for high-dimensional parabolic partial differential equations","authors":"Markus Bachmayr, Manfred Faldum","doi":"10.1016/j.jco.2024.101839","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101839","url":null,"abstract":"<div><p>An adaptive method for parabolic partial differential equations that combines sparse wavelet expansions in time with adaptive low-rank approximations in the spatial variables is constructed and analyzed. The method is shown to converge and satisfy similar complexity bounds as existing adaptive low-rank methods for elliptic problems, establishing its suitability for parabolic problems on high-dimensional spatial domains. The construction also yields computable rigorous a posteriori error bounds for such problems. The results are illustrated by numerical experiments.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101839"},"PeriodicalIF":1.7,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000165/pdfft?md5=4d8d034eef881c11a8710e5ae9111cdb&pid=1-s2.0-S0885064X24000165-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139737552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-07 | DOI: 10.1016/j.jco.2024.101838
A.A. Khartov , I.A. Limar
We consider a problem of approximation of d-variate functions defined on R^d which belong to the Hilbert space with tensor product-type reproducing Gaussian kernel with constant shape parameter. Within the worst case setting, we investigate the growth of the information complexity as d → ∞. The asymptotics are obtained for the case of a fixed error threshold and for the case when it goes to zero as d → ∞.
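The kernel meant here is, in the usual convention of this line of work, the tensor-product Gaussian kernel with a shape parameter γ > 0 that does not vary with the coordinate:

```latex
K_d(x, y) = \exp\!\big(-\gamma^2 \|x - y\|_2^2\big)
          = \prod_{j=1}^{d} \exp\!\big(-\gamma^2 (x_j - y_j)^2\big),
          \qquad x, y \in \mathbb{R}^d .
```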
{"title":"Asymptotic analysis in multivariate worst case approximation with Gaussian kernels","authors":"A.A. Khartov , I.A. Limar","doi":"10.1016/j.jco.2024.101838","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101838","url":null,"abstract":"<div><p>We consider a problem of approximation of <em>d</em>-variate functions defined on <span><math><msup><mrow><mi>R</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span> which belong to the Hilbert space with tensor product-type reproducing Gaussian kernel with constant shape parameter. Within worst case setting, we investigate the growth of the information complexity as <span><math><mi>d</mi><mo>→</mo><mo>∞</mo></math></span>. The asymptotics are obtained for the case of fixed error threshold and for the case when it goes to zero as <span><math><mi>d</mi><mo>→</mo><mo>∞</mo></math></span>.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101838"},"PeriodicalIF":1.7,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139714084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-01-30 | DOI: 10.1016/j.jco.2024.101834
Erich Novak
{"title":"Thomas Jahn, Tino Ullrich and Felix Voigtlaender are the Winners of the 2023 Best Paper Award of the Journal of Complexity","authors":"Erich Novak","doi":"10.1016/j.jco.2024.101834","DOIUrl":"10.1016/j.jco.2024.101834","url":null,"abstract":"","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101834"},"PeriodicalIF":1.7,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000116/pdfft?md5=1582996b1025acb2a96c8e9b4945e0ef&pid=1-s2.0-S0885064X24000116-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139647728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-01-18 | DOI: 10.1016/j.jco.2024.101833
Minh-Thang Do , Hoang-Long Ngo , Nhat-An Pho
In this paper, we consider stochastic differential equations whose drift coefficient is superlinearly growing and piecewise continuous, and whose diffusion coefficient is superlinearly growing and locally Hölder continuous. We first prove the existence and uniqueness of the solution to such stochastic differential equations and then propose a tamed-adaptive Euler-Maruyama approximation scheme. We study the rate of convergence in L^1-norm of the scheme in both finite and infinite time intervals.
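A generic sketch of the two devices named in the title, taming and step-size adaptivity, is given below; the concrete taming factor and adaptive rule of the paper differ, so both are placeholders chosen only to show the mechanism.

```python
import numpy as np

def tamed_adaptive_em(x0, b, sigma, T, h0, rng=None):
    """Scalar tamed Euler-Maruyama path on [0, T]: the drift increment is
    divided by 1 + h|b(x)| (taming), and the step shrinks where |x| is
    large (adaptivity).  Both rules are illustrative placeholders."""
    rng = np.random.default_rng(rng)
    t, x = 0.0, float(x0)
    while t < T:
        h = min(h0 / (1.0 + x * x), T - t)          # placeholder adaptive rule
        dw = np.sqrt(h) * rng.standard_normal()
        x += h * b(x) / (1.0 + h * abs(b(x))) + sigma(x) * dw
        t += h
    return x

# Superlinear drift with a locally Hölder continuous diffusion:
print(tamed_adaptive_em(1.0, lambda x: x - x ** 3,
                        lambda x: abs(x) ** 0.75, T=1.0, h0=1e-2, rng=0))
```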
{"title":"Tamed-adaptive Euler-Maruyama approximation for SDEs with superlinearly growing and piecewise continuous drift, superlinearly growing and locally Hölder continuous diffusion","authors":"Minh-Thang Do , Hoang-Long Ngo , Nhat-An Pho","doi":"10.1016/j.jco.2024.101833","DOIUrl":"10.1016/j.jco.2024.101833","url":null,"abstract":"<div><p><span>In this paper, we consider stochastic differential equations<span> whose drift coefficient is superlinearly growing and piecewise continuous, and whose diffusion coefficient is superlinearly growing and locally Hölder continuous. We first prove the existence and uniqueness of solution to such stochastic differential equations and then propose a tamed-adaptive Euler-Maruyama approximation scheme. We study the rate of convergence in </span></span><span><math><msup><mrow><mi>L</mi></mrow><mrow><mn>1</mn></mrow></msup></math></span><span>-norm of the scheme in both finite and infinite time intervals.</span></p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101833"},"PeriodicalIF":1.7,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139508603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-01-09 | DOI: 10.1016/j.jco.2024.101825
Yuan Mao, Zheng-Chu Guo
In recent years, functional linear models have attracted growing attention in statistics and machine learning for recovering the slope function or its functional predictor. This paper considers an online regularized learning algorithm for functional linear models in a reproducing kernel Hilbert space. It provides convergence analysis of the excess prediction error and the estimation error with polynomially decaying step-size and constant step-size, respectively. Fast convergence rates can be derived via a capacity-dependent analysis. Introducing an explicit regularization term extends the saturation boundary of unregularized online learning algorithms with polynomially decaying step-size and achieves fast convergence rates of the estimation error without a capacity assumption. In contrast, the latter remains an open problem for the unregularized online learning algorithm with decaying step-size. This paper also demonstrates convergence rates of both prediction and estimation error with constant step-size that are competitive with the existing literature.
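For orientation, online regularized learning in an RKHS admits a compact sketch; the kernel, step-size schedule and regularization parameter below are illustrative choices, not those analysed in the paper.

```python
import numpy as np

def gaussian_k(u, v, s=0.5):
    return float(np.exp(-np.sum((np.asarray(u) - np.asarray(v)) ** 2) / (2 * s * s)))

def online_regularized(stream, eta, lam, k=gaussian_k):
    """Online regularized least squares in an RKHS:
        f_{t+1} = f_t - eta_t [ (f_t(x_t) - y_t) k(x_t, .) + lam f_t ],
    with f_t stored through its kernel expansion f_t = sum_i c_i k(x_i, .).
    `eta` may be a constant or a callable t -> eta_t (decaying schedule)."""
    xs, cs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        e = eta(t) if callable(eta) else eta
        fx = sum(c * k(xi, x) for xi, c in zip(xs, cs))  # evaluate f_t(x_t)
        cs = [c * (1.0 - e * lam) for c in cs]           # shrinkage from -eta*lam*f_t
        xs.append(x)
        cs.append(-e * (fx - y))                         # new expansion coefficient
    return xs, cs
```

Passing eta=lambda t: t ** -0.5 gives a polynomially decaying schedule, while a fixed float matches the constant step-size analysis.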
{"title":"Online regularized learning algorithm for functional data","authors":"Yuan Mao, Zheng-Chu Guo","doi":"10.1016/j.jco.2024.101825","DOIUrl":"10.1016/j.jco.2024.101825","url":null,"abstract":"<div><p>In recent years, functional linear models have attracted growing attention in statistics<span> and machine learning for recovering the slope function or its functional predictor. This paper considers online regularized learning algorithm for functional linear models in a reproducing kernel Hilbert space<span>. It provides convergence analysis of excess prediction error and estimation error with polynomially decaying step-size and constant step-size, respectively. Fast convergence rates can be derived via a capacity dependent analysis. Introducing an explicit regularization term extends the saturation boundary of unregularized online learning algorithms with polynomially decaying step-size and achieves fast convergence rates of estimation error without capacity assumption. In contrast, the latter remains an open problem for the unregularized online learning algorithm with decaying step-size. This paper also demonstrates competitive convergence rates of both prediction error and estimation error with constant step-size compared to existing literature.</span></span></p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101825"},"PeriodicalIF":1.7,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139411895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}