Pub Date : 2024-02-09DOI: 10.1016/j.jco.2024.101839
Markus Bachmayr, Manfred Faldum
An adaptive method for parabolic partial differential equations that combines sparse wavelet expansions in time with adaptive low-rank approximations in the spatial variables is constructed and analyzed. The method is shown to converge and to satisfy complexity bounds similar to those of existing adaptive low-rank methods for elliptic problems, establishing its suitability for parabolic problems on high-dimensional spatial domains. The construction also yields computable rigorous a posteriori error bounds for such problems. The results are illustrated by numerical experiments.
{"title":"A space-time adaptive low-rank method for high-dimensional parabolic partial differential equations","authors":"Markus Bachmayr, Manfred Faldum","doi":"10.1016/j.jco.2024.101839","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101839","url":null,"abstract":"<div><p>An adaptive method for parabolic partial differential equations that combines sparse wavelet expansions in time with adaptive low-rank approximations in the spatial variables is constructed and analyzed. The method is shown to converge and satisfy similar complexity bounds as existing adaptive low-rank methods for elliptic problems, establishing its suitability for parabolic problems on high-dimensional spatial domains. The construction also yields computable rigorous a posteriori error bounds for such problems. The results are illustrated by numerical experiments.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101839"},"PeriodicalIF":1.7,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000165/pdfft?md5=4d8d034eef881c11a8710e5ae9111cdb&pid=1-s2.0-S0885064X24000165-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139737552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
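The low-rank spatial approximations underlying such methods can be illustrated, in the simplest two-dimensional case, by truncated singular value decomposition. The sketch below is not the paper's adaptive algorithm; it only shows the basic building block of replacing a matrix by its best rank-k approximation.

```python
import numpy as np

def truncate_svd(A, rank):
    """Best rank-`rank` approximation of A (Eckart-Young) via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

rng = np.random.default_rng(0)
# A matrix that is exactly rank 3: a sum of three outer products.
A = sum(np.outer(rng.standard_normal(50), rng.standard_normal(50)) for _ in range(3))
A3 = truncate_svd(A, 3)
print(np.linalg.norm(A - A3))  # essentially zero: rank-3 truncation is exact here
```

By Eckart-Young, the truncation error is optimal among all rank-k matrices and decreases monotonically in the rank, which is what makes rank a useful adaptive refinement parameter.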
Pub Date : 2024-02-07DOI: 10.1016/j.jco.2024.101838
A.A. Khartov , I.A. Limar
We consider a problem of approximation of d-variate functions defined on ℝ^d which belong to the Hilbert space with tensor product-type reproducing Gaussian kernel with constant shape parameter. Within the worst case setting, we investigate the growth of the information complexity as d → ∞. The asymptotics are obtained for the case of a fixed error threshold and for the case when it goes to zero as d → ∞.
{"title":"Asymptotic analysis in multivariate worst case approximation with Gaussian kernels","authors":"A.A. Khartov , I.A. Limar","doi":"10.1016/j.jco.2024.101838","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101838","url":null,"abstract":"<div><p>We consider a problem of approximation of <em>d</em>-variate functions defined on <span><math><msup><mrow><mi>R</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span> which belong to the Hilbert space with tensor product-type reproducing Gaussian kernel with constant shape parameter. Within worst case setting, we investigate the growth of the information complexity as <span><math><mi>d</mi><mo>→</mo><mo>∞</mo></math></span>. The asymptotics are obtained for the case of fixed error threshold and for the case when it goes to zero as <span><math><mi>d</mi><mo>→</mo><mo>∞</mo></math></span>.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101838"},"PeriodicalIF":1.7,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139714084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
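For orientation, the setting involves the tensor-product Gaussian kernel K(x, y) = ∏_j exp(−γ(x_j − y_j)²) with a constant shape parameter γ. The following illustrative sketch (not from the paper) builds this kernel and performs plain kernel interpolation of a d-variate function from n samples; the function `gaussian_kernel` and the jitter term are our own choices.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """Tensor-product Gaussian kernel: exp(-gamma * ||x - y||^2) between rows."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
d, n = 3, 40
X = rng.standard_normal((n, d))           # sample points in R^d
f = lambda x: np.sin(x.sum(axis=-1))      # toy target function
# Kernel interpolation: solve (K + jitter*I) alpha = f(X).
alpha = np.linalg.solve(gaussian_kernel(X, X) + 1e-10 * np.eye(n), f(X))

x_test = rng.standard_normal((5, d))
pred = gaussian_kernel(x_test, X) @ alpha
print(np.max(np.abs(pred - f(x_test))))   # approximation error at test points
```

The information complexity studied in the paper asks how the number n of such function values must grow with d to keep the worst case error below a threshold.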
Pub Date : 2024-01-30DOI: 10.1016/j.jco.2024.101834
Erich Novak
{"title":"Thomas Jahn, Tino Ullrich and Felix Voigtlaender are the Winners of the 2023 Best Paper Award of the Journal of Complexity","authors":"Erich Novak","doi":"10.1016/j.jco.2024.101834","DOIUrl":"10.1016/j.jco.2024.101834","url":null,"abstract":"","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101834"},"PeriodicalIF":1.7,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000116/pdfft?md5=1582996b1025acb2a96c8e9b4945e0ef&pid=1-s2.0-S0885064X24000116-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139647728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-01-18DOI: 10.1016/j.jco.2024.101833
Minh-Thang Do , Hoang-Long Ngo , Nhat-An Pho
In this paper, we consider stochastic differential equations whose drift coefficient is superlinearly growing and piecewise continuous, and whose diffusion coefficient is superlinearly growing and locally Hölder continuous. We first prove the existence and uniqueness of a solution to such stochastic differential equations and then propose a tamed-adaptive Euler-Maruyama approximation scheme. We study the rate of convergence in the L¹-norm of the scheme in both finite and infinite time intervals.
{"title":"Tamed-adaptive Euler-Maruyama approximation for SDEs with superlinearly growing and piecewise continuous drift, superlinearly growing and locally Hölder continuous diffusion","authors":"Minh-Thang Do , Hoang-Long Ngo , Nhat-An Pho","doi":"10.1016/j.jco.2024.101833","DOIUrl":"10.1016/j.jco.2024.101833","url":null,"abstract":"<div><p><span>In this paper, we consider stochastic differential equations<span> whose drift coefficient is superlinearly growing and piecewise continuous, and whose diffusion coefficient is superlinearly growing and locally Hölder continuous. We first prove the existence and uniqueness of solution to such stochastic differential equations and then propose a tamed-adaptive Euler-Maruyama approximation scheme. We study the rate of convergence in </span></span><span><math><msup><mrow><mi>L</mi></mrow><mrow><mn>1</mn></mrow></msup></math></span><span>-norm of the scheme in both finite and infinite time intervals.</span></p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101833"},"PeriodicalIF":1.7,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139508603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
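The taming idea can be sketched in a few lines. The code below implements the classical tamed Euler-Maruyama scheme with a fixed step size for the toy SDE dX_t = −X_t³ dt + dW_t (superlinear drift); it is only an illustration of taming, not the paper's tamed-adaptive scheme, which additionally adapts the step size and handles discontinuous drift.

```python
import numpy as np

def tamed_euler_maruyama(x0, T, n, drift, diffusion, rng):
    """Classical tamed Euler-Maruyama with n fixed steps on [0, T]."""
    h = T / n
    x = x0
    for _ in range(n):
        b = drift(x)
        # Taming: divide the drift increment by (1 + h|b|) so that a single
        # step cannot explode even when |b| is very large.
        x = x + h * b / (1.0 + h * abs(b)) + diffusion(x) * np.sqrt(h) * rng.standard_normal()
    return x

rng = np.random.default_rng(2)
samples = [tamed_euler_maruyama(2.0, 1.0, 200, lambda x: -x**3, lambda x: 1.0, rng)
           for _ in range(500)]
print(np.mean(samples), np.std(samples))  # moments stay bounded despite the cubic drift
```

Without taming, the plain Euler-Maruyama scheme is known to have diverging moments for superlinearly growing coefficients, which is what motivates schemes of this type.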
Pub Date : 2024-01-09DOI: 10.1016/j.jco.2024.101825
Yuan Mao, Zheng-Chu Guo
In recent years, functional linear models have attracted growing attention in statistics and machine learning for recovering the slope function or its functional predictor. This paper considers an online regularized learning algorithm for functional linear models in a reproducing kernel Hilbert space. It provides convergence analysis of the excess prediction error and the estimation error with polynomially decaying step-size and constant step-size, respectively. Fast convergence rates can be derived via a capacity-dependent analysis. Introducing an explicit regularization term extends the saturation boundary of unregularized online learning algorithms with polynomially decaying step-size and achieves fast convergence rates of the estimation error without a capacity assumption. In contrast, the latter remains an open problem for the unregularized online learning algorithm with decaying step-size. This paper also demonstrates competitive convergence rates of both prediction error and estimation error with constant step-size compared to the existing literature.
{"title":"Online regularized learning algorithm for functional data","authors":"Yuan Mao, Zheng-Chu Guo","doi":"10.1016/j.jco.2024.101825","DOIUrl":"10.1016/j.jco.2024.101825","url":null,"abstract":"<div><p>In recent years, functional linear models have attracted growing attention in statistics<span> and machine learning for recovering the slope function or its functional predictor. This paper considers online regularized learning algorithm for functional linear models in a reproducing kernel Hilbert space<span>. It provides convergence analysis of excess prediction error and estimation error with polynomially decaying step-size and constant step-size, respectively. Fast convergence rates can be derived via a capacity dependent analysis. Introducing an explicit regularization term extends the saturation boundary of unregularized online learning algorithms with polynomially decaying step-size and achieves fast convergence rates of estimation error without capacity assumption. In contrast, the latter remains an open problem for the unregularized online learning algorithm with decaying step-size. This paper also demonstrates competitive convergence rates of both prediction error and estimation error with constant step-size compared to existing literature.</span></span></p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101825"},"PeriodicalIF":1.7,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139411895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
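A finite-dimensional stand-in for this kind of algorithm is online ridge-regularized least squares: at step t one observes a single pair (x_t, y_t) and takes a gradient step on the regularized pointwise loss with a polynomially decaying step size η_t = η₀ t^(−θ). The sketch below is illustrative only (all parameter values are our own choices, and the RKHS setting of the paper is replaced by ℝ³).

```python
import numpy as np

def online_regularized(stream, dim, lam=1e-2, eta0=0.2, theta=0.5):
    """One pass of online SGD on (w.x - y)^2/2 + lam*|w|^2/2 with decaying steps."""
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        eta = eta0 * t ** (-theta)             # polynomially decaying step size
        grad = (w @ x - y) * x + lam * w       # gradient of the regularized loss
        w = w - eta * grad
    return w

rng = np.random.default_rng(3)
w_true = np.array([1.0, -2.0, 0.5])
stream = []
for _ in range(2000):
    x = rng.standard_normal(3)
    stream.append((x, x @ w_true + 0.1 * rng.standard_normal()))

w_hat = online_regularized(stream, dim=3)
print(np.linalg.norm(w_hat - w_true))  # estimation error after one pass over 2000 samples
```

The explicit `lam * w` term is the regularization whose effect on saturation the paper analyzes; setting `lam=0` recovers the unregularized online algorithm.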
Pub Date : 2024-01-09DOI: 10.1016/j.jco.2024.101826
Ying-Ao Wang , Qin Huang , Zhigang Yao , Ye Zhang
In this paper, a unified study is presented for the design and analysis of a broad class of linear regression methods. The proposed general framework includes the conventional linear regression methods (such as least squares regression and ridge regression) and some new regression methods (e.g., Landweber regression and Showalter regression), which have recently been introduced in the fields of optimization and inverse problems. The strong consistency, the reduced mean squared error, the asymptotic Gaussian property, and the best worst case error of this class of linear regression methods are investigated. Various numerical experiments are performed to demonstrate the consistency and efficiency of the proposed class of methods for linear regression.
{"title":"On a class of linear regression methods","authors":"Ying-Ao Wang , Qin Huang , Zhigang Yao , Ye Zhang","doi":"10.1016/j.jco.2024.101826","DOIUrl":"10.1016/j.jco.2024.101826","url":null,"abstract":"<div><p>In this paper, a unified study is presented for the design and analysis of a broad class of linear regression methods. The proposed general framework includes the conventional linear regression methods (such as the least squares regression and the Ridge regression) and some new regression methods (e.g. the Landweber regression and Showalter regression), which have recently been introduced in the fields of optimization and inverse problems. The strong consistency, the reduced mean squared error, the asymptotic Gaussian property, and the best worst case error of this class of linear regression methods are investigated. Various numerical experiments are performed to demonstrate the consistency and efficiency of the proposed class of methods for linear regression.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101826"},"PeriodicalIF":1.7,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139411985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
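Two members of the class named in the abstract can be sketched side by side: ridge regression, which has a closed form, and Landweber iteration, a gradient-descent-type method in which the iteration count plays the role of the regularization parameter. This is an illustrative sketch under our own toy data, not the paper's unified framework.

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge regression: (X'X + lam*I)^{-1} X'y."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def landweber(X, y, n_iter, step=None):
    """Landweber iteration: gradient descent on |Xw - y|^2 from w = 0."""
    d = X.shape[1]
    if step is None:
        # Any step below 2 / ||X'X|| guarantees convergence to least squares.
        step = 1.0 / np.linalg.norm(X.T @ X, 2)
    w = np.zeros(d)
    for _ in range(n_iter):
        w = w + step * X.T @ (y - X @ w)
    return w

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.05 * rng.standard_normal(100)

w_ls = np.linalg.lstsq(X, y, rcond=None)[0]
w_lw = landweber(X, y, n_iter=5000)
print(np.linalg.norm(w_lw - w_ls))  # Landweber converges to the least squares solution
```

Stopping Landweber early, like increasing `lam` in ridge, trades variance for bias, which is why both fit into one regularization framework.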
Pub Date : 2024-01-06DOI: 10.1016/j.jco.2024.101824
Abhishake Rastogi
In this paper, we study a Tikhonov regularization scheme in Hilbert scales for a nonlinear statistical inverse problem with general noise. The regularizing norm in this scheme is stronger than the norm in the Hilbert space. We focus on developing a theoretical analysis for this scheme based on conditional stability estimates. We utilize the concept of the distance function to establish high probability estimates of the direct and reconstruction errors in the reproducing kernel Hilbert space setting. Furthermore, explicit rates of convergence in terms of the sample size are established for the oversmoothing case and the regular case over the regularity class defined through an appropriate source condition. Our results improve upon and generalize previous results obtained in related settings.
{"title":"Nonlinear Tikhonov regularization in Hilbert scales for inverse learning","authors":"Abhishake Rastogi","doi":"10.1016/j.jco.2024.101824","DOIUrl":"10.1016/j.jco.2024.101824","url":null,"abstract":"<div><p>In this paper, we study Tikhonov regularization scheme in Hilbert scales for a nonlinear statistical inverse problem with general noise. The regularizing norm in this scheme is stronger than the norm in the Hilbert space. We focus on developing a theoretical analysis for this scheme based on conditional stability estimates. We utilize the concept of the distance function to establish high probability estimates of the direct and reconstruction errors in the Reproducing Kernel Hilbert space setting. Furthermore, explicit rates of convergence in terms of sample size are established for the oversmoothing case and the regular case over the regularity class defined through an appropriate source condition. Our results improve upon and generalize previous results obtained in related settings.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101824"},"PeriodicalIF":1.7,"publicationDate":"2024-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000013/pdfft?md5=1a65eb323b09b712bcf07de5eb47b8eb&pid=1-s2.0-S0885064X24000013-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139375520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
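The Hilbert-scale idea, penalizing in a norm stronger than the ambient one, has a simple linear analogue: instead of the standard penalty ‖w‖², use ‖Lw‖² for a differential operator L. The sketch below (a linear toy problem of our own construction, not the paper's nonlinear statistical setting) uses a discrete first-difference operator as L.

```python
import numpy as np

def tikhonov_hilbert_scale(A, y, alpha):
    """Minimize |Aw - y|^2 + alpha*|Lw|^2 with L a first-difference operator."""
    n = A.shape[1]
    # (Lw)_i = w_{i+1} - w_i: penalizes roughness, a stronger (H^1-like) norm.
    L = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
    return np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ y)

rng = np.random.default_rng(5)
n = 60
t = np.linspace(0, 1, n)
# A smoothing (hence mildly ill-posed) forward operator: Gaussian blur.
A = np.array([[np.exp(-80 * (ti - tj) ** 2) for tj in t] for ti in t]) / n
w_true = np.sin(2 * np.pi * t)
y = A @ w_true + 1e-3 * rng.standard_normal(n)

w_rec = tikhonov_hilbert_scale(A, y, alpha=1e-4)
print(np.max(np.abs(w_rec - w_true)))  # reconstruction error
```

Because w_rec is the exact minimizer of the penalized quadratic functional, its objective value can never exceed that of the true signal, which is a useful sanity check on any implementation.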
Pub Date : 2024-01-02DOI: 10.1016/j.jco.2023.101823
Stefan Heinrich
We study the complexity of randomized computation of integrals depending on a parameter, with integrands from Sobolev spaces. That is, for r, d₁, d₂ ∈ ℕ, 1 ≤ p, q ≤ ∞, D₁ = [0,1]^{d₁}, and D₂ = [0,1]^{d₂}, we are given f ∈ W_p^r(D₁ × D₂) and we seek to approximate (Sf)(s) = ∫_{D₂} f(s,t) dt (s ∈ D₁), with error measured in the L_q(D₁)-norm. Information is standard, that is, function values of f. Our results extend previous work of Heinrich and Sindambiwe (1999) [10] for p = q = ∞ and Wiegand (2006) [15] for 1 ≤ p = q < ∞. Wiegand's analysis was carried out under the assumption that W_p^r(D₁ × D₂) is continuously embedded in C(D₁ × D₂) (embedding condition). We also study the case that the embedding condition does not hold. For this purpose a new ingredient is developed – a stochastic discretization
{"title":"Randomized complexity of parametric integration and the role of adaption II. Sobolev spaces","authors":"Stefan Heinrich","doi":"10.1016/j.jco.2023.101823","DOIUrl":"10.1016/j.jco.2023.101823","url":null,"abstract":"<div><p><span>We study the complexity of randomized computation of integrals depending on a parameter, with integrands<span> from Sobolev spaces. That is, for </span></span><span><math><mi>r</mi><mo>,</mo><msub><mrow><mi>d</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>,</mo><msub><mrow><mi>d</mi></mrow><mrow><mn>2</mn></mrow></msub><mo>∈</mo><mi>N</mi></math></span>, <span><math><mn>1</mn><mo>≤</mo><mi>p</mi><mo>,</mo><mi>q</mi><mo>≤</mo><mo>∞</mo></math></span>, <span><math><msub><mrow><mi>D</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>=</mo><msup><mrow><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>]</mo></mrow><mrow><msub><mrow><mi>d</mi></mrow><mrow><mn>1</mn></mrow></msub></mrow></msup></math></span>, and <span><math><msub><mrow><mi>D</mi></mrow><mrow><mn>2</mn></mrow></msub><mo>=</mo><msup><mrow><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>]</mo></mrow><mrow><msub><mrow><mi>d</mi></mrow><mrow><mn>2</mn></mrow></msub></mrow></msup></math></span> we are given <span><math><mi>f</mi><mo>∈</mo><msubsup><mrow><mi>W</mi></mrow><mrow><mi>p</mi></mrow><mrow><mi>r</mi></mrow></msubsup><mo>(</mo><msub><mrow><mi>D</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>×</mo><msub><mrow><mi>D</mi></mrow><mrow><mn>2</mn></mrow></msub><mo>)</mo></math></span> and we seek to approximate<span><span><span><math><mo>(</mo><mi>S</mi><mi>f</mi><mo>)</mo><mo>(</mo><mi>s</mi><mo>)</mo><mo>=</mo><munder><mo>∫</mo><mrow><msub><mrow><mi>D</mi></mrow><mrow><mn>2</mn></mrow></msub></mrow></munder><mi>f</mi><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo><mi>d</mi><mi>t</mi><mspace></mspace><mo>(</mo><mi>s</mi><mo>∈</mo><msub><mrow><mi>D</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>)</mo><mo>,</mo></math></span></span></span> with error measured in the 
<span><math><msub><mrow><mi>L</mi></mrow><mrow><mi>q</mi></mrow></msub><mo>(</mo><msub><mrow><mi>D</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>)</mo></math></span>-norm. Information is standard, that is, function values of <em>f</em>. Our results extend previous work of Heinrich and Sindambiwe (1999) <span>[10]</span> for <span><math><mi>p</mi><mo>=</mo><mi>q</mi><mo>=</mo><mo>∞</mo></math></span> and Wiegand (2006) <span>[15]</span> for <span><math><mn>1</mn><mo>≤</mo><mi>p</mi><mo>=</mo><mi>q</mi><mo><</mo><mo>∞</mo></math></span>. Wiegand's analysis was carried out under the assumption that <span><math><msubsup><mrow><mi>W</mi></mrow><mrow><mi>p</mi></mrow><mrow><mi>r</mi></mrow></msubsup><mo>(</mo><msub><mrow><mi>D</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>×</mo><msub><mrow><mi>D</mi></mrow><mrow><mn>2</mn></mrow></msub><mo>)</mo></math></span> is continuously embedded in <span><math><mi>C</mi><mo>(</mo><msub><mrow><mi>D</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>×</mo><msub><mrow><mi>D</mi></mrow><mrow><mn>2</mn></mrow></msub><mo>)</mo></math></span><span> (embedding condition). We also study the case that the embedding condition does not hold. For this purpose a new ingredient is developed – a stochastic discretization","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"82 ","pages":"Article 101823"},"PeriodicalIF":1.7,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139094532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
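The problem statement (Sf)(s) = ∫_{D₂} f(s,t) dt can be made concrete with a plain Monte Carlo sketch: draw one set of samples in t and reuse it for every parameter value s. This is only an illustration of the parametric integration problem on a toy integrand of our own, not the randomized algorithms analyzed in the paper.

```python
import numpy as np

# Approximate (Sf)(s) = ∫_{[0,1]} f(s, t) dt for all s on a grid at once,
# reusing a single Monte Carlo sample set of t's across parameter values.
rng = np.random.default_rng(6)
f = lambda s, t: np.sin(np.pi * s) * t**2      # toy integrand on [0,1]^2
s_grid = np.linspace(0, 1, 11)                 # parameter values s in D_1
t_samples = rng.random(100_000)                # shared MC sample in t

approx = np.array([np.mean(f(s, t_samples)) for s in s_grid])
exact = np.sin(np.pi * s_grid) / 3.0           # since ∫_0^1 t^2 dt = 1/3
print(np.max(np.abs(approx - exact)))          # worst case error over the s-grid
```

Sharing one sample set across all s is what makes the error a function on D₁, so that it is natural to measure it in an L_q(D₁)-norm, as in the paper.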
Pub Date : 2024-01-02DOI: 10.1016/j.jco.2023.101822
Simon Ellinger
We study pathwise approximation of strong solutions of scalar stochastic differential equations (SDEs) at a single time in the presence of discontinuities of the drift coefficient. Recently, it has been shown by Müller-Gronbach and Yaroslavtseva (2022) that for all p ∈ [1, ∞) a transformed Milstein-type scheme reaches an L^p-error rate of at least 3/4 when the drift coefficient is a piecewise Lipschitz-continuous function with a piecewise Lipschitz-continuous derivative and the diffusion coefficient is constant. It has been proven by Müller-Gronbach and Yaroslavtseva (2023) that this rate 3/4 is optimal if one additionally assumes that the drift coefficient is bounded, increasing and has a point of discontinuity. While boundedness and monotonicity of the drift coefficient are crucial for the proof of the matching lower bound from Müller-Gronbach and Yaroslavtseva (2023), we show that both conditions can be dropped. For the proof we apply a transformation technique which was so far only used to obtain upper bounds.
{"title":"Sharp lower error bounds for strong approximation of SDEs with piecewise Lipschitz continuous drift coefficient","authors":"Simon Ellinger","doi":"10.1016/j.jco.2023.101822","DOIUrl":"10.1016/j.jco.2023.101822","url":null,"abstract":"<div><p><span>We study pathwise approximation of strong solutions of scalar stochastic differential equations (SDEs) at a single time in the presence of discontinuities of the drift coefficient. Recently, it has been shown by Müller-Gronbach and Yaroslavtseva (2022) that for all </span><span><math><mi>p</mi><mo>∈</mo><mo>[</mo><mn>1</mn><mo>,</mo><mo>∞</mo><mo>)</mo></math></span> a transformed Milstein-type scheme reaches an <span><math><msup><mrow><mi>L</mi></mrow><mrow><mi>p</mi></mrow></msup></math></span><span><span>-error rate of at least 3/4 when the drift coefficient is a piecewise Lipschitz-continuous function with a piecewise Lipschitz-continuous derivative and the diffusion coefficient is constant. It has been proven by Müller-Gronbach and Yaroslavtseva (2023) that this rate 3/4 is optimal if one additionally assumes that the drift coefficient is bounded, increasing and has a point of discontinuity. While </span>boundedness and monotonicity of the drift coefficient are crucial for the proof of the matching lower bound from Müller-Gronbach and Yaroslavtseva (2023), we show that both conditions can be dropped. 
For the proof we apply a transformation technique which was so far only used to obtain upper bounds.</span></p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"81 ","pages":"Article 101822"},"PeriodicalIF":1.7,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139094366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-12-27DOI: 10.1016/j.jco.2023.101820
A.G. Werschulz
Consider the variational form of the ordinary integro-differential equation (OIDE) −u″ + u + ∫₀¹ q(·, y) u(y) dy = f on the unit interval I, subject to homogeneous Neumann boundary conditions. Here, f and q respectively belong to the unit ball of H^r(I) and the ball of radius M₁ of H^s(I²), where M₁ ∈ [0, 1). For ε > 0, we want to compute ε-approximations for this problem, measuring error in the H¹(I) sense in the worst case setting. Assuming that standard information is admissible, we find that the nth minimal error is Θ(n^{−min{r, s/2}}), so that the information ε-complexity is Θ(ε^{−1/min{r, s/2}}); moreover, finite element methods of degree max{r, s} are minimal-error algorithms. We use a Picard method to approximate the solution of the resulting linear systems, since Gaussian elimination will be too expensive. We find that the total ε-complexity of the problem is at least Ω(ε^{−1/min{r, s/2}}) and at most O(ε^{−1/min{r, s/2}} ln ε^{−1})
{"title":"Complexity for a class of elliptic ordinary integro-differential equations","authors":"A.G. Werschulz","doi":"10.1016/j.jco.2023.101820","DOIUrl":"10.1016/j.jco.2023.101820","url":null,"abstract":"<div><p>Consider the variational form of the ordinary integro-differential equation (OIDE)<span><span><span><math><mo>−</mo><msup><mrow><mi>u</mi></mrow><mrow><mo>″</mo></mrow></msup><mo>+</mo><mi>u</mi><mo>+</mo><munderover><mo>∫</mo><mrow><mn>0</mn></mrow><mrow><mn>1</mn></mrow></munderover><mi>q</mi><mo>(</mo><mo>⋅</mo><mo>,</mo><mi>y</mi><mo>)</mo><mi>u</mi><mo>(</mo><mi>y</mi><mo>)</mo><mrow><mtext>dy</mtext></mrow><mo>=</mo><mi>f</mi></math></span></span></span> on the unit interval <em>I</em><span>, subject to homogeneous Neumann boundary conditions. Here, </span><em>f</em> and <em>q</em> respectively belong to the unit ball of <span><math><msup><mrow><mi>H</mi></mrow><mrow><mi>r</mi></mrow></msup><mo>(</mo><mi>I</mi><mo>)</mo></math></span> and the ball of radius <span><math><msub><mrow><mi>M</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> of <span><math><msup><mrow><mi>H</mi></mrow><mrow><mi>s</mi></mrow></msup><mo>(</mo><msup><mrow><mi>I</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>)</mo></math></span>, where <span><math><msub><mrow><mi>M</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>∈</mo><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>)</mo></math></span>. For <span><math><mi>ε</mi><mo>></mo><mn>0</mn></math></span>, we want to compute <em>ε</em>-approximations for this problem, measuring error in the <span><math><msup><mrow><mi>H</mi></mrow><mrow><mn>1</mn></mrow></msup><mo>(</mo><mi>I</mi><mo>)</mo></math></span> sense in the worst case setting. 
Assuming that standard information is admissible, we find that the <em>n</em>th minimal error is <span><math><mi>Θ</mi><mo>(</mo><msup><mrow><mi>n</mi></mrow><mrow><mo>−</mo><mi>min</mi><mo></mo><mo>{</mo><mi>r</mi><mo>,</mo><mi>s</mi><mo>/</mo><mn>2</mn><mo>}</mo></mrow></msup><mo>)</mo></math></span>, so that the information <em>ε</em>-complexity is <span><math><mi>Θ</mi><mo>(</mo><msup><mrow><mi>ε</mi></mrow><mrow><mo>−</mo><mn>1</mn><mo>/</mo><mi>min</mi><mo></mo><mo>{</mo><mi>r</mi><mo>,</mo><mi>s</mi><mo>/</mo><mn>2</mn><mo>}</mo></mrow></msup><mo>)</mo></math></span><span>; moreover, finite element methods of degree </span><span><math><mi>max</mi><mo></mo><mo>{</mo><mi>r</mi><mo>,</mo><mi>s</mi><mo>}</mo></math></span><span> are minimal-error algorithms. We use a Picard method to approximate the solution of the resulting linear systems, since Gaussian elimination will be too expensive. We find that the total </span><em>ε</em>-complexity of the problem is at least <span><math><mi>Ω</mi><mo>(</mo><msup><mrow><mi>ε</mi></mrow><mrow><mo>−</mo><mn>1</mn><mo>/</mo><mi>min</mi><mo></mo><mo>{</mo><mi>r</mi><mo>,</mo><mi>s</mi><mo>/</mo><mn>2</mn><mo>}</mo></mrow></msup><mo>)</mo></math></span> and at most <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>ε</mi></mrow><mrow><mo>−</mo><mn>1</mn><mo>/</mo><mi>min</mi><mo></mo><mo>{</mo><mi>r</mi><mo>,</mo><mi>s</mi><mo>/</mo><mn>2</mn><mo>}</mo></mrow></msup><mi>ln</mi><mo></mo><msup><mrow><mi>ε</mi></mrow><mrow><mo>−</mo><mn>1</mn></m","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"81 ","pages":"Article 101820"},"PeriodicalIF":1.7,"publicationDate":"2023-12-27","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139071062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}