{"title":"通过 SOLIT-Sharp Optimal Lepskiĭ-Inspired Tuning 在统计逆问题中实现自适应最小优化","authors":"Housen Li, Frank Werner","doi":"10.1088/1361-6420/ad12e0","DOIUrl":null,"url":null,"abstract":"We consider statistical linear inverse problems in separable Hilbert spaces and filter-based reconstruction methods of the form <inline-formula>\n<tex-math><?CDATA $\\widehat f_\\alpha = q_\\alpha \\left(T\\,^*T\\right)T\\,^*Y$?></tex-math>\n<mml:math overflow=\"scroll\"><mml:msub><mml:mover><mml:mi>f</mml:mi><mml:mo>ˆ</mml:mo></mml:mover><mml:mi>α</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>q</mml:mi><mml:mi>α</mml:mi></mml:msub><mml:mfenced close=\")\" open=\"(\"><mml:mrow><mml:msup><mml:mi>T</mml:mi><mml:mo>∗</mml:mo></mml:msup><mml:mi>T</mml:mi></mml:mrow></mml:mfenced><mml:msup><mml:mi>T</mml:mi><mml:mo>∗</mml:mo></mml:msup><mml:mi>Y</mml:mi></mml:math>\n<inline-graphic xlink:href=\"ipad12e0ieqn1.gif\" xlink:type=\"simple\"></inline-graphic>\n</inline-formula>, where <italic toggle=\"yes\">Y</italic> is the available data, <italic toggle=\"yes\">T</italic> the forward operator, <inline-formula>\n<tex-math><?CDATA $\\left(q_\\alpha\\right)_{\\alpha \\in \\mathcal A}$?></tex-math>\n<mml:math overflow=\"scroll\"><mml:msub><mml:mfenced close=\")\" open=\"(\"><mml:msub><mml:mi>q</mml:mi><mml:mi>α</mml:mi></mml:msub></mml:mfenced><mml:mrow><mml:mi>α</mml:mi><mml:mo>∈</mml:mo><mml:mrow><mml:mi>A</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math>\n<inline-graphic xlink:href=\"ipad12e0ieqn2.gif\" xlink:type=\"simple\"></inline-graphic>\n</inline-formula> an ordered filter, and <italic toggle=\"yes\">α</italic> > 0 a regularization parameter. Whenever such a method is used in practice, <italic toggle=\"yes\">α</italic> has to be appropriately chosen. 
Typically, the aim is to find or at least approximate the best possible <italic toggle=\"yes\">α</italic> in the sense that mean squared error (MSE) <inline-formula>\n<tex-math><?CDATA $\\mathbb{E} [\\Vert \\widehat f_\\alpha - f^\\dagger\\Vert^2]$?></tex-math>\n<mml:math overflow=\"scroll\"><mml:mrow><mml:mi mathvariant=\"double-struck\">E</mml:mi></mml:mrow><mml:mo stretchy=\"false\">[</mml:mo><mml:mo>∥</mml:mo><mml:msub><mml:mover><mml:mi>f</mml:mi><mml:mo>ˆ</mml:mo></mml:mover><mml:mi>α</mml:mi></mml:msub><mml:mo>−</mml:mo><mml:msup><mml:mi>f</mml:mi><mml:mo>†</mml:mo></mml:msup><mml:mrow><mml:msup><mml:mo>∥</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo stretchy=\"false\">]</mml:mo></mml:math>\n<inline-graphic xlink:href=\"ipad12e0ieqn3.gif\" xlink:type=\"simple\"></inline-graphic>\n</inline-formula> w.r.t. the true solution <inline-formula>\n<tex-math><?CDATA $f^\\dagger$?></tex-math>\n<mml:math overflow=\"scroll\"><mml:msup><mml:mi>f</mml:mi><mml:mo>†</mml:mo></mml:msup></mml:math>\n<inline-graphic xlink:href=\"ipad12e0ieqn4.gif\" xlink:type=\"simple\"></inline-graphic>\n</inline-formula> is minimized. In this paper, we introduce the Sharp Optimal Lepskiĭ-Inspired Tuning (SOLIT) method, which yields an <italic toggle=\"yes\">a posteriori</italic> parameter choice rule ensuring adaptive minimax rates of convergence. 
It depends only on <italic toggle=\"yes\">Y</italic> and the noise level <italic toggle=\"yes\">σ</italic> as well as the operator <italic toggle=\"yes\">T</italic> and the filter <inline-formula>\n<tex-math><?CDATA $\\left(q_\\alpha\\right)_{\\alpha \\in \\mathcal A}$?></tex-math>\n<mml:math overflow=\"scroll\"><mml:msub><mml:mfenced close=\")\" open=\"(\"><mml:msub><mml:mi>q</mml:mi><mml:mi>α</mml:mi></mml:msub></mml:mfenced><mml:mrow><mml:mi>α</mml:mi><mml:mo>∈</mml:mo><mml:mrow><mml:mi>A</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math>\n<inline-graphic xlink:href=\"ipad12e0ieqn5.gif\" xlink:type=\"simple\"></inline-graphic>\n</inline-formula> and does not require any problem-dependent tuning of further parameters. We prove an oracle inequality for the corresponding MSE in a general setting and derive the rates of convergence in different scenarios. By a careful analysis we show that no other <italic toggle=\"yes\">a posteriori</italic> parameter choice rule can yield a better performance in terms of the order of the convergence rate of the MSE. In particular, our results reveal that the typical understanding of Lepskiĭ-type methods in inverse problems leading to a loss of a log factor is wrong. 
In addition, the empirical performance of SOLIT is examined in simulations.","PeriodicalId":50275,"journal":{"name":"Inverse Problems","volume":"43 1","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2023-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adaptive minimax optimality in statistical inverse problems via SOLIT—Sharp Optimal Lepskiĭ-Inspired Tuning\",\"authors\":\"Housen Li, Frank Werner\",\"doi\":\"10.1088/1361-6420/ad12e0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We consider statistical linear inverse problems in separable Hilbert spaces and filter-based reconstruction methods of the form <inline-formula>\\n<tex-math><?CDATA $\\\\widehat f_\\\\alpha = q_\\\\alpha \\\\left(T\\\\,^*T\\\\right)T\\\\,^*Y$?></tex-math>\\n<mml:math overflow=\\\"scroll\\\"><mml:msub><mml:mover><mml:mi>f</mml:mi><mml:mo>ˆ</mml:mo></mml:mover><mml:mi>α</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>q</mml:mi><mml:mi>α</mml:mi></mml:msub><mml:mfenced close=\\\")\\\" open=\\\"(\\\"><mml:mrow><mml:msup><mml:mi>T</mml:mi><mml:mo>∗</mml:mo></mml:msup><mml:mi>T</mml:mi></mml:mrow></mml:mfenced><mml:msup><mml:mi>T</mml:mi><mml:mo>∗</mml:mo></mml:msup><mml:mi>Y</mml:mi></mml:math>\\n<inline-graphic xlink:href=\\\"ipad12e0ieqn1.gif\\\" xlink:type=\\\"simple\\\"></inline-graphic>\\n</inline-formula>, where <italic toggle=\\\"yes\\\">Y</italic> is the available data, <italic toggle=\\\"yes\\\">T</italic> the forward operator, <inline-formula>\\n<tex-math><?CDATA $\\\\left(q_\\\\alpha\\\\right)_{\\\\alpha \\\\in \\\\mathcal A}$?></tex-math>\\n<mml:math overflow=\\\"scroll\\\"><mml:msub><mml:mfenced close=\\\")\\\" open=\\\"(\\\"><mml:msub><mml:mi>q</mml:mi><mml:mi>α</mml:mi></mml:msub></mml:mfenced><mml:mrow><mml:mi>α</mml:mi><mml:mo>∈</mml:mo><mml:mrow><mml:mi>A</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math>\\n<inline-graphic xlink:href=\\\"ipad12e0ieqn2.gif\\\" 
xlink:type=\\\"simple\\\"></inline-graphic>\\n</inline-formula> an ordered filter, and <italic toggle=\\\"yes\\\">α</italic> > 0 a regularization parameter. Whenever such a method is used in practice, <italic toggle=\\\"yes\\\">α</italic> has to be appropriately chosen. Typically, the aim is to find or at least approximate the best possible <italic toggle=\\\"yes\\\">α</italic> in the sense that mean squared error (MSE) <inline-formula>\\n<tex-math><?CDATA $\\\\mathbb{E} [\\\\Vert \\\\widehat f_\\\\alpha - f^\\\\dagger\\\\Vert^2]$?></tex-math>\\n<mml:math overflow=\\\"scroll\\\"><mml:mrow><mml:mi mathvariant=\\\"double-struck\\\">E</mml:mi></mml:mrow><mml:mo stretchy=\\\"false\\\">[</mml:mo><mml:mo>∥</mml:mo><mml:msub><mml:mover><mml:mi>f</mml:mi><mml:mo>ˆ</mml:mo></mml:mover><mml:mi>α</mml:mi></mml:msub><mml:mo>−</mml:mo><mml:msup><mml:mi>f</mml:mi><mml:mo>†</mml:mo></mml:msup><mml:mrow><mml:msup><mml:mo>∥</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo stretchy=\\\"false\\\">]</mml:mo></mml:math>\\n<inline-graphic xlink:href=\\\"ipad12e0ieqn3.gif\\\" xlink:type=\\\"simple\\\"></inline-graphic>\\n</inline-formula> w.r.t. the true solution <inline-formula>\\n<tex-math><?CDATA $f^\\\\dagger$?></tex-math>\\n<mml:math overflow=\\\"scroll\\\"><mml:msup><mml:mi>f</mml:mi><mml:mo>†</mml:mo></mml:msup></mml:math>\\n<inline-graphic xlink:href=\\\"ipad12e0ieqn4.gif\\\" xlink:type=\\\"simple\\\"></inline-graphic>\\n</inline-formula> is minimized. In this paper, we introduce the Sharp Optimal Lepskiĭ-Inspired Tuning (SOLIT) method, which yields an <italic toggle=\\\"yes\\\">a posteriori</italic> parameter choice rule ensuring adaptive minimax rates of convergence. 
It depends only on <italic toggle=\\\"yes\\\">Y</italic> and the noise level <italic toggle=\\\"yes\\\">σ</italic> as well as the operator <italic toggle=\\\"yes\\\">T</italic> and the filter <inline-formula>\\n<tex-math><?CDATA $\\\\left(q_\\\\alpha\\\\right)_{\\\\alpha \\\\in \\\\mathcal A}$?></tex-math>\\n<mml:math overflow=\\\"scroll\\\"><mml:msub><mml:mfenced close=\\\")\\\" open=\\\"(\\\"><mml:msub><mml:mi>q</mml:mi><mml:mi>α</mml:mi></mml:msub></mml:mfenced><mml:mrow><mml:mi>α</mml:mi><mml:mo>∈</mml:mo><mml:mrow><mml:mi>A</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math>\\n<inline-graphic xlink:href=\\\"ipad12e0ieqn5.gif\\\" xlink:type=\\\"simple\\\"></inline-graphic>\\n</inline-formula> and does not require any problem-dependent tuning of further parameters. We prove an oracle inequality for the corresponding MSE in a general setting and derive the rates of convergence in different scenarios. By a careful analysis we show that no other <italic toggle=\\\"yes\\\">a posteriori</italic> parameter choice rule can yield a better performance in terms of the order of the convergence rate of the MSE. In particular, our results reveal that the typical understanding of Lepskiĭ-type methods in inverse problems leading to a loss of a log factor is wrong. 
In addition, the empirical performance of SOLIT is examined in simulations.\",\"PeriodicalId\":50275,\"journal\":{\"name\":\"Inverse Problems\",\"volume\":\"43 1\",\"pages\":\"\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2023-12-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Inverse Problems\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1088/1361-6420/ad12e0\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Inverse Problems","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1088/1361-6420/ad12e0","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Adaptive minimax optimality in statistical inverse problems via SOLIT—Sharp Optimal Lepskiĭ-Inspired Tuning
We consider statistical linear inverse problems in separable Hilbert spaces and filter-based reconstruction methods of the form $\widehat f_\alpha = q_\alpha(T^*T)\,T^*Y$, where $Y$ is the available data, $T$ the forward operator, $(q_\alpha)_{\alpha \in \mathcal{A}}$ an ordered filter, and $\alpha > 0$ a regularization parameter. Whenever such a method is used in practice, $\alpha$ has to be chosen appropriately. Typically, the aim is to find, or at least approximate, the best possible $\alpha$ in the sense that the mean squared error (MSE) $\mathbb{E}[\|\widehat f_\alpha - f^\dagger\|^2]$ with respect to the true solution $f^\dagger$ is minimized. In this paper, we introduce the Sharp Optimal Lepskiĭ-Inspired Tuning (SOLIT) method, which yields an a posteriori parameter choice rule ensuring adaptive minimax rates of convergence. It depends only on $Y$, the noise level $\sigma$, the operator $T$, and the filter $(q_\alpha)_{\alpha \in \mathcal{A}}$, and does not require any problem-dependent tuning of further parameters. We prove an oracle inequality for the corresponding MSE in a general setting and derive rates of convergence in different scenarios. By a careful analysis we show that no other a posteriori parameter choice rule can yield a better performance in terms of the order of the convergence rate of the MSE. In particular, our results reveal that the common understanding that Lepskiĭ-type methods in inverse problems necessarily lose a log factor is wrong. In addition, the empirical performance of SOLIT is examined in simulations.
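To make the setup concrete, the filter-based estimator and a Lepskiĭ-type balancing choice of $\alpha$ can be sketched as follows. This is a generic illustration, not the SOLIT rule itself (its thresholds and constants are derived in the paper): the toy operator `T`, the Tikhonov filter $q_\alpha(\lambda) = 1/(\lambda + \alpha)$, the grid `alphas`, and the constant `kappa` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretized inverse problem: T with polynomially decaying singular values.
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / (np.arange(1, n + 1) ** 1.5)
T = U @ np.diag(s) @ V.T

f_true = V @ (1.0 / np.arange(1, n + 1))      # a smooth-ish truth
sigma = 1e-3                                   # known noise level
Y = T @ f_true + sigma * rng.standard_normal(n)

def tikhonov(alpha):
    """Filter estimator f_hat = q_alpha(T*T) T* Y with q_alpha(l) = 1/(l + alpha)."""
    return np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ Y)

# Candidate grid and a standard-deviation proxy for each estimator:
# f_hat = A Y with A = (T*T + alpha I)^{-1} T*, so sd ~ sigma * ||A||_F.
alphas = np.geomspace(1e-8, 1e-1, 30)
noise_sd = np.array([
    sigma * np.linalg.norm(np.linalg.solve(T.T @ T + a * np.eye(n), T.T), "fro")
    for a in alphas
])
f_hats = [tikhonov(a) for a in alphas]

# Lepskii-type balancing: pick the largest alpha whose estimate agrees, up to
# kappa times the noise band, with every less-regularized (smaller-alpha) estimate.
kappa = 4.0                                    # hypothetical tuning constant
order = np.argsort(alphas)[::-1]               # large alpha -> small alpha
chosen = order[-1]
for pos, i in enumerate(order):
    if all(np.linalg.norm(f_hats[i] - f_hats[j]) <= kappa * noise_sd[j]
           for j in order[pos + 1:]):
        chosen = i
        break
alpha_lep = alphas[chosen]
```

The design idea behind the balancing step is that for over-regularized $\alpha$ the bias dominates, so $\widehat f_\alpha$ drifts outside the noise bands of the less-regularized estimates; the rule stops at the largest $\alpha$ still statistically consistent with all of them.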
About the journal:
An interdisciplinary journal combining mathematical and experimental papers on inverse problems with theoretical, numerical and practical approaches to their solution.
As well as applied mathematicians, physical scientists and engineers, the readership includes those working in geophysics, radar, optics, biology, acoustics, communication theory, signal processing and imaging, among others.
The emphasis is on publishing original contributions to methods of solving mathematical, physical and applied problems. To be publishable in this journal, papers must meet the highest standards of scientific quality, contain significant and original new science and should present substantial advancement in the field. Due to the broad scope of the journal, we require that authors provide sufficient introductory material to appeal to the wide readership and that articles which are not explicitly applied include a discussion of possible applications.