{"title":"学习交互核的最优最小最大速率","authors":"Xiong Wang, Inbar Seroussi, Fei Lu","doi":"arxiv-2311.16852","DOIUrl":null,"url":null,"abstract":"Nonparametric estimation of nonlocal interaction kernels is crucial in\nvarious applications involving interacting particle systems. The inference\nchallenge, situated at the nexus of statistical learning and inverse problems,\ncomes from the nonlocal dependency. A central question is whether the optimal\nminimax rate of convergence for this problem aligns with the rate of\n$M^{-\\frac{2\\beta}{2\\beta+1}}$ in classical nonparametric regression, where $M$\nis the sample size and $\\beta$ represents the smoothness exponent of the radial\nkernel. Our study confirms this alignment for systems with a finite number of\nparticles. We introduce a tamed least squares estimator (tLSE) that attains the optimal\nconvergence rate for a broad class of exchangeable distributions. The tLSE\nbridges the smallest eigenvalue of random matrices and Sobolev embedding. This\nestimator relies on nonasymptotic estimates for the left tail probability of\nthe smallest eigenvalue of the normal matrix. The lower minimax rate is derived\nusing the Fano-Tsybakov hypothesis testing method. Our findings reveal that\nprovided the inverse problem in the large sample limit satisfies a coercivity\ncondition, the left tail probability does not alter the bias-variance tradeoff,\nand the optimal minimax rate remains intact. Our tLSE method offers a\nstraightforward approach for establishing the optimal minimax rate for models\nwith either local or nonlocal dependency.","PeriodicalId":501330,"journal":{"name":"arXiv - MATH - Statistics Theory","volume":"91 2","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Optimal minimax rate of learning interaction kernels\",\"authors\":\"Xiong Wang, Inbar Seroussi, Fei Lu\",\"doi\":\"arxiv-2311.16852\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Nonparametric estimation of nonlocal interaction kernels is crucial in\\nvarious applications involving interacting particle systems. The inference\\nchallenge, situated at the nexus of statistical learning and inverse problems,\\ncomes from the nonlocal dependency. A central question is whether the optimal\\nminimax rate of convergence for this problem aligns with the rate of\\n$M^{-\\\\frac{2\\\\beta}{2\\\\beta+1}}$ in classical nonparametric regression, where $M$\\nis the sample size and $\\\\beta$ represents the smoothness exponent of the radial\\nkernel. Our study confirms this alignment for systems with a finite number of\\nparticles. We introduce a tamed least squares estimator (tLSE) that attains the optimal\\nconvergence rate for a broad class of exchangeable distributions. The tLSE\\nbridges the smallest eigenvalue of random matrices and Sobolev embedding. This\\nestimator relies on nonasymptotic estimates for the left tail probability of\\nthe smallest eigenvalue of the normal matrix. The lower minimax rate is derived\\nusing the Fano-Tsybakov hypothesis testing method. Our findings reveal that\\nprovided the inverse problem in the large sample limit satisfies a coercivity\\ncondition, the left tail probability does not alter the bias-variance tradeoff,\\nand the optimal minimax rate remains intact. 
Our tLSE method offers a\\nstraightforward approach for establishing the optimal minimax rate for models\\nwith either local or nonlocal dependency.\",\"PeriodicalId\":501330,\"journal\":{\"name\":\"arXiv - MATH - Statistics Theory\",\"volume\":\"91 2\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - MATH - Statistics Theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2311.16852\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - MATH - Statistics Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2311.16852","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Optimal minimax rate of learning interaction kernels
Nonparametric estimation of nonlocal interaction kernels is crucial in various applications involving interacting particle systems. The inference challenge, situated at the nexus of statistical learning and inverse problems, stems from the nonlocal dependence of the data on the kernel.
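As a concrete setting (an illustrative assumption, since the abstract does not specify the model), consider a prototypical first-order interacting particle system in which each particle is driven by a radial kernel $\phi$ acting on pairwise differences; $M$ independent trajectories of such a system would form the sample. The function name, parameters, and kernel choice below are all hypothetical.

import numpy as np

def simulate_ips(phi, N=10, d=2, T=1.0, dt=0.01, sigma=0.1, seed=0):
    # Euler-Maruyama for dX_i = (1/N) sum_j phi(|X_j - X_i|)(X_j - X_i) dt + sigma dW_i
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, d))           # initial positions
    traj = [X.copy()]
    for _ in range(int(T / dt)):
        diff = X[None, :, :] - X[:, None, :]  # diff[i, j] = X_j - X_i
        r = np.linalg.norm(diff, axis=-1)     # pairwise distances |X_j - X_i|
        w = phi(r)
        np.fill_diagonal(w, 0.0)              # no self-interaction
        drift = (w[:, :, None] * diff).mean(axis=1)   # (1/N) sum over j
        X = X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal((N, d))
        traj.append(X.copy())
    return np.array(traj)

# e.g., an attraction-repulsion kernel; repeated independent runs give the sample
paths = simulate_ips(lambda r: np.exp(-r) - 0.5 * np.exp(-2.0 * r))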
A central question is whether the optimal minimax rate of convergence for this problem aligns with the rate $M^{-\frac{2\beta}{2\beta+1}}$ of classical nonparametric regression, where $M$ is the sample size and $\beta$ is the smoothness exponent of the radial kernel. Our study confirms this alignment for systems with a finite number of particles.
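For intuition on where this rate comes from (a standard nonparametric bias-variance heuristic, not an argument taken from the paper): approximating a $\beta$-smooth radial kernel with $n$ basis functions incurs squared bias of order $n^{-2\beta}$ and variance of order $n/M$, and balancing the two recovers the rate,
\[
\min_{n}\left(n^{-2\beta} + \frac{n}{M}\right) \asymp M^{-\frac{2\beta}{2\beta+1}},
\qquad \text{attained at } n \asymp M^{\frac{1}{2\beta+1}}.
\]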
We introduce a tamed least squares estimator (tLSE) that attains the optimal convergence rate for a broad class of exchangeable distributions. The tLSE bridges the smallest eigenvalue of random matrices and the Sobolev embedding: it relies on nonasymptotic estimates for the left-tail probability of the smallest eigenvalue of the normal matrix.
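A minimal sketch of a tamed least squares estimator in this spirit: the taming rule used here (returning the zero estimator whenever the smallest eigenvalue of the normal matrix falls below a threshold $\tau$) is one common choice and is not spelled out in the abstract; the function name, data format, and threshold are hypothetical.

import numpy as np

def tamed_lse(Phi, y, tau):
    # Phi: (M, n) design matrix of basis functions evaluated on the data
    # y:   (M,) observed responses; tau: taming threshold for lambda_min
    M = Phi.shape[0]
    A = Phi.T @ Phi / M                 # normal matrix
    b = Phi.T @ y / M
    lam_min = np.linalg.eigvalsh(A)[0]  # eigvalsh sorts eigenvalues ascending
    if lam_min < tau:                   # taming: the low-probability bad event
        return np.zeros(Phi.shape[1])   # fall back to the zero estimator
    return np.linalg.solve(A, b)        # ordinary least squares otherwise

The coercivity condition mentioned below is what keeps $\lambda_{\min}(A)$ bounded away from zero with high probability, so the taming event is rare and leaves the bias-variance analysis unchanged.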
The minimax lower bound is derived using the Fano-Tsybakov hypothesis-testing method. Our findings reveal that, provided the inverse problem in the large-sample limit satisfies a coercivity condition, the left-tail probability does not alter the bias-variance tradeoff, and the optimal minimax rate remains intact.
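A quick Monte Carlo check of the left-tail probability $P(\lambda_{\min}(A) < c)$ for an illustrative random design (a placeholder, not the design studied in the paper; all names and parameter values are hypothetical):

import numpy as np

def left_tail_prob(M=200, n=10, c=0.5, trials=2000, seed=0):
    # Estimate P(lambda_min(Phi^T Phi / M) < c) for a design with i.i.d. Gaussian rows
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        Phi = rng.standard_normal((M, n))
        hits += np.linalg.eigvalsh(Phi.T @ Phi / M)[0] < c
    return hits / trials

print(left_tail_prob())  # shrinks rapidly as M grows relative to n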
Our tLSE method offers a straightforward approach for establishing the optimal minimax rate for models with either local or nonlocal dependency.