Latest Publications: Information and Inference: A Journal of the IMA
OUP accepted manuscript
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 MATHEMATICS, APPLIED · Pub Date: 2021-01-01 · DOI: 10.1093/imaiai/iaab021
Citations: 0
Concentration inequalities for the empirical distribution of discrete distributions: beyond the method of types
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 MATHEMATICS, APPLIED · Pub Date: 2020-12-16 · DOI: 10.1093/imaiai/iaz025
Jay Mardia, Jiantao Jiao, Ervin Tánczos, R. Nowak, T. Weissman
We study concentration inequalities for the Kullback–Leibler (KL) divergence between the empirical distribution and the true distribution. Applying a recursion technique, we improve over the method of types bound uniformly in all regimes of sample size n and alphabet size k, and the improvement becomes more significant when k is large. We discuss the applications of our results in obtaining tighter concentration inequalities for L1 deviations of the empirical distribution from the true distribution, and the difference between concentration around the expectation or zero. We also obtain asymptotically tight bounds on the variance of the KL divergence between the empirical and true distribution, and demonstrate their quantitatively different behaviors between small and large sample sizes compared to the alphabet size.
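The central quantity in the abstract above, the KL divergence of the empirical distribution from the true distribution, can be sketched numerically. This is an illustrative helper (not code from the paper), assuming a discrete alphabet {0, …, k−1}:

```python
import numpy as np

def empirical_kl(samples, p, k):
    """KL(p_hat || p): divergence of the empirical distribution of
    `samples` (integer values in 0..k-1) from the true distribution p."""
    counts = np.bincount(samples, minlength=k)
    p_hat = counts / counts.sum()
    mask = p_hat > 0  # use the convention 0 * log(0 / p) = 0
    return float(np.sum(p_hat[mask] * np.log(p_hat[mask] / p[mask])))

rng = np.random.default_rng(0)
k, n = 4, 10_000
p = np.array([0.4, 0.3, 0.2, 0.1])
samples = rng.choice(k, size=n, p=p)
kl = empirical_kl(samples, p, k)  # concentrates near 0 for large n
```

For fixed alphabet size k and growing sample size n, this quantity concentrates around values of order k/n, which is the regime the concentration inequalities above quantify.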
Citations: 29
Analysis of fast structured dictionary learning
IF 1.4 · CAS Tier 4 (Mathematics) · Q2 MATHEMATICS, APPLIED · Pub Date: 2020-12-01 · Epub Date: 2019-11-19 · DOI: 10.1093/imaiai/iaz028
Saiprasad Ravishankar, Anna Ma, Deanna Needell

Sparsity-based models and techniques have been exploited in many signal processing and imaging applications. Data-driven methods based on dictionary and sparsifying transform learning enable learning rich image features from data and can outperform analytical models. In particular, alternating optimization algorithms have been popular for learning such models. In this work, we focus on alternating minimization for a specific structured unitary sparsifying operator learning problem and provide a convergence analysis. While the algorithm converges to the critical points of the problem generally, our analysis establishes, under mild assumptions, the local linear convergence of the algorithm to the underlying sparsifying model of the data. Analysis and numerical simulations show that our assumptions hold for standard probabilistic data models. In practice, the algorithm is robust to initialization.
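The alternating scheme described above can be sketched for the unitary model min ‖W X − Z‖²_F over unitary W and column-sparse Z. The closed-form updates below (hard thresholding, then an orthogonal Procrustes step via SVD) are a common instantiation of such algorithms, not necessarily the paper's exact variant; all names are ours.

```python
import numpy as np

def learn_unitary_operator(X, s, iters=50, seed=0):
    """Alternating minimization sketch for min ||W X - Z||_F^2
    over unitary W and Z with at most s nonzeros per column."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random unitary init
    for _ in range(iters):
        # (i) sparse coding: keep the s largest-magnitude entries per column
        Y = W @ X
        Z = np.zeros_like(Y)
        idx = np.argsort(-np.abs(Y), axis=0)[:s]
        np.put_along_axis(Z, idx, np.take_along_axis(Y, idx, axis=0), axis=0)
        # (ii) operator update: orthogonal Procrustes, W = U V^T, SVD(Z X^T)
        U, _, Vt = np.linalg.svd(Z @ X.T)
        W = U @ Vt
    return W, Z

# toy data: columns are 1-sparse under a known unitary operator
rng = np.random.default_rng(1)
n, m, s = 8, 200, 1
Qt, _ = np.linalg.qr(rng.standard_normal((n, n)))
Z_true = np.zeros((n, m))
Z_true[rng.integers(0, n, m), np.arange(m)] = rng.standard_normal(m)
X = Qt.T @ Z_true                       # so that Qt @ X = Z_true exactly
W, Z = learn_unitary_operator(X, s)
residual = np.linalg.norm(W @ X - Z) / np.linalg.norm(X)
```

The Procrustes step is what keeps the learned operator exactly unitary at every iteration, which is the structural constraint the analysis above exploits.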

Citations: 0
Matchability of heterogeneous networks pairs
IF 1.4 · CAS Tier 4 (Mathematics) · Q2 MATHEMATICS, APPLIED · Pub Date: 2020-12-01 · Epub Date: 2020-01-06 · DOI: 10.1093/imaiai/iaz031
Vince Lyzinski, Daniel L Sussman

We consider the problem of graph matchability in non-identically distributed networks. In a general class of edge-independent networks, we demonstrate that graph matchability can be lost with high probability when matching the networks directly. We further demonstrate that under mild model assumptions, matchability is almost perfectly recovered by centering the networks using universal singular value thresholding before matching. These theoretical results are then demonstrated in both real and synthetic simulation settings. We also recover analogous core-matchability results in a very general core-junk network model, wherein some vertices do not correspond between the graph pair.
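The centering step mentioned above, universal singular value thresholding (USVT), can be sketched as follows. The threshold constant and the stochastic block model test data are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def usvt(A, c=2.01):
    """USVT sketch: estimate the edge-probability matrix underlying the
    adjacency matrix A by keeping only singular values above c * sqrt(n),
    then clipping entries to [0, 1]."""
    n = A.shape[0]
    U, s, Vt = np.linalg.svd(A)
    keep = s >= c * np.sqrt(n)
    P_hat = (U[:, keep] * s[keep]) @ Vt[keep]
    return np.clip(P_hat, 0.0, 1.0)

# toy two-community stochastic block model
rng = np.random.default_rng(0)
n = 200
z = np.repeat([0, 1], n // 2)
P = np.where(z[:, None] == z[None, :], 0.6, 0.1)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T           # symmetric, no self-loops
P_hat = usvt(A)
err = np.linalg.norm(P_hat - P) / np.linalg.norm(P)
```

In the matchability setting above, each network in the pair would be replaced by its USVT estimate (i.e. centered by its estimated expectation) before running the graph matching step.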

Citations: 0
Overlap matrix concentration in optimal Bayesian inference
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 MATHEMATICS, APPLIED · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa008
Jean Barbier
We consider models of Bayesian inference of signals with vectorial components of finite dimensionality. We show that under a proper perturbation, these models are replica symmetric in the sense that the overlap matrix concentrates. The overlap matrix is the order parameter in these models and is directly related to error metrics such as minimum mean-square errors. Our proof is valid in the optimal Bayesian inference setting. This means that it relies on the assumption that the model and all its hyper-parameters are known so that the posterior distribution can be written exactly. Examples of important problems in high-dimensional inference and learning to which our results apply are low-rank tensor factorization, the committee machine neural network with a finite number of hidden neurons in the teacher–student scenario or multi-layer versions of the generalized linear model.
Citations: 23
Tight recovery guarantees for orthogonal matching pursuit under Gaussian noise
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 MATHEMATICS, APPLIED · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa021
Chen Amiraz, Robert Krauthgamer, Boaz Nadler
Orthogonal matching pursuit (OMP) is a popular algorithm to estimate an unknown sparse vector from multiple linear measurements of it. Assuming exact sparsity and that the measurements are corrupted by additive Gaussian noise, the success of OMP is often formulated as exactly recovering the support of the sparse vector. Several authors derived a sufficient condition for exact support recovery by OMP with high probability depending on the signal-to-noise ratio, defined as the magnitude of the smallest non-zero coefficient of the vector divided by the noise level. We make two contributions. First, we derive a slightly sharper sufficient condition for two variants of OMP, in which either the sparsity level or the noise level is known. Next, we show that this sharper sufficient condition is tight, in the following sense: for a wide range of problem parameters, there exist a dictionary of linear measurements and a sparse vector with a signal-to-noise ratio slightly below that of the sufficient condition, for which with high probability OMP fails to recover its support. Finally, we present simulations that illustrate that our condition is tight for a much broader range of dictionaries.
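The OMP algorithm analysed above is the standard greedy scheme: repeatedly pick the dictionary column most correlated with the current residual, then re-fit by least squares on the selected support. A minimal sketch (for a deterministic demo we use an orthonormal dictionary, a much easier design than the general dictionaries in the paper):

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit with a known sparsity level s."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(s):
        # greedy selection: column most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares re-fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x, sorted(support)

rng = np.random.default_rng(0)
n, s, sigma = 100, 4, 0.05
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal columns
x_true = np.zeros(n)
true_support = [3, 17, 42, 77]
x_true[true_support] = [2.0, -1.5, 1.0, 3.0]       # min |coef| = 1.0 >> sigma
y = Q @ x_true + sigma * rng.standard_normal(n)
x_hat, support_hat = omp(Q, y, s)
```

The signal-to-noise ratio in the sense of the abstract is min |coef| / sigma = 20 here, far above any recovery threshold; the tightness results above concern how small that ratio can be before exact support recovery fails with high probability.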
Citations: 4
Erratum to: Super-resolution of near-colliding point sources
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 MATHEMATICS, APPLIED · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa015
Dmitry Batenkov, Gil Goldman, Yosef Yomdin
Citations: 1
The limits of distribution-free conditional predictive inference
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 MATHEMATICS, APPLIED · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa017
Rina Foygel Barber, Emmanuel J Candès, Aaditya Ramdas, Ryan J Tibshirani
We consider the problem of distribution-free predictive inference, with the goal of producing predictive coverage guarantees that hold conditionally rather than marginally. Existing methods such as conformal prediction offer marginal coverage guarantees, where predictive coverage holds on average over all possible test points, but this is not sufficient for many practical applications where we would like to know that our predictions are valid for a given individual, not merely on average over a population. On the other hand, exact conditional inference guarantees are known to be impossible without imposing assumptions on the underlying distribution. In this work, we aim to explore the space in between these two and examine what types of relaxations of the conditional coverage property would alleviate some of the practical concerns with marginal coverage guarantees while still being possible to achieve in a distribution-free setting.
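As a concrete instance of the marginal guarantee discussed above, here is a minimal split conformal sketch with a deliberately trivial mean predictor (all function and variable names are ours). It illustrates the distinction the abstract draws: the resulting interval covers a fresh test point with probability at least 1 − α on average, but says nothing about coverage conditional on a particular x.

```python
import numpy as np

def split_conformal(y_train, y_cal, alpha=0.1):
    """Split conformal prediction with a constant mean predictor:
    returns one interval with marginal coverage >= 1 - alpha."""
    mu = y_train.mean()                   # 'model' fit on the training split
    scores = np.abs(y_cal - mu)           # calibration residuals
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    q = np.sort(scores)[min(k, n) - 1]    # conformal quantile of the scores
    return mu - q, mu + q

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 2000)
lo, hi = split_conformal(y[:500], y[500:1000], alpha=0.1)
coverage = np.mean((y[1000:] >= lo) & (y[1000:] <= hi))  # close to 0.9
```

The (n + 1)-adjusted quantile rank k is what makes the guarantee hold in finite samples rather than only asymptotically.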
Citations: 162
Oracle inequalities for square root analysis estimators with application to total variation penalties
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 MATHEMATICS, APPLIED · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa002
Francesco Ortelli, Sara van de Geer
Through the direct study of the analysis estimator we derive oracle inequalities with fast and slow rates by adapting the arguments involving projections by Dalalyan et al. (2017, Bernoulli, 23, 552–581). We then extend the theory to the square root analysis estimator. Finally, we focus on (square root) total variation regularized estimators on graphs and obtain constant-friendly rates, which, up to log terms, match previous results obtained by entropy calculations. We also obtain an oracle inequality for the (square root) total variation regularized estimator over the cycle graph.
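The graph total variation penalty ‖Dθ‖₁ that the regularized estimators above use can be computed from the edge-incidence matrix D of the graph. A minimal sketch on the cycle graph mentioned in the abstract (helper names are ours):

```python
import numpy as np

def cycle_incidence(n):
    """Edge-incidence matrix D of the cycle graph C_n: row e carries +1
    and -1 on the endpoints of edge e, so (D @ theta)_e is the signal
    difference across that edge."""
    D = np.zeros((n, n))
    for i in range(n):
        D[i, i] = 1.0
        D[i, (i + 1) % n] = -1.0
    return D

def tv_penalty(theta, D):
    """Graph total variation seminorm ||D theta||_1."""
    return float(np.abs(D @ theta).sum())

n = 6
D = cycle_incidence(n)
tv_flat = tv_penalty(np.ones(n), D)                        # constant signal
tv_step = tv_penalty(np.array([0., 0., 0., 1., 1., 1.]), D)  # one block jump
```

A constant signal has zero penalty, while a piecewise-constant signal pays for each edge its blocks cut (the step signal crosses two edges of the cycle, so its penalty is 2); the total variation regularized estimator minimizes a squared-error fit plus λ times this seminorm.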
Citations: 6
Composite optimization for robust rank one bilinear sensing
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 MATHEMATICS, APPLIED · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa027
Vasileios Charisopoulos, Damek Davis, Mateo Díaz, Dmitriy Drusvyatskiy
We consider the task of recovering a pair of vectors from a set of rank one bilinear measurements, possibly corrupted by noise. Most notably, the problem of robust blind deconvolution can be modeled in this way. We consider a natural nonsmooth formulation of the rank one bilinear sensing problem and show that its moduli of weak convexity, sharpness and Lipschitz continuity are all dimension independent, under favorable statistical assumptions. This phenomenon persists even when up to half of the measurements are corrupted by noise. Consequently, standard algorithms, such as the subgradient and prox-linear methods, converge at a rapid dimension-independent rate when initialized within a constant relative error of the solution. We complete the paper with a new initialization strategy, complementing the local search algorithms. The initialization procedure is both provably efficient and robust to outlying measurements. Numerical experiments, on both simulated and real data, illustrate the developed theory and methods.
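A minimal sketch of the setting above: rank one bilinear measurements y_i = ⟨a_i, u⟩⟨b_i, v⟩, the nonsmooth ℓ1 loss, and the Polyak subgradient method started within a small relative error of the solution, as the abstract assumes. Problem sizes, the noiseless choice and the initialization radius are illustrative, not the paper's experiments.

```python
import numpy as np

def subgradient_bilinear(A, B, y, u0, v0, iters=500):
    """Polyak subgradient sketch for f(u, v) = mean_i |<a_i,u><b_i,v> - y_i|,
    assuming the minimal value is 0 (noiseless measurements)."""
    u, v = u0.copy(), v0.copy()
    m = len(y)
    for _ in range(iters):
        r = (A @ u) * (B @ v) - y
        f = np.abs(r).mean()
        sgn = np.sign(r)
        gu = A.T @ (sgn * (B @ v)) / m        # subgradient in u
        gv = B.T @ (sgn * (A @ u)) / m        # subgradient in v
        norm2 = gu @ gu + gv @ gv
        if f == 0.0 or norm2 == 0.0:
            break
        step = f / norm2                       # Polyak step with f* = 0
        u -= step * gu
        v -= step * gv
    return u, v

rng = np.random.default_rng(0)
d, m = 10, 400
u_true, v_true = rng.standard_normal(d), rng.standard_normal(d)
A, B = rng.standard_normal((m, d)), rng.standard_normal((m, d))
y = (A @ u_true) * (B @ v_true)               # noiseless rank one measurements
u0 = u_true + 0.1 * rng.standard_normal(d)    # init near the solution
v0 = v_true + 0.1 * rng.standard_normal(d)
u, v = subgradient_bilinear(A, B, y, u0, v0)
rel_err = (np.linalg.norm(np.outer(u, v) - np.outer(u_true, v_true))
           / np.linalg.norm(np.outer(u_true, v_true)))
```

The error is measured on the outer product uvᵀ because the factors are only identifiable up to a scalar rescaling; the sharpness and weak convexity moduli in the abstract are what drive the rapid convergence of this Polyak scheme.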
Citations: 4