Wavelet invariants for statistically robust multi-reference alignment
Matthew Hirn, Anna Little
Pub Date : 2021-12-01. Epub Date: 2020-08-13. DOI: 10.1093/imaiai/iaaa016. Information and Inference: A Journal of the IMA, 10(4), 1287-1351. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8782248/pdf/nihms-1726636.pdf
We propose a nonlinear, wavelet-based signal representation that is translation invariant and robust to both additive noise and random dilations. Motivated by the multi-reference alignment problem and generalizations thereof, we analyze the statistical properties of this representation given a large number of independent corruptions of a target signal. We prove the nonlinear wavelet-based representation uniquely defines the power spectrum but allows for an unbiasing procedure that cannot be directly applied to the power spectrum. After unbiasing the representation to remove the effects of the additive noise and random dilations, we recover an approximation of the power spectrum by solving a convex optimization problem, and thus reduce to a phase retrieval problem. Extensive numerical experiments demonstrate the statistical robustness of this approximation procedure.
{"title":"Wavelet invariants for statistically robust multi-reference alignment.","authors":"Matthew Hirn, Anna Little","doi":"10.1093/imaiai/iaaa016","DOIUrl":"10.1093/imaiai/iaaa016","url":null,"abstract":"<p><p>We propose a nonlinear, wavelet-based signal representation that is translation invariant and robust to both additive noise and random dilations. Motivated by the multi-reference alignment problem and generalizations thereof, we analyze the statistical properties of this representation given a large number of independent corruptions of a target signal. We prove the nonlinear wavelet-based representation uniquely defines the power spectrum but allows for an unbiasing procedure that cannot be directly applied to the power spectrum. After unbiasing the representation to remove the effects of the additive noise and random dilations, we recover an approximation of the power spectrum by solving a convex optimization problem, and thus reduce to a phase retrieval problem. Extensive numerical experiments demonstrate the statistical robustness of this approximation procedure.</p>","PeriodicalId":45437,"journal":{"name":"Information and Inference-A Journal of the Ima","volume":"10 4","pages":"1287-1351"},"PeriodicalIF":1.6,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8782248/pdf/nihms-1726636.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39962758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Erratum to: Subspace clustering using ensembles of K>-subspaces","authors":"J. Lipor, D. Hong, Yan Shuo Tan, L. Balzano","doi":"10.1093/imaiai/iaab026","DOIUrl":"https://doi.org/10.1093/imaiai/iaab026","url":null,"abstract":"","PeriodicalId":45437,"journal":{"name":"Information and Inference-A Journal of the Ima","volume":"23 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2021-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89241466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating location parameters in sample-heterogeneous distributions
Ankit Pensia, Varun Jog, Po-Ling Loh
Pub Date : 2021-06-03. DOI: 10.1093/imaiai/iaab013. Information and Inference: A Journal of the IMA.
Estimating the mean of a probability distribution using i.i.d. samples is a classical problem in statistics, wherein finite-sample optimal estimators are sought under various distributional assumptions. In this paper, we consider the problem of mean estimation when independent samples are drawn from $d$-dimensional non-identical distributions possessing a common mean. When the distributions are radially symmetric and unimodal, we propose a novel estimator, which is a hybrid of the modal interval, shorth, and median estimators, and whose performance adapts to the level of heterogeneity in the data. We show that our estimator is near-optimal when data are i.i.d. and when the fraction of "low-noise" distributions is as small as $\Omega\left(\frac{d \log n}{n}\right)$, where $n$ is the number of samples. We also derive minimax lower bounds on the expected error of any estimator that is agnostic to the scales of individual data points. Finally, we extend our theory to linear regression. In both the mean estimation and regression settings, we present computationally feasible versions of our estimators that run in time polynomial in the number of data points.
Compressive learning with privacy guarantees
Antoine Chatalic, V. Schellekens, F. Houssiau, Y. de Montjoye, L. Jacques, R. Gribonval
Pub Date : 2021-05-15. DOI: 10.1093/imaiai/iaab005. Information and Inference: A Journal of the IMA.
This work addresses the problem of learning from large collections of data with privacy guarantees. The compressive learning framework proposes to deal with the large scale of datasets by compressing them into a single vector of generalized random moments, called a sketch vector, from which the learning task is then performed. We provide sharp bounds on the so-called sensitivity of this sketching mechanism. This allows us to leverage standard techniques to ensure differential privacy (a well-established formalism for defining and quantifying the privacy of a random mechanism) by adding Laplace or Gaussian noise to the sketch. We combine these standard mechanisms with a new feature subsampling mechanism, which reduces the computational cost without damaging privacy. The overall framework is applied to the tasks of Gaussian modeling, $k$-means clustering and principal component analysis, for which sharp privacy bounds are derived. Empirically, the quality (for subsequent learning) of the compressed representation produced by our mechanism is strongly related to the induced noise level, for which we give analytical expressions.
{"title":"Compressive learning with privacy guarantees","authors":"Antoine Chatalic, V. Schellekens, F. Houssiau, Y. de Montjoye, L. Jacques, R. Gribonval","doi":"10.1093/IMAIAI/IAAB005","DOIUrl":"https://doi.org/10.1093/IMAIAI/IAAB005","url":null,"abstract":"\u0000 This work addresses the problem of learning from large collections of data with privacy guarantees. The compressive learning framework proposes to deal with the large scale of datasets by compressing them into a single vector of generalized random moments, called a sketch vector, from which the learning task is then performed. We provide sharp bounds on the so-called sensitivity of this sketching mechanism. This allows us to leverage standard techniques to ensure differential privacy—a well-established formalism for defining and quantifying the privacy of a random mechanism—by adding Laplace of Gaussian noise to the sketch. We combine these standard mechanisms with a new feature subsampling mechanism, which reduces the computational cost without damaging privacy. The overall framework is applied to the tasks of Gaussian modeling, k-means clustering and principal component analysis, for which sharp privacy bounds are derived. Empirically, the quality (for subsequent learning) of the compressed representation produced by our mechanism is strongly related with the induced noise level, for which we give analytical expressions.","PeriodicalId":45437,"journal":{"name":"Information and Inference-A Journal of the Ima","volume":"51 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2021-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90454586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Double robust semi-supervised inference for the mean: selection bias under MAR labeling with decaying overlap
Yuqian Zhang, Abhishek Chakrabortty, Jelena Bradic
Pub Date : 2021-04-14. DOI: 10.1093/imaiai/iaad021. Information and Inference: A Journal of the IMA.
Semi-supervised (SS) inference has received much attention in recent years. Apart from a moderate-sized labeled dataset, $\mathcal{L}$, the SS setting is characterized by an additional, much larger unlabeled dataset, $\mathcal{U}$. The setting $|\mathcal{U}| \gg |\mathcal{L}|$ makes SS inference unique and different from standard missing data problems, owing to a natural violation of the so-called 'positivity' or 'overlap' assumption. However, most of the SS literature implicitly assumes $\mathcal{L}$ and $\mathcal{U}$ to be equally distributed, i.e., no selection bias in the labeling. Inferential challenges under missing-at-random (MAR) labeling that allows for selection bias are inevitably exacerbated by the decaying nature of the propensity score (PS). We address this gap for a prototype problem, the estimation of the response's mean. We propose a double robust SS mean estimator and give a complete characterization of its asymptotic properties. The proposed estimator is consistent as long as either the outcome or the PS model is correctly specified. When both models are correctly specified, we provide inference results with a non-standard consistency rate that depends on the smaller size $|\mathcal{L}|$. The results are also extended to causal inference with imbalanced treatment groups. Further, we provide several novel choices of models and estimators for the decaying PS, including a novel offset logistic model and a stratified labeling model, and present their properties under both high- and low-dimensional settings; these may be of independent interest. Lastly, we present extensive simulations and a real data application.
{"title":"Double robust semi-supervised inference for the mean: selection bias under MAR labeling with decaying overlap","authors":"Yuqian Zhang, Abhishek Chakrabortty, Jelena Bradic","doi":"10.1093/imaiai/iaad021","DOIUrl":"https://doi.org/10.1093/imaiai/iaad021","url":null,"abstract":"\u0000 Semi-supervised (SS) inference has received much attention in recent years. Apart from a moderate-sized labeled data, $mathcal L$, the SS setting is characterized by an additional, much larger sized, unlabeled data, $mathcal U$. The setting of $|mathcal U |gg |mathcal L |$, makes SS inference unique and different from the standard missing data problems, owing to natural violation of the so-called ‘positivity’ or ‘overlap’ assumption. However, most of the SS literature implicitly assumes $mathcal L$ and $mathcal U$ to be equally distributed, i.e., no selection bias in the labeling. Inferential challenges in missing at random type labeling allowing for selection bias, are inevitably exacerbated by the decaying nature of the propensity score (PS). We address this gap for a prototype problem, the estimation of the response’s mean. We propose a double robust SS mean estimator and give a complete characterization of its asymptotic properties. The proposed estimator is consistent as long as either the outcome or the PS model is correctly specified. When both models are correctly specified, we provide inference results with a non-standard consistency rate that depends on the smaller size $|mathcal L |$. The results are also extended to causal inference with imbalanced treatment groups. Further, we provide several novel choices of models and estimators of the decaying PS, including a novel offset logistic model and a stratified labeling model. We present their properties under both high- and low-dimensional settings. These may be of independent interest. Lastly, we present extensive simulations and also a real data application.","PeriodicalId":45437,"journal":{"name":"Information and Inference-A Journal of the Ima","volume":"24 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78754638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Topological information retrieval with dilation-invariant bottleneck comparative measures
Athanasios Vlontzos, Yueqi Cao, Luca Schmidtke, Bernhard Kainz, Anthea Monod
Pub Date : 2021-04-04. DOI: 10.1093/imaiai/iaad022. Information and Inference: A Journal of the IMA.
Appropriately representing elements in a database so that queries may be accurately matched is a central task in information retrieval; recently, this has been achieved by embedding the graphical structure of the database into a manifold in a hierarchy-preserving manner using a variety of metrics. Persistent homology is a tool commonly used in topological data analysis that is able to rigorously characterize a database in terms of both its hierarchy and connectivity structure. Computing persistent homology on a variety of embedded datasets reveals that some commonly used embeddings fail to preserve the connectivity. We show that those embeddings which successfully retain the database topology coincide in persistent homology by introducing two dilation-invariant comparative measures to capture this effect: in particular, they address the issue of metric distortion on manifolds. We provide an algorithm for their computation that exhibits greatly reduced time complexity over existing methods. We use these measures to perform the first instance of topology-based information retrieval and demonstrate its increased performance over the standard bottleneck distance for persistent homology. We showcase our approach on databases of different data varieties including text, videos and medical images.
{"title":"Topological information retrieval with dilation-invariant bottleneck comparative measures","authors":"Athanasios Vlontzos, Yueqi Cao, Luca Schmidtke, Bernhard Kainz, Anthea Monod","doi":"10.1093/imaiai/iaad022","DOIUrl":"https://doi.org/10.1093/imaiai/iaad022","url":null,"abstract":"\u0000 Appropriately representing elements in a database so that queries may be accurately matched is a central task in information retrieval; recently, this has been achieved by embedding the graphical structure of the database into a manifold in a hierarchy-preserving manner using a variety of metrics. Persistent homology is a tool commonly used in topological data analysis that is able to rigorously characterize a database in terms of both its hierarchy and connectivity structure. Computing persistent homology on a variety of embedded datasets reveals that some commonly used embeddings fail to preserve the connectivity. We show that those embeddings which successfully retain the database topology coincide in persistent homology by introducing two dilation-invariant comparative measures to capture this effect: in particular, they address the issue of metric distortion on manifolds. We provide an algorithm for their computation that exhibits greatly reduced time complexity over existing methods. We use these measures to perform the first instance of topology-based information retrieval and demonstrate its increased performance over the standard bottleneck distance for persistent homology. We showcase our approach on databases of different data varieties including text, videos and medical images.","PeriodicalId":45437,"journal":{"name":"Information and Inference-A Journal of the Ima","volume":"58 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2021-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77142316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose and study a multi-scale approach to vector quantization (VQ). We develop an algorithm, dubbed reconstruction trees, inspired by decision trees. Here the objective is parsimonious reconstruction of unsupervised data, rather than classification. In contrast to more standard VQ methods, such as $k$