{"title":"离散分布间 KL Divergence 的最小率最优估计。","authors":"Yanjun Han, Jiantao Jiao, Tsachy Weissman","doi":"","DOIUrl":null,"url":null,"abstract":"<p><p>We refine the general methodology in [1] for the construction and analysis of essentially minimax estimators for a wide class of functionals of finite dimensional parameters, and elaborate on the case of discrete distributions with support size <i>S</i> comparable with the number of observations <i>n</i>. Specifically, we determine the \"smooth\" and \"non-smooth\" regimes based on the confidence set and the smoothness of the functional. In the \"non-smooth\" regime, we apply an unbiased estimator for a \"suitable\" polynomial approximation of the functional. In the \"smooth\" regime, we construct a bias corrected version of the Maximum Likelihood Estimator (MLE) based on Taylor expansion. We apply the general methodology to the problem of estimating the KL divergence between two discrete distributions from empirical data. We construct a minimax rate-optimal estimator which is adaptive in the sense that it does not require the knowledge of the support size nor the upper bound on the likelihood ratio. Moreover, the performance of the optimal estimator with <i>n</i> samples is essentially that of the MLE with <i>n</i> ln <i>n</i> samples, i.e., the <i>effective sample size enlargement</i> phenomenon holds.</p>","PeriodicalId":92224,"journal":{"name":"International Symposium on Information Theory and its Applications. International Symposium on Information Theory and its Applications","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5812299/pdf/nihms910323.pdf","citationCount":"0","resultStr":"{\"title\":\"Minimax Rate-optimal Estimation of KL Divergence between Discrete Distributions.\",\"authors\":\"Yanjun Han, Jiantao Jiao, Tsachy Weissman\",\"doi\":\"\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>We refine the general methodology in [1] for the construction and analysis of essentially minimax estimators for a wide class of functionals of finite dimensional parameters, and elaborate on the case of discrete distributions with support size <i>S</i> comparable with the number of observations <i>n</i>. Specifically, we determine the \\\"smooth\\\" and \\\"non-smooth\\\" regimes based on the confidence set and the smoothness of the functional. In the \\\"non-smooth\\\" regime, we apply an unbiased estimator for a \\\"suitable\\\" polynomial approximation of the functional. In the \\\"smooth\\\" regime, we construct a bias corrected version of the Maximum Likelihood Estimator (MLE) based on Taylor expansion. We apply the general methodology to the problem of estimating the KL divergence between two discrete distributions from empirical data. We construct a minimax rate-optimal estimator which is adaptive in the sense that it does not require the knowledge of the support size nor the upper bound on the likelihood ratio. Moreover, the performance of the optimal estimator with <i>n</i> samples is essentially that of the MLE with <i>n</i> ln <i>n</i> samples, i.e., the <i>effective sample size enlargement</i> phenomenon holds.</p>\",\"PeriodicalId\":92224,\"journal\":{\"name\":\"International Symposium on Information Theory and its Applications. 
International Symposium on Information Theory and its Applications\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5812299/pdf/nihms910323.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Symposium on Information Theory and its Applications. International Symposium on Information Theory and its Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Symposium on Information Theory and its Applications. International Symposium on Information Theory and its Applications","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We refine the general methodology of [1] for constructing and analyzing essentially minimax estimators for a wide class of functionals of finite-dimensional parameters, and elaborate on the case of discrete distributions whose support size S is comparable to the number of observations n. Specifically, we determine the "smooth" and "non-smooth" regimes based on the confidence set and the smoothness of the functional. In the "non-smooth" regime, we apply an unbiased estimator of a "suitable" polynomial approximation of the functional. In the "smooth" regime, we construct a bias-corrected version of the Maximum Likelihood Estimator (MLE) based on a Taylor expansion. We apply this general methodology to the problem of estimating the KL divergence between two discrete distributions from empirical data. We construct a minimax rate-optimal estimator that is adaptive in the sense that it requires knowledge of neither the support size nor an upper bound on the likelihood ratio. Moreover, the performance of the optimal estimator with n samples essentially matches that of the MLE with n ln n samples, i.e., the effective sample size enlargement phenomenon holds.
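To make the baseline concrete, below is a minimal Python sketch of the plug-in (MLE) estimator of D(P||Q) = sum_i p_i ln(p_i / q_i), the estimator the paper improves upon. This is not the authors' construction (it includes neither the polynomial-approximation step of the "non-smooth" regime nor the Taylor-expansion bias correction of the "smooth" regime), and the function and variable names (estimate_kl_mle, counts_p, counts_q) are illustrative assumptions, not from [1].

```python
# Minimal sketch of the plug-in (MLE) estimator of KL divergence
# between two discrete distributions, computed from empirical counts.
# This is the baseline the paper improves upon, not its optimal estimator.
import numpy as np

def estimate_kl_mle(counts_p, counts_q):
    """Plug-in (MLE) estimate of D(P||Q) from empirical counts.

    counts_p, counts_q: integer arrays of symbol counts over the same
    alphabet, from i.i.d. samples of P and Q respectively.  The estimate
    is finite only if every symbol observed under P is also observed
    under Q; otherwise the empirical likelihood ratio is unbounded.
    """
    counts_p = np.asarray(counts_p, dtype=float)
    counts_q = np.asarray(counts_q, dtype=float)
    p_hat = counts_p / counts_p.sum()   # empirical (maximum likelihood) P
    q_hat = counts_q / counts_q.sum()   # empirical (maximum likelihood) Q
    support = p_hat > 0                 # terms with p_i = 0 contribute 0
    if np.any(q_hat[support] == 0):
        return np.inf                   # empirical D(P||Q) diverges
    return float(np.sum(p_hat[support] *
                        np.log(p_hat[support] / q_hat[support])))

# Example: P and Q on a 4-letter alphabet, n samples from each.
rng = np.random.default_rng(0)
p = np.array([0.4, 0.3, 0.2, 0.1])
q = np.array([0.25, 0.25, 0.25, 0.25])
n = 10_000
x = rng.multinomial(n, p)               # counts from P
y = rng.multinomial(n, q)               # counts from Q
true_kl = float(np.sum(p * np.log(p / q)))
print(f"true D(P||Q) = {true_kl:.4f}, "
      f"plug-in estimate = {estimate_kl_mle(x, y):.4f}")
```

Per the abstract, this plug-in estimator needs on the order of n ln n samples to match the accuracy that the minimax rate-optimal estimator achieves with n samples.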