Minimum probability of error of list M-ary hypothesis testing
Ehsan Asadi Kangarshahi and A. Guillén i Fàbregas. Information and Inference: A Journal of the IMA (2023). doi:10.1093/imaiai/iaad001

We study a variation of Bayesian $M$-ary hypothesis testing in which the test outputs a list of $L$ candidates out of the $M$ possible upon processing the observation. We study the minimum error probability of list hypothesis testing, where an error is defined as the event that the true hypothesis is not in the list output by the test. We derive two exact expressions of the minimum probability of error. The first is expressed as the error probability of a certain non-Bayesian binary hypothesis test and is reminiscent of the meta-converse bound by Polyanskiy, Poor and Verdú (2010). The second is expressed as the tail probability of the likelihood ratio between the two distributions involved in the aforementioned non-Bayesian binary hypothesis test.

Keywords: hypothesis testing, error probability, information theory.
A unifying view of modal clustering
Ery Arias-Castro and Wanli Qiao. Information and Inference: A Journal of the IMA, 12(2), 897-920 (2022). doi:10.1093/imaiai/iaac030

Two important non-parametric approaches to clustering emerged in the 1970s: clustering by level sets or cluster tree as proposed by Hartigan, and clustering by gradient lines or gradient flow as proposed by Fukunaga and Hostetler. In a recent paper, we draw a connection between these two approaches, in particular by showing that the gradient flow provides a way to move along the cluster tree. Here, we argue that these two approaches are fundamentally the same. We do so by proposing two ways of obtaining a partition from the cluster tree, each one of them very natural in its own right, and showing that both of them reduce to the partition given by the gradient flow under standard assumptions on the sampling density.