The Compound Information Bottleneck Program
Michael Dikshtein, N. Weinberger, S. Shamai
2022 IEEE International Symposium on Information Theory (ISIT), published 2022-06-26
DOI: 10.1109/ISIT50566.2022.9834812 (https://doi.org/10.1109/ISIT50566.2022.9834812)
Citations: 0
Abstract
Motivated by the emerging technology of oblivious processing in remote radio heads with universal decoders, we formulate and analyze a compound version of the information bottleneck problem. In this problem, a Markov chain X → Y → Z is assumed, and the marginals PX and PY are fixed. The goal is to maximize the mutual information between X and Z over the choice of the conditional distribution of Z given Y from a given class, under the worst-case choice of the joint distribution of the pair (X, Y) from a different class. We provide values, bounds, and various characterizations for specific instances of this problem: the binary symmetric case, the scalar Gaussian case, the vector Gaussian case, the symmetric modulo-additive case, and the case of total-variation constraints. Finally, for the general case, we propose a Blahut-Arimoto-type alternating-iteration algorithm to find a consistent solution to this problem.
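The abstract only names the Blahut-Arimoto-type algorithm without detailing it. As background, here is a minimal NumPy sketch of the classical (non-compound) Blahut-Arimoto iteration for the information bottleneck, due to Tishby et al., which the paper's compound formulation generalizes. All function names and parameters below are illustrative assumptions, not taken from the paper, and the compound (worst-case inner minimization over the joint distribution of (X, Y)) step is deliberately not implemented.

```python
import numpy as np

def ib_blahut_arimoto(p_xy, n_z, beta, n_iter=200, seed=0):
    """Classical Blahut-Arimoto iteration for the information bottleneck:
    given p(x, y), alternate the self-consistent equations to find an
    encoder p(z|y) for the chain X -> Y -> Z that trades off I(Y;Z)
    against beta * I(X;Z). Illustrative sketch only; the compound
    (worst-case) variant of the paper adds an inner minimization."""
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    p_y = p_xy.sum(axis=0)                    # marginal p(y)
    p_x_given_y = p_xy / p_y                  # p(x|y), column y normalized
    # random initial encoder p(z|y): rows indexed by z, columns by y
    p_z_given_y = rng.random((n_z, n_y))
    p_z_given_y /= p_z_given_y.sum(axis=0)
    eps = 1e-12
    for _ in range(n_iter):
        p_z = p_z_given_y @ p_y               # cluster marginal p(z)
        # decoder p(x|z) from Bayes' rule and the Markov chain
        p_xz = p_xy @ p_z_given_y.T           # joint p(x, z)
        p_x_given_z = p_xz / (p_z + eps)
        # KL divergence D(p(x|y) || p(x|z)) for every (z, y) pair
        log_ratio = (np.log(p_x_given_y[:, None, :] + eps)
                     - np.log(p_x_given_z[:, :, None] + eps))
        kl = np.einsum('xy,xzy->zy', p_x_given_y, log_ratio)
        # encoder update: p(z|y) proportional to p(z) * exp(-beta * KL)
        log_enc = np.log(p_z + eps)[:, None] - beta * kl
        log_enc -= log_enc.max(axis=0)        # stabilize before exponentiating
        p_z_given_y = np.exp(log_enc)
        p_z_given_y /= p_z_given_y.sum(axis=0)
    return p_z_given_y

def mutual_info(p_ab):
    """Mutual information (in nats) of a joint distribution given as a matrix."""
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float(np.sum(p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])))
```

For the binary symmetric instance discussed in the paper, one can take p(x, y) induced by a uniform X through a binary symmetric channel and check that the returned encoder respects the data-processing inequality I(X;Z) ≤ I(X;Y).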