An empirical comparison of sampling techniques for matrix column subset selection
Yining Wang, Aarti Singh
2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)
Pub Date: 2015-09-01 | DOI: 10.1109/ALLERTON.2015.7447127
Column subset selection (CSS) is the problem of selecting a small subset of columns from a large data matrix as one form of interpretable data summarization. Leverage score sampling, which enjoys both sound theoretical guarantees and strong empirical performance, is widely recognized as the state-of-the-art algorithm for column subset selection. In this paper, we revisit iterative norm sampling, a sampling-based CSS algorithm proposed even before leverage score sampling, and demonstrate its competitive performance under a wide range of experimental settings. We also compare iterative norm sampling with several of its other competitors and show its superior performance in terms of both approximation accuracy and computational efficiency. We conclude that further theoretical investigation and practical consideration should be devoted to iterative norm sampling in column subset selection.
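As a concrete illustration of the method this abstract revisits, the following is a minimal numpy sketch of iterative (adaptive) norm sampling: repeatedly pick a column with probability proportional to its squared norm in the current residual, then project the chosen direction out. The function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def iterative_norm_sampling(A, k, seed=None):
    """Select k column indices of A by iterative norm sampling:
    sample a column with probability proportional to its squared
    residual norm, then project the residual orthogonal to it."""
    rng = np.random.default_rng(seed)
    R = A.astype(float).copy()
    selected = []
    for _ in range(k):
        norms = np.sum(R ** 2, axis=0)
        norms[selected] = 0.0                  # never pick a column twice
        i = rng.choice(A.shape[1], p=norms / norms.sum())
        selected.append(int(i))
        q = R[:, i] / np.linalg.norm(R[:, i])  # unit vector of chosen column
        R = R - np.outer(q, q @ R)             # remove its span from residual
    return selected
```

Leverage score sampling would instead compute all sampling probabilities up front from the top singular vectors of A; the adaptive residual update above is what makes norm sampling iterative.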
Finite-time analysis of the distributed detection problem
Shahin Shahrampour, A. Rakhlin, A. Jadbabaie
Pub Date: 2015-09-01 | DOI: 10.1109/ALLERTON.2015.7447059
This paper addresses the problem of distributed detection in fixed and switching networks. A network of agents observes partially informative signals about the unknown state of the world, and the agents collaborate with each other to identify the true state. We propose an update rule building on distributed, stochastic optimization methods. Our main focus is on the finite-time analysis of the problem. For fixed networks, we bring forward the notion of Kullback-Leibler cost to measure the efficiency of the algorithm versus its centralized analog. We bound the cost in terms of the network size, the spectral gap, and the relative entropy of agents' signal structures. We further consider the problem in random networks where the structure is realized according to a stationary distribution. We then prove that the convergence is exponentially fast (with high probability), and that the non-asymptotic rate scales inversely in the spectral gap of the expected network.
On the rate of learning in distributed hypothesis testing
Anusha Lalitha, T. Javidi
Pub Date: 2015-09-01 | DOI: 10.1109/ALLERTON.2015.7446979
This paper considers a problem of distributed hypothesis testing and cooperative learning. Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypothesis). The conditional distributions are known locally at the nodes, but the true parameter/hypothesis is not known. We consider a social (“non-Bayesian”) learning rule from previous literature, in which nodes first perform a Bayesian update of their belief (distribution estimate) of the parameter based on their local observation, communicate these updates to their neighbors, and then perform a “non-Bayesian” linear consensus using the log-beliefs of their neighbors. For this learning rule, we know that under mild assumptions, the belief of any node in any incorrect parameter converges to zero exponentially fast, and the exponential rate of learning is characterized by the network structure and the divergences between the observations' distributions. Tight bounds on the probability of deviating from this nominal rate in aperiodic networks are derived. The bounds are shown to hold for all conditional distributions that satisfy a mild bounded moment condition.
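The learning rule this abstract describes, a local Bayesian update followed by linear consensus on log-beliefs, can be sketched as follows. The toy consensus weights and likelihood values in the test are our own illustration, not from the paper.

```python
import numpy as np

def learning_step(beliefs, W, likelihoods):
    """One round of the social learning rule: each agent reweights its
    belief over hypotheses by the likelihood of its private signal
    (Bayesian update), then averages log-beliefs over its neighbors
    using the consensus weight matrix W and renormalizes.
    beliefs, likelihoods: (agents x hypotheses); W: (agents x agents)."""
    bayes = beliefs * likelihoods
    bayes /= bayes.sum(axis=1, keepdims=True)
    log_b = W @ np.log(bayes)                        # consensus in log domain
    new = np.exp(log_b - log_b.max(axis=1, keepdims=True))
    return new / new.sum(axis=1, keepdims=True)
```

With a connected network and at least one agent whose signals distinguish the hypotheses, repeated application drives every agent's belief in each incorrect hypothesis to zero at an exponential rate, as the abstract states.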
Sparse covariance estimation based on sparse-graph codes
Ramtin Pedarsani, Kangwook Lee, K. Ramchandran
Pub Date: 2015-09-01 | DOI: 10.1109/allerton.2015.7447061
We consider the problem of recovering a sparse covariance matrix Σ ∈ ℝ^(n×n) from m quadratic measurements y_i = a_i^T Σ a_i + w_i, 1 ≤ i ≤ m, where a_i ∈ ℝ^n is a measurement vector and w_i is additive noise. We assume that Σ has K non-zero off-diagonal entries. We first consider the simplified noiseless problem where w_i = 0 for all i. We introduce two low-complexity algorithms, a “message-passing” algorithm and a “forward” algorithm, both based on a sparse-graph coding framework. We show that under some simplifying assumptions, the message-passing algorithm can recover an arbitrarily large fraction of the K non-zero components with cK measurements, where c is a small constant that can be precisely characterized. As one instance, the message-passing algorithm can recover, with high probability, a fraction 1 − 10^(−4) of the non-zero components using only m = 6K quadratic measurements, which is within a small constant factor of the fundamental limit, with an optimal O(K) decoding complexity. We further show that the forward algorithm can recover all K non-zero entries with high probability with m = Θ(K) measurements and O(K log K) decoding complexity. However, the forward algorithm suffers from significantly larger constants in the number of required measurements, and is less practical despite its stronger theoretical guarantees. We then consider the noisy setting, and show that both proposed algorithms can be robustified to noise with m = Θ(K log^2 n) measurements. Finally, we provide extensive simulation results that support our theoretical claims.
Are generalized cut-set bounds tight for the deterministic interference channel?
Mehrdad Kiamari, A. Avestimehr
Pub Date: 2015-09-01 | DOI: 10.1109/ALLERTON.2015.7447176
We propose the idea of extended networks, which are constructed by replicating the users in the two-user deterministic interference channel (DIC) and designing the interference structure among them, such that any rate that can be achieved by each user in the original network can also be achieved simultaneously by all replicas of that user in the extended network. We demonstrate that by carefully designing extended networks and applying the generalized cut-set (GCS) bound to them, we can derive a tight converse for the two-user DIC. Furthermore, we generalize our techniques to the three-user DIC, and demonstrate that the proposed approach also yields a tight converse for the three-user DIC in the symmetric case.
Statistical and computational guarantees for the Baum-Welch algorithm
Fanny Yang, Sivaraman Balakrishnan, M. Wainwright
Pub Date: 2015-09-01 | DOI: 10.1109/ALLERTON.2015.7447067
The Hidden Markov Model (HMM) is one of the mainstays of statistical modeling of discrete time series and is widely used in many applications. Estimating an HMM from its observation process is often addressed via the Baum-Welch algorithm, which performs well empirically when initialized reasonably close to the truth. This behavior could not be explained by existing theory, which predicts susceptibility to bad local optima. In this paper we aim to close this gap and provide a framework to characterize a sufficient basin of attraction for any global optimum in which Baum-Welch is guaranteed to converge linearly to an “optimally” small ball around the global optimum. The framework is then used to determine the linear rate of convergence and a sufficient initialization region for Baum-Welch applied to a two-component isotropic hidden Markov mixture of Gaussians.
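For reference, here is one Baum-Welch (EM) iteration in its textbook scaled form for a discrete-emission HMM. The paper analyzes Gaussian emissions, so this simplified variant only illustrates the E- and M-steps whose fixed points the basin-of-attraction result concerns.

```python
import numpy as np

def baum_welch_step(obs, pi, A, B):
    """One EM iteration for a discrete-emission HMM (scaled forward-backward).
    obs: (T,) int symbols; pi: (S,) initial probs; A: (S,S) transition
    matrix; B: (S,M) emission matrix. Returns updated (pi, A, B) and the
    log-likelihood of obs under the *input* parameters."""
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S)); beta = np.zeros((T, S)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):                       # scaled forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # scaled backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1]) / c[t + 1]
    gamma = alpha * beta                        # state posteriors
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((S, S))                       # expected transition counts
    for t in range(T - 1):
        xi += alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1]) / c[t + 1]
    new_pi = gamma[0]
    new_A = xi / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.stack([gamma[obs == m].sum(axis=0) for m in range(B.shape[1])],
                     axis=1) / gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B, np.log(c).sum()
```

Iterating this step never decreases the log-likelihood; the paper's contribution is characterizing an initialization region in which the iterates also converge linearly to a small ball around the global optimum.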
Optimal multi-vehicle adaptive search with entropy objectives
Huanyu Ding, D. Castañón
Pub Date: 2015-09-01 | DOI: 10.1109/ALLERTON.2015.7447085
The problem of searching for an unknown object arises in important applications including security, medicine, and defense. Modern sensors have significant processing capabilities that allow for in situ processing and exploitation of the information to select what additional information to collect. In this paper, we discuss a class of dynamic, adaptive search problems involving multiple sensors searching for a single stationary object, and formulate them as stochastic control problems with imperfect information. The objective of these problems is related to information entropy. This allows for a complete characterization of the optimal strategies and the optimal cost for the resulting finite-horizon stochastic control problems. We show that the computation of optimal policies can be reduced to solving a finite number of strictly concave maximization problems. We further show that the solution can be decoupled into a finite number of scalar concave maximization problems. We illustrate our results with experiments using multiple sensors searching for a single object.
Is the direction of greater Granger causal influence the same as the direction of information flow?
Praveen Venkatesh, P. Grover
Pub Date: 2015-09-01 | DOI: 10.1109/ALLERTON.2015.7447069
Granger causality is an established statistical measure of the “causal influence” that one stochastic process X has on another process Y. Along with its more recent generalization, Directed Information, Granger causality has been used extensively in neuroscience, and in complex interconnected systems in general, to infer statistical causal influences. More recently, many works compare the Granger causality metrics along forward and reverse links (from X to Y and from Y to X), and interpret the direction of greater causal influence as the “direction of information flow”. In this paper, we question whether the direction yielded by comparing Granger causality or Directed Information along forward and reverse links is always the same as the direction of information flow. We explore this question using two simple theoretical experiments, in which the true direction of information flow (the “ground truth”) is known by design. The experiments are based on a communication system with a feedback channel, and employ a strategy inspired by the work of Schalkwijk and Kailath. We show that in these experiments, the direction of information flow can be opposite to the direction of greater Granger causal influence or Directed Information. We also provide information-theoretic intuition for why such counterexamples are not surprising, and why Granger causality-based information-flow inferences will only get more tenuous in larger networks. We conclude that one must not use comparison/difference of Granger causality to infer the direction of information flow.
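The kind of forward/reverse comparison this abstract questions can be reproduced with a toy lag-1 Granger measure; the function name and the AR toy model in the test are our own, not the paper's.

```python
import numpy as np

def granger_lag1(x, y):
    """Lag-1 Granger measure of x's influence on y: log of the ratio of
    residual variances when predicting y[t] from y[t-1] alone versus
    from (y[t-1], x[t-1]). Larger means stronger Granger influence."""
    Y = y[1:]
    own = np.column_stack([y[:-1], np.ones(len(Y))])
    full = np.column_stack([y[:-1], x[:-1], np.ones(len(Y))])
    r_own = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    r_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return float(np.log(np.var(r_own) / np.var(r_full)))
```

In a plain feedforward example where x drives y, granger_lag1(x, y) exceeds granger_lag1(y, x); the paper's point is that in a feedback channel built on a Schalkwijk-Kailath-style scheme, this comparison can point against the true direction of information flow.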
Inferning trees
Mina Karzand, Guy Bresler
Pub Date: 2015-09-01 | DOI: 10.1109/ALLERTON.2015.7447164
We consider the problem of learning an Ising model for the purpose of subsequently performing inference from partial observations. This is in contrast to most other work on graphical model learning, which tries to learn the true underlying graph. That objective requires a lower bound on the strength of edges for identifiability of the model. We show that in the relatively simple case of tree models, the Chow-Liu algorithm learns a distribution with accurate low-order marginals despite the model possibly being non-identifiable. In other words, a model that appears rather different from the truth nevertheless allows inference to be carried out accurately.
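The Chow-Liu algorithm referenced in this abstract fits a tree by taking a maximum-weight spanning tree under pairwise empirical mutual information. A compact sketch for discrete samples, using Prim's algorithm (interface ours):

```python
import numpy as np
from itertools import combinations

def chow_liu_tree(samples):
    """Chow-Liu: maximum-weight spanning tree under pairwise empirical
    mutual information. samples: (n, d) array of discrete values.
    Returns a list of d-1 tree edges (i, j) with i < j."""
    n, d = samples.shape
    def mi(i, j):
        xi, xj = samples[:, i], samples[:, j]
        total = 0.0
        for a in np.unique(xi):
            for b in np.unique(xj):
                p_ab = np.mean((xi == a) & (xj == b))
                if p_ab > 0:
                    p_a, p_b = np.mean(xi == a), np.mean(xj == b)
                    total += p_ab * np.log(p_ab / (p_a * p_b))
        return total
    w = {(i, j): mi(i, j) for i, j in combinations(range(d), 2)}
    in_tree, edges = {0}, []
    while len(in_tree) < d:                       # Prim: grow the tree by the
        best = max(((i, j) for i, j in w          # heaviest cut edge
                    if (i in in_tree) != (j in in_tree)),
                   key=lambda e: w[e])
        edges.append(best)
        in_tree |= set(best)
    return edges
```

The abstract's point is that the distribution this tree defines can have accurate low-order marginals, and hence support accurate inference, even when the true edge set is not identifiable from the data.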
Robust regularized ZF in decentralized Broadcast Channel with correlated CSI noise
Qianrui Li, Paul de Kerret, D. Gesbert, N. Gresset
Pub Date: 2015-09-01 | DOI: 10.1109/ALLERTON.2015.7447023
In this work we consider the Distributed Channel State Information (DCSI) Broadcast Channel (BC) setting, in which the various Transmitters (TXs) compute elements of the precoder based on their individual estimates of the global multiuser channel matrix. Previous works on the DCSI setting assume the estimation errors at different TXs to be uncorrelated; in contrast, we allow the CSI noise to be correlated across TXs. This generalization bridges the gap between the fully distributed and the centralized settings, and offers an avenue for analyzing partially centralized networks. In addition, we generalize regularized Zero Forcing (ZF) precoding by letting each TX use a different regularization coefficient. Building upon random matrix theory tools, we obtain a deterministic equivalent for the rate achieved in the large system limit, from which we can optimize the regularization coefficients at the different TXs. This extended precoding scheme, in which each TX applies its optimal regularization coefficient, is denoted “DCSI regularized ZF”, and we show by numerical simulations that it significantly reduces the negative impact of the distributed CSI configuration and is robust to the distribution of CSI quality levels across TXs.
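For orientation, here is the centralized baseline that the paper generalizes: classical regularized ZF with a single regularization coefficient. The DCSI variant, with per-TX coefficients applied to per-TX channel estimates, is not reproduced here; matrix shapes and names are our own.

```python
import numpy as np

def rzf_precoder(H, alpha, P=1.0):
    """Classical (centralized) regularized zero-forcing for a K-user MISO
    broadcast channel. H is (K users x M TX antennas); returns an (M x K)
    precoder T = (H^H H + alpha I)^{-1} H^H, scaled to total power P."""
    M = H.shape[1]
    T = np.linalg.solve(H.conj().T @ H + alpha * np.eye(M), H.conj().T)
    return T * np.sqrt(P / np.real(np.trace(T @ T.conj().T)))
```

As alpha tends to zero (with enough antennas), this reduces to plain ZF and the effective channel H @ T becomes diagonal, i.e. inter-user interference vanishes; larger alpha trades residual interference for robustness to CSI noise, which is the knob the paper optimizes per TX.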