Sagnik Bhattacharya, Amitalok J. Budkuley, S. Jaggi
Shared Randomness in Arbitrarily Varying Channels
2019 IEEE International Symposium on Information Theory (ISIT), pp. 627–631. Published July 7, 2019. DOI: 10.1109/ISIT.2019.8849801
Citations: 7
Abstract
We study an adversarial communication problem where sender Alice wishes to send a message m to receiver Bob over an arbitrarily varying channel (AVC) controlled by a malicious adversary James. We assume that Alice and Bob share randomness K unknown to James. Using K, Alice first encodes the message m to a codeword X and transmits it over the AVC. James knows the message m, the (randomized) codebook, and the codeword X. James then inputs a jamming state S to disrupt communication; we assume a state-deterministic AVC, where S completely specifies the channel noise. Bob receives a noisy version Y of the codeword X and outputs a message estimate $\hat{m}$ using Y and the shared randomness K. We study AVCs, called ‘adversary-weakened’ AVCs here, for which the availability of shared randomness strictly improves the capacity over the case where it is unavailable; the randomized coding capacity characterizes the largest rate possible when K is unrestricted. In this work, we characterize the exact threshold on the amount of shared randomness K needed to achieve the randomized coding capacity for ‘adversary-weakened’ AVCs. We show that exactly log(n) equiprobable and independent bits of randomness, shared between Alice and Bob and unknown to the adversary James, are both necessary and sufficient for achieving the randomized coding capacity for ‘adversary-weakened’ AVCs. For sufficiency, our achievability is based on a randomized code construction which combines deterministic list codes with a polynomial hashing technique that uses the shared randomness. Our converse, which establishes the necessity of log(n) bits of shared randomness, uses a known approach for binary AVCs and extends it to general ‘adversary-weakened’ AVCs using a notion of confusable codewords.
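To convey the flavor of the achievability idea — a list decoder narrows the output down to a small candidate list, and a polynomial hash keyed by the shared randomness picks out the true message — here is a minimal Python sketch. This is an illustration only, not the paper's actual construction: the field size `p`, the message representation as a symbol list, and the function names are all assumptions made for the example. The key point it demonstrates is that one uniform field element (about log of the blocklength in bits) suffices to disambiguate a short list with high probability, since two distinct degree-d polynomials agree on at most d points of the field.

```python
import random

def poly_hash(msg_symbols, key, p):
    # Evaluate the polynomial whose coefficients are the message
    # symbols at the point `key`, modulo the prime p (Horner's rule).
    acc = 0
    for s in msg_symbols:
        acc = (acc * key + s) % p
    return acc

def disambiguate(candidate_list, true_msg, p):
    # Shared randomness: one uniform field element (~log2(p) bits),
    # known to Alice and Bob but not to the adversary.
    key = random.randrange(p)
    # Alice's hash of the true message; assumed here to reach Bob reliably.
    target = poly_hash(true_msg, key, p)
    # Bob keeps only list entries whose hash matches.
    return [m for m in candidate_list if poly_hash(m, key, p) == target]
```

The true message always survives the filter; a wrong list entry survives only if the key happens to be a root of the difference polynomial, which for messages of length d+1 occurs with probability at most d/p — vanishing when p grows polynomially in the blocklength.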