Functional Error Correction for Reliable Neural Networks
Kunping Huang, P. Siegel, Anxiao Jiang
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174137
When deep neural networks (DNNs) are implemented in hardware, their weights need to be stored in memory devices. As noise accumulates in the stored weights, the DNN’s performance degrades. This paper studies how to use error-correcting codes (ECCs) to protect the weights. Unlike classic error correction in data storage, the objective is to optimize the DNN’s performance after error correction rather than to minimize the uncorrectable bit error rate in the protected bits. That is, by viewing the DNN as a function of its input, the error-correction scheme is function-oriented. A main challenge is that a DNN often has millions to hundreds of millions of weights, causing a large redundancy overhead for ECCs, and the relationship between the weights and the DNN’s performance can be highly complex. To address this challenge, we propose a Selective Protection (SP) scheme, which chooses only a subset of important bits for ECC protection. To find such bits and achieve an optimized tradeoff between ECC redundancy and DNN performance, we present an algorithm based on deep reinforcement learning. Experimental results verify that, compared to the natural baseline scheme, the proposed algorithm achieves substantially better performance for the functional error-correction task.
{"title":"Functional Error Correction for Reliable Neural Networks","authors":"Kunping Huang, P. Siegel, Anxiao Jiang","doi":"10.1109/ISIT44484.2020.9174137","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174137","url":null,"abstract":"When deep neural networks (DNNs) are implemented in hardware, their weights need to be stored in memory devices. As noise accumulates in the stored weights, the DNN’s performance will degrade. This paper studies how to use error correcting codes (ECCs) to protect the weights. Different from classic error correction in data storage, the optimization objective is to optimize the DNN’s performance after error correction, instead of minimizing the Uncorrectable Bit Error Rate in the protected bits. That is, by seeing the DNN as a function of its input, the error correction scheme is function-oriented. A main challenge is that a DNN often has millions to hundreds of millions of weights, causing a large redundancy overhead for ECCs, and the relationship between the weights and its DNN’s performance can be highly complex. To address the challenge, we propose a Selective Protection (SP) scheme, which chooses only a subset of important bits for ECC protection. To find such bits and achieve an optimized tradeoff between ECC’s redundancy and DNN’s performance, we present an algorithm based on deep reinforcement learning. Experimental results verify that compared to the natural baseline scheme, the proposed algorithm achieves substantially better performance for the functional error correction task.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116404464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

On D-ary Fano Codes
F. Cicalese, Eros Rossi
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174023
We define a D-ary Fano code based on a natural generalization of the splitting criterion of the binary Fano code to the case of D-ary codes. We show that this choice allows for an efficient computation of the code tree and also leads to a strong guarantee with respect to the redundancy of the resulting code. For any source distribution $\mathbf{p} = (p_1, \ldots, p_n)$:
1) for D = 2, 3, 4 the resulting code satisfies
$$\bar{L} - H_D(\mathbf{p}) \leq 1 - p_{\min}, \tag{1}$$
where $\bar{L}$ is the average codeword length, $p_{\min} = \min_i p_i$, and $H_D(\mathbf{p}) = \sum_{i=1}^{n} p_i \log_D \frac{1}{p_i}$ is the D-ary entropy;
2) inequality (1) holds for every D ≥ 2 whenever every internal node has exactly D children in the code tree produced by our construction.
We also formulate a conjecture on the basic step applied by our algorithm in each internal node of the code tree that, if true, would imply that the bound in (1) is actually achieved for all D ≥ 2 without the restriction of item 2.
{"title":"On D-ary Fano Codes","authors":"F. Cicalese, Eros Rossi","doi":"10.1109/ISIT44484.2020.9174023","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174023","url":null,"abstract":"We define a D-ary Fano code based on a natural generalization of the splitting criterion of the binary Fano code to the case of D-ary code. We show that this choice allows for an efficient computation of the code tree and also leads to a strong guarantee with respect to the redundancy of the resulting code: for any source distribution p = p1,… pn1) for D = 2, 3,4 the resulting code satisfiesbegin{equation*}bar L - {H_D}({mathbf{p}}) leq 1 - {p_{min }}, tag{1}end{equation*}where $bar L$ is the average codeword length, pmin = mini pi, and ${H_D}({mathbf{p}}) = sumnolimits_{i = 1}^n {{p_i}{{log }_D}frac{1}{{{p_i}}}} $ (the D-ary entropy);2) inequality (1) holds for every D ≥ 2 whenever every internal node has exactly D children in the code tree produced by our construction.We also formulate a conjecture on the basic step applied by our algorithm in each internal node of the code tree, that, if true, would imply that the bound in (1) is actually achieved for all D ≥ 2 without the restriction of item 2.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129548947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Empirical Properties of Good Channel Codes
Qinghua Ding, S. Jaggi, Shashank Vatedka, Yihan Zhang
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174129
In this article, we revisit the classical problem of channel coding and obtain novel results on properties of capacity-achieving codes. Specifically, we give a linear algebraic characterization of the set of capacity-achieving input distributions for discrete memoryless channels. This allows us to characterize the dimension of the manifold on which the capacity-achieving distributions lie. We then proceed by examining empirical properties of capacity-achieving codebooks, showing that the joint type of k-tuples of codewords in a good code must be close to the k-fold product of the capacity-achieving input distribution. While this conforms with the intuition that all capacity-achieving codes must behave like random capacity-achieving codes, we also show that some properties of random coding ensembles do not hold for all codes. We prove this by showing that there exist pairs of communication problems such that random code ensembles simultaneously attain the capacities of both problems, but certain ensembles (superposition ensembles) do not. Due to lack of space, several proofs have been omitted but can be found at https://sites.google.com/view/yihan/ [1].
{"title":"Empirical Properties of Good Channel Codes","authors":"Qinghua Ding, S. Jaggi, Shashank Vatedka, Yihan Zhang","doi":"10.1109/ISIT44484.2020.9174129","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174129","url":null,"abstract":"In this article, we revisit the classical problem of channel coding and obtain novel results on properties of capacity- achieving codes. Specifically, we give a linear algebraic characterization of the set of capacity-achieving input distributions for discrete memoryless channels. This allows us to characterize the dimension of the manifold on which the capacity-achieving distributions lie. We then proceed by examining empirical properties of capacity-achieving codebooks by showing that the joint-type of k-tuples of codewords in a good code must be close to the k- fold product of the capacity-achieving input distribution. While this conforms with the intuition that all capacity-achieving codes must behave like random capacity-achieving codes, we also show that some properties of random coding ensembles do not hold for all codes. We prove this by showing that there exist pairs of communication problems such that random code ensembles simultaneously attain capacities of both problems, but certain (superposition ensembles) do not.Due to lack of space, several proofs have been omitted but can be found at https://sites.google.com/view/yihan/ [1]","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"229 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129822061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Asymptotic Absorbing Set Enumerators for Non-Binary Protograph-Based LDPC Code Ensembles
E. B. Yacoub, G. Liva
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174036
The finite-length absorbing set enumerators for non-binary protograph-based low-density parity-check (LDPC) code ensembles are derived. An efficient method for the evaluation of the asymptotic absorbing set distributions is presented and evaluated.
{"title":"Asymptotic Absorbing Set Enumerators for Non-Binary Protograph-Based LDPC Code Ensembles","authors":"E. B. Yacoub, G. Liva","doi":"10.1109/ISIT44484.2020.9174036","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174036","url":null,"abstract":"The finite-length absorbing set enumerators for non-binary protograph based low-density parity-check (LDPC) ensembles are derived. An efficient method for the evaluation of the asymptotic absorbing set distributions is presented and evaluated.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"224 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129830523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A Compression Perspective on Secrecy Measures
Yanina Y. Shkel, H. Poor
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9173959
The relationship between secrecy, compression rate, and shared secret key rate is surveyed under perfect secrecy, equivocation, maximal leakage, local differential privacy, and secrecy by design. It is emphasized that the utility cost of jointly compressing and securing data is very sensitive to (a) the adopted secrecy metric and (b) the specifics of the compression setting. That is, although it is well known that the fundamental limits of traditional lossless variable-length compression and almost-lossless fixed-length compression are intimately related, this relationship collapses for many secrecy measures. The asymptotic fundamental limit of almost-lossless fixed-length compression remains entropy for all secrecy measures studied. However, the fundamental limits of lossless variable-length compression are no longer entropy under perfect secrecy, secrecy by design, and sometimes under local differential privacy. Moreover, there are significant differences in secret key/secrecy tradeoffs between lossless and almost-lossless compression under perfect secrecy, secrecy by design, maximal leakage, and local differential privacy.
{"title":"A compression perspective on secrecy measures","authors":"Yanina Y. Shkel, H. Poor","doi":"10.1109/ISIT44484.2020.9173959","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9173959","url":null,"abstract":"The relationship between secrecy, compression rate, and shared secret key rate is surveyed under perfect secrecy, equivocation, maximal leakage, local differential privacy, and secrecy by design. It is emphasized that the utility cost of jointly compressing and securing data is very sensitive to (a) the adopted secrecy metric and (b) the specifics of the compression setting. That is, although it is well-known that the fundamental limits of traditional lossless variable-length compression and almost-lossless fixed-length compression are intimately related, this relationship collapses for many secrecy measures. The asymptotic fundamental limit of almost-lossless fixed length compression remains entropy for all secrecy measures studied. However, the fundamental limits of lossless variable-length compression are no longer entropy under perfect secrecy, secrecy by design, and sometimes under local differential privacy. Moreover, there are significant differences in secret key/secrecy tradeoffs between lossless and almost-lossless compression under perfect secrecy, secrecy by design, maximal leakage, and local differential privacy.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127232624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An Improved Regret Bound for Thompson Sampling in the Gaussian Linear Bandit Setting
Cem Kalkanli, Ayfer Özgür
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174371
Thompson sampling has been of significant recent interest due to its wide range of applicability to online learning problems and its good empirical and theoretical performance. In this paper, we analyze the performance of Thompson sampling in the canonical Gaussian linear bandit setting. We prove that the Bayesian regret of Thompson sampling in this setting is bounded by $O(\sqrt{T\log(T)})$, improving on an earlier bound of $O(\sqrt{T}\log(T))$ in the literature for the case of infinite, compact action sets. Our proof relies on a Cauchy–Schwarz type inequality which can be of interest in its own right.
{"title":"An Improved Regret Bound for Thompson Sampling in the Gaussian Linear Bandit Setting","authors":"Cem Kalkanli, Ayfer Özgür","doi":"10.1109/ISIT44484.2020.9174371","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174371","url":null,"abstract":"Thompson sampling has been of significant recent interest due to its wide range of applicability to online learning problems and its good empirical and theoretical performance. In this paper, we analyze the performance of Thompson sampling in the canonical Gaussian linear bandit setting. We prove that the Bayesian regret of Thompson sampling in this setting is bounded by$O(sqrt {Tlog (T)} )$ improving on an earlier bound of $O(sqrt T log (T))$ n the literature for the case of the infinite, and compact action set. Our proof relies on a Cauchy–Schwarz type inequality which can be of interest in its own right.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128941403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Error-correcting Codes for Short Tandem Duplication and Substitution Errors
Yuanyuan Tang, Farzad Farnoud
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174444
Due to its high data density and longevity, DNA is considered a promising storage medium for satisfying ever-increasing data storage needs. However, the diversity of errors that occur in DNA sequences makes efficient error correction a challenging task. This paper addresses the simultaneous correction of two types of errors, namely short tandem duplication and substitution errors. We focus on tandem repeats of length at most 3 and design codes for correcting an arbitrary number of duplication errors and one substitution error. Because a substituted symbol can be duplicated many times (possibly as part of longer substrings), a single substitution can affect an unbounded substring of the retrieved word. However, we show that with appropriate preprocessing, the effect may be limited to a substring of finite length, thus making efficient error correction possible. We construct a code for correcting the aforementioned errors and provide lower bounds for its rate. In particular, compared to optimal codes correcting only duplication errors, numerical results show that the asymptotic cost of protecting against an additional substitution is only 0.003 bits/symbol when the alphabet has size 4, an important case corresponding to data storage in DNA.
{"title":"Error-correcting Codes for Short Tandem Duplication and Substitution Errors","authors":"Yuanyuan Tang, Farzad Farnoud","doi":"10.1109/ISIT44484.2020.9174444","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174444","url":null,"abstract":"Due to its high data density and longevity, DNA is considered a promising storage medium for satisfying ever-increasing data storage needs. However, the diversity of errors that occur in DNA sequences makes efficient error-correction a challenging task. This paper aims to address simultaneously correcting two types of errors, namely, short tandem duplication and substitution errors. We focus on tandem repeats of length at most 3 and design codes for correcting an arbitrary number of duplication errors and one substitution error. Because a substituted symbol can be duplicated many times (possibly as part of longer substrings), a single substitution can affect an unbounded substring of the retrieved word. However, we show that with appropriate preprocessing, the effect may be limited to a substring of finite length, thus making efficient error-correction possible. We construct a code for correcting the aforementioned errors and provide lower bounds for its rate. In particular, compared to optimal codes correcting only duplication errors, numerical results show that the asymptotic cost of protecting against an additional substitution is only 0.003 bits/symbol when the alphabet has size 4, an important case corresponding to data storage in DNA.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129040005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Measurement Dependent Noisy Search with Stochastic Coefficients
N. Ronquillo, T. Javidi
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174019
Consider the problem of recovering an unknown sparse unit vector via a sequence of linear observations with stochastic magnitude and additive noise. An agent sequentially selects measurement vectors and collects observations corrupted by noise that depends on the chosen measurement vector. We propose two algorithms of varying computational complexity for sequentially and adaptively designing measurement vectors. The proposed algorithms aim to augment the learning of the unit common support vector with an estimate of the stochastic coefficient. Numerically, we study the probability of error in estimating the support achieved by our proposed algorithms and demonstrate improvements over random-coding based strategies utilized in prior works.
{"title":"Measurement Dependent Noisy Search with Stochastic Coefficients","authors":"N. Ronquillo, T. Javidi","doi":"10.1109/ISIT44484.2020.9174019","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174019","url":null,"abstract":"Consider the problem of recovering an unknown sparse unit vector via a sequence of linear observations with stochastic magnitude and additive noise. An agent sequentially selects measurement vectors and collects observations subject to noise affected by the measurement vector. We propose two algorithms of varying computational complexity for sequentially and adaptively designing measurement vectors. The proposed algorithms aim to augment the learning of the unit common support vector with an estimate of the stochastic coefficient. Numerically, we study the probability of error in estimating the support achieved by our proposed algorithms and demonstrate improvements over random-coding based strategies utilized in prior works.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122376628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

PolyShard: Coded Sharding Achieves Linearly Scaling Efficiency and Security Simultaneously
Songze Li, Mingchao Yu, Chien-Sheng Yang, A. Avestimehr, Sreeram Kannan, P. Viswanath
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174305
Today’s blockchain designs suffer from a trilemma claiming that no blockchain system can simultaneously achieve decentralization, security, and performance scalability. For current blockchain systems, as more nodes join the network, the efficiency of the system (computation, communication, and storage) stays constant at best. A leading idea for enabling blockchains to scale efficiency is the notion of sharding: different subsets of nodes handle different portions of the blockchain, thereby reducing the load for each individual node. However, existing sharding proposals achieve efficiency scaling by compromising on trust: corrupting the nodes in a given shard leads to the permanent loss of the corresponding portion of data. In this paper, we settle the trilemma by demonstrating a new protocol for coded storage and computation in blockchains. In particular, we propose PolyShard, a "polynomially coded sharding" scheme that achieves information-theoretic upper bounds on storage efficiency, system throughput, and trust, thus enabling a truly scalable system.
{"title":"PolyShard: Coded Sharding Achieves Linearly Scaling Efficiency and Security Simultaneously","authors":"Songze Li, Mingchao Yu, Chien-Sheng Yang, A. Avestimehr, Sreeram Kannan, P. Viswanath","doi":"10.1109/ISIT44484.2020.9174305","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174305","url":null,"abstract":"Today’s blockchain designs suffer from a trilemma claiming that no blockchain system can simultaneously achieve decentralization, security, and performance scalability. For current blockchain systems, as more nodes join the network, the efficiency of the system (computation, communication, and storage) stays constant at best. A leading idea for enabling blockchains to scale efficiency is the notion of sharding: different subsets of nodes handle different portions of the blockchain, thereby reducing the load for each individual node. However, existing sharding proposals achieve efficiency scaling by compromising on trust - corrupting the nodes in a given shard will lead to the permanent loss of the corresponding portion of data. In this paper, we settle the trilemma by demonstrating a new protocol for coded storage and computation in blockchains. In particular, we propose PolyShard: \"polynomially coded sharding\" scheme that achieves information-theoretic upper bounds on the efficiency of the storage, system throughput, as well as on trust, thus enabling a truly scalable system.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131960099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An Optimal Linear Error Correcting Scheme for Shared Caching with Small Cache Sizes
Sonu Rathi, Anoop Thomas, Monolina Dutta
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174076
Coded caching is a technique which enables the server to reduce the peak traffic rate by making use of the caches available at the users. In the classical coded caching problem, a centralized server is connected to many users through an error-free link, and each user has a dedicated cache memory. This paper considers the shared caching problem, an extension of the coded caching problem in which each cache memory may be shared by more than one user. An existing prefetching and delivery scheme for the shared caching problem, with a better rate-memory tradeoff than other known schemes, is studied, and the optimality of the scheme is proved using techniques from index coding. The worst-case rate of the coded caching problem is also obtained using cut-set bound techniques. An optimal linear error-correcting delivery scheme is obtained for the shared caching problem satisfying certain conditions.
{"title":"An Optimal Linear Error Correcting Scheme for Shared Caching with Small Cache Sizes","authors":"Sonu Rathi, Anoop Thomas, Monolina Dutta","doi":"10.1109/ISIT44484.2020.9174076","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174076","url":null,"abstract":"Coded caching is a technique which enables the server to reduce the peak traffic rate by making use of the caches available at each user. In the classical coded caching problem, a centralized server is connected to many users through an error free link. Each user have a dedicated cache memory. This paper considers the shared caching problem which is an extension of the coded caching problem in which each cache memory could be shared by more than one user. An existing prefetching and delivery scheme for the shared caching problem with better rate-memory tradeoff than the rest is studied and the optimality of the scheme is proved by using techniques from index coding. The worst case rate of the coded caching problem is also obtained by using cut-set bound techniques. An optimal linear error correcting delivery scheme is obtained for the shared caching problem satisfying certain conditions.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132148658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}