
Latest Publications: 2020 IEEE International Symposium on Information Theory (ISIT)

Functional Error Correction for Reliable Neural Networks
Pub Date : 2020-06-01 DOI: 10.1109/ISIT44484.2020.9174137
Kunping Huang, P. Siegel, Anxiao Jiang
When deep neural networks (DNNs) are implemented in hardware, their weights need to be stored in memory devices. As noise accumulates in the stored weights, the DNN’s performance degrades. This paper studies how to use error-correcting codes (ECCs) to protect the weights. Unlike classic error correction in data storage, the objective is to optimize the DNN’s performance after error correction, rather than to minimize the uncorrectable bit error rate in the protected bits. That is, by viewing the DNN as a function of its input, the error-correction scheme is function-oriented. A main challenge is that a DNN often has millions to hundreds of millions of weights, causing a large redundancy overhead for ECCs, and the relationship between the weights and the DNN’s performance can be highly complex. To address this challenge, we propose a Selective Protection (SP) scheme, which chooses only a subset of important bits for ECC protection. To find such bits and achieve an optimized tradeoff between the ECC’s redundancy and the DNN’s performance, we present an algorithm based on deep reinforcement learning. Experimental results verify that, compared to the natural baseline scheme, the proposed algorithm achieves substantially better performance on the functional error-correction task.
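As a rough illustration of the selective-protection idea (not the paper's learned deep-RL policy), one can rank weight bits by a hand-made importance proxy and protect only the top few. The proxy below, |weight| times the bit's place value, and all names are illustrative assumptions:

```python
# Hypothetical sketch of a Selective Protection (SP) policy: rank every
# (weight index, bit position) pair by a crude importance proxy and mark
# only the top-`budget` bits for ECC protection. The proxy is an assumed
# stand-in for the paper's reinforcement-learned selection.

def bit_importance(weights, n_bits=8):
    """Score each bit: flipping bit b perturbs the stored value by ~2**b,
    weighted here by the magnitude of the weight it belongs to."""
    scores = []
    for i, w in enumerate(weights):
        for b in range(n_bits):
            scores.append(((i, b), abs(w) * (2 ** b)))
    return scores

def select_protected_bits(weights, budget, n_bits=8):
    """Return the `budget` highest-scoring bits to cover with ECC."""
    ranked = sorted(bit_importance(weights, n_bits), key=lambda s: -s[1])
    return {pos for pos, _ in ranked[:budget]}
```

With weights `[0.9, -0.1, 0.5]` and a budget of 4, the scheme protects the two most significant bits of the two largest-magnitude weights, leaving the small weight unprotected.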
Citations: 6
On D-ary Fano Codes
Pub Date : 2020-06-01 DOI: 10.1109/ISIT44484.2020.9174023
F. Cicalese, Eros Rossi
We define a D-ary Fano code based on a natural generalization of the splitting criterion of the binary Fano code to the D-ary case. We show that this choice allows for an efficient computation of the code tree and also leads to a strong guarantee on the redundancy of the resulting code. For any source distribution $\mathbf{p} = (p_1, \ldots, p_n)$: 1) for D = 2, 3, 4 the resulting code satisfies \begin{equation*}\bar L - H_D(\mathbf{p}) \leq 1 - p_{\min}, \tag{1}\end{equation*} where $\bar L$ is the average codeword length, $p_{\min} = \min_i p_i$, and $H_D(\mathbf{p}) = \sum_{i=1}^n p_i \log_D \frac{1}{p_i}$ is the D-ary entropy; 2) inequality (1) holds for every D ≥ 2 whenever every internal node has exactly D children in the code tree produced by our construction. We also formulate a conjecture on the basic step applied by our algorithm at each internal node of the code tree which, if true, would imply that the bound in (1) is achieved for all D ≥ 2 without the restriction of item 2.
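The splitting idea can be sketched as follows; the greedy near-equal-mass partition rule here is an assumption for illustration and need not match the paper's exact criterion:

```python
# Sketch of a D-ary Fano construction (assumed splitting rule, for
# illustration only): recursively cut the descending-sorted probability
# list into up to D contiguous groups of roughly equal total mass.
import math

def split_groups(probs, D):
    """Greedily partition a descending-sorted (symbol, prob) list."""
    total = sum(p for _, p in probs)
    k = min(D, len(probs))
    groups, cur, cum = [], [], 0.0
    for j, item in enumerate(probs):
        cur.append(item)
        cum += item[1]
        remaining = len(probs) - j - 1
        need = k - len(groups) - 1  # groups still owed after this one
        # close the group at its fair share of mass, or when forced to
        # leave one item for each remaining group
        if len(groups) < k - 1 and (cum >= total * (len(groups) + 1) / k
                                    or remaining == need):
            groups.append(cur)
            cur = []
    groups.append(cur)
    return groups

def dary_fano(probs, D=3, prefix=""):
    """Recursively assign D-ary codewords to a sorted distribution."""
    if len(probs) == 1:
        return {probs[0][0]: prefix or "0"}
    code = {}
    for d, grp in enumerate(split_groups(probs, D)):
        code.update(dary_fano(grp, D, prefix + str(d)))
    return code

probs = [("a", 0.5), ("b", 0.2), ("c", 0.15), ("d", 0.1), ("e", 0.05)]
code = dary_fano(probs, D=3)
avg_len = sum(p * len(code[s]) for s, p in probs)
H3 = sum(p * math.log(1 / p, 3) for _, p in probs)
```

For this five-symbol example the resulting ternary code has average length 1.3, and bound (1) holds with room to spare: avg_len - H3 is about 0.09, well below 1 - p_min = 0.95.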
Citations: 0
Empirical Properties of Good Channel Codes
Pub Date : 2020-06-01 DOI: 10.1109/ISIT44484.2020.9174129
Qinghua Ding, S. Jaggi, Shashank Vatedka, Yihan Zhang
In this article, we revisit the classical problem of channel coding and obtain novel results on properties of capacity-achieving codes. Specifically, we give a linear-algebraic characterization of the set of capacity-achieving input distributions for discrete memoryless channels. This allows us to characterize the dimension of the manifold on which the capacity-achieving distributions lie. We then proceed by examining empirical properties of capacity-achieving codebooks, showing that the joint type of k-tuples of codewords in a good code must be close to the k-fold product of the capacity-achieving input distribution. While this conforms with the intuition that all capacity-achieving codes must behave like random capacity-achieving codes, we also show that some properties of random coding ensembles do not hold for all codes. We prove this by showing that there exist pairs of communication problems such that random code ensembles simultaneously attain the capacities of both problems, but certain ensembles (superposition ensembles) do not. Due to lack of space, several proofs have been omitted but can be found at https://sites.google.com/view/yihan/ [1].
Citations: 0
Asymptotic Absorbing Set Enumerators for Non-Binary Protograph-Based LDPC Code Ensembles
Pub Date : 2020-06-01 DOI: 10.1109/ISIT44484.2020.9174036
E. B. Yacoub, G. Liva
The finite-length absorbing set enumerators for non-binary protograph-based low-density parity-check (LDPC) code ensembles are derived. An efficient method for evaluating the asymptotic absorbing set distributions is presented and evaluated.
Citations: 4
A compression perspective on secrecy measures
Pub Date : 2020-06-01 DOI: 10.1109/ISIT44484.2020.9173959
Yanina Y. Shkel, H. Poor
The relationship between secrecy, compression rate, and shared secret key rate is surveyed under perfect secrecy, equivocation, maximal leakage, local differential privacy, and secrecy by design. It is emphasized that the utility cost of jointly compressing and securing data is very sensitive to (a) the adopted secrecy metric and (b) the specifics of the compression setting. That is, although it is well known that the fundamental limits of traditional lossless variable-length compression and almost-lossless fixed-length compression are intimately related, this relationship collapses for many secrecy measures. The asymptotic fundamental limit of almost-lossless fixed-length compression remains entropy for all secrecy measures studied. However, the fundamental limits of lossless variable-length compression are no longer entropy under perfect secrecy, secrecy by design, and sometimes under local differential privacy. Moreover, there are significant differences in secret key/secrecy tradeoffs between lossless and almost-lossless compression under perfect secrecy, secrecy by design, maximal leakage, and local differential privacy.
Citations: 15
An Improved Regret Bound for Thompson Sampling in the Gaussian Linear Bandit Setting
Pub Date : 2020-06-01 DOI: 10.1109/ISIT44484.2020.9174371
Cem Kalkanli, Ayfer Özgür
Thompson sampling has been of significant recent interest due to its wide applicability to online learning problems and its good empirical and theoretical performance. In this paper, we analyze the performance of Thompson sampling in the canonical Gaussian linear bandit setting. We prove that the Bayesian regret of Thompson sampling in this setting is bounded by $O(\sqrt{T\log(T)})$, improving on an earlier bound of $O(\sqrt{T}\log(T))$ in the literature for the case of an infinite, compact action set. Our proof relies on a Cauchy–Schwarz type inequality which can be of interest in its own right.
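For intuition, here is a minimal Thompson sampling loop for Gaussian arms with unit observation noise and a standard normal prior. This is the linear bandit restricted to basis-vector actions, where the posterior factorizes per arm; it is a toy sketch, not anything from the paper's analysis, and all parameter choices are illustrative:

```python
# Toy Thompson sampling for the Gaussian linear bandit restricted to
# basis-vector actions (so each coordinate of theta is an independent arm).
# Prior N(0, 1) per coordinate, unit-variance reward noise: after n pulls
# with reward sum s, the posterior is N(s / (n + 1), 1 / (n + 1)).
import random

def thompson_gaussian(means, T, seed=0):
    """Run T rounds; return cumulative (pseudo-)regret."""
    rng = random.Random(seed)
    n = [0] * len(means)        # pulls per arm
    s = [0.0] * len(means)      # reward sum per arm
    best = max(means)
    regret = 0.0
    for _ in range(T):
        # sample one parameter per arm from its Gaussian posterior
        samples = [rng.gauss(s[i] / (n[i] + 1), (1 / (n[i] + 1)) ** 0.5)
                   for i in range(len(means))]
        a = samples.index(max(samples))   # act greedily w.r.t. the sample
        r = means[a] + rng.gauss(0, 1)    # noisy reward
        n[a] += 1
        s[a] += r
        regret += best - means[a]
    return regret
```

Plotting the returned regret against T would show the sublinear growth that the $O(\sqrt{T\log(T)})$ bound quantifies in the general linear setting.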
Citations: 3
Error-correcting Codes for Short Tandem Duplication and Substitution Errors
Pub Date : 2020-06-01 DOI: 10.1109/ISIT44484.2020.9174444
Yuanyuan Tang, Farzad Farnoud
Due to its high data density and longevity, DNA is considered a promising storage medium for satisfying ever-increasing data storage needs. However, the diversity of errors that occur in DNA sequences makes efficient error correction a challenging task. This paper addresses simultaneously correcting two types of errors, namely short tandem duplication and substitution errors. We focus on tandem repeats of length at most 3 and design codes for correcting an arbitrary number of duplication errors and one substitution error. Because a substituted symbol can be duplicated many times (possibly as part of longer substrings), a single substitution can affect an unbounded substring of the retrieved word. However, we show that with appropriate preprocessing, the effect may be limited to a substring of finite length, thus making efficient error correction possible. We construct a code for correcting the aforementioned errors and provide lower bounds on its rate. In particular, compared to optimal codes correcting only duplication errors, numerical results show that the asymptotic cost of protecting against an additional substitution is only 0.003 bits/symbol when the alphabet has size 4, an important case corresponding to data storage in DNA.
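To illustrate the error model, the hypothetical helper below collapses short tandem duplications (repeat length at most 3) by a naive fixed-point scan; it sketches the duplication channel, not the paper's code construction:

```python
# Illustrative helpers for the short-tandem-duplication channel: a tandem
# duplication turns a substring x into xx (here len(x) <= 3); repeatedly
# undoing such duplications collapses a word to a duplication root.

def remove_one_duplication(s, max_len=3):
    """Undo the first tandem duplication xx with len(x) <= max_len.
    Returns (new_string, changed)."""
    for i in range(len(s)):
        for L in range(1, max_len + 1):
            if i + 2 * L <= len(s) and s[i:i + L] == s[i + L:i + 2 * L]:
                return s[:i + L] + s[i + 2 * L:], True
    return s, False

def duplication_root(s, max_len=3):
    """Collapse s to a fixed point with no short tandem duplications."""
    changed = True
    while changed:
        s, changed = remove_one_duplication(s, max_len)
    return s
```

For example, "ACACGT" contains the length-2 tandem repeat "ACAC" and collapses to "ACGT"; a substitution inside a repeat before further duplications is what makes the combined channel hard.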
Citations: 3
Measurement Dependent Noisy Search with Stochastic Coefficients
Pub Date : 2020-06-01 DOI: 10.1109/ISIT44484.2020.9174019
N. Ronquillo, T. Javidi
Consider the problem of recovering an unknown sparse unit vector via a sequence of linear observations with stochastic magnitude and additive noise. An agent sequentially selects measurement vectors and collects observations whose noise depends on the chosen measurement vector. We propose two algorithms of varying computational complexity for sequentially and adaptively designing measurement vectors. The proposed algorithms aim to augment the learning of the unit common support vector with an estimate of the stochastic coefficient. Numerically, we study the probability of error in estimating the support achieved by our proposed algorithms and demonstrate improvements over the random-coding-based strategies used in prior works.
Citations: 0
PolyShard: Coded Sharding Achieves Linearly Scaling Efficiency and Security Simultaneously
Pub Date : 2020-06-01 DOI: 10.1109/ISIT44484.2020.9174305
Songze Li, Mingchao Yu, Chien-Sheng Yang, A. Avestimehr, Sreeram Kannan, P. Viswanath
Today’s blockchain designs suffer from a trilemma: no blockchain system can simultaneously achieve decentralization, security, and performance scalability. For current blockchain systems, as more nodes join the network, the efficiency of the system (computation, communication, and storage) stays constant at best. A leading idea for enabling blockchains to scale efficiency is sharding: different subsets of nodes handle different portions of the blockchain, thereby reducing the load on each individual node. However, existing sharding proposals achieve efficiency scaling by compromising on trust: corrupting the nodes in a given shard leads to the permanent loss of the corresponding portion of data. In this paper, we settle the trilemma by demonstrating a new protocol for coded storage and computation in blockchains. In particular, we propose PolyShard, a "polynomially coded sharding" scheme that achieves information-theoretic upper bounds on storage efficiency, system throughput, and trust, thus enabling a truly scalable system.
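A toy sketch in the spirit of polynomially coded storage (my simplification, not the PolyShard protocol itself): K shards define a degree-(K-1) polynomial over a small prime field, each node stores one evaluation, and any K stored evaluations recover every shard by Lagrange interpolation:

```python
# Toy polynomially coded storage over GF(P). Shard j (0-indexed) is the
# value of the interpolating polynomial at x = j + 1; node i stores the
# polynomial's value at its own point. Any K evaluations suffice to
# rebuild the polynomial, so losing nodes loses no data.

P = 97  # small prime field; shards are single field elements here

def lagrange_eval(points, values, x):
    """Evaluate the unique interpolating polynomial over GF(P) at x."""
    total = 0
    for xj, yj in zip(points, values):
        num, den = 1, 1
        for xm in points:
            if xm != xj:
                num = num * (x - xm) % P
                den = den * (xj - xm) % P
        # modular inverse of den via Fermat's little theorem
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

def encode(shards, storage_xs):
    """Each storage node stores one evaluation of the shard polynomial."""
    pts = list(range(1, len(shards) + 1))
    return [lagrange_eval(pts, shards, x) for x in storage_xs]

def recover_shard(storage_xs, stored, j, K):
    """Recover shard j from K stored evaluations (here the first K)."""
    return lagrange_eval(storage_xs[:K], stored[:K], j + 1)
```

With 3 shards stored on 4 nodes, any one node can fail and every shard is still recoverable from the remaining evaluations; this redundancy-via-interpolation is the coding idea that sharded storage builds on.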
Citations: 1
An Optimal Linear Error Correcting Scheme for Shared Caching with Small Cache Sizes
Pub Date : 2020-06-01 DOI: 10.1109/ISIT44484.2020.9174076
Sonu Rathi, Anoop Thomas, Monolina Dutta
Coded caching is a technique that enables the server to reduce the peak traffic rate by making use of the caches available at each user. In the classical coded caching problem, a centralized server is connected to many users through an error-free link, and each user has a dedicated cache memory. This paper considers the shared caching problem, an extension of the coded caching problem in which each cache memory can be shared by more than one user. An existing prefetching and delivery scheme for the shared caching problem, with a better rate-memory tradeoff than other known schemes, is studied, and the optimality of the scheme is proved using techniques from index coding. The worst-case rate of the coded caching problem is also obtained using cut-set bound techniques. An optimal linear error-correcting delivery scheme is obtained for the shared caching problem under certain conditions.
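The gain from coded delivery is easiest to see in the textbook two-user, two-file example (classical coded caching with dedicated caches, not this paper's shared-cache scheme): each user caches complementary halves of both files, and a single XOR broadcast serves both demands:

```python
# Classical coded-caching toy: 2 files (A = A1|A2, B = B1|B2), 2 users.
# User 1 caches the first half of each file, user 2 the second half.
# When user 1 requests A and user 2 requests B, broadcasting A2 xor B1
# lets each user cancel the part it already has - one transmission
# instead of two uncoded half-file transmissions.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

A1, A2 = b"AA", b"aa"
B1, B2 = b"BB", b"bb"

cache1 = {"A1": A1, "B1": B1}   # user 1's cache (first halves)
cache2 = {"A2": A2, "B2": B2}   # user 2's cache (second halves)

broadcast = xor_bytes(A2, B1)   # single coded transmission

user1_A = cache1["A1"] + xor_bytes(broadcast, cache1["B1"])  # recovers A
user2_B = xor_bytes(broadcast, cache2["A2"]) + cache2["B2"]  # recovers B
```

The shared-cache setting studied in the paper generalizes this: several users read the same cache, and the delivery code must serve all of them while remaining robust to transmission errors.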
Citations: 2