
Proceedings of the 5th conference on Innovations in theoretical computer science: Latest Publications

Partial tests, universal tests and decomposability
E. Fischer, Yonatan Goldhirsh, Oded Lachish
For a property P and a sub-property P', we say that P is P'-partially testable with q queries if there exists an algorithm that distinguishes, with high probability, inputs in P' from inputs ε-far from P, using q queries. Some natural properties require many queries to test, but can be partitioned into a small number of subsets for which they are partially testable with very few queries, sometimes even a number independent of the input size. For properties over {0,1}, the notion of being thus partitionable ties in closely with Merlin-Arthur proofs of proximity (MAPs) as defined independently in [14]: a partition into r partially-testable properties is the same as a Merlin-Arthur system where the proof consists of the identity of one of the r partially-testable properties, giving a 2-way translation to an O(log r)-size proof. Our main result is that for some low-complexity properties a partition as above cannot exist, and moreover that for each of our properties there does not exist even a single sub-property featuring both a large size and a query-efficient partial test, in particular improving the lower bound set in [14]. For this we use neither the traditional Yao-type arguments nor the more recent communication complexity method, but open up a new approach for proving lower bounds. First, we use entropy analysis, which allows us to apply our arguments directly to 2-sided tests, thus avoiding the cost of the conversion in [14] from 2-sided to 1-sided tests. Broadly speaking, we use "distinguishing instances" of a supposed test to show that a uniformly random choice of a member of the sub-property has "low entropy areas", ultimately leading to it having a low total entropy and hence a small base set. Additionally, to have our arguments apply to adaptive tests, we use a mechanism of "rearranging" the input bits (through a decision tree that adaptively reads the entire input) to expose the low entropy that would otherwise not be apparent. We also explore the possibility of a connection in the other direction, namely whether the existence of a good partition (or MAP) can lead to a relatively query-efficient standard property test. We provide some preliminary results concerning this question, including a simple lower bound on the possible trade-off. Our second major result is a positive trade-off result for the restricted framework of 1-sided proximity-oblivious tests. This is achieved through the construction of a "universal tester" that works the same for all properties admitting the restricted test. Our tester is closely related to the notion of sample-based testing (for a non-constant number of queries) as defined by Goldreich and Ron in [13]. In particular, it partially resolves an open problem raised by [13].
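To make the partition-to-MAP translation concrete, here is a minimal sketch in Python built around a toy property of my own choosing (it is not an example from the paper): "all bits equal" over {0,1}^n splits into the two sub-properties "all zeros" and "all ones", each partially testable with O(1/ε) queries, and the Merlin-Arthur proof is simply the O(log r)-bit index of the sub-property claimed to contain the input.

```python
import math
import random

def make_constant_tester(bit):
    """Partial tester for the sub-property 'every bit equals `bit`'."""
    def tester(x, epsilon):
        q = math.ceil(2 / epsilon)            # O(1/epsilon) queries
        for _ in range(q):
            if x[random.randrange(len(x))] != bit:
                return False                  # witnessed a violating position
        return True
    return tester

def map_verifier(x, proof_index, partial_testers, epsilon):
    # The proof is only the index of one of the r sub-properties, i.e.
    # ceil(log2(r)) bits; the verifier delegates to that partial tester.
    proof_length_bits = max(1, math.ceil(math.log2(len(partial_testers))))
    return partial_testers[proof_index](x, epsilon), proof_length_bits

partial_testers = [make_constant_tester(0), make_constant_tester(1)]
x = [1] * 1000                                # an input in the "all ones" part
print(map_verifier(x, proof_index=1, partial_testers=partial_testers, epsilon=0.1))
```

An input that is ε-far from "all bits equal" has at least εn zeros and at least εn ones, so whichever index a cheating prover names, the corresponding tester rejects with high constant probability after O(1/ε) samples; this is the soundness direction of the translation sketched above.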
Citations: 20
Energy-efficient circuit design
A. Antoniadis, Neal Barcelo, Michael Nugent, K. Pruhs, Michele Scquizzato
We initiate the theoretical investigation of energy-efficient circuit design. We assume that the circuit design specifies the circuit layout as well as the supply voltages for the gates. To obtain maximum energy efficiency, the circuit design must balance the conflicting demands of minimizing the energy used per gate and minimizing the number of gates in the circuit: if the energy supplied to the gates is small, then functional failures are likely, necessitating a circuit layout that is more fault-tolerant, and thus has more gates. By leveraging previous work on fault-tolerant circuit design, we show general upper and lower bounds on the amount of energy required by a circuit to compute a given relation. We show that some circuits would be asymptotically more energy-efficient if heterogeneous supply voltages were allowed, and show that for some circuits the most energy-efficient supply voltages are homogeneous over all gates.
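The tension described above can be seen in a toy calculation. The exponential failure model below is purely an assumption of this sketch (it is not the fault model used in the paper), and which option comes out ahead depends entirely on that assumption; the point is only that driving one gate hard and replicating cheap gates are competing ways to spend an energy budget.

```python
import math

def failure_prob(e):
    # Assumed toy model: a gate supplied with energy e fails independently
    # with probability exp(-e).  Illustrative assumption only.
    return math.exp(-e)

def majority_of_three_failure(e):
    # Three copies at energy e combined by an (assumed perfect) majority
    # vote: the logical gate fails when at least two copies fail.
    p = failure_prob(e)
    return 3 * p * p * (1 - p) + p ** 3

target = 1e-6  # desired failure probability for one logical gate

# Option A: a single gate driven hard enough to meet the target on its own.
e_single = -math.log(target)

# Option B: three low-energy copies plus a majority vote.
e = 0.0
while majority_of_three_failure(e) > target:
    e += 0.01
e_triple = 3 * e

print(f"single high-energy gate: {e_single:.2f} energy units")
print(f"3-way redundant gate:    {e_triple:.2f} energy units")
```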
Citations: 9
Why do simple algorithms for triangle enumeration work in the real world?
Jonathan W. Berry, Luke Fostvedt, D. Nordman, C. Phillips, C. Seshadhri, Alyson G. Wilson
Triangle enumeration is a fundamental graph operation. Despite the lack of provably efficient (linear, or slightly super-linear) worst-case algorithms for this problem, practitioners run simple, efficient heuristics to find all triangles in graphs with millions of vertices. How are these heuristics exploiting the structure of these special graphs to provide major speedups in running time? We study one of the most prevalent algorithms used by practitioners. A trivial algorithm enumerates all paths of length 2, and checks if each such path is incident to a triangle. A good heuristic is to enumerate only those paths of length 2 where the middle vertex has the lowest degree. It is easily implemented and is empirically known to give remarkable speedups over the trivial algorithm. We study the behavior of this algorithm over graphs with heavy-tailed degree distributions, a defining feature of real-world graphs. The erased configuration model (ECM) efficiently generates a graph with asymptotically (almost) any desired degree sequence. We show that the expected running time of this algorithm over the distribution of graphs created by the ECM is controlled by the ℓ_{4/3}-norm of the degree sequence. As a corollary of our main theorem, we prove expected linear-time performance for degree sequences following a power law with exponent α ≥ 7/3, and non-trivial speedup whenever α ∈ (2,3).
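A minimal sketch of the low-degree-wedge heuristic as described above (the tie-breaking by vertex id is my own convention, used so that every triangle is reported exactly once):

```python
from collections import defaultdict
from itertools import combinations

def triangles_low_degree_wedges(edges):
    """List each triangle once, enumerating only wedges (paths of length 2)
    whose middle vertex has the lowest degree of the three, ties broken by id."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def rank(v):
        return (len(adj[v]), v)          # degree, with vertex id as tie-break

    found = []
    for v in adj:                        # candidate middle vertex of the wedge
        # keep only neighbours ranking above v, so v is the wedge's minimum
        higher = [u for u in adj[v] if rank(u) > rank(v)]
        for u, w in combinations(higher, 2):
            if w in adj[u]:              # close the wedge u - v - w into a triangle
                found.append(tuple(sorted((u, v, w))))
    return found

print(triangles_low_degree_wedges([(1, 2), (2, 3), (1, 3), (3, 4)]))  # [(1, 2, 3)]
```

The work is roughly proportional to the number of wedges centered at their minimum-degree vertex, which is the quantity the ℓ_{4/3}-norm bound in the abstract controls in expectation over ECM graphs.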
Citations: 44
Session details: Session 4: 16:00--16:10
David Xiao
{"title":"Session details: Session 4: 16:00--16:10","authors":"David Xiao","doi":"10.1145/3255056","DOIUrl":"https://doi.org/10.1145/3255056","url":null,"abstract":"","PeriodicalId":382856,"journal":{"name":"Proceedings of the 5th conference on Innovations in theoretical computer science","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125366707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Session details: Session 1: 08:30--8:40
Kobbi Nissim
{"title":"Session details: Session 1: 08:30--8:40","authors":"Kobbi Nissim","doi":"10.1145/3255053","DOIUrl":"https://doi.org/10.1145/3255053","url":null,"abstract":"","PeriodicalId":382856,"journal":{"name":"Proceedings of the 5th conference on Innovations in theoretical computer science","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124384363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Redrawing the boundaries on purchasing data from privacy-sensitive individuals
Kobbi Nissim, S. Vadhan, David Xiao
We prove new positive and negative results concerning the existence of truthful and individually rational mechanisms for purchasing private data from individuals with unbounded and sensitive privacy preferences. We strengthen the impossibility results of Ghosh and Roth (EC 2011) by extending them to a much wider class of privacy valuations. In particular, these include privacy valuations that are based on (ε, δ)-differentially private mechanisms for non-zero δ, ones where the privacy costs are measured in a per-database manner (rather than taking the worst case), and ones that do not depend on the payments made to players (which might not be observable to an adversary). To bypass this impossibility result, we study a natural special setting where individuals have monotonic privacy valuations, which captures common contexts where certain values for private data are expected to lead to higher valuations for privacy (e.g. having a particular disease). We give new mechanisms that are individually rational for all players with monotonic privacy valuations, truthful for all players whose privacy valuations are not too large, and accurate if there are not too many players with too-large privacy valuations. We also prove matching lower bounds showing that in some respects our mechanism cannot be improved significantly.
Citations: 66
Robust device independent quantum key distribution
U. Vazirani, Thomas Vidick
Quantum cryptography is based on the discovery that the laws of quantum mechanics allow levels of security that are impossible to replicate in a classical world [2, 8, 12]. Can such levels of security be guaranteed even when the quantum devices on which the protocol relies are untrusted? This fundamental question in quantum cryptography dates back to the early nineties when the challenge of achieving device independent quantum key distribution, or DIQKD, was first formulated [9]. We answer this challenge affirmatively by exhibiting a robust protocol for DIQKD and rigorously proving its security. The protocol achieves a linear key rate while tolerating a constant noise rate in the devices. The security proof assumes only that the devices can be modeled by the laws of quantum mechanics and are spatially isolated from each other and any adversary's laboratory. In particular, we emphasize that the devices may have quantum memory. All previous proofs of security relied either on the use of many independent pairs of devices [6, 4, 7], or on the absence of noise [10, 1]. To prove security for a DIQKD protocol it is necessary to establish at least that the generated key is truly random even in the presence of a quantum adversary. This is already a challenge, one that was recently resolved [14]. DIQKD is substantially harder, since now the protocol must also guarantee that the key is completely secret from the quantum adversary's point of view, and the entire protocol is robust against noise; this in spite of the substantial amounts of classical information leaked to the adversary throughout the protocol, as part of the error estimation and information reconciliation procedures. Our proof of security builds upon a number of techniques, including randomness extractors that are secure against quantum storage [3] as well as ideas originating in the coding strategy used in the proof of the Holevo-Schumacher-Westmoreland theorem [5, 11] which we apply to bound correlations across multiple rounds in a way not unrelated to information-theoretic proofs of the parallel repetition property for multiplayer games. Our main result can be understood as a new bound on monogamy [13] of entanglement in the type of complex scenario that arises in a key distribution protocol. Precise statements of our results and detailed proofs can be found at arXiv:1210.1810.
Citations: 12
Session details: Session 6: 10:30--10:40
V. Vaikuntanathan
{"title":"Session details: Session 6: 10:30--10:40","authors":"V. Vaikuntanathan","doi":"10.1145/3255058","DOIUrl":"https://doi.org/10.1145/3255058","url":null,"abstract":"","PeriodicalId":382856,"journal":{"name":"Proceedings of the 5th conference on Innovations in theoretical computer science","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115896518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Linear-time encodable codes meeting the Gilbert-Varshamov bound and their cryptographic applications
E. Druk, Y. Ishai
A random linear code has good minimum distance with high probability. The conjectured intractability of decoding random linear codes has recently found many applications in cryptography. One disadvantage of random linear codes is that their encoding complexity grows quadratically with the message length. Motivated by this disadvantage, we present a randomized construction of linear error-correcting codes which can be encoded in linear time and yet enjoy several useful features of random linear codes. Our construction is based on a linear-time computable hash function due to Ishai, Kushilevitz, Ostrovsky and Sahai [25]. We demonstrate the usefulness of these new codes by presenting several applications in coding theory and cryptography. These include the first family of linear-time encodable codes meeting the Gilbert-Varshamov bound, the first nontrivial linear-time secret sharing schemes, and plausible candidates for symmetric encryption and identification schemes which can be conjectured to achieve better asymptotic efficiency/security tradeoffs than all current candidates.
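For context, a baseline sketch (the standard textbook encoder, not the paper's linear-time construction): encoding with a uniformly random k x n generator matrix over GF(2) does work proportional to k·n bit operations, which is the roughly quadratic growth in the message length that the codes in the paper are designed to avoid.

```python
import random

def random_generator_matrix(k, n, seed=0):
    # A uniformly random k x n generator matrix over GF(2); each row is
    # packed into an n-bit integer.
    rng = random.Random(seed)
    return [rng.getrandbits(n) for _ in range(k)]

def encode(message_bits, G):
    """Encode a k-bit message as message * G over GF(2).

    Each set message bit XORs in one length-n row, so the total work grows
    like k * n, i.e. quadratically when n = O(k)."""
    codeword = 0
    for bit, row in zip(message_bits, G):
        if bit:
            codeword ^= row              # add the row over GF(2)
    return codeword

k, n = 8, 16                             # toy rate-1/2 code
G = random_generator_matrix(k, n)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
print(format(encode(msg, G), f"0{n}b"))
```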
Citations: 39
Session details: Session 10: 10:30--10:40
Deeparnab Chakrabarty
{"title":"Session details: Session 10: 10:30--10:40","authors":"Deeparnab Chakrabarty","doi":"10.1145/3255062","DOIUrl":"https://doi.org/10.1145/3255062","url":null,"abstract":"","PeriodicalId":382856,"journal":{"name":"Proceedings of the 5th conference on Innovations in theoretical computer science","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114547813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0