
Latest Publications from 2007 IEEE International Test Conference

The Cost of Statistical Security in Proofs for Repeated Squaring
Pub Date : 2023-01-01 DOI: 10.4230/LIPIcs.ITC.2023.4
Cody R. Freitag, Ilan Komargodski
In recent years, the number of applications of the repeated squaring assumption has been growing rapidly. The assumption states that, given a group element x, an integer T, and an RSA modulus N, it is hard to compute x^(2^T) mod N, or even to decide whether y = x^(2^T) mod N, in parallel time less than the trivial approach of simply computing T squarings. This rise has been driven by efficient proof systems for repeated squaring, opening the door to more efficient constructions of verifiable delay functions, various secure computation primitives, and proof systems for more general languages. In this work, we study the complexity of statistically sound proofs for the repeated squaring relation. Technically, we consider proofs where the prover sends at most k ≥ 0 elements and the (probabilistic) verifier performs generic group operations over the group Z*_N. As our main contribution, we show that for any (one-round) proof with a randomized verifier (i.e., an MA proof) the verifier either runs in parallel time Ω(T/(k+1)) with high probability, or is able to factor N given the proof provided by the prover. This shows that either the prover essentially sends p, q such that N = p·q (which is infeasible or undesirable in most applications), or a variant of Pietrzak's proof of repeated squaring (ITCS 2019) has optimal verifier complexity O(T/(k+1)). In particular, it is impossible to obtain a statistically sound one-round proof of repeated squaring with efficiency on par with the computationally sound protocol of Wesolowski (EUROCRYPT 2019), with a generic group verifier. We further extend our one-round lower bound to a natural class of recursive interactive proofs for repeated squaring. For r-round recursive proofs where the prover is allowed to send k group elements per round, we show that the verifier either runs in parallel time Ω(T/(k+1)^r) with high probability, or is able to factor N given the proof transcript.
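A minimal Python sketch of the relation in question, with toy parameters chosen purely for illustration: the honest evaluator performs T sequential modular squarings, while anyone who knows the factorization p, q of N (and hence φ(N)) can shortcut the computation with a single exponentiation, which is why a proof that reveals p, q trivializes verification.

```python
# Toy parameters only; real deployments use an RSA modulus whose factorization is unknown.
p, q = 1009, 1013                  # hypothetical small primes for illustration
N, phi = p * q, (p - 1) * (q - 1)
x, T = 5, 100_000

# Trivial sequential approach: T modular squarings (parallel time roughly T).
y = x % N
for _ in range(T):
    y = y * y % N

# Shortcut available only to someone who can factor N: reduce the exponent 2^T modulo phi(N).
y_fast = pow(x, pow(2, T, phi), N)
assert y == y_fast
```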
Citations: 0
Online Mergers and Applications to Registration-Based Encryption and Accumulators
Pub Date : 2023-01-01 DOI: 10.4230/LIPIcs.ITC.2023.15
Mohammad Mahmoody, Wei Qi
In this work we study a new information-theoretic problem, called online merging, that has direct applications for constructing public-state accumulators and registration-based encryption schemes. An online merger receives the sequence of sets {1}, {2}, ... in an online way, and right after receiving {i}, it can re-partition the elements 1, ..., i into T_1, ..., T_{m_i} by merging some of these sets. The goal of the merger is to balance the trade-off between the maximum number of sets wid = max_{i ∈ [n]} m_i that co-exist at any moment, called the width of the scheme, with its depth dep = max_{i ∈ [n]} d_i, where d_i is the number of times that the sets that contain i get merged. An online merger can be used to maintain a set of Merkle trees that occasionally get merged. An online merger can be directly used to obtain public-state accumulators (using collision-resistant hashing) and registration-based encryptions (relying on more assumptions). Doing so, the width of an online merger translates into the size of the public parameter of the constructed scheme, and the depth of the online algorithm corresponds to the number of times that parties need to update their "witness" (for accumulators) or their decryption key (for RBE). In this work, we construct online mergers with poly(log n) width and O(log n / log log n) depth, which can be shown to be optimal for all schemes with poly(log n) width. More generally, we show how to achieve optimal depth for a given fixed width and to achieve a 2-approximate optimal width for a given depth d that can possibly grow as a function of n (e.g., d = 2 or d = log n / log log n). As applications, we obtain accumulators with O(log n / log log n) updates for parties' witnesses (which can be shown to be optimal for accumulator digests of length poly(log n)) as well as registration-based encryptions that again have an optimal O(log n / log log n) number of decryption updates, resolving the open question of Mahmoody, Rahimi, Qi [TCC'22], who proved that Ω(log n / log log n) decryption updates are necessary for any RBE (with public parameter of length poly(log n)). More generally, for any given number of decryption updates d = d(n) (under believable computational assumptions) our online merger implies RBE schemes with public parameters of length that is optimal, up to a constant factor that depends on the security parameter. For example, for any constant number of updates d, we get RBE schemes with public parameters of length O(n^(1/(d+1))).
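For intuition about the width/depth trade-off, here is a sketch of the folklore baseline strategy (merge two sets whenever they have equal size, as in a binary counter or a log-structured merge). It is not the paper's construction, which achieves O(log n / log log n) depth for poly(log n) width, but it shows how both quantities are measured.

```python
# Baseline online merger: merge the two most recent sets whenever they have equal size.
# This keeps both the width and the depth at about log2(n).

def binary_counter_merger(n):
    sets = []                        # the current partition T_1, ..., T_m as (size, depth) pairs
    width = depth = 0
    for i in range(1, n + 1):
        sets.append((1, 0))          # the new singleton {i}: size 1, never merged yet
        width = max(width, len(sets))
        # binary-counter rule: merge the two most recent sets while they have equal size
        while len(sets) >= 2 and sets[-1][0] == sets[-2][0]:
            (s1, d1), (s2, d2) = sets.pop(), sets.pop()
            sets.append((s1 + s2, max(d1, d2) + 1))
        depth = max(depth, max(d for _, d in sets))
    return width, depth

print(binary_counter_merger(1 << 10))    # (11, 10): both width and depth are about log2(n)
```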
Citations: 0
Csirmaz's Duality Conjecture and Threshold Secret Sharing
Pub Date : 2023-01-01 DOI: 10.4230/LIPIcs.ITC.2023.3
Andrej Bogdanov
We conjecture that the smallest possible share size for binary secrets for the t-out-of-n and (n−t+1)-out-of-n access structures is the same for all 1 ≤ t ≤ n. This is a strengthening of a recent conjecture by Csirmaz (J. Math. Cryptol., 2020). We prove the conjecture for t = 2 and all n. Our proof gives a new (n−1)-out-of-n secret sharing scheme for binary secrets with share alphabet size n.
2012 ACM Subject Classification: Theory of computation → Randomness, geometry and discrete structures; Theory of computation → Cryptographic primitives; Mathematics of computing → Information theory; Security and privacy → Mathematical foundations of cryptography
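A toy illustration of the two boundary cases of the conjectured symmetry, both achievable with one-bit shares: t = n (all parties needed, via XOR sharing) and its dual t = 1 (any single party suffices). The interesting content of the conjecture concerns intermediate t; this sketch only grounds the access structures being compared.

```python
import secrets

def share_n_out_of_n(secret_bit, n):
    """XOR (additive) sharing: all n shares together XOR to the secret; fewer reveal nothing."""
    shares = [secrets.randbelow(2) for _ in range(n - 1)]
    last = secret_bit
    for s in shares:
        last ^= s
    return shares + [last]

def share_1_out_of_n(secret_bit, n):
    """Dual extreme: every party alone can reconstruct, so each share is the secret itself."""
    return [secret_bit] * n

s = 1
assert sum(share_n_out_of_n(s, 5)) % 2 == s   # XOR of all 5 shares recovers the secret
assert share_1_out_of_n(s, 5)[2] == s         # any single share recovers the secret
```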
Citations: 0
Exponential Correlated Randomness Is Necessary in Communication-Optimal Perfectly Secure Two-Party Computation
Pub Date : 2023-01-01 DOI: 10.4230/LIPIcs.ITC.2023.18
Keitaro Hiwatashi, K. Nuida
Secure two-party computation is a cryptographic technique that enables two parties to compute a function jointly while keeping each input secret. It is known that most functions cannot be realized by information-theoretically secure two-party computation, but any function can be realized in the correlated randomness (CR) model, where a trusted dealer distributes input-independent CR to the parties beforehand. In the CR model, three kinds of complexities are mainly considered: the size of CR, the number of rounds
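A standard illustration of the CR model, and of why CR size matters, is the one-time truth-table protocol sketched below with a hypothetical toy function: the dealer's correlated randomness consists of two additive shares of a randomly shifted truth table, so its size grows with the full domain of f; this is the kind of CR-size cost the paper studies.

```python
# One-time truth-table protocol (perfectly secure 2PC in the CR model), toy version.
import secrets

def deal(f, n, m):
    """Dealer: input-independent CR for one evaluation of f: [n] x [m] -> {0, 1}."""
    a, b = secrets.randbelow(n), secrets.randbelow(m)
    M_B = [[secrets.randbelow(2) for _ in range(m)] for _ in range(n)]
    # M_A and M_B are additive shares of the truth table shifted by (a, b)
    M_A = [[f((i - a) % n, (j - b) % m) ^ M_B[i][j] for j in range(m)] for i in range(n)]
    return (a, M_A), (b, M_B)        # Alice's CR, Bob's CR

def run(f, x, y, n, m):
    (a, M_A), (b, M_B) = deal(f, n, m)
    v = (y + b) % m                  # round 1: Bob -> Alice (uniform, hides y)
    u = (x + a) % n                  # round 2: Alice -> Bob (uniform, hides x) ...
    z_A = M_A[u][v]                  # ... together with her share of the table entry
    return z_A ^ M_B[u][v]           # Bob reconstructs f(x, y)

f = lambda x, y: int(x == y)         # hypothetical target function: equality on 4-bit inputs
assert run(f, 9, 9, 16, 16) == 1 and run(f, 3, 7, 16, 16) == 0
```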
Citations: 0
A Note on the Complexity of Private Simultaneous Messages with Many Parties
Pub Date : 2022-01-01 DOI: 10.4230/LIPIcs.ITC.2022.7
Marshall Ball, Tim Randolph
For k = ω(log n), we prove an Ω(k^2 n / log(kn)) lower bound on private simultaneous messages (PSM) with k parties who receive n-bit inputs. This extends the Ω(n) lower bound due to Appelbaum, Holenstein, Mishra and Shayevitz [Journal of Cryptology, 2019] to the many-party (k = ω(log n)) setting. It is the first PSM lower bound that increases quadratically with the number of parties, and moreover the first unconditional, explicit bound that grows with both k and n. This note extends the work of Ball, Holmgren, Ishai, Liu, and Malkin [ITCS 2020], who prove communication complexity lower bounds on decomposable randomized encodings (DREs), which correspond to the special case of k-party PSMs with n = 1. To give a concise and readable introduction to the method, we focus our presentation on perfect PSM schemes.
2012 ACM Subject Classification: Theory of computation → Communication complexity
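The PSM model itself can be illustrated with the easy case of XOR, sketched below: the k parties share common randomness, each sends a single message to the referee, and the referee learns only the output. The lower bound above concerns what such protocols must cost for general functions.

```python
# Minimal (perfect) PSM for XOR on n-bit inputs: pads that XOR to zero hide each input,
# and the referee's XOR of the messages equals the XOR of the inputs.
import secrets

def psm_xor(inputs, n):
    k = len(inputs)
    pads = [secrets.randbits(n) for _ in range(k - 1)]
    pads.append(0)
    for r in pads[:-1]:
        pads[-1] ^= r                                   # common randomness: k pads XORing to zero
    messages = [x ^ r for x, r in zip(inputs, pads)]    # one message per party, sent to the referee
    out = 0
    for msg in messages:                                # the referee only combines the messages
        out ^= msg
    return out

xs = [0b1011, 0b0110, 0b1100]
assert psm_xor(xs, 4) == xs[0] ^ xs[1] ^ xs[2]
```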
Citations: 2
Revisiting Collision and Local Opening Analysis of ABR Hash
Pub Date : 2022-01-01 DOI: 10.4230/LIPIcs.ITC.2022.11
C. Dhar, Y. Dodis, M. Nandi
The question of building the most efficient tn-to-n-bit collision-resistant hash function H from a smaller (say, 2n-to-n-bit) compression function f is one of the fundamental questions in symmetric-key cryptography. This question has a rich history, and was open for general t, until a recent breakthrough paper by Andreeva, Bhattacharyya and Roy at Eurocrypt'21, who designed an elegant mode (which we call ABR) achieving roughly 2t/3 calls to f, which matches the famous Stam's bound from CRYPTO'08. Unfortunately, we have found serious issues in the claims made by the authors. These issues appear quite significant, and range from verifiably false statements to noticeable gaps in the proofs (e.g., omissions of important cases and unjustified bounds). We were unable to patch up the current proof provided by the authors. Instead, we prove from scratch the security of the ABR construction for the first non-trivial case t = 11 (ABR mode of height 3), which was incorrectly handled by the authors. In particular, our result matches Stam's bound for t = 11. While the general case is still open, we hope our techniques will prove useful to finally settle the question of the optimal efficiency of hash functions.
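For reference, the baseline against which the roughly 2t/3 figure is measured is the binary Merkle tree, which hashes t blocks with t−1 calls to the 2n-to-n-bit compression function f. The sketch below uses truncated SHA-256 as a stand-in f (an assumption for illustration, not the paper's primitive) and simply counts calls; it is not the ABR mode.

```python
import hashlib

N_BYTES = 16                                     # "n bits" = 128 bits in this toy example

def f(left: bytes, right: bytes) -> bytes:
    """Stand-in 2n-to-n-bit compression function (truncated SHA-256)."""
    return hashlib.sha256(left + right).digest()[:N_BYTES]

def merkle_root(blocks):
    """Hash t blocks with a balanced binary tree, returning (root, number of calls to f)."""
    layer, calls = list(blocks), 0
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            nxt.append(f(layer[i], layer[i + 1]))
            calls += 1
        if len(layer) % 2:                       # odd leftover block is promoted unchanged
            nxt.append(layer[-1])
        layer = nxt
    return layer[0], calls

blocks = [bytes([i]) * N_BYTES for i in range(8)]
_, calls = merkle_root(blocks)
print(calls)                                     # 7, i.e. t - 1 calls for t = 8 blocks
```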
Citations: 0
Tight Estimate of the Local Leakage Resilience of the Additive Secret-Sharing Scheme & Its Consequences
Pub Date : 2022-01-01 DOI: 10.4230/LIPIcs.ITC.2022.16
H. K. Maji, H. Nguyen, Anat Paskin-Cherniavsky, Tom Suad, Mingyuan Wang, Xiuyu Ye, Albert Yu
Innovative side-channel attacks have repeatedly exposed the secrets of cryptosystems. Benhamouda, Degwekar, Ishai, and Rabin (CRYPTO 2018) introduced local leakage resilience of secret-sharing schemes to study some of these vulnerabilities. In this framework, the objective is to characterize the unintended information revelation about the secret by obtaining independent leakage from each secret share. This work accurately quantifies the vulnerability of the additive secret-sharing scheme to local leakage attacks and its consequences for other secret-sharing schemes. Consider the additive secret-sharing scheme over a prime field among k parties, where the secret shares are stored in their natural binary representation, requiring λ bits (the security parameter). We prove that the reconstruction threshold k = ω(log λ) is necessary to protect against local physical-bit probing attacks, improving the previous ω(log λ / log log λ) lower bound. This result is a consequence of accurately determining the distinguishing advantage of the "parity-of-parity" physical-bit local leakage attack proposed by Maji, Nguyen, Paskin-Cherniavsky, Suad, and Wang (EUROCRYPT 2021). Our lower bound is optimal because the additive secret-sharing scheme is perfectly secure against any (k−1)-bit (global) leakage and (statistically) secure against (arbitrary) one-bit local leakage attacks when k = ω(log λ). Our analysis extends to (1) physical-bit local leakage attacks on the Shamir secret-sharing scheme with adversarially chosen evaluation places, and (2) local leakage attacks on the Massey secret-sharing scheme corresponding to any linear code. In particular, for Shamir's secret-sharing scheme, the reconstruction threshold k = ω(log λ) is necessary when the number of parties is n = O(λ log λ). Our analysis of the "parity-of-parity" attack's distinguishing advantage establishes it as the best-known local leakage attack in these scenarios. Our work employs Fourier-analytic techniques to analyze the "parity-of-parity" attack on the additive secret-sharing scheme. We accurately estimate an exponential sum that captures the vulnerability of this secret-sharing scheme to the parity-of-parity attack, a quantity that is also closely related to the "discrepancy" of the Irwin-Hall probability distribution.
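A small simulation conveys the style of attack being analyzed: additively share a secret, leak one physical bit (the least significant bit) of every share, and XOR the leaked bits. The empirical distinguishing advantage between two fixed secrets shrinks as the number of parties k grows, which is the behavior the ω(log λ) threshold requirement captures. The modulus and trial counts below are illustrative choices, and the estimate is empirical rather than the paper's Fourier-analytic bound.

```python
import secrets

P = 2**13 - 1                                   # toy prime modulus (not a full-size field)

def leak_parity(secret, k):
    shares = [secrets.randbelow(P) for _ in range(k - 1)]
    shares.append((secret - sum(shares)) % P)   # additive sharing: shares sum to the secret mod P
    bits = 0
    for sh in shares:
        bits ^= sh & 1                          # adversary's one physical bit per share: the LSB
    return bits

def bias(secret0, secret1, k, trials=20000):
    c0 = sum(leak_parity(secret0, k) for _ in range(trials))
    c1 = sum(leak_parity(secret1, k) for _ in range(trials))
    return abs(c0 - c1) / trials                # empirical distinguishing advantage

for k in (2, 3, 5, 8):
    print(k, round(bias(0, P // 2, k), 3))      # advantage shrinks as k grows
```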
Citations: 7
P4-free Partition and Cover Numbers & Applications
Pub Date : 2021-01-01 DOI: 10.4230/LIPIcs.ITC.2021.16
Alexander R. Block, Simina Brânzei, H. K. Maji, H. Mehta, Tamalika Mukherjee, H. Nguyen
P4-free graphs, also known as cographs, complement-reducible graphs, or hereditary Dacey graphs, have been well studied in graph theory. Motivated by computer science and information theory applications, our work encodes (flat) joint probability distributions and Boolean functions as bipartite graphs and studies bipartite P4-free graphs. For these applications, the graph properties of edge partitioning and covering a bipartite graph using the minimum number of these graphs are particularly relevant. Previously, such graph properties have appeared in leakage-resilient cryptography and (variants of) coloring problems. Interestingly, our covering problem is closely related to the well-studied problem of the product (a.k.a. Prague) dimension of loopless undirected graphs, which allows us to employ algebraic lower-bounding techniques for the product/Prague dimension. We prove that computing these numbers is NP-complete, even for bipartite graphs. We establish a connection to the (unsolved) Zarankiewicz problem to show that there are bipartite graphs with size-N partite sets such that these numbers are at least ε · N^(1−2ε), for ε ∈ {1/3, 1/4, 1/5, ...}. Finally, we accurately estimate these numbers for bipartite graphs encoding well-studied Boolean functions from circuit complexity, such as set intersection, set disjointness, and inequality. For applications in information theory and communication & cryptographic complexity, we consider a system where a setup samples from a (flat) joint distribution and gives the participants, Alice and Bob, their portion from this joint sample. Alice and Bob's objective is to non-interactively establish a shared key and extract the left-over entropy from their portion of the samples as independent private randomness. A genie, who observes the joint sample, provides appropriate assistance to help Alice and Bob with their objective. Lower bounds to the minimum size of the genie's assistance translate into communication and cryptographic lower bounds. We show that (the log2 of) the P4-free partition number of a graph encoding the joint distribution that the setup uses is equivalent to the size of the genie's assistance. Consequently, the joint distributions corresponding to the bipartite graphs constructed above with high P4-free partition numbers correspond to joint distributions requiring more assistance from the genie. As a representative application in non-deterministic communication complexity, we study the communication complexity of nondeterministic protocols augmented by access to the equality oracle at the output. We show that (the log2 of) the P4-free cover number of the bipartite graph encoding a Boolean function f is equivalent to the minimum size of the nondeterministic input required by the parties (referred to as the communication complexity of f in this model). Consequently, the functions corresponding to the bipartite graphs with high P4-free cover numbers have high communication complexity. Furthermore, there are functions whose communication complexity is close to that of the naive protocol in which the nondeterministic input reveals a party's input. Finally, the equality oracle reduces the communication complexity of computing set disjointness by only a constant factor compared with the model where the parties have no access to the equality oracle. For computing the inequality function, we show an exponential reduction in communication complexity, and this bound is optimal. On the other hand, access to the equality oracle is (almost) useless for computing set intersection.
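Since the whole work revolves around P4-freeness, a brute-force check of the defining property may help: a graph is P4-free exactly when no four vertices induce a path on four vertices. The sketch below tests this on two tiny example graphs; it is only an illustration of the definition, not of the paper's partition or cover computations.

```python
from itertools import combinations, permutations

def induces_p4(adj, a, b, c, d):
    """Does the vertex order a-b-c-d form an induced path P4 in the graph?"""
    edges = {(a, b), (b, c), (c, d)}
    for u, v in combinations((a, b, c, d), 2):
        if ((u, v) in edges or (v, u) in edges) != adj[u][v]:
            return False
    return True

def is_p4_free(adj):
    n = len(adj)
    return not any(induces_p4(adj, *perm)
                   for quad in combinations(range(n), 4)
                   for perm in permutations(quad))

# The 4-cycle (= K_{2,2}) is P4-free; the 4-path itself obviously is not.
c4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
p4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(is_p4_free(c4), is_p4_free(p4))   # True False
```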
Citations: 2
Perfectly Oblivious (Parallel) RAM Revisited, and Improved Constructions
Pub Date : 2021-01-01 DOI: 10.4230/LIPIcs.ITC.2021.8
T-H. Hubert Chan, E. Shi, Wei-Kai Lin, Kartik Nayak
Oblivious RAM (ORAM) is a technique for compiling any RAM program to an oblivious counterpart, i.e., one whose access patterns do not leak information about the secret inputs. Similarly, Oblivious Parallel RAM (OPRAM) compiles a parallel RAM program to an oblivious counterpart. In this paper, we care about ORAM/OPRAM with perfect security, i.e., the access patterns must be identically distributed no matter what the program's memory request sequence is. In the past, two types of perfect ORAMs/OPRAMs have been considered: constructions whose performance bounds hold in expectation (but may occasionally run more slowly), and constructions whose performance bounds hold deterministically (even though the algorithms themselves are randomized). In this paper, we revisit the performance metrics for perfect ORAM/OPRAM, and show novel constructions that achieve asymptotic improvements for all performance metrics. Our first result is a new perfectly secure OPRAM scheme with O(log N / log log N) expected overhead. In comparison, prior literature has been stuck at O(log N) for more than a decade. Next, we show how to construct a perfect ORAM with O(log N / log log N) deterministic simulation overhead. We further show how to make the scheme parallel, resulting in a perfect OPRAM with O(log N / log log N) deterministic simulation overhead. For perfect ORAMs/OPRAMs with deterministic performance bounds, our results achieve subexponential improvement over the state of the art. Specifically, the best known prior scheme incurs more than √N deterministic simulation overhead (Raskin and Simkin, Asiacrypt'19); moreover, their scheme works only for the sequential setting and is not amenable to parallelization. Finally, we additionally consider perfect ORAMs/OPRAMs whose performance bounds hold with high probability. For this new performance metric, we show new constructions whose simulation overhead is upper bounded by O(log N / log log N) except with probability negligible in N, i.e., we prove high-probability performance bounds that match the expected bounds mentioned earlier.
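For orientation on the overhead metric, the trivial perfectly oblivious RAM below serves every logical access by scanning all N cells, so the physical access pattern is a fixed sequence independent of the program. Its overhead is O(N) per access, which the constructions in the paper reduce to O(log N / log log N); the sketch is only the naive baseline.

```python
class LinearScanORAM:
    """Trivial perfectly oblivious RAM: every access touches cells 0..N-1 in order."""

    def __init__(self, n):
        self.mem = [0] * n

    def access(self, op, addr, value=None):
        result = None
        for i in range(len(self.mem)):          # physical accesses: always 0, 1, ..., N-1
            if i == addr:
                result = self.mem[i]
                if op == "write":
                    self.mem[i] = value
            else:
                _ = self.mem[i]                 # dummy touch to keep the pattern fixed
        return result

oram = LinearScanORAM(8)
oram.access("write", 3, 42)
assert oram.access("read", 3) == 42
```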
Citations: 1
Code Offset in the Exponent
Pub Date : 2021-01-01 DOI: 10.4230/LIPIcs.ITC.2021.15
Luke Demarest, Benjamin Fuller, A. Russell
Fuzzy extractors derive stable keys from noisy sources. They are a fundamental tool for key derivation from biometric sources. This work introduces a new construction, code offset in the exponent. This construction is the first reusable fuzzy extractor that simultaneously supports structured, low-entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications (key derivation from biometrics and physical unclonable functions), which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999). A random codeword of a linear error-correcting code is used as a one-time pad for a sampled value from the noisy source. Rather than encoding this directly, code offset in the exponent encodes by exponentiation of a generator in a cryptographically strong group. We introduce and characterize a condition on noisy sources that directly translates to security of our construction in the generic group model. Our condition requires the inner product between the source distribution and all vectors in the null space of the code to be unpredictable.
2012 ACM Subject Classification: Security and privacy → Information-theoretic techniques; Security and privacy → Biometrics
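The underlying code-offset construction of Juels and Wattenberg is easy to sketch: pad the noisy reading w with a random codeword c, publish ss = w XOR c, and later recover w from any nearby reading. The toy example below uses a 5x repetition code as the linear error-correcting code; the paper's contribution is performing this encoding in the exponent of a group generator, which the sketch does not attempt.

```python
import secrets

R, K = 5, 4                                             # repetition factor, message bits

def encode(msg_bits):
    return [b for b in msg_bits for _ in range(R)]      # each bit repeated R times

def decode(code_bits):
    return [int(sum(code_bits[i*R:(i+1)*R]) > R // 2) for i in range(K)]   # majority per block

def gen(w):
    c = encode([secrets.randbelow(2) for _ in range(K)])
    return [wi ^ ci for wi, ci in zip(w, c)]            # public sketch ss = w XOR c

def rep(w_prime, ss):
    noisy_codeword = [wi ^ si for wi, si in zip(w_prime, ss)]   # equals c XOR (w XOR w')
    c = encode(decode(noisy_codeword))                  # majority decoding fixes up to 2 flips per block
    return [si ^ ci for si, ci in zip(ss, c)]           # recover the original reading w

w = [secrets.randbelow(2) for _ in range(R * K)]
w_noisy = list(w); w_noisy[0] ^= 1; w_noisy[7] ^= 1     # two bit flips in different blocks
assert rep(w_noisy, gen(w)) == w
```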
Citations: 2