Classical simulation of one-query quantum distinguishers
Pub Date: 2023-01-01 | DOI: 10.4230/LIPIcs.APPROX/RANDOM.2023.43
Andrej Bogdanov, T. Cheung, K. Dinesh, John C.S. Lui
We study the relative advantage of classical and quantum distinguishers of bounded query complexity over n-bit strings, focusing on the case of a single quantum query. A construction of Aaronson and Ambainis (STOC 2015) yields a pair of distributions that is ε-distinguishable by a one-query quantum algorithm, but O(εk/√n)-indistinguishable by any non-adaptive k-query classical algorithm. We show that every pair of distributions that is ε-distinguishable by a one-query quantum algorithm is distinguishable with k classical queries and (1) advantage min{Ω(ε√(k/n)), Ω(ε²k²/n)} non-adaptively (i.e., in one round), and (2) advantage Ω(ε²k/√(n log n)) in two rounds. As part of our analysis, we introduce a general method for converting unbiased estimators into distinguishers.
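To see how the two non-adaptive bounds interact, here is a small numerical sketch (illustrative only; the constants hidden by the Ω-notation are ignored and the parameter values are arbitrary, not taken from the paper). It evaluates both terms of the min for a few values of k: the Ω(ε²k²/n) term is the smaller one for small k, and the Ω(ε√(k/n)) term takes over once k grows past roughly (√n/ε)^(2/3).

```python
import math

# Illustrative only: the constants hidden by the Omega-notation are ignored,
# and the parameter values below are arbitrary choices, not from the paper.
def nonadaptive_advantage_terms(eps, k, n):
    term_sqrt = eps * math.sqrt(k / n)   # the Omega(eps * sqrt(k/n)) term
    term_quad = eps ** 2 * k ** 2 / n    # the Omega(eps^2 * k^2 / n) term
    return term_sqrt, term_quad, min(term_sqrt, term_quad)

n, eps = 10**6, 0.1
crossover = (math.sqrt(n) / eps) ** (2 / 3)   # where the two terms coincide
print(f"the two terms coincide near k ~ {crossover:.0f}")
for k in (10, 100, 1_000, 10_000):
    s, q, m = nonadaptive_advantage_terms(eps, k, n)
    smaller = "eps^2*k^2/n" if m == q else "eps*sqrt(k/n)"
    print(f"k={k:>6}: sqrt-term={s:.2e}  quad-term={q:.2e}  min given by {smaller}")
```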
{"title":"Classical simulation of one-query quantum distinguishers","authors":"Andrej Bogdanov, T. Cheung, K. Dinesh, John C.S. Lui","doi":"10.4230/LIPIcs.APPROX/RANDOM.2023.43","DOIUrl":"https://doi.org/10.4230/LIPIcs.APPROX/RANDOM.2023.43","url":null,"abstract":"We study the relative advantage of classical and quantum distinguishers of bounded query complexity over n -bit strings, focusing on the case of a single quantum query. A construction of Aaronson and Ambainis (STOC 2015) yields a pair of distributions that is ε -distinguishable by a one-query quantum algorithm, but O ( εk/ √ n )-indistinguishable by any non-adaptive k -query classical algorithm. We show that every pair of distributions that is ε -distinguishable by a one-query quantum algorithm is distinguishable with k classical queries and (1) advantage min { Ω( ε p k/n )) , Ω( ε 2 k 2 /n ) } non-adaptively (i.e., in one round), and (2) advantage Ω( ε 2 k/ √ n log n ) in two rounds. As part of our analysis, we introduce a general method for converting unbiased estimators into distinguishers.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"51 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82670662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Algorithmic Approach to Uniform Lower Bounds
Pub Date: 2023-01-01 | DOI: 10.4230/LIPIcs.CCC.2023.35
R. Santhanam
We propose a new family of circuit-based sampling tasks, such that non-trivial algorithmic solutions to certain tasks from this family imply frontier uniform lower bounds such as “NP is not in uniform ACC⁰” and “NP does not have uniform polynomial-size depth-two threshold circuits”. Indeed, the most general versions of our sampling tasks have implications for central open problems such as NP vs P and PSPACE vs P. We argue the soundness of our approach by showing that the non-trivial algorithmic solutions we require do follow from standard cryptographic assumptions. In addition, we give evidence that a version of our approach for uniform circuits is necessary in order to separate NP from P or PSPACE from P. We give an algorithmic characterization for the PSPACE vs P question: PSPACE ≠ P iff either E has sub-exponential time non-uniform algorithms infinitely often or there are non-trivial space-efficient solutions to our sampling tasks for uniform Boolean circuits. We show how to use our framework to capture uniform versions of known non-uniform lower bounds, as well as classical uniform lower bounds such as the space hierarchy theorem and Allender’s uniform lower bound for the Permanent. We also apply our framework to prove new lower bounds: NP does not have polynomial-size uniform AC⁰ circuits with a bottom layer of MOD₆ gates, nor does it have polynomial-size uniform AC⁰ circuits with a bottom layer of threshold gates. Our proofs exploit recently defined probabilistic time-bounded variants of Kolmogorov complexity [36, 24, 34].
{"title":"An Algorithmic Approach to Uniform Lower Bounds","authors":"R. Santhanam","doi":"10.4230/LIPIcs.CCC.2023.35","DOIUrl":"https://doi.org/10.4230/LIPIcs.CCC.2023.35","url":null,"abstract":"We propose a new family of circuit-based sampling tasks, such that non-trivial algorithmic solutions to certain tasks from this family imply frontier uniform lower bounds such as “ NP is not in uniform ACC 0 ” and “ NP does not have uniform polynomial-size depth-two threshold circuits”. Indeed, the most general versions of our sampling tasks have implications for central open problems such as NP vs P and PSPACE vs P . We argue the soundness of our approach by showing that the non-trivial algorithmic solutions we require do follow from standard cryptographic assumptions. In addition, we give evidence that a version of our approach for uniform circuits is necessary in order to separate NP from P or PSPACE from P . We give an algorithmic characterization for the PSPACE vs P question: PSPACE ̸ = P iff either E has sub-exponential time non-uniform algorithms infinitely often or there are non-trivial space-efficient solutions to our sampling tasks for uniform Boolean circuits. We show how to use our framework to capture uniform versions of known non-uniform lower bounds, as well as classical uniform lower bounds such as the space hierarchy theorem and Allender’s uniform lower bound for the Permanent. We also apply our framework to prove new lower bounds: NP does not have polynomial-size uniform AC 0 circuits with a bottom layer of MOD 6 gates, nor does it have polynomial-size uniform AC 0 circuits with a bottom layer of threshold gates. Our proofs exploit recently defined probabilistic time-bounded variants of Kolmogorov complexity [36, 24, 34].","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"354 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78954367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Radical Sylvester-Gallai Theorem for Tuples of Quadratics
Pub Date: 2023-01-01 | DOI: 10.4230/LIPIcs.CCC.2023.20
Abhibhav Garg, R. Oliveira, Shir Peleg, A. Sengupta
We prove a higher codimensional radical Sylvester-Gallai type theorem for quadratic polynomials, simultaneously generalizing [20, 36]. Hansen’s theorem is a high-dimensional version of the classical Sylvester-Gallai theorem in which the incidence condition is given by high-dimensional flats instead of lines. We generalize Hansen’s theorem to the setting of quadratic forms in a polynomial ring, where the incidence condition is given by radical membership in a high-codimensional ideal. Our main theorem is also a generalization of the quadratic Sylvester–Gallai Theorem of [36]. Our work is the first to prove a radical Sylvester–Gallai type theorem for arbitrary codimension k ≥ 2, whereas previous works [36, 29, 30, 28] considered the case of codimension 2 ideals. Our techniques combine algebraic geometric and combinatorial arguments. A key ingredient is a structural result for ideals generated by a constant number of quadratics, showing that such ideals must be radical whenever the quadratic forms are far apart. Using the wide algebras defined in [28], combined with results about integral ring extensions and dimension theory, we develop new techniques for studying such ideals generated by quadratic forms. One advantage of our approach is that it does not need the finer classification theorems for codimension 2 complete intersection of quadratics proved in [36, 16].
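For orientation, the classical Sylvester–Gallai theorem referred to above can be stated as follows; this standard formulation is included only as background and is not taken from the abstract.

```latex
% Classical Sylvester--Gallai theorem (standard statement, background only):
% a finite point set in the real plane in which every line through two of the
% points contains a third point of the set is necessarily collinear.
\textbf{Theorem (Sylvester--Gallai).}
Let $S \subseteq \mathbb{R}^2$ be a finite set of points. If for every pair of
distinct points $p, q \in S$ the line through $p$ and $q$ contains a third
point of $S$, then all points of $S$ lie on a single line.
```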
{"title":"Radical Sylvester-Gallai Theorem for Tuples of Quadratics","authors":"Abhibhav Garg, R. Oliveira, Shir Peleg, A. Sengupta","doi":"10.4230/LIPIcs.CCC.2023.20","DOIUrl":"https://doi.org/10.4230/LIPIcs.CCC.2023.20","url":null,"abstract":"We prove a higher codimensional radical Sylvester-Gallai type theorem for quadratic polynomials, simultaneously generalizing [20, 36]. Hansen’s theorem is a high-dimensional version of the classical Sylvester-Gallai theorem in which the incidence condition is given by high-dimensional flats instead of lines. We generalize Hansen’s theorem to the setting of quadratic forms in a polynomial ring, where the incidence condition is given by radical membership in a high-codimensional ideal. Our main theorem is also a generalization of the quadratic Sylvester–Gallai Theorem of [36]. Our work is the first to prove a radical Sylvester–Gallai type theorem for arbitrary codimension k ≥ 2, whereas previous works [36, 29, 30, 28] considered the case of codimension 2 ideals. Our techniques combine algebraic geometric and combinatorial arguments. A key ingredient is a structural result for ideals generated by a constant number of quadratics, showing that such ideals must be radical whenever the quadratic forms are far apart. Using the wide algebras defined in [28], combined with results about integral ring extensions and dimension theory, we develop new techniques for studying such ideals generated by quadratic forms. One advantage of our approach is that it does not need the finer classification theorems for codimension 2 complete intersection of quadratics proved in [36, 16].","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"2015 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73902005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Power of Regular and Permutation Branching Programs
Pub Date: 2023-01-01 | DOI: 10.4230/LIPIcs.APPROX/RANDOM.2023.44
Chin Ho Lee, Edward Pyne, Salil P. Vadhan
We give new upper and lower bounds on the power of several restricted classes of arbitrary-order read-once branching programs (ROBPs) and standard-order ROBPs (SOBPs) that have received significant attention in the literature on pseudorandomness for space-bounded computation. Regular SOBPs of length n and width ⌊w(n+1)/2⌋ can exactly simulate general SOBPs of length n and width w, and moreover an n/2 − o(n) blow-up in width is necessary for such a simulation. Our result extends and simplifies prior average-case simulations (Reingold, Trevisan, and Vadhan (STOC 2006); Bogdanov, Hoza, Prakriya, and Pyne (CCC 2022)), in particular implying that weighted pseudorandom generators (Braverman, Cohen, and Garg (SICOMP 2020)) for regular SOBPs of width poly(n) or larger automatically extend to general SOBPs. Furthermore, our simulation also extends to general (even read-many) oblivious branching programs. There exist natural functions computable by regular SOBPs of constant width that are average-case hard for permutation SOBPs of exponential width. Indeed, we show that Inner-Product mod 2 is average-case hard for arbitrary-order permutation ROBPs of exponential width. There exist functions computable by constant-width arbitrary-order permutation ROBPs that are worst-case hard for exponential-width SOBPs. Read-twice permutation branching programs of subexponential width can simulate polynomial-width arbitrary-order ROBPs.
{"title":"On the Power of Regular and Permutation Branching Programs","authors":"Chin Ho Lee, Edward Pyne, Salil P. Vadhan","doi":"10.4230/LIPIcs.APPROX/RANDOM.2023.44","DOIUrl":"https://doi.org/10.4230/LIPIcs.APPROX/RANDOM.2023.44","url":null,"abstract":"We give new upper and lower bounds on the power of several restricted classes of arbitrary-order read-once branching programs (ROBPs) and standard-order ROBPs (SOBPs) that have received significant attention in the literature on pseudorandomness for space-bounded computation. Regular SOBPs of length n and width ⌊ w ( n +1) / 2 ⌋ can exactly simulate general SOBPs of length n and width w , and moreover an n/ 2 − o ( n ) blow-up in width is necessary for such a simulation. Our result extends and simplifies prior average-case simulations (Reingold, Trevisan, and Vadhan (STOC 2006), Bogdanov, Hoza, Prakriya, and Pyne (CCC 2022)), in particular implying that weighted pseudorandom generators (Braverman, Cohen, and Garg (SICOMP 2020)) for regular SOBPs of width poly( n ) or larger automatically extend to general SOBPs. Furthermore, our simulation also extends to general (even read-many) oblivious branching programs. There exist natural functions computable by regular SOBPs of constant width that are average-case hard for permutation SOBPs of exponential width. Indeed, we show that Inner-Product mod 2 is average-case hard for arbitrary-order permutation ROBPs of exponential width. There exist functions computable by constant-width arbitrary-order permutation ROBPs that are worst-case hard for exponential-width SOBPs. Read-twice permutation branching programs of subexponential width can simulate polynomial-width arbitrary-order ROBPs.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"51 1","pages":"44:1-44:22"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74723584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Protecting Single-Hop Radio Networks from Message Drops
Pub Date: 2023-01-01 | DOI: 10.4230/LIPIcs.ICALP.2023.53
K. Efremenko, Gillat Kol, Dmitry Paramonov, Raghuvansh R. Saxena
Single-hop radio networks (SHRN) are a well-studied abstraction of communication over a wireless channel. In this model, in every round, each of the n participating parties may decide to broadcast a message to all the others, potentially causing collisions. We consider the SHRN model in the presence of stochastic message drops (i.e., erasures), where in every round, the message received by each party is erased (replaced by ⊥) with some small constant probability, independently. Our main result is a constant-rate coding scheme, allowing one to run protocols designed to work over the (noiseless) SHRN model over the SHRN model with erasures. Our scheme converts any protocol Π of length at most exponential in n over the SHRN model to a protocol Π′ that is resilient to a constant fraction of erasures and has length linear in the length of Π. We mention that for the special case where the protocol Π is non-adaptive, i.e., the order of communication is fixed in advance, such a scheme was known. Nevertheless, adaptivity is widely used and is known to hugely boost the power of wireless channels, which makes handling the general case of adaptive protocols Π both important and more challenging. Indeed, to the best of our knowledge, our result is the first constant-rate scheme that converts adaptive protocols to noise-resilient ones in any multi-party model.
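As a toy illustration of the noise model only (not of the paper's coding scheme), the sketch below simulates one round of a single-hop radio network with stochastic erasures. The collision convention (a round with zero or more than one broadcaster delivers nothing) is an assumption made here for concreteness; the erasure step follows the abstract, replacing each received message by ⊥ independently with probability p.

```python
import random

ERASED = "⊥"

def shrn_round(broadcasts, n, p, rng=random):
    """One round of a single-hop radio network with stochastic erasures.

    broadcasts: dict {party_id: message} of the parties that choose to broadcast.
    Returns the list of symbols received by parties 0..n-1.

    Convention assumed here (not spelled out in the abstract): if no party or
    more than one party broadcasts, nobody receives a message (collision);
    otherwise every party receives the unique message, but each reception is
    independently erased (replaced by ⊥) with probability p.
    """
    if len(broadcasts) != 1:
        sent = None                      # silence or collision: nothing is delivered
    else:
        (sent,) = broadcasts.values()
    received = []
    for _ in range(n):
        if sent is None:
            received.append(None)
        elif rng.random() < p:
            received.append(ERASED)      # stochastic message drop
        else:
            received.append(sent)
    return received

# Example: party 3 broadcasts bit 1 to n = 8 parties over a channel with p = 0.1.
random.seed(1)
print(shrn_round({3: 1}, n=8, p=0.1))
```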
{"title":"Protecting Single-Hop Radio Networks from Message Drops","authors":"K. Efremenko, Gillat Kol, Dmitry Paramonov, Raghuvansh R. Saxena","doi":"10.4230/LIPIcs.ICALP.2023.53","DOIUrl":"https://doi.org/10.4230/LIPIcs.ICALP.2023.53","url":null,"abstract":"Single-hop radio networks (SHRN) are a well studied abstraction of communication over a wireless channel. In this model, in every round, each of the n participating parties may decide to broadcast a message to all the others, potentially causing collisions. We consider the SHRN model in the presence of stochastic message drops (i.e., erasures ), where in every round, the message received by each party is erased (replaced by ⊥ ) with some small constant probability, independently. Our main result is a constant rate coding scheme , allowing one to run protocols designed to work over the (noiseless) SHRN model over the SHRN model with erasures. Our scheme converts any protocol Π of length at most exponential in n over the SHRN model to a protocol Π ′ that is resilient to constant fraction of erasures and has length linear in the length of Π. We mention that for the special case where the protocol Π is non-adaptive , i.e., the order of communication is fixed in advance, such a scheme was known. Nevertheless, adaptivity is widely used and is known to hugely boost the power of wireless channels, which makes handling the general case of adaptive protocols Π both important and more challenging. Indeed, to the best of our knowledge, our result is the first constant rate scheme that converts adaptive protocols to noise resilient ones in any multi-party model.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74067993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Query Complexity of Search Problems
Pub Date: 2023-01-01 | DOI: 10.4230/LIPIcs.MFCS.2023.34
A. Chattopadhyay, Yogesh Dahiya, M. Mahajan
We relate various complexity measures, such as sensitivity, block sensitivity, and certificate complexity, for multi-output functions to the query complexities of such functions. Using these relations, we show that the deterministic query complexity of a total search problem is at most the third power of its pseudo-deterministic query complexity. Previously, a fourth-power relation was shown by Goldreich, Goldwasser and Ron (ITCS’13). Furthermore, we improve the known separation between pseudo-deterministic and randomized decision tree size for total search problems in two ways: (1) we exhibit an exp(Ω̃(n^{1/4})) separation for the SearchCNF relation for random k-CNFs. This seems to be the first exponential lower bound on the pseudo-deterministic size complexity of SearchCNF associated with random k-CNFs. (2) we exhibit an exp(Ω(n)) separation for the ApproxHamWt relation. The previous best known separation for any relation was exp(Ω(n^{1/2})). We also separate pseudo-determinism from randomness in And and (And, Or) decision trees, and determinism from pseudo-determinism in Parity decision trees. For a hypercube colouring problem that was introduced by Goldwasser, Impagliazzo, Pitassi and Santhanam (CCC’21) to analyze the pseudo-deterministic complexity of a complete problem in TFNP^{dt}, we prove that either the monotone block-sensitivity or the anti-monotone block-sensitivity is Ω(n^{1/3}); Goldwasser et al. showed an Ω(n^{1/2}) bound for general block-sensitivity.
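In symbols (the notation D^dt and psD^dt is chosen here for readability and is not necessarily the authors'), the cubic relation reads:

```latex
% Cubic relation between deterministic and pseudo-deterministic query
% complexity of a total search problem R, as stated in the abstract; the
% earlier bound of Goldreich, Goldwasser and Ron (ITCS'13) had exponent 4
% in place of 3.
\[
  D^{\mathrm{dt}}(R) \;=\; O\!\left( \mathrm{psD}^{\mathrm{dt}}(R)^{3} \right).
\]
```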
{"title":"Query Complexity of Search Problems","authors":"A. Chattopadhyay, Yogesh Dahiya, M. Mahajan","doi":"10.4230/LIPIcs.MFCS.2023.34","DOIUrl":"https://doi.org/10.4230/LIPIcs.MFCS.2023.34","url":null,"abstract":"We relate various complexity measures like sensitivity, block sensitivity, certificate complexity for multi-output functions to the query complexities of such functions. Using these relations, we show that the deterministic query complexity of total search problems is at most the third power of its pseudo-deterministic query complexity. Previously, a fourth-power relation was shown by Goldreich, Goldwasser and Ron (ITCS’13). Furthermore, we improve the known separation between pseudo-deterministic and randomized decision tree size for total search problems in two ways: (1) we exhibit an exp( e Ω( n 1 / 4 )) separation for the SearchCNF relation for random k -CNFs. This seems to be the first exponential lower bound on the pseudo-deterministic size complexity of SearchCNF associated with random k -CNFs. (2) we exhibit an exp(Ω( n )) separation for the ApproxHamWt relation. The previous best known separation for any relation was exp(Ω( n 1 / 2 )). We also separate pseudo-determinism from randomness in And and ( And , Or ) decision trees, and determinism from pseudo-determinism in Parity decision trees. For a hypercube colouring problem, that was introduced by Goldwasswer, Impagliazzo, Pitassi and Santhanam (CCC’21) to analyze the pseudo-deterministic complexity of a complete problem in TFNP dt , we prove that either the monotone block-sensitivity or the anti-monotone block sensitivity is Ω( n 1 / 3 ); Goldwasser et al. showed an Ω( n 1 / 2 ) bound for general block-sensitivity.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"243 1","pages":"34:1-34:15"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80546754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One-Tape Turing Machine and Branching Program Lower Bounds for MCSP
Pub Date: 2022-12-27 | DOI: 10.4230/LIPIcs.STACS.2021.23
Mahdi Cheraghchi, Shuichi Hirahara, Dimitrios Myrisiotis, Yuichi Yoshida
For a size parameter s : N → N, the Minimum Circuit Size Problem (denoted by MCSP[s(n)]) is the problem of deciding whether the minimum circuit size of a given function f : {0,1}^n → {0,1} (represented by a string of length N := 2^n) is at most a threshold s(n). A recent line of work exhibited “hardness magnification” phenomena for MCSP: a very weak lower bound for MCSP implies a breakthrough result in complexity theory. For example, McKay, Murray, and Williams (STOC 2019) implicitly showed that, for some constant μ₁ > 0, if MCSP[2^{μ₁·n}] cannot be computed by a one-tape Turing machine (with an additional one-way read-only input tape) running in time N^{1.01}, then P ≠ NP. In this paper, we present the following new lower bounds against one-tape Turing machines and branching programs: 1. A randomized two-sided error one-tape Turing machine (with an additional one-way read-only input tape) cannot compute MCSP[2^{μ₂·n}] in time N^{1.99}, for some constant μ₂ > μ₁. 2. A non-deterministic (or parity) branching program of size o(N^{1.5}/log N) cannot compute MKTP, which is a time-bounded Kolmogorov complexity analogue of MCSP. This is shown by directly applying the Nechiporuk method to MKTP, which previously appeared to be difficult. These results are the first non-trivial lower bounds for MCSP and MKTP against one-tape Turing machines and non-deterministic branching programs, and essentially match the best-known lower bounds for any explicit functions against these computational models. The first result is based on recent constructions of pseudorandom generators for read-once oblivious branching programs (ROBPs) and combinatorial rectangles (Forbes and Kelley, FOCS 2018; Viola 2019). En route, we obtain several related results: 1. There exists a (local) hitting set generator with seed length Õ(√N) secure against read-once polynomial-size non-deterministic branching programs on N-bit inputs. 2. Any read-once co-non-deterministic branching program computing MCSP must have size at least 2^{Ω̃(N)}.
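Written out as a language over truth tables (the notation tt(f) and size(f) is chosen here for readability), the definition above reads:

```latex
% MCSP[s(n)] as a language over truth tables, following the definition above:
% tt(f) is the length-N truth table of f (so N = 2^n), and size(f) is the
% minimum size of a Boolean circuit computing f.
\[
  \mathrm{MCSP}[s(n)] \;=\;
  \left\{\, \mathrm{tt}(f) \in \{0,1\}^{N} \;:\;
     f : \{0,1\}^{n} \to \{0,1\},\; N = 2^{n},\; \mathrm{size}(f) \le s(n) \,\right\}.
\]
```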
{"title":"One-Tape Turing Machine and Branching Program Lower Bounds for MCSP","authors":"Mahdi Cheraghchi, Shuichi Hirahara, Dimitrios Myrisiotis, Yuichi Yoshida","doi":"10.4230/LIPIcs.STACS.2021.23","DOIUrl":"https://doi.org/10.4230/LIPIcs.STACS.2021.23","url":null,"abstract":"For a size parameter s : N → N, the Minimum Circuit Size Problem (denoted by MCSP[s(n)]) is the problem of deciding whether the minimum circuit size of a given function f : {0, 1} → {0, 1} (represented by a string of length N := 2) is at most a threshold s(n). A recent line of work exhibited “hardness magnification” phenomena for MCSP: A very weak lower bound for MCSP implies a breakthrough result in complexity theory. For example, McKay, Murray, and Williams (STOC 2019) implicitly showed that, for some constant μ1 > 0, if MCSP[2μ1·n] cannot be computed by a one-tape Turing machine (with an additional one-way read-only input tape) running in time N1.01, then P 6= NP. In this paper, we present the following new lower bounds against one-tape Turing machines and branching programs: 1. A randomized two-sided error one-tape Turing machine (with an additional one-way read-only input tape) cannot compute MCSP[2μ2·n] in time N1.99, for some constant μ2 > μ1. 2. A non-deterministic (or parity) branching program of size o(N1.5/ logN) cannot compute MKTP, which is a time-bounded Kolmogorov complexity analogue of MCSP. This is shown by directly applying the Nechiporuk method to MKTP, which previously appeared to be difficult. These results are the first non-trivial lower bounds for MCSP and MKTP against one-tape Turing machines and non-deterministic branching programs, and essentially match the best-known lower bounds for any explicit functions against these computational models. The first result is based on recent constructions of pseudorandom generators for read-once oblivious branching programs (ROBPs) and combinatorial rectangles (Forbes and Kelley, FOCS 2018; Viola 2019). En route, we obtain several related results: 1. There exists a (local) hitting set generator with seed length Õ( √ N) secure against read-once polynomial-size non-deterministic branching programs on N -bit inputs. 2. Any read-once co-non-deterministic branching program computing MCSP must have size at least 2Ω̃(N). 2012 ACM Subject Classification Theory of computation → Circuit complexity; Theory of computation → Pseudorandomness and derandomization","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"101 1","pages":"23:1-23:19"},"PeriodicalIF":0.0,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75169395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On linear-algebraic notions of expansion
Pub Date: 2022-12-26 | DOI: 10.48550/arXiv.2212.13154
Yinan Li, Y. Qiao, A. Wigderson, Yuval Wigderson, Chuan-Hai Zhang
A fundamental fact about bounded-degree graph expanders is that three notions of expansion (vertex expansion, edge expansion, and spectral expansion) are all equivalent. In this paper, we study to what extent such a statement is true for linear-algebraic notions of expansion. There are two well-studied notions of linear-algebraic expansion, namely dimension expansion (defined in analogy to graph vertex expansion) and quantum expansion (defined in analogy to graph spectral expansion). Lubotzky and Zelmanov proved that the latter implies the former. We prove that the converse is false: there are dimension expanders which are not quantum expanders. Moreover, this asymmetry is explained by the fact that there are two distinct linear-algebraic analogues of graph edge expansion. The first of these is quantum edge expansion, which was introduced by Hastings, and which he proved to be equivalent to quantum expansion. We introduce a new notion, termed dimension edge expansion, which we prove is equivalent to dimension expansion and which is implied by quantum edge expansion. Thus, the separation above is implied by a finer one: dimension edge expansion is strictly weaker than quantum edge expansion. This new notion also leads to a new, more modular proof of the Lubotzky–Zelmanov result that quantum expanders are dimension expanders.
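For reference, one common formulation of dimension expansion is sketched below; parameter conventions vary across the literature, so this should be read as orientation rather than as the definition used in the paper.

```latex
% One common formulation of a dimension expander (background only; parameter
% conventions differ across papers). Linear maps T_1, ..., T_d : F^n -> F^n
% form an (eta, beta)-dimension expander if every subspace of dimension at
% most eta*n is expanded by a (1 + beta) factor:
\[
  \dim\!\left( V + \sum_{i=1}^{d} T_i(V) \right) \;\ge\; (1 + \beta)\,\dim V
  \quad \text{for every subspace } V \le \mathbb{F}^{n}
  \text{ with } \dim V \le \eta n .
\]
```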
{"title":"On linear-algebraic notions of expansion","authors":"Yinan Li, Y. Qiao, A. Wigderson, Yuval Wigderson, Chuan-Hai Zhang","doi":"10.48550/arXiv.2212.13154","DOIUrl":"https://doi.org/10.48550/arXiv.2212.13154","url":null,"abstract":"A fundamental fact about bounded-degree graph expanders is that three notions of expansion -- vertex expansion, edge expansion, and spectral expansion -- are all equivalent. In this paper, we study to what extent such a statement is true for linear-algebraic notions of expansion. There are two well-studied notions of linear-algebraic expansion, namely dimension expansion (defined in analogy to graph vertex expansion) and quantum expansion (defined in analogy to graph spectral expansion). Lubotzky and Zelmanov proved that the latter implies the former. We prove that the converse is false: there are dimension expanders which are not quantum expanders. Moreover, this asymmetry is explained by the fact that there are two distinct linear-algebraic analogues of graph edge expansion. The first of these is quantum edge expansion, which was introduced by Hastings, and which he proved to be equivalent to quantum expansion. We introduce a new notion, termed dimension edge expansion, which we prove is equivalent to dimension expansion and which is implied by quantum edge expansion. Thus, the separation above is implied by a finer one: dimension edge expansion is strictly weaker than quantum edge expansion. This new notion also leads to a new, more modular proof of the Lubotzky--Zelmanov result that quantum expanders are dimension expanders.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90454013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Criticality of AC⁰ formulae
Pub Date: 2022-12-16 | DOI: 10.48550/arXiv.2212.08397
P. Harsha, Tulasimohan Molli, Ashutosh Shankar
Rossman [In \textit{Proc. 34th Comput. Complexity Conf.}, 2019] introduced the notion of \textit{criticality}. The criticality of a Boolean function $f : \{0,1\}^n \to \{0,1\}$ is the minimum $\lambda \geq 1$ such that for all positive integers $t$, \[ \Pr_{\rho \sim \mathcal{R}_p}\left[\text{DT}_{\text{depth}}(f|_{\rho}) \geq t\right] \leq (p\lambda)^t. \] Håstad's celebrated switching lemma shows that the criticality of any $k$-DNF is at most $O(k)$. Subsequent improvements to correlation bounds of $\text{AC}^0$-circuits against parity showed that the criticality of any $\text{AC}^0$-\textit{circuit} of size $S$ and depth $d+1$ is at most $O(\log S)^d$ and that of any \textit{regular} $\text{AC}^0$-\textit{formula} of size $S$ and depth $d+1$ is at most $O\!\left(\frac{1}{d} \cdot \log S\right)^d$. We strengthen these results by showing that the criticality of \textit{any} $\text{AC}^0$-formula (not necessarily regular) of size $S$ and depth $d+1$ is at most $O\!\left(\frac{1}{d} \cdot \log S\right)^d$, resolving a conjecture due to Rossman. This result also implies Rossman's optimal lower bound on the size of any depth-$d$ $\text{AC}^0$-formula computing parity [\textit{Comput. Complexity}, 27(2):209–223, 2018]. Our result implies tight correlation bounds against parity, tight Fourier concentration results, and an improved \#SAT algorithm for $\text{AC}^0$-formulae.
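To make the restricted-depth probability in the definition concrete, here is a small Monte Carlo sketch (purely illustrative, not the paper's technique) that estimates $\Pr_{\rho \sim \mathcal{R}_p}[\text{DT}_{\text{depth}}(f|_{\rho}) \geq t]$ for a toy 2-DNF on six variables; the restriction $\rho \sim \mathcal{R}_p$ keeps each variable free with probability $p$ and otherwise fixes it uniformly at random, and the choices of $f$, $p$ and $t$ below are arbitrary.

```python
import itertools
import random
from functools import lru_cache

# Illustrative Monte Carlo estimate of Pr_{rho ~ R_p}[DT_depth(f|_rho) >= t],
# the quantity that criticality controls, for a toy 2-DNF on n = 6 variables.
# This only illustrates the definition; it is not the paper's method.

n = 6

def f(x):
    # A small 2-DNF: (x0 AND x1) OR (x2 AND x3) OR (x4 AND x5).
    return int((x[0] and x[1]) or (x[2] and x[3]) or (x[4] and x[5]))

@lru_cache(maxsize=None)
def dt_depth(rho):
    """Exact decision-tree depth of f restricted by rho (tuple of 0, 1 or None)."""
    free = [i for i in range(n) if rho[i] is None]
    values = set()
    for bits in itertools.product((0, 1), repeat=len(free)):
        point = list(rho)
        for i, b in zip(free, bits):
            point[i] = b
        values.add(f(tuple(point)))
    if len(values) == 1:          # restricted function is constant
        return 0
    return min(
        1 + max(dt_depth(rho[:i] + (0,) + rho[i + 1:]),
                dt_depth(rho[:i] + (1,) + rho[i + 1:]))
        for i in free
    )

def sample_restriction(p, rng):
    """rho ~ R_p: each variable stays free with probability p, else is fixed uniformly."""
    return tuple(None if rng.random() < p else rng.randint(0, 1) for _ in range(n))

rng = random.Random(0)
p, t, trials = 0.3, 2, 2000
hits = sum(dt_depth(sample_restriction(p, rng)) >= t for _ in range(trials))
print(f"estimated Pr[DT_depth(f|rho) >= {t}] ~= {hits / trials:.3f}  (p = {p}, {trials} trials)")
```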
{"title":"Criticality of AC0 formulae","authors":"P. Harsha, Tulasimohan Molli, Ashutosh Shankar","doi":"10.48550/arXiv.2212.08397","DOIUrl":"https://doi.org/10.48550/arXiv.2212.08397","url":null,"abstract":"Rossman [In $textit{Proc. $34$th Comput. Complexity Conf.}$, 2019] introduced the notion of $textit{criticality}$. The criticality of a Boolean function $f : {0,1}^n to {0,1}$ is the minimum $lambda geq 1$ such that for all positive integers $t$, [ Pr_{rho sim mathcal{R}_p}left[text{DT}_{text{depth}}(f|_{rho}) geq tright] leq (plambda)^t. ] H\"astad's celebrated switching lemma shows that the criticality of any $k$-DNF is at most $O(k)$. Subsequent improvements to correlation bounds of $text{AC}^0$-circuits against parity showed that the criticality of any $text{AC}^0$-$textit{circuit}$ of size $S$ and depth $d+1$ is at most $O(log S)^d$ and any $textit{regular}$ $text{AC}^0$-$textit{formula}$ of size $S$ and depth $d+1$ is at most $Oleft(frac1d cdot log Sright)^d$. We strengthen these results by showing that the criticality of $textit{any}$ $text{AC}^0$-formula (not necessarily regular) of size $S$ and depth $d+1$ is at most $Oleft(frac1dcdot {log S}right)^d$, resolving a conjecture due to Rossman. This result also implies Rossman's optimal lower bound on the size of any depth-$d$ $text{AC}^0$-formula computing parity [$textit{Comput. Complexity, 27(2):209--223, 2018.}$]. Our result implies tight correlation bounds against parity, tight Fourier concentration results and improved $#$SAT algorithm for $text{AC}^0$-formulae.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82876074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantum Worst-Case to Average-Case Reductions for All Linear Problems
Pub Date: 2022-12-06 | DOI: 10.48550/arXiv.2212.03348
Vahid R. Asadi, Alexander Golovnev, Tom Gur, Igor Shinkar, Sathyawageeswar Subramanian
We study the problem of designing worst-case to average-case reductions for quantum algorithms. For all linear problems, we provide an explicit and efficient transformation of quantum algorithms that are only correct on a small (even sub-constant) fraction of their inputs into ones that are correct on all inputs. This stands in contrast to the classical setting, where such results are only known for a small number of specific problems or restricted computational models. En route, we obtain a tight $\Omega(n^2)$ lower bound on the average-case quantum query complexity of the Matrix-Vector Multiplication problem. Our techniques strengthen and generalise the recently introduced additive combinatorics framework for classical worst-case to average-case reductions (STOC 2022) to the quantum setting. We rely on quantum singular value transformations to construct quantum algorithms for linear verification in superposition and learning Bogolyubov subspaces from noisy quantum oracles. We use these tools to prove a quantum local correction lemma, which lies at the heart of our reductions, based on a noise-robust probabilistic generalisation of Bogolyubov's lemma from additive combinatorics.
{"title":"Quantum Worst-Case to Average-Case Reductions for All Linear Problems","authors":"Vahid R. Asadi, Alexander Golovnev, Tom Gur, Igor Shinkar, Sathyawageeswar Subramanian","doi":"10.48550/arXiv.2212.03348","DOIUrl":"https://doi.org/10.48550/arXiv.2212.03348","url":null,"abstract":"We study the problem of designing worst-case to average-case reductions for quantum algorithms. For all linear problems, we provide an explicit and efficient transformation of quantum algorithms that are only correct on a small (even sub-constant) fraction of their inputs into ones that are correct on all inputs. This stands in contrast to the classical setting, where such results are only known for a small number of specific problems or restricted computational models. En route, we obtain a tight $Omega(n^2)$ lower bound on the average-case quantum query complexity of the Matrix-Vector Multiplication problem. Our techniques strengthen and generalise the recently introduced additive combinatorics framework for classical worst-case to average-case reductions (STOC 2022) to the quantum setting. We rely on quantum singular value transformations to construct quantum algorithms for linear verification in superposition and learning Bogolyubov subspaces from noisy quantum oracles. We use these tools to prove a quantum local correction lemma, which lies at the heart of our reductions, based on a noise-robust probabilistic generalisation of Bogolyubov's lemma from additive combinatorics.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72844546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}