Pub Date: 2023-01-01. DOI: 10.4086/toc.2023.v019a003
Irit Dinur, Inbal Livni Navon
Given a function $f:[N]^k \rightarrow [M]^k$, the Z-test is a three-query test for checking whether $f$ is a direct product, i.e., whether there are functions $g_1,\ldots,g_k:[N]\to[M]$ such that $f(x_1,\ldots,x_k)=(g_1(x_1),\ldots,g_k(x_k))$ for every input $x\in[N]^k$. This test was introduced by Impagliazzo et al. (SICOMP 2012), who showed that if the test passes with probability $\epsilon > \exp(-\sqrt{k})$ then $f$ is $\Omega(\epsilon)$-correlated with a direct product function in some precise sense. It remained an open question whether the soundness of this test can be pushed all the way down to $\exp(-k)$ (which would be optimal). This is our main result: we show that whenever $f$ passes the Z-test with probability $\epsilon > \exp(-k)$, there must be a global reason for this, namely, $f$ is $\Omega(\epsilon)$-correlated with a direct product function, in the same sense of closeness. Towards proving our result we analyze the related (two-query) V-test, and prove a "restricted global structure" theorem for it. Such theorems were also proven in previous work on direct product testing in the small-soundness regime. The most recent such paper, by Dinur and Steurer (CCC 2014), analyzed the V-test in the exponentially small soundness regime. We strengthen their conclusion by moving from an "in expectation" statement to a stronger "concentration of measure" type of statement, which we prove using reverse hypercontractivity. This stronger statement allows us to proceed to analyze the Z-test. ------------------ A preliminary version of this paper appeared in the Proceedings of the 32nd Computational Complexity Conference (CCC'17).
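For concreteness, here is a minimal Python sketch of one round of the two-query V-test on which the analysis builds: query $f$ on two random $k$-tuples that agree on a random set of coordinates, and accept iff the answers are consistent on those shared coordinates. This is an illustration only; the paper's exact overlap distribution, and the Z-test's third query, are simplified away, and `overlap_size` is an assumed parameter.

```python
import random

def v_test_round(f, N, k, overlap_size):
    """One round of a simplified two-query V-test for direct products.

    f maps k-tuples over range(N) to k-tuples; accept iff the two
    answers agree on the shared coordinate set A.
    """
    A = random.sample(range(k), overlap_size)        # shared coordinates
    x = [random.randrange(N) for _ in range(k)]
    y = [random.randrange(N) for _ in range(k)]
    for i in A:                                      # force agreement on A
        y[i] = x[i]
    fx, fy = f(tuple(x)), f(tuple(y))
    return all(fx[i] == fy[i] for i in A)
```

An exact direct product $f(x)=(g_1(x_1),\ldots,g_k(x_k))$ passes every round; the theorem says that acceptance probability as small as $\exp(-k)$ already forces $\Omega(\epsilon)$ global correlation with some direct product.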
{"title":"","authors":"Irit Dinur, Inbal Livni Navon","doi":"10.4086/toc.2023.v019a003","DOIUrl":"https://doi.org/10.4086/toc.2023.v019a003","url":null,"abstract":"$ newcommandf{f} newcommandpf{g} $ Given a function $f:[N]^krightarrow[M]^k$, the Z-test is a three-query test for checking if the function $f$ is a direct product, i.e., if there are functions $pf_1,ldots,pf_k:[N]to[M]$ such that $f(x_1,ldots,x_k)=(pf_1(x_1),ldots,pf_k(x_k))$ for every input $xin [N]^k$. This test was introduced by Impagliazzo et. al. (SICOMP 2012), who showed that if the test passes with probability $epsilon > exp(-sqrt k)$ then $f$ is $Omega(epsilon)$ correlated to a direct product function in some precise sense. It remained an open question whether the soundness of this test can be pushed all the way down to $exp(-k)$ (which would be optimal). This is our main result: we show that whenever $f$ passes the Z test with probability $epsilon > exp(-k)$, there must be a global reason for this, namely, $f$ is $Omega(epsilon)$ correlated to a direct product function, in the same sense of closeness. Towards proving our result we analyze the related (two-query) V-test, and prove a “restricted global structure” theorem for it. Such theorems were also proven in previous work on direct product testing in the small soundness regime. The most recent paper, by Dinur and Steurer (CCC 2014), analyzed the V test in the exponentially small soundness regime. We strengthen their conclusion by moving from an “in expectation” statement to a stronger “concentration of measure” type of statement, which we prove using reverse hyper-contractivity. This stronger statement allows us to proceed to analyze the Z test. ------------------ A preliminary version of this paper appeared in the Proceedings of the 32nd Computational Complexity Conference (CCC'17).","PeriodicalId":55992,"journal":{"name":"Theory of Computing","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135913163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. DOI: 10.4086/toc.2023.v019a006
Linh Tran, Van Vu
A community of $n$ individuals splits into two camps, Red and Blue. The individuals are connected by a social network, which influences their colors. Every day each person changes their color according to the majority of their neighbors. Red (Blue) wins if everyone in the community becomes Red (Blue) at some point. We study this process when the underlying network is the random Erdős–Rényi graph $G(n, p)$. With a balanced initial state ($n/2$ persons in each camp), it is clear that each color wins with the same probability. Our study reveals that for any constants $p$ and $\varepsilon$, there is a constant $c$ such that if one camp has at least $n/2 + c$ individuals in the initial state, then it wins with probability at least $1 - \varepsilon$. The surprising fact here is that $c$ does not depend on $n$, the population of the community. When $p = 1/2$ and $\varepsilon = 0.1$, one can set $c = 5$, meaning one camp has $n/2 + 5$ members initially. In other words, it takes only $5$ extra people to win an election with overwhelming odds. We generalize the result to $p = p_n = o(1)$ in a separate paper. ----------------- A preliminary version of this paper appeared in the Proceedings of the 24th International Conference on Randomization and Computation (RANDOM'20).
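A quick way to get a feel for the result is to simulate the process. The sketch below is my own illustration: it runs synchronous majority dynamics on a sampled $G(n, p)$, with a keep-your-color rule on ties that the paper's convention may differ from.

```python
import random

def simulate(n=200, p=0.5, c=5, max_rounds=100):
    """Synchronous majority dynamics on a sampled G(n, p).

    Starts with n//2 + c Red vertices; returns (red_wins, blue_wins).
    Ties (and isolated vertices) keep their current color; the paper's
    tie-breaking convention may differ.
    """
    adj = [[] for _ in range(n)]
    for u in range(n):                       # sample the Erdos-Renyi graph
        for v in range(u + 1, n):
            if random.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    color = [i < n // 2 + c for i in range(n)]   # True = Red, False = Blue
    for _ in range(max_rounds):
        new = []
        for u in range(n):
            reds = sum(color[v] for v in adj[u])
            blues = len(adj[u]) - reds
            new.append(color[u] if reds == blues else reds > blues)
        if new == color:                     # reached a fixed point
            break
        color = new
    return all(color), not any(color)
```

Repeated runs with $p = 1/2$ and $c = 5$ should show Red winning the large majority of trials, illustrating that the required advantage does not grow with $n$.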
{"title":"","authors":"Linh Tran, Van Vu","doi":"10.4086/toc.2023.v019a006","DOIUrl":"https://doi.org/10.4086/toc.2023.v019a006","url":null,"abstract":"$ $ A community of $n$ individuals splits into two camps, Red and Blue. The individuals are connected by a social network, which influences their colors. Every day each person changes their color according to the majority of their neighbors. Red (Blue) wins if everyone in the community becomes Red (Blue) at some point. We study this process when the underlying network is the random Erdős--Rényi graph $G(n, p)$. With a balanced initial state ($n/2$ persons in each camp), it is clear that each color wins with the same probability. Our study reveals that for any constants $p$ and $varepsilon$, there is a constant $c$ such that if one camp has at least $n/2 + c$ individuals at the initial state, then it wins with probability at least $1 - varepsilon$. The surprising fact here is that $c$ does not depend on $n$, the population of the community. When $p=1/2$ and $varepsilon =.1$, one can set $c=5$, meaning one camp has $n/2 + 5$ members initially. In other words, it takes only $5$ extra people to win an election with overwhelming odds. We also generalize the result to $p = p_n = text{o}(1)$ in a separate paper. ----------------- A preliminary version of this paper appeared in the Proceedings of the 24th International Conference on Randomization and Computation (RANDOM'20).","PeriodicalId":55992,"journal":{"name":"Theory of Computing","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135660640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. DOI: 10.4086/toc.2023.v019a002
Rahul Jain, Raghunath Tewari
The reachability problem asks to decide whether there exists a path from one vertex to another in a digraph. In a grid digraph, the vertices are the points of a two-dimensional square grid, and an edge can occur only between a vertex and its immediate horizontal and vertical neighbors. Asano and Doerr (CCCG'11) presented the first simultaneous time-space bound for reachability in grid digraphs by solving the problem in polynomial time and $O(n^{1/2+\epsilon})$ space. In 2018, the space complexity was improved to $\tilde{O}(n^{1/3})$ by Ashida and Nakagawa (SoCG'18). In this paper, we show that there exists a polynomial-time algorithm that uses $O(n^{1/4+\epsilon})$ space to solve the reachability problem in a grid digraph containing $n$ vertices. We define and construct a new separator-like device called a pseudoseparator, and use it to develop a space-efficient divide-and-conquer algorithm for reachability. -------------- A conference version of this paper appeared in the Proceedings of the 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS'19).
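To pin down the problem, here is a baseline BFS check (my sketch, with an assumed edge encoding). It runs in polynomial time but uses $\Theta(n)$ space for the visited set, which is exactly the resource the paper's pseudoseparator-based divide-and-conquer reduces to $O(n^{1/4+\epsilon})$.

```python
from collections import deque

def grid_reachable(m, edges, s, t):
    """Decide s -> t reachability in an m-by-m grid digraph by BFS.

    Vertices are (row, col) pairs; `edges` is a set of directed pairs
    between horizontally/vertically adjacent grid points.
    """
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        r, c = u
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= v[0] < m and 0 <= v[1] < m
                    and (u, v) in edges and v not in seen):
                seen.add(v)
                queue.append(v)
    return False
```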
{"title":"","authors":"Rahul Jain, Raghunath Tewari","doi":"10.4086/toc.2023.v019a002","DOIUrl":"https://doi.org/10.4086/toc.2023.v019a002","url":null,"abstract":"The reachability problem asks to decide if there exists a path from one vertex to another in a digraph. In a grid digraph, the vertices are the points of a two-dimensional square grid, and an edge can occur between a vertex and its immediate horizontal and vertical neighbors only. Asano and Doerr (CCCG'11) presented the first simultaneous time-space bound for reachability in grid digraphs by solving the problem in polynomial time and $O(n^{1/2 + epsilon})$ space. In 2018, the space complexity was improved to $tilde{O}(n^{1/3})$ by Ashida and Nakagawa (SoCG'18). In this paper, we show that there exists a polynomial-time algorithm that uses $O(n^{1/4 + epsilon})$ space to solve the reachability problem in a grid digraph containing $n$ vertices. We define and construct a new separator-like device called pseudoseparator to develop a divide-and-conquer algorithm. This algorithm works in a space-efficient manner to solve reachability. -------------- A conference version of this paper appeared in the Proceedings of the 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS'19).","PeriodicalId":55992,"journal":{"name":"Theory of Computing","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136367482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. DOI: 10.4086/toc.2023.v019a001
Iden Kalemaj, Sofya Raskhodnikova, Nithin Varma
We initiate the study of sublinear-time algorithms that access their input via an online adversarial erasure oracle. After answering each input query, such an oracle can erase $t$ input values. Our goal is to understand the complexity of basic computational tasks in extremely adversarial situations, where the algorithm's access to data is blocked during its execution in response to its actions. Specifically, we focus on property testing in the model with online erasures. We show that two fundamental properties of functions, linearity and quadraticity, can be tested for constant $t$ with asymptotically the same complexity as in the standard property testing model. For linearity testing, we prove tight bounds in terms of $t$, showing that the query complexity is $\Theta(\log t)$. In contrast to linearity and quadraticity, some other properties, including sortedness and the Lipschitz property of sequences, cannot be tested at all, even for $t = 1$. Our investigation leads to a deeper understanding of the structure of violations of linearity and other widely studied properties. We also consider implications of our results for algorithms that are resilient to online adversarial corruptions instead of erasures. -------------- A preliminary version of this paper appeared in the Proceedings of the 13th Innovations in Theoretical Computer Science Conference (ITCS'22).
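For background, the classical three-query BLR linearity test that this setting builds on looks as follows in Python. This is a sketch of the standard test, not the paper's erasure-resilient tester; in the online model any of the three queries could come back erased, which is what forces the extra work.

```python
import random

def blr_round(f, n):
    """One round of the classical BLR linearity test.

    f: {0,1}^n -> {0,1}, with n-bit strings encoded as integers;
    accept iff f(x) XOR f(y) == f(x XOR y).
    """
    x = random.getrandbits(n)
    y = random.getrandbits(n)
    return f(x) ^ f(y) == f(x ^ y)
```

An adversary who sees the queries $x$ and $y$ could erase $f(x \oplus y)$ before it is read; the paper shows linearity remains testable despite this, with query complexity $\Theta(\log t)$ in terms of the erasure budget $t$.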
{"title":"","authors":"Iden Kalemaj, Sofya Raskhodnikova, Nithin Varma","doi":"10.4086/toc.2023.v019a001","DOIUrl":"https://doi.org/10.4086/toc.2023.v019a001","url":null,"abstract":"We initiate the study of sublinear-time algorithms that access their input via an online adversarial erasure oracle. After answering each input query, such an oracle can erase $t$ input values. Our goal is to understand the complexity of basic computational tasks in extremely adversarial situations, where the algorithm's access to data is blocked during the execution of the algorithm in response to its actions. Specifically, we focus on property testing in the model with online erasures. We show that two fundamental properties of functions, linearity and quadraticity, can be tested for constant $t$ with asymptotically the same complexity as in the standard property testing model. For linearity testing, we prove tight bounds in terms of $t$, showing that the query complexity is $Theta(log t).$ In contrast to linearity and quadraticity, some other properties, including sortedness and the Lipschitz property of sequences, cannot be tested at all, even for $t=1$. Our investigation leads to a deeper understanding of the structure of violations of linearity and other widely studied properties. We also consider implications of our results for algorithms that are resilient to online adversarial corruptions instead of erasures. -------------- A preliminary version of this paper appeared in the Proceedings of the 13th Innovations in Theoretical Computer Science Conference (ITCS'22).","PeriodicalId":55992,"journal":{"name":"Theory of Computing","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136217179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-05-29. DOI: 10.4086/toc.2017.v013a006
Arithmetic Circuits with Locally Low Algebraic Rank
Mrinal Kumar, Shubhangi Saraf
In recent years, there has been a flurry of activity towards proving lower bounds for homogeneous depth-4 arithmetic circuits, which has brought us very close to statements that are known to imply $\textsf{VP} \neq \textsf{VNP}$. It is open whether these techniques can go beyond homogeneity, and in this paper we make some progress in this direction by considering depth-4 circuits of low algebraic rank, which are a natural extension of homogeneous depth-4 circuits. A depth-4 circuit is a representation of an $N$-variate, degree-$n$ polynomial $P$ as
\[ P = \sum_{i=1}^{T} Q_{i1} \cdot Q_{i2} \cdots Q_{it}, \]
where the $Q_{ij}$ are given by their monomial expansion. Homogeneity adds the constraint that for every $i \in [T]$, $\sum_{j} \deg(Q_{ij}) = n$. We study an extension where, for every $i \in [T]$, the algebraic rank of the set $\{Q_{i1}, Q_{i2}, \ldots, Q_{it}\}$ of polynomials is at most some parameter $k$. Already for $k = n$, these circuits are a generalization of the class of homogeneous depth-4 circuits, where in particular $t \leq n$ (and hence $k \leq n$). We study lower bounds and polynomial identity tests for such circuits and prove the following results. We show an $\exp(\Omega(\sqrt{n}\log N))$ lower bound for such circuits for an explicit $N$-variate, degree-$n$ polynomial family when $k \leq n$. We also show quasipolynomial hitting sets when the degree of each $Q_{ij}$ and the parameter $k$ are at most $\operatorname{poly}(\log n)$. A key technical ingredient of the proofs, which may be of independent interest, is a result stating that over any field of characteristic zero, up to a translation, every polynomial in a set of polynomials can be written as a function of the polynomials in a transcendence basis of the set. We combine this with methods based on shifted partial derivatives to obtain our final results.
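As a toy illustration of algebraic rank (my example, not from the paper): the three polynomials below have algebraic rank $2$, since the third is an algebraic (here even polynomial) function of the first two.

```latex
\[
  Q_1 = x_1 + x_2, \qquad Q_2 = x_1 x_2, \qquad
  Q_3 = x_1^2 + x_2^2 = Q_1^2 - 2 Q_2 ,
\]
```

so $\{Q_1, Q_2\}$ is a transcendence basis of $\{Q_1, Q_2, Q_3\}$, matching the shape of the structure theorem above: every polynomial in the set is (up to a translation) a function of a transcendence basis.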
{"title":"Arithmetic Circuits with Locally Low Algebraic Rank","authors":"Mrinal Kumar, Shubhangi Saraf","doi":"10.4086/toc.2017.v013a006","DOIUrl":"https://doi.org/10.4086/toc.2017.v013a006","url":null,"abstract":"In recent years, there has been a flurry of activity towards proving lower bounds for homogeneous depth-4 arithmetic circuits, which has brought us very close to statements that are known to imply $textsf{VP} neq textsf{VNP}$. It is open if these techniques can go beyond homogeneity, and in this paper we make some progress in this direction by considering depth-4 circuits of low algebraic rank, which are a natural extension of homogeneous depth-4 circuits. A depth-4 circuit is a representation of an $N$-variate, degree-$n$ polynomial $P$ as [ P = sum_{i = 1}^T Q_{i1}cdot Q_{i2}cdot cdots cdot Q_{it} ; , ] where the $Q_{ij}$ are given by their monomial expansion. Homogeneity adds the constraint that for every $i in [T]$, $sum_{j} operatorname{deg}(Q_{ij}) = n$. We study an extension, where, for every $i in [T]$, the algebraic rank of the set ${Q_{i1}, Q_{i2}, ldots ,Q_{it}}$ of polynomials is at most some parameter $k$. Already for $k = n$, these circuits are a generalization of the class of homogeneous depth-4 circuits, where in particular $t leq n$ (and hence $k leq n$). \u0000We study lower bounds and polynomial identity tests for such circuits and prove the following results. We show an $exp{(Omega(sqrt{n}log N))}$ lower bound for such circuits for an explicit $N$ variate degree $n$ polynomial family when $k leq n$. We also show quasipolynomial hitting sets when the degree of each $Q_{ij}$ and the $k$ are at most $operatorname{poly}(log n)$. \u0000A key technical ingredient of the proofs, which may be of independent interest, is a result which states that over any field of characteristic zero, up to a translation, every polynomial in a set of polynomials can be written as a function of the polynomials in a transcendence basis of the set. We combine this with methods based on shifted partial derivatives to obtain our final results.","PeriodicalId":55992,"journal":{"name":"Theory of Computing","volume":"49 1","pages":"34:1-34:27"},"PeriodicalIF":1.0,"publicationDate":"2016-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89045287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-11-23. DOI: 10.4086/toc.2017.v013a021
Decoding Reed-Muller Codes over Product Sets
John Y. Kim, Swastik Kopparty
We give a polynomial-time algorithm to decode multivariate polynomial codes of degree $d$ up to half their minimum distance, when the evaluation points are an arbitrary product set $S^m$, for every $d < |S|$. Previously known algorithms could achieve this only if the set $S$ had some very special algebraic structure, or if the degree $d$ was significantly smaller than $|S|$. We also give a near-linear-time randomized algorithm, based on tools from list-decoding, to decode these codes from errors of weight up to half their minimum distance, provided $d < (1-\epsilon)|S|$ for constant $\epsilon > 0$. Our result gives an $m$-dimensional generalization of the well-known decoding algorithms for Reed-Solomon codes, and can be viewed as giving an algorithmic version of the Schwartz-Zippel lemma.
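For reference, the Schwartz-Zippel lemma over a product set, of which the decoder can be seen as an algorithmic version, states that a nonzero $m$-variate polynomial $P$ of total degree $d$ satisfies

```latex
\[
  \Pr_{x \sim S^m}\bigl[\, P(x) = 0 \,\bigr] \;\le\; \frac{d}{|S|} .
\]
```

Equivalently, two distinct degree-$d$ polynomials agree on at most a $d/|S|$ fraction of $S^m$, which is what makes unique decoding up to half the minimum distance meaningful.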
{"title":"Decoding Reed-Muller Codes over Product Sets","authors":"John Y. Kim, Swastik Kopparty","doi":"10.4086/toc.2017.v013a021","DOIUrl":"https://doi.org/10.4086/toc.2017.v013a021","url":null,"abstract":"We give a polynomial time algorithm to decode multivariate polynomial codes of degree $d$ up to half their minimum distance, when the evaluation points are an arbitrary product set $S^m$, for every $d 0$. \u0000Our result gives an $m$-dimensional generalization of the well known decoding algorithms for Reed-Solomon codes, and can be viewed as giving an algorithmic version of the Schwartz-Zippel lemma.","PeriodicalId":55992,"journal":{"name":"Theory of Computing","volume":"6 1","pages":"11:1-11:28"},"PeriodicalIF":1.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80420826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-17. DOI: 10.4086/toc.2017.v013a004
On the (Non) NP-Hardness of Computing Circuit Complexity
Cody Murray, Richard Ryan Williams
The Minimum Circuit Size Problem (MCSP) is: given the truth table of a Boolean function f and a size parameter k, is the circuit complexity of f at most k? This is the definitive problem of circuit synthesis, and it has been studied since the 1950s. Unlike many problems of its kind, MCSP is not known to be NP-hard, yet an efficient algorithm for this problem also seems very unlikely: for example, MCSP ∈ P would imply there are no pseudorandom functions. Although most NP-complete problems are complete under strong "local" reduction notions such as poly-logarithmic time projections, we show that MCSP is provably not NP-hard under O(n^{1/2-ε})-time projections, for every ε > 0. We prove that the NP-hardness of MCSP under (logtime-uniform) AC⁰ reductions would imply extremely strong lower bounds: NP ⊄ P/poly and E ⊄ i.o.-SIZE(2^{δn}) for some δ > 0 (hence P = BPP also follows). We show that even the NP-hardness of MCSP under general polynomial-time reductions would separate complexity classes: EXP ≠ NP ∩ P/poly, which implies EXP ≠ ZPP. These results help explain why it has been so difficult to prove that MCSP is NP-hard. We also consider the nondeterministic generalization of MCSP: the Nondeterministic Minimum Circuit Size Problem (NMCSP), where one wishes to compute the nondeterministic circuit complexity of a given function. We prove that the Σ₂P-hardness of NMCSP, even under arbitrary polynomial-time reductions, would imply EXP ⊄ P/poly.
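To make the problem concrete, here is a sketch of the natural NP-verifier for MCSP, with a made-up gate encoding of my own: a witness circuit of at most k gates is checked against the entire truth table, in time polynomial in the input length 2^n.

```python
from itertools import product

def verify_mcsp_witness(truth_table, n, circuit, k):
    """Check a witness circuit of size <= k against the truth table of f.

    Hypothetical encoding: a circuit is a list of gates (op, i, j) over
    wires 0..n-1 (the inputs); gate number g defines wire n + g, and the
    last wire is the output. The truth table is assumed to be indexed in
    lexicographic order of the inputs.
    """
    if len(circuit) > k:
        return False
    ops = {"AND": lambda a, b: a & b,
           "OR":  lambda a, b: a | b,
           "NOT": lambda a, b: 1 - a}      # NOT ignores its second input
    for idx, x in enumerate(product((0, 1), repeat=n)):
        wires = list(x)
        for op, i, j in circuit:
            wires.append(ops[op](wires[i], wires[j]))
        if wires[-1] != truth_table[idx]:
            return False
    return True
```

This puts MCSP in NP; the paper's point is that, despite this, proving it NP-hard under natural reductions would have dramatic consequences.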
{"title":"On the (Non) NP-Hardness of Computing Circuit Complexity","authors":"Cody Murray, Richard Ryan Williams","doi":"10.4086/toc.2017.v013a004","DOIUrl":"https://doi.org/10.4086/toc.2017.v013a004","url":null,"abstract":"The Minimum Circuit Size Problem (MCSP) is: given the truth table of a Boolean function f and a size parameter k, is the circuit complexity of f at most k? This is the definitive problem of circuit synthesis, and it has been studied since the 1950s. Unlike many problems of its kind, MCSP is not known to be NP-hard, yet an efficient algorithm for this problem also seems very unlikely: for example, MCSP ∈ P would imply there are no pseudorandom functions. \u0000 \u0000Although most NP-complete problems are complete under strong \"local\" reduction notions such as poly-logarithmic time projections, we show that MCSP is provably not NP-hard under O(n1/2-e)-time projections, for every e > 0. We prove that the NP-hardness of MCSP under (logtime-uniform) AC0 reductions would imply extremely strong lower bounds: NP ⊄ P/poly and E ⊄ i.o.-SIZE(2δn) for some δ > 0 (hence P = BPP also follows). We show that even the NP-hardness of MCSP under general polynomial-time reductions would separate complexity classes: EXP ≠ NP ∩ P/poly, which implies EXP ≠ ZPP. These results help explain why it has been so difficult to prove that MCSP is NP-hard. \u0000 \u0000We also consider the nondeterministic generalization of MCSP: the Nondeterministic Minimum Circuit Size Problem (NMCSP), where one wishes to compute the nondeterministic circuit complexity of a given function. We prove that the Σ2P-hardness of NMCSP, even under arbitrary polynomial-time reductions, would imply EXP ⊄ P/poly.","PeriodicalId":55992,"journal":{"name":"Theory of Computing","volume":"26 1","pages":"365-380"},"PeriodicalIF":1.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87516427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-10-03. DOI: 10.4086/toc.2016.v012a018
Upper Bounds on Quantum Query Complexity Inspired by the Elitzur–Vaidman Bomb Tester
Cedric Yen-Yu Lin, Han-Hsuan Lin
Inspired by the Elitzur–Vaidman bomb testing problem [arXiv:hep-th/9305002], we introduce a new query complexity model, which we call bomb query complexity $B(f)$. We investigate its relationship with the usual quantum query complexity $Q(f)$, and show that $B(f)=\Theta(Q(f)^2)$. This result gives a new method to upper bound the quantum query complexity: we give a method for finding bomb query algorithms from classical algorithms, which then provide nonconstructive upper bounds on $Q(f)=\Theta(\sqrt{B(f)})$. We subsequently give explicit quantum algorithms matching our upper bound method. We apply this method to the single-source shortest paths problem on unweighted graphs, obtaining an algorithm with $O(n^{1.5})$ quantum query complexity, improving the best known algorithm of $O(n^{1.5}\sqrt{\log n})$ [arXiv:quant-ph/0606127]. Applying this method to the maximum bipartite matching problem gives an $O(n^{1.75})$ algorithm, improving on the trivial $O(n^2)$ upper bound, the best previously known.
{"title":"Upper Bounds on Quantum Query Complexity Inspired by the Elitzur--Vaidman Bomb Tester","authors":"Cedric Yen-Yu Lin, Han-Hsuan Lin","doi":"10.4086/toc.2016.v012a018","DOIUrl":"https://doi.org/10.4086/toc.2016.v012a018","url":null,"abstract":"Inspired by the Elitzur-Vaidman bomb testing problem [arXiv:hep-th/9305002], we introduce a new query complexity model, which we call bomb query complexity $B(f)$. We investigate its relationship with the usual quantum query complexity $Q(f)$, and show that $B(f)=Theta(Q(f)^2)$. \u0000This result gives a new method to upper bound the quantum query complexity: we give a method of finding bomb query algorithms from classical algorithms, which then provide nonconstructive upper bounds on $Q(f)=Theta(sqrt{B(f)})$. We subsequently were able to give explicit quantum algorithms matching our upper bound method. We apply this method on the single-source shortest paths problem on unweighted graphs, obtaining an algorithm with $O(n^{1.5})$ quantum query complexity, improving the best known algorithm of $O(n^{1.5}sqrt{log n})$ [arXiv:quant-ph/0606127]. Applying this method to the maximum bipartite matching problem gives an $O(n^{1.75})$ algorithm, improving the best known trivial $O(n^2)$ upper bound.","PeriodicalId":55992,"journal":{"name":"Theory of Computing","volume":"233 1","pages":"537-566"},"PeriodicalIF":1.0,"publicationDate":"2014-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77269048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}