Point-hyperplane Incidence Geometry and the Log-rank Conjecture
Noah G. Singer, M. Sudan

We study the log-rank conjecture from the perspective of point-hyperplane incidence geometry. We formulate the following conjecture: Given a point set in ℝ^d that is covered by constant-sized sets of parallel hyperplanes, there exists an affine subspace that accounts for a large (i.e., 2^{-polylog(d)}) fraction of the incidences, in the sense of containing a large fraction of the points and being contained in a large fraction of the hyperplanes. In other words, the point-hyperplane incidence graph for such configurations has a large complete bipartite subgraph. Alternatively, our conjecture may be interpreted linear-algebraically as follows: Any rank-d matrix containing at most O(1) distinct entries in each column contains a submatrix of fractional size 2^{-polylog(d)}, in which each column is constant. We prove that our conjecture is equivalent to the log-rank conjecture; the crucial ingredient of this proof is a reduction from bounds for parallel k-partitions to bounds for parallel (k-1)-partitions. We also introduce an (apparent) strengthening of the conjecture, which relaxes the requirement that the sets of hyperplanes be parallel. Motivated by the connections above, we revisit well-studied questions in point-hyperplane incidence geometry without structural assumptions (i.e., the existence of partitions). We give an elementary argument for the existence of complete bipartite subgraphs of density Ω(ε^{2d}/d) in any d-dimensional configuration with incidence density ε, qualitatively matching previous results proved using sophisticated geometric techniques. We also improve an upper-bound construction of Apfelbaum and Sharir [2], yielding a configuration whose complete bipartite subgraphs are exponentially small and whose incidence density is Ω(1/√d). Finally, we discuss various constructions (due to others) of products of Boolean matrices which yield configurations with incidence density Ω(1) and complete bipartite subgraph density 2^{-Ω(√d)}, and pose several questions for this special case in the alternative language of extremal set combinatorics. Our framework and results may help shed light on the difficulty of improving Lovett’s Õ(√rank(f)) bound [20] for the log-rank conjecture. In particular, any improvement on this bound would imply the first complete bipartite subgraph size bounds for parallel 3-partitioned configurations which beat our generic bounds for unstructured configurations.
{"title":"Point-hyperplane Incidence Geometry and the Log-rank Conjecture","authors":"Noah G. Singer, M. Sudan","doi":"10.1145/3543684","DOIUrl":"https://doi.org/10.1145/3543684","url":null,"abstract":"We study the log-rank conjecture from the perspective of point-hyperplane incidence geometry. We formulate the following conjecture: Given a point set in ℝd that is covered by constant-sized sets of parallel hyperplanes, there exists an affine subspace that accounts for a large (i.e., 2–polylog(d)) fraction of the incidences, in the sense of containing a large fraction of the points and being contained in a large fraction of the hyperplanes. In other words, the point-hyperplane incidence graph for such configurations has a large complete bipartite subgraph. Alternatively, our conjecture may be interpreted linear-algebraically as follows: Any rank-d matrix containing at most O(1) distinct entries in each column contains a submatrix of fractional size 2–polylog(d), in which each column is constant. We prove that our conjecture is equivalent to the log-rank conjecture; the crucial ingredient of this proof is a reduction from bounds for parallel k-partitions to bounds for parallel (k-1)-partitions. We also introduce an (apparent) strengthening of the conjecture, which relaxes the requirements that the sets of hyperplanes be parallel. Motivated by the connections above, we revisit well-studied questions in point-hyperplane incidence geometry without structural assumptions (i.e., the existence of partitions). We give an elementary argument for the existence of complete bipartite subgraphs of density Ω (ε 2d/d) in any d-dimensional configuration with incidence density ε, qualitatively matching previous results proved using sophisticated geometric techniques. We also improve an upper-bound construction of Apfelbaum and Sharir [2], yielding a configuration whose complete bipartite subgraphs are exponentially small and whose incidence density is Ω (1/√ d). Finally, we discuss various constructions (due to others) of products of Boolean matrices which yield configurations with incidence density Ω (1) and complete bipartite subgraph density 2-Ω (√ d), and pose several questions for this special case in the alternative language of extremal set combinatorics. Our framework and results may help shed light on the difficulty of improving Lovett’s Õ(√ rank(f)) bound [20] for the log-rank conjecture. In particular, any improvement on this bound would imply the first complete bipartite subgraph size bounds for parallel 3-partitioned configurations which beat our generic bounds for unstructured configurations.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121590111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fine-Grained Time Complexity of Constraint Satisfaction Problems
P. Jonsson, Victor Lagerkvist, Biman Roy

We study the constraint satisfaction problem (CSP) parameterized by a constraint language Γ (CSP(Γ)) and how the choice of Γ affects its worst-case time complexity. Under the exponential-time hypothesis (ETH), we rule out the existence of subexponential algorithms for finite-domain NP-complete CSP(Γ) problems. This extends to certain infinite-domain CSPs and structurally restricted problems. For CSPs with finite domain D and where all unary relations are available, we identify a relation S_D such that the time complexity of the NP-complete problem CSP({S_D}) is a lower bound for all NP-complete CSPs of this kind. We also prove that the time complexity of CSP({S_D}) strictly decreases when |D| increases (unless the ETH is false) and provide stronger complexity results in the special case when |D| = 3.
{"title":"Fine-Grained Time Complexity of Constraint Satisfaction Problems","authors":"P. Jonsson, Victor Lagerkvist, Biman Roy","doi":"10.1145/3434387","DOIUrl":"https://doi.org/10.1145/3434387","url":null,"abstract":"We study the constraint satisfaction problem (CSP) parameterized by a constraint language Γ (CSPΓ) and how the choice of Γ affects its worst-case time complexity. Under the exponential-time hypothesis (ETH), we rule out the existence of subexponential algorithms for finite-domain NP-complete CSPΓ problems. This extends to certain infinite-domain CSPs and structurally restricted problems. For CSPs with finite domain D and where all unary relations are available, we identify a relation SD such that the time complexity of the NP-complete problem CSP({SD}) is a lower bound for all NP-complete CSPs of this kind. We also prove that the time complexity of CSP({SD}) strictly decreases when |D| increases (unless the ETH is false) and provide stronger complexity results in the special case when |D|=3.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115135551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational and Proof Complexity of Partial String Avoidability
D. Itsykson, A. Okhotin, V. Oparin

The partial string avoidability problem is stated as follows: given a finite set of strings with possible “holes” (wildcard symbols), determine whether there exists a two-sided infinite string containing no substrings from this set, assuming that a hole matches every symbol. The problem is known to be NP-hard and in PSPACE, and this article establishes its PSPACE-completeness. Next, string avoidability over the binary alphabet is interpreted as a version of the conjunctive normal form satisfiability problem, where each clause has infinitely many shifted variants. Non-satisfiability of these formulas can be proved using variants of classical propositional proof systems, augmented with derivation rules for shifting proof lines (such as clauses, inequalities, polynomials, etc.). First, it is proved that there is a particular formula that has a short refutation in Resolution with a shift rule but requires classical proofs of exponential size. At the same time, it is shown that exponential lower bounds for classical proof systems can be translated for their shifted versions. Finally, it is shown that superpolynomial lower bounds on the size of shifted proofs would separate NP from PSPACE; a connection to lower bounds on circuit complexity is also established.
{"title":"Computational and Proof Complexity of Partial String Avoidability","authors":"D. Itsykson, A. Okhotin, V. Oparin","doi":"10.1145/3442365","DOIUrl":"https://doi.org/10.1145/3442365","url":null,"abstract":"The partial string avoidability problem is stated as follows: given a finite set of strings with possible “holes” (wildcard symbols), determine whether there exists a two-sided infinite string containing no substrings from this set, assuming that a hole matches every symbol. The problem is known to be NP-hard and in PSPACE, and this article establishes its PSPACE-completeness. Next, string avoidability over the binary alphabet is interpreted as a version of conjunctive normal form satisfiability problem, where each clause has infinitely many shifted variants. Non-satisfiability of these formulas can be proved using variants of classical propositional proof systems, augmented with derivation rules for shifting proof lines (such as clauses, inequalities, polynomials, etc.). First, it is proved that there is a particular formula that has a short refutation in Resolution with a shift rule but requires classical proofs of exponential size. At the same time, it is shown that exponential lower bounds for classical proof systems can be translated for their shifted versions. Finally, it is shown that superpolynomial lower bounds on the size of shifted proofs would separate NP from PSPACE; a connection to lower bounds on circuit complexity is also established.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129026447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Erasure-Resilient Sublinear-Time Graph Algorithms
Amit Levi, R. Pallavoor, Sofya Raskhodnikova, Nithin M. Varma

We investigate sublinear-time algorithms that take partially erased graphs represented by adjacency lists as input. Our algorithms make degree and neighbor queries to the input graph and work with a specified fraction of adversarial erasures in adjacency entries. We focus on two computational tasks: testing if a graph is connected or ε-far from connected, and estimating the average degree. For testing connectedness, we discover a threshold phenomenon: when the fraction of erasures is less than ε, this property can be tested efficiently (in time independent of the size of the graph); when the fraction of erasures is at least ε, a number of queries linear in the size of the graph representation is required. Our erasure-resilient algorithm (for the special case with no erasures) is an improvement over the previously known algorithm for connectedness in the standard property testing model and has optimal dependence on the proximity parameter ε. For estimating the average degree, our results provide an “interpolation” between the query complexity of this task in the model with no erasures in two different settings: with only degree queries, investigated by Feige (SIAM J. Comput. ‘06), and with degree and neighbor queries, investigated by Goldreich and Ron (Random Struct. Algorithms ‘08) and Eden et al. (ICALP ‘17). We conclude with a discussion of our model and open questions raised by our work.
{"title":"Erasure-Resilient Sublinear-Time Graph Algorithms","authors":"Amit Levi, R. Pallavoor, Sofya Raskhodnikova, Nithin M. Varma","doi":"10.1145/3488250","DOIUrl":"https://doi.org/10.1145/3488250","url":null,"abstract":"We investigate sublinear-time algorithms that take partially erased graphs represented by adjacency lists as input. Our algorithms make degree and neighbor queries to the input graph and work with a specified fraction of adversarial erasures in adjacency entries. We focus on two computational tasks: testing if a graph is connected or ε-far from connected and estimating the average degree. For testing connectedness, we discover a threshold phenomenon: when the fraction of erasures is less than ε, this property can be tested efficiently (in time independent of the size of the graph); when the fraction of erasures is at least ε, then a number of queries linear in the size of the graph representation is required. Our erasure-resilient algorithm (for the special case with no erasures) is an improvement over the previously known algorithm for connectedness in the standard property testing model and has optimal dependence on the proximity parameter ε. For estimating the average degree, our results provide an “interpolation” between the query complexity for this computational task in the model with no erasures in two different settings: with only degree queries, investigated by Feige (SIAM J. Comput. ‘06), and with degree queries and neighbor queries, investigated by Goldreich and Ron (Random Struct. Algorithms ‘08) and Eden et al. (ICALP ‘17). We conclude with a discussion of our model and open questions raised by our work.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116139103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A ZPP^{NP[1]} Lifting Theorem
Thomas Watson

The complexity class ZPP^{NP[1]} (corresponding to zero-error randomized algorithms with access to one NP oracle query) is known to have a number of curious properties. We further explore this class in the settings of time complexity, query complexity, and communication complexity.
• For starters, we provide a new characterization: ZPP^{NP[1]} equals the restriction of BPP^{NP[1]} where the algorithm is only allowed to err when it forgoes the opportunity to make an NP oracle query.
• Using the above characterization, we prove a query-to-communication lifting theorem, which translates any ZPP^{NP[1]} decision tree lower bound for a function f into a ZPP^{NP[1]} communication lower bound for a two-party version of f.
• As an application, we use the above lifting theorem to prove that the ZPP^{NP[1]} communication lower bound technique introduced by Göös, Pitassi, and Watson (ICALP 2016) is not tight. We also provide a “primal” characterization of this lower bound technique as a complexity class.
{"title":"A ZPPNP[1] Lifting Theorem","authors":"Thomas Watson","doi":"10.1145/3428673","DOIUrl":"https://doi.org/10.1145/3428673","url":null,"abstract":"The complexity class ZPPNP[1] (corresponding to zero-error randomized algorithms with access to one NP oracle query) is known to have a number of curious properties. We further explore this class in the settings of time complexity, query complexity, and communication complexity. • For starters, we provide a new characterization: ZPPNP[1] equals the restriction of BPPNP[1] where the algorithm is only allowed to err when it forgoes the opportunity to make an NP oracle query. • Using the above characterization, we prove a query-to-communication lifting theorem, which translates any ZPPNP[1] decision tree lower bound for a function f into a ZPPNP[1] communication lower bound for a two-party version of f. • As an application, we use the above lifting theorem to prove that the ZPPNP[1] communication lower bound technique introduced by Göös, Pitassi, and Watson (ICALP 2016) is not tight. We also provide a “primal” characterization of this lower bound technique as a complexity class.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129638220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Existential MSO and Its Relation to ETH
R. Ganian, Ronald de Haan, Iyad A. Kanj, Stefan Szeider

Impagliazzo et al. proposed a framework, based on the logic fragment defining the complexity class SNP, to identify problems that are equivalent to k-CNF-Sat modulo subexponential-time reducibility (serf-reducibility). The subexponential-time solvability of any of these problems implies the failure of the Exponential Time Hypothesis (ETH). In this article, we extend the framework of Impagliazzo et al. and identify a larger set of problems that are equivalent to k-CNF-Sat modulo serf-reducibility. We propose a complexity class, referred to as Linear Monadic NP, that consists of all problems expressible in existential monadic second-order logic whose expressions have a linear measure in terms of a complexity parameter, which is usually the universe size of the problem. This research direction can be traced back to Fagin’s celebrated theorem stating that NP coincides with the class of problems expressible in existential second-order logic. Monadic NP, a well-studied class in the literature, is the restriction of the aforementioned logic fragment to existential monadic second-order logic. The proposed class Linear Monadic NP is then the restriction of Monadic NP to problems whose expressions have linear measure in the complexity parameter. We show that Linear Monadic NP includes many natural complete problems such as the satisfiability of linear-size circuits, dominating set, independent dominating set, and perfect code. Therefore, for any of these problems, its subexponential-time solvability is equivalent to the failure of ETH. We prove, using logic games, that the aforementioned problems are inexpressible in the monadic fragment of SNP, and hence, are not captured by the framework of Impagliazzo et al. Finally, we show that Feedback Vertex Set is inexpressible in existential monadic second-order logic, and hence is not in Linear Monadic NP, and investigate the existence of certain reductions between Feedback Vertex Set (and variants of it) and 3-CNF-Sat.
{"title":"On Existential MSO and Its Relation to ETH","authors":"R. Ganian, Ronald de Haan, Iyad A. Kanj, Stefan Szeider","doi":"10.1145/3417759","DOIUrl":"https://doi.org/10.1145/3417759","url":null,"abstract":"Impagliazzo et al. proposed a framework, based on the logic fragment defining the complexity class SNP, to identify problems that are equivalent to k-CNF-Sat modulo subexponential-time reducibility (serf-reducibility). The subexponential-time solvability of any of these problems implies the failure of the Exponential Time Hypothesis (ETH). In this article, we extend the framework of Impagliazzo et al. and identify a larger set of problems that are equivalent to k-CNF-Sat modulo serf-reducibility. We propose a complexity class, referred to as Linear Monadic NP, that consists of all problems expressible in existential monadic second-order logic whose expressions have a linear measure in terms of a complexity parameter, which is usually the universe size of the problem. This research direction can be traced back to Fagin’s celebrated theorem stating that NP coincides with the class of problems expressible in existential second-order logic. Monadic NP, a well-studied class in the literature, is the restriction of the aforementioned logic fragment to existential monadic second-order logic. The proposed class Linear Monadic NP is then the restriction of Monadic NP to problems whose expressions have linear measure in the complexity parameter. We show that Linear Monadic NP includes many natural complete problems such as the satisfiability of linear-size circuits, dominating set, independent dominating set, and perfect code. Therefore, for any of these problems, its subexponential-time solvability is equivalent to the failure of ETH. We prove, using logic games, that the aforementioned problems are inexpressible in the monadic fragment of SNP, and hence, are not captured by the framework of Impagliazzo et al. Finally, we show that Feedback Vertex Set is inexpressible in existential monadic second-order logic, and hence is not in Linear Monadic NP, and investigate the existence of certain reductions between Feedback Vertex Set (and variants of it) and 3-CNF-Sat.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132894642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Lower Bound for Sampling Disjoint Sets
Thomas Watson

Suppose Alice and Bob each start with private randomness and no other input, and they wish to engage in a protocol in which Alice ends up with a set x ⊆ [n] and Bob ends up with a set y ⊆ [n], such that (x, y) is uniformly distributed over all pairs of disjoint sets. We prove that for some constant β < 1, this requires Ω(n) communication even to get within statistical distance 1 − β^n of the target distribution. Previously, Ambainis, Schulman, Ta-Shma, Vazirani, and Wigderson (FOCS 1998) proved that Ω(√n) communication is required to get within some constant statistical distance ε > 0 of the uniform distribution over all pairs of disjoint sets of size √n.
{"title":"A Lower Bound for Sampling Disjoint Sets","authors":"Thomas Watson","doi":"10.1145/3404858","DOIUrl":"https://doi.org/10.1145/3404858","url":null,"abstract":"Suppose Alice and Bob each start with private randomness and no other input, and they wish to engage in a protocol in which Alice ends up with a set x⊆ [n] and Bob ends up with a set y⊆ [n], such that (x,y) is uniformly distributed over all pairs of disjoint sets. We prove that for some constant β < 1, this requires Ω (n) communication even to get within statistical distance 1− βn of the target distribution. Previously, Ambainis, Schulman, Ta-Shma, Vazirani, and Wigderson (FOCS 1998) proved that Ω (√n) communication is required to get within some constant statistical distance ɛ > 0 of the uniform distribution over all pairs of disjoint sets of size √n.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122833040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Circuit Lower Bounds for MCSP from Local Pseudorandom Generators
Mahdi Cheraghchi, Valentine Kabanets, Zhenjian Lu, Dimitrios Myrisiotis

The Minimum Circuit Size Problem (MCSP) asks if a given truth table of a Boolean function f can be computed by a Boolean circuit of size at most θ, for a given parameter θ. We improve several circuit lower bounds for MCSP, using pseudorandom generators (PRGs) that are local; a PRG is called local if its output bit strings, when viewed as the truth table of a Boolean function, can be computed by a Boolean circuit of small size. We get new and improved lower bounds for MCSP that almost match the best-known lower bounds against several circuit models. Specifically, we show that computing MCSP, on functions with a truth table of length N, requires
• N^{3−o(1)}-size de Morgan formulas, improving the recent N^{2−o(1)} lower bound by Hirahara and Santhanam (CCC, 2017),
• N^{2−o(1)}-size formulas over an arbitrary basis or general branching programs (no non-trivial lower bound was known for MCSP against these models), and
• 2^{Ω(N^{1/(d+1.01)})}-size depth-d AC^0 circuits, improving the (implicit, in their work) exponential size lower bound by Allender et al. (SICOMP, 2006).
The AC^0 lower bound stated above matches the best-known AC^0 lower bound (for PARITY) up to a small additive constant in the depth. Also, for the special case of depth-2 circuits (i.e., CNFs or DNFs), we get an optimal lower bound of 2^{Ω(N)} for MCSP.
{"title":"Circuit Lower Bounds for MCSP from Local Pseudorandom Generators","authors":"Mahdi Cheraghchi, Valentine Kabanets, Zhenjian Lu, Dimitrios Myrisiotis","doi":"10.1145/3404860","DOIUrl":"https://doi.org/10.1145/3404860","url":null,"abstract":"The Minimum Circuit Size Problem (MCSP) asks if a given truth table of a Boolean function f can be computed by a Boolean circuit of size at most θ, for a given parameter θ. We improve several circuit lower bounds for MCSP, using pseudorandom generators (PRGs) that are local; a PRG is called local if its output bit strings, when viewed as the truth table of a Boolean function, can be computed by a Boolean circuit of small size. We get new and improved lower bounds for MCSP that almost match the best-known lower bounds against several circuit models. Specifically, we show that computing MCSP, on functions with a truth table of length N, requires • N3−o(1)-size de Morgan formulas, improving the recent N2−o(1) lower bound by Hirahara and Santhanam (CCC, 2017), • N2−o(1)-size formulas over an arbitrary basis or general branching programs (no non-trivial lower bound was known for MCSP against these models), and • 2Ω(N1/(d+1.01))-size depth-d AC0 circuits, improving the (implicit, in their work) exponential size lower bound by Allender et al. (SICOMP, 2006). The AC0 lower bound stated above matches the best-known AC0 lower bound (for PARITY) up to a small additive constant in the depth. Also, for the special case of depth-2 circuits (i.e., CNFs or DNFs), we get an optimal lower bound of 2Ω(N) for MCSP.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133594028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Parameterized Approximability of Contraction to Classes of Chordal Graphs
Spoorthy Gunda, P. Jain, D. Lokshtanov, Saket Saurabh, P. Tale

A graph operation that contracts edges is one of the fundamental operations in the theory of graph minors. The parameterized complexity of editing to a family of graphs by contracting k edges has recently gained substantial scientific attention, and several new results have been obtained. Some important families of graphs, namely, the subfamilies of chordal graphs, have proven to be significantly more difficult than one might expect in the context of edge contractions. In this article, we study the F-Contraction problem, where F is a subfamily of chordal graphs, in the realm of parameterized approximation. Formally, given a graph G and an integer k, F-Contraction asks whether there exists X ⊆ E(G) such that G/X ∈ F and |X| ≤ k. Here, G/X is the graph obtained from G by contracting the edges in X. We obtain the following results for the F-Contraction problem:
• Clique Contraction is known to be FPT. However, unless NP ⊆ coNP/poly, it does not admit a polynomial kernel. We show that it admits a polynomial-size approximate kernelization scheme (PSAKS). That is, it admits a (1 + ε)-approximate kernel with O(k^{f(ε)}) vertices for every ε > 0.
• Split Contraction is known to be W[1]-hard. We deconstruct this intractability result in two ways. First, we give a (2 + ε)-approximate polynomial kernel for Split Contraction (which also implies a factor (2 + ε)-FPT-approximation algorithm for Split Contraction). Furthermore, we show that, assuming Gap-ETH, there is no (5/4 − δ)-FPT-approximation algorithm for Split Contraction. Here, ε, δ > 0 are fixed constants.
• Chordal Contraction is known to be W[2]-hard. We complement this result by observing that the existing W[2]-hardness reduction can be adapted to show that, assuming FPT ≠ W[1], there is no F(k)-FPT-approximation algorithm for Chordal Contraction. Here, F(k) is an arbitrary function depending on k alone.
We say that an algorithm is an h(k)-FPT-approximation algorithm for the F-Contraction problem if it runs in FPT time and, on any input (G, k) such that there exists X ⊆ E(G) satisfying G/X ∈ F and |X| ≤ k, it outputs an edge set Y of size at most h(k) · k for which G/Y is in F.
{"title":"On the Parameterized Approximability of Contraction to Classes of Chordal Graphs","authors":"Spoorthy Gunda, P. Jain, D. Lokshtanov, Saket Saurabh, P. Tale","doi":"10.1145/3470869","DOIUrl":"https://doi.org/10.1145/3470869","url":null,"abstract":"A graph operation that contracts edges is one of the fundamental operations in the theory of graph minors. Parameterized Complexity of editing to a family of graphs by contracting k edges has recently gained substantial scientific attention, and several new results have been obtained. Some important families of graphs, namely, the subfamilies of chordal graphs, in the context of edge contractions, have proven to be significantly difficult than one might expect. In this article, we study the F-Contraction problem, where F is a subfamily of chordal graphs, in the realm of parameterized approximation. Formally, given a graph G and an integer k, F-Contraction asks whether there exists X ⊆ E(G) such that G/X ∈ F and |X| ≤ k. Here, G/X is the graph obtained from G by contracting edges in X. We obtain the following results for the F-Contraction problem: • Clique Contraction is known to be FPT. However, unless NP⊆ coNP/poly, it does not admit a polynomial kernel. We show that it admits a polynomial-size approximate kernelization scheme (PSAKS). That is, it admits a (1 + ε)-approximate kernel with O(kf(ε)) vertices for every ε > 0. • Split Contraction is known to be W[1]-Hard. We deconstruct this intractability result in two ways. First, we give a (2+ε)-approximate polynomial kernel for Split Contraction (which also implies a factor (2+ε)-FPT-approximation algorithm for Split Contraction). Furthermore, we show that, assuming Gap-ETH, there is no (5/4-δ)-FPT-approximation algorithm for Split Contraction. Here, ε, δ > 0 are fixed constants. • Chordal Contraction is known to be W[2]-Hard. We complement this result by observing that the existing W[2]-hardness reduction can be adapted to show that, assuming FPT≠ W[1], there is no F(k)-FPT-approximation algorithm for Chordal Contraction. Here, F(k) is an arbitrary function depending on k alone. We say that an algorithm is an h(k)-FPT-approximation algorithm for the F-Contraction problem, if it runs in FPT time, and on any input (G, k) such that there exists X ⊆ E(G) satisfying G/X ∈ F and |X| ≤ k, it outputs an edge set Y of size at most h(k) ċ k for which G/Y is in F.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131200428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complexity of Unordered CNF Games
Md Lutfar Rahman, Thomas Watson

The classic TQBF problem is to determine who has a winning strategy in a game played on a given conjunctive normal form formula (CNF), where the two players alternate turns picking truth values for the variables in a given order, and the winner is determined by whether the CNF gets satisfied. We study variants of this game in which the variables may be played in any order, and each turn consists of picking a remaining variable and a truth value for it. For the version where the set of variables is partitioned into two halves and each player may only pick variables from his or her half, we prove that the problem is PSPACE-complete for 5-CNFs and in P for 2-CNFs. Previously, it was known to be PSPACE-complete for unbounded-width CNFs (Schaefer, STOC 1976). For the general unordered version (where each variable can be picked by either player), we also prove that the problem is PSPACE-complete for 5-CNFs and in P for 2-CNFs. Previously, it was known to be PSPACE-complete for 6-CNFs (Ahlroth and Orponen, MFCS 2012) and PSPACE-complete for positive 11-CNFs (Schaefer, STOC 1976).
{"title":"Complexity of Unordered CNF Games","authors":"Md Lutfar Rahman, Thomas Watson","doi":"10.1145/3397478","DOIUrl":"https://doi.org/10.1145/3397478","url":null,"abstract":"The classic TQBF problem is to determine who has a winning strategy in a game played on a given conjunctive normal form formula (CNF), where the two players alternate turns picking truth values for the variables in a given order, and the winner is determined by whether the CNF gets satisfied. We study variants of this game in which the variables may be played in any order, and each turn consists of picking a remaining variable and a truth value for it. For the version where the set of variables is partitioned into two halves and each player may only pick variables from his or her half, we prove that the problem is PSPACE-complete for 5-CNFs and in P for 2-CNFs. Previously, it was known to be PSPACE-complete for unbounded-width CNFs (Schaefer, STOC 1976). For the general unordered version (where each variable can be picked by either player), we also prove that the problem is PSPACE-complete for 5-CNFs and in P for 2-CNFs. Previously, it was known to be PSPACE-complete for 6-CNFs (Ahlroth and Orponen, MFCS 2012) and PSPACE-complete for positive 11-CNFs (Schaefer, STOC 1976).","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124443752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}