Clustering Mixtures with Almost Optimal Separation in Polynomial Time
Jerry Li, Allen Liu
SIAM Journal on Computing, Ahead of Print. DOI: 10.1137/22m1538788. Published February 22, 2024.
Abstract. We consider the problem of clustering mixtures of mean-separated Gaussians in high dimensions. We are given samples from a mixture of [math] identity covariance Gaussians, so that the minimum pairwise distance between any two means is at least [math], for some parameter [math], and the goal is to recover the ground truth clustering of these samples. It is folklore that separation [math] is both necessary and sufficient to recover a good clustering (say, with constant or [math] error), at least information-theoretically. However, the estimators which achieve this guarantee are inefficient. We give the first algorithm which runs in polynomial time in both [math] and the dimension [math], and which almost matches this guarantee. More precisely, we give an algorithm which takes polynomially many samples and time, and which can successfully recover a good clustering, so long as the separation is [math], for any [math]. Previously, polynomial time algorithms were only known for this problem when the separation was polynomial in [math], and all algorithms which could tolerate [math] separation required quasipolynomial time. We also extend our result to mixtures of translations of a distribution which satisfies the Poincaré inequality, under additional mild assumptions. Our main technical tool, which we believe is of independent interest, is a novel way to implicitly represent and estimate high degree moments of a distribution, which allows us to extract important information about high degree moments without ever writing down the full moment tensors explicitly.
{"title":"Clustering Mixtures with Almost Optimal Separation in Polynomial Time","authors":"Jerry Li, Allen Liu","doi":"10.1137/22m1538788","DOIUrl":"https://doi.org/10.1137/22m1538788","url":null,"abstract":"SIAM Journal on Computing, Ahead of Print. <br/> Abstract. We consider the problem of clustering mixtures of mean-separated Gaussians in high dimensions. We are given samples from a mixture of [math] identity covariance Gaussians, so that the minimum pairwise distance between any two pairs of means is at least [math], for some parameter [math], and the goal is to recover the ground truth clustering of these samples. It is folklore that separation [math] is both necessary and sufficient to recover a good clustering (say, with constant or [math] error), at least information-theoretically. However, the estimators which achieve this guarantee are inefficient. We give the first algorithm which runs in polynomial time in both [math] and the dimension [math], and which almost matches this guarantee. More precisely, we give an algorithm which takes polynomially many samples and time, and which can successfully recover a good clustering, so long as the separation is [math], for any [math]. Previously, polynomial time algorithms were only known for this problem when the separation was polynomial in [math], and all algorithms which could tolerate [math] separation required quasipolynomial time. We also extend our result to mixtures of translations of a distribution which satisfies the Poincaré inequality, under additional mild assumptions. Our main technical tool, which we believe is of independent interest, is a novel way to implicitly represent and estimate high degree moments of a distribution, which allows us to extract important information about high degree moments without ever writing down the full moment tensors explicitly.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"2015 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139949946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quasi-Polynomial Time Approximation Schemes for the Maximum Weight Independent Set Problem in [math]-Free Graphs
Maria Chudnovsky, Marcin Pilipczuk, Michał Pilipczuk, Stéphan Thomassé
SIAM Journal on Computing, Volume 53, Issue 1, Page 47-86, February 2024. DOI: 10.1137/20m1333778. Published February 21, 2024.
Abstract. In the Maximum Independent Set problem we are asked to find a set of pairwise nonadjacent vertices in a given graph with the maximum possible cardinality. In general graphs, this classical problem is known to be NP-hard and hard to approximate within a factor of [math] for any [math]. Due to this, investigating the complexity of Maximum Independent Set in various graph classes in hope of finding better tractability results is an active research direction. In [math]-free graphs, that is, graphs not containing a fixed graph [math] as an induced subgraph, the problem is known to remain NP-hard and APX-hard whenever [math] contains a cycle, a vertex of degree at least four, or two vertices of degree at least three in one connected component. For the remaining cases, where every component of [math] is a path or a subdivided claw, the complexity of Maximum Independent Set remains widely open, with only a handful of polynomial-time solvability results for small graphs [math] such as [math], [math], the claw, or the fork. We prove that for every such “possibly tractable” graph [math] there exists an algorithm that, given an [math]-free graph [math] and an accuracy parameter [math], finds an independent set in [math] of cardinality within a factor of [math] of the optimum in time exponential in a polynomial of [math] and [math]. Furthermore, an independent set of maximum size can be found in subexponential time [math]. That is, we show that for every graph [math] for which Maximum Independent Set is not known to be APX-hard and SUBEXP-hard in [math]-free graphs, the problem admits a quasi-polynomial time approximation scheme and a subexponential-time exact algorithm in this graph class. Our algorithms also work in the more general weighted setting, where the input graph is supplied with a weight function on vertices and we are maximizing the total weight of an independent set.
{"title":"Quasi-Polynomial Time Approximation Schemes for the Maximum Weight Independent Set Problem in [math]-Free Graphs","authors":"Maria Chudnovsky, Marcin Pilipczuk, Michał Pilipczuk, Stéphan Thomassé","doi":"10.1137/20m1333778","DOIUrl":"https://doi.org/10.1137/20m1333778","url":null,"abstract":"SIAM Journal on Computing, Volume 53, Issue 1, Page 47-86, February 2024. <br/> Abstract. In the Maximum Independent Set problem we are asked to find a set of pairwise nonadjacent vertices in a given graph with the maximum possible cardinality. In general graphs, this classical problem is known to be NP-hard and hard to approximate within a factor of [math] for any [math]. Due to this, investigating the complexity of Maximum Independent Set in various graph classes in hope of finding better tractability results is an active research direction. In [math]-free graphs, that is, graphs not containing a fixed graph [math] as an induced subgraph, the problem is known to remain NP-hard and APX-hard whenever [math] contains a cycle, a vertex of degree at least four, or two vertices of degree at least three in one connected component. For the remaining cases, where every component of [math] is a path or a subdivided claw, the complexity of Maximum Independent Set remains widely open, with only a handful of polynomial-time solvability results for small graphs [math] such as [math], [math], the claw, or the fork. We prove that for every such “possibly tractable” graph [math] there exists an algorithm that, given an [math]-free graph [math] and an accuracy parameter [math], finds an independent set in [math] of cardinality within a factor of [math] of the optimum in time exponential in a polynomial of [math] and [math]. Furthermore, an independent set of maximum size can be found in subexponential time [math]. That is, we show that for every graph [math] for which Maximum Independent Set is not known to be APX-hard and SUBEXP-hard in [math]-free graphs, the problem admits a quasi-polynomial time approximation scheme and a subexponential-time exact algorithm in this graph class. Our algorithms also work in the more general weighted setting, where the input graph is supplied with a weight function on vertices and we are maximizing the total weight of an independent set.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"140 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139924410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Shortest Even Cycle Problem Is Tractable
Andreas Björklund, Thore Husfeldt, Petteri Kaski
SIAM Journal on Computing, Ahead of Print. DOI: 10.1137/22m1538260. Published February 15, 2024.
Abstract. Given a directed graph as input, we show how to efficiently find a shortest (directed, simple) cycle on an even number of vertices. As far as we know, no polynomial-time algorithm was previously known for this problem. In fact, finding any even cycle in a directed graph in polynomial time was open for more than two decades until Robertson, Seymour, and Thomas (Ann. of Math. (2), 150 (1999), pp. 929–975) and, independently, McCuaig (Electron. J. Combin., 11 (2004), R7900) (announced jointly at STOC 1997) gave an efficiently testable structural characterization of even-cycle-free directed graphs. Methodologically, our algorithm relies on the standard framework of algebraic fingerprinting and randomized polynomial identity testing over a finite field and, in fact, relies on a generating polynomial implicit in a paper of Vazirani and Yannakakis (Discrete Appl. Math., 25 (1989), pp. 179–190) that enumerates weighted cycle covers by the parity of their number of cycles as a difference of a permanent and a determinant polynomial. The need to work with the permanent—known to be #P-hard apart from a very restricted choice of coefficient rings (L. G. Valiant, Theoret. Comput. Sci., 8 (1979), pp. 189–201)—is where our main technical contribution occurs. We design a family of finite commutative rings of characteristic 4 that simultaneously (i) give a nondegenerate representation for the generating polynomial identity via the permanent and the determinant, (ii) support efficient permanent computations by extension of Valiant’s techniques, and (iii) enable emulation of finite-field arithmetic in characteristic 2. Here our work is foreshadowed by that of Björklund and Husfeldt (SIAM J. Comput., 48 (2019), pp. 1698–1710), who used a considerably less efficient commutative ring design—in particular, one lacking finite-field emulation—to obtain a polynomial-time algorithm for the shortest two disjoint paths problem in undirected graphs. Building on work of Gilbert and Tarjan (Numer. Math., 50 (1986), pp. 377–404) as well as Alon and Yuster (J. ACM, 42 (2013), pp. 844–856), we also show how ideas from the nested dissection technique for solving linear equation systems—introduced by George (SIAM J. Numer. Anal., 10 (1973), pp. 345–363) for symmetric positive definite real matrices—lead to faster algorithm designs in our present finite-ring randomized context when we have control of the separator structure of the input graph; for example, this happens when the input has bounded genus.
{"title":"The Shortest Even Cycle Problem Is Tractable","authors":"Andreas Björklund, Thore Husfeldt, Petteri Kaski","doi":"10.1137/22m1538260","DOIUrl":"https://doi.org/10.1137/22m1538260","url":null,"abstract":"SIAM Journal on Computing, Ahead of Print. <br/> Abstract. Given a directed graph as input, we show how to efficiently find a shortest (directed, simple) cycle on an even number of vertices. As far as we know, no polynomial-time algorithm was previously known for this problem. In fact, finding any even cycle in a directed graph in polynomial time was open for more than two decades until Robertson, Seymour, and Thomas (Ann. of Math. (2), 150 (1999), pp. 929–975) and, independently, McCuaig (Electron. J. Combin., 11 (2004), R7900) (announced jointly at STOC 1997) gave an efficiently testable structural characterization of even-cycle-free directed graphs. Methodologically, our algorithm relies on the standard framework of algebraic fingerprinting and randomized polynomial identity testing over a finite field and, in fact, relies on a generating polynomial implicit in a paper of Vazirani and Yannakakis (Discrete Appl. Math., 25 (1989), pp. 179–190) that enumerates weighted cycle covers by the parity of their number of cycles as a difference of a permanent and a determinant polynomial. The need to work with the permanent—known to be #P-hard apart from a very restricted choice of coefficient rings (L. G. Valiant, Theoret. Comput. Sci., 8 (1979), pp. 189–201)—is where our main technical contribution occurs. We design a family of finite commutative rings of characteristic 4 that simultaneously (i) give a nondegenerate representation for the generating polynomial identity via the permanent and the determinant, (ii) support efficient permanent computations by extension of Valiant’s techniques, and (iii) enable emulation of finite-field arithmetic in characteristic 2. Here our work is foreshadowed by that of Björklund and Husfeldt (SIAM J. Comput., 48 (2019), pp. 1698–1710) who used a considerably less efficient commutative ring design—in particular, one lacking finite-field emulation—to obtain a polynomial-time algorithm for the shortest two disjoint paths problem in undirected graphs. Building on work of Gilbert and Tarjan (Numer. Math., 50 (1986), pp. 377–404) as well as Alon and Yuster (J. ACM, 42 (2013), pp. 844–856), we also show how ideas from the nested dissection technique for solving linear equation systems—introduced by George (SIAM J. Numer. Anal., 10 (1973), pp. 345–363) for symmetric positive definite real matrices—leads to faster algorithm designs in our present finite-ring randomized context when we have control of the separator structure of the input graph; for example, this happens when the input has bounded genus.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"11 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139755010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hardness of Random Optimization Problems for Boolean Circuits, Low-Degree Polynomials, and Langevin Dynamics
David Gamarnik, Aukosh Jagannath, Alexander S. Wein
SIAM Journal on Computing, Volume 53, Issue 1, Page 1-46, February 2024. DOI: 10.1137/22m150263x. Published February 13, 2024.
Abstract. We consider the problem of finding nearly optimal solutions of optimization problems with random objective functions. Such problems arise widely in the theory of random graphs, theoretical computer science, and statistical physics. Two concrete problems we consider are (a) optimizing the Hamiltonian of a spherical or Ising [math]-spin glass model and (b) finding a large independent set in a sparse Erdős–Rényi graph. The following families of algorithms are considered: (a) low-degree polynomials of the input—a general framework that captures many prior algorithms; (b) low-depth Boolean circuits; (c) the Langevin dynamics algorithm, a canonical Monte Carlo analogue of the gradient descent algorithm. We show that these families of algorithms cannot have high success probability. For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory (although we consider the search problem as opposed to the decision problem). Our proof uses the fact that these models are known to exhibit a variant of the overlap gap property (OGP) of near-optimal solutions. Specifically, for both models, every two solutions whose objectives are above a certain threshold are either close to or far from each other. The crux of our proof is that the classes of algorithms we consider exhibit a form of stability (noise-insensitivity): a small perturbation of the input induces a small perturbation of the output. We show by an interpolation argument that stable algorithms cannot overcome the OGP barrier. The stability of Langevin dynamics is an immediate consequence of the well-posedness of stochastic differential equations. The stability of low-degree polynomials and Boolean circuits is established using tools from Gaussian and Boolean analysis—namely hypercontractivity and total influence, as well as a novel lower bound for random walks avoiding certain subsets, which we expect to be of independent interest. In the case of Boolean circuits, the result also makes use of Linial–Mansour–Nisan’s classical theorem. Our techniques apply more broadly to low influence functions, and we expect that they may apply more generally.
{"title":"Hardness of Random Optimization Problems for Boolean Circuits, Low-Degree Polynomials, and Langevin Dynamics","authors":"David Gamarnik, Aukosh Jagannath, Alexander S. Wein","doi":"10.1137/22m150263x","DOIUrl":"https://doi.org/10.1137/22m150263x","url":null,"abstract":"SIAM Journal on Computing, Volume 53, Issue 1, Page 1-46, February 2024. <br/> Abstract. We consider the problem of finding nearly optimal solutions of optimization problems with random objective functions. Such problems arise widely in the theory of random graphs, theoretical computer science, and statistical physics. Two concrete problems we consider are (a) optimizing the Hamiltonian of a spherical or Ising [math]-spin glass model and (b) finding a large independent set in a sparse Erdős–Rényi graph. The following families of algorithms are considered: (a) low-degree polynomials of the input—a general framework that captures many prior algorithms; (b) low-depth Boolean circuits; (c) the Langevin dynamics algorithm, a canonical Monte Carlo analogue of the gradient descent algorithm. We show that these families of algorithms cannot have high success probability. For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory (although we consider the search problem as opposed to the decision problem). Our proof uses the fact that these models are known to exhibit a variant of the overlap gap property (OGP) of near-optimal solutions. Specifically, for both models, every two solutions whose objectives are above a certain threshold are either close to or far from each other. The crux of our proof is that the classes of algorithms we consider exhibit a form of stability (noise-insensitivity): a small perturbation of the input induces a small perturbation of the output. We show by an interpolation argument that stable algorithms cannot overcome the OGP barrier. The stability of Langevin dynamics is an immediate consequence of the well-posedness of stochastic differential equations. The stability of low-degree polynomials and Boolean circuits is established using tools from Gaussian and Boolean analysis—namely hypercontractivity and total influence, as well as a novel lower bound for random walks avoiding certain subsets, which we expect to be of independent interest. In the case of Boolean circuits, the result also makes use of Linial–Mansour–Nisan’s classical theorem. Our techniques apply more broadly to low influence functions, and we expect that they may apply more generally.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"89 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139755014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Strong Version of Cobham’s Theorem
Philipp Hieronymi, Chris Schulz
SIAM Journal on Computing, Ahead of Print. DOI: 10.1137/22m1538065. Published January 17, 2024.
Abstract. Let [math] be two multiplicatively independent integers. Cobham’s famous theorem states that a set [math] is both [math]-recognizable and [math]-recognizable if and only if it is definable in Presburger arithmetic. Here we show the following strengthening: let [math] be [math]-recognizable, and let [math] be [math]-recognizable such that both [math] and [math] are not definable in Presburger arithmetic. Then the first-order logical theory of [math] is undecidable. This is in contrast to a well-known theorem of Büchi stating that the first-order logical theory of [math] is decidable.
{"title":"A Strong Version of Cobham’s Theorem","authors":"Philipp Hieronymi, Chris Schulz","doi":"10.1137/22m1538065","DOIUrl":"https://doi.org/10.1137/22m1538065","url":null,"abstract":"SIAM Journal on Computing, Ahead of Print. <br/> Abstract. Let [math] be two multiplicatively independent integers. Cobham’s famous theorem states that a set [math] is both [math]-recognizable and [math]-recognizable if and only if it is definable in Presburger arithmetic. Here we show the following strengthening: let [math] be [math]-recognizable, and let [math] be [math]-recognizable such that both [math] and [math] are not definable in Presburger arithmetic. Then the first-order logical theory of [math] is undecidable. This is in contrast to a well-known theorem of Büchi stating that the first-order logical theory of [math] is decidable.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"2 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139496847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discrepancy Minimization via a Self-Balancing Walk
Ryan Alweiss, Yang P. Liu, Mehtaab S. Sawhney
SIAM Journal on Computing, Ahead of Print. DOI: 10.1137/21m1442450. Published January 12, 2024.
Abstract. We study discrepancy minimization for vectors in [math] under various settings. The main result is the analysis of a new simple random process in high dimensions through a comparison argument. As corollaries, we obtain bounds which are tight up to logarithmic factors for online vector balancing against oblivious adversaries, resolving several questions posed by Bansal et al. [STOC, ACM, New York, 2020, pp. 1139–1152], as well as a linear time algorithm for logarithmic bounds for the Komlós conjecture.
{"title":"Discrepancy Minimization via a Self-Balancing Walk","authors":"Ryan Alweiss, Yang P. Liu, Mehtaab S. Sawhney","doi":"10.1137/21m1442450","DOIUrl":"https://doi.org/10.1137/21m1442450","url":null,"abstract":"SIAM Journal on Computing, Ahead of Print. <br/> Abstract. We study discrepancy minimization for vectors in [math] under various settings. The main result is the analysis of a new simple random process in high dimensions through a comparison argument. As corollaries, we obtain bounds which are tight up to logarithmic factors for online vector balancing against oblivious adversaries, resolving several questions posed by Bansal et al. [STOC, ACM, New York, 2020, pp. 1139–1152], as well as a linear time algorithm for logarithmic bounds for the Komlós conjecture.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"123 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139461903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-Black-Box Worst-Case to Average-Case Reductions Within [math]
Shuichi Hirahara
SIAM Journal on Computing, Volume 52, Issue 6, Page FOCS18-349-FOCS18-382, December 2023. DOI: 10.1137/19m124705x. Published December 15, 2023.
Abstract. There are significant obstacles to establishing an equivalence between the worst-case and average-case hardness of [math]. Several results suggest that black-box worst-case to average-case reductions are not likely to be used for reducing any worst-case problem outside [math] to a distributional [math] problem. This paper overcomes the barrier. We present the first non-black-box worst-case to average-case reduction from a problem conjectured to be outside [math] to a distributional [math] problem. Specifically, we consider the minimum time-bounded Kolmogorov complexity problem (MINKT) and prove that there exists a zero-error randomized polynomial-time algorithm approximating the minimum time-bounded Kolmogorov complexity [math] within an additive error [math] if its average-case version admits an errorless heuristic polynomial-time algorithm. We observe that the approximation version of MINKT is Random 3SAT-hard, and more generally it is harder than avoiding any polynomial-time computable hitting set generator that extends its seed of length [math] by [math], which provides strong evidence that the approximation problem is outside [math] and thus our reductions are non-black-box. Our reduction can be derandomized at the cost of the quality of the approximation. We also show that, given a truth table of size [math], approximating the minimum circuit size within a factor of [math] is in [math] for some constant [math] iff its average-case version is easy. Our results can be seen as a new approach for excluding Heuristica. In particular, proving [math]-hardness of the approximation versions of MINKT or the minimum circuit size problem is sufficient for establishing an equivalence between the worst-case and average-case hardness of [math].
{"title":"Non-Black-Box Worst-Case to Average-Case Reductions Within [math]","authors":"Shuichi Hirahara","doi":"10.1137/19m124705x","DOIUrl":"https://doi.org/10.1137/19m124705x","url":null,"abstract":"SIAM Journal on Computing, Volume 52, Issue 6, Page FOCS18-349-FOCS18-382, December 2023. <br/> Abstract. There are significant obstacles to establishing an equivalence between the worst-case and average-case hardness of [math]. Several results suggest that black-box worst-case to average-case reductions are not likely to be used for reducing any worst-case problem outside [math] to a distributional [math] problem. This paper overcomes the barrier. We present the first non-black-box worst-case to average-case reduction from a problem conjectured to be outside [math] to a distributional [math] problem. Specifically, we consider the minimum time-bounded Kolmogorov complexity problem (MINKT) and prove that there exists a zero-error randomized polynomial-time algorithm approximating the minimum time-bounded Kolmogorov complexity [math] within an additive error [math] if its average-case version admits an errorless heuristic polynomial-time algorithm. We observe that the approximation version of MINKT is Random 3SAT-hard, and more generally it is harder than avoiding any polynomial-time computable hitting set generator that extends its seed of length [math] by [math], which provides strong evidence that the approximation problem is outside [math] and thus our reductions are non-black-box. Our reduction can be derandomized at the cost of the quality of the approximation. We also show that, given a truth table of size [math], approximating the minimum circuit size within a factor of [math] is in [math] for some constant [math] iff its average-case version is easy. Our results can be seen as a new approach for excluding Heuristica. In particular, proving [math]-hardness of the approximation versions of MINKT or the minimum circuit size problem is sufficient for establishing an equivalence between the worst-case and average-case hardness of [math].","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"33 4 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138693277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Special Section on the Fifty-Ninth Annual IEEE Symposium on Foundations of Computer Science (2018)","authors":"Elette Boyle, Vincent Cohen-Addad, Alexandra Kolla, Mikkel Thorup","doi":"10.1137/23m1617011","DOIUrl":"https://doi.org/10.1137/23m1617011","url":null,"abstract":"SIAM Journal on Computing, Volume 52, Issue 6, Page FOCS18-i-FOCS18-i, December 2023. <br/>","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"81 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138683463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Structural Theorem for Local Algorithms with Applications to Coding, Testing, and Verification
Marcel Dall’Agnol, Tom Gur, Oded Lachish
SIAM Journal on Computing, Volume 52, Issue 6, Page 1413-1463, December 2023. DOI: 10.1137/21m1422781. Published December 6, 2023.
Abstract. We prove a general structural theorem for a wide family of local algorithms, which includes property testers, local decoders, and probabilistically checkable proofs of proximity. Namely, we show that the structure of every algorithm that makes [math] adaptive queries and satisfies a natural robustness condition admits a sample-based algorithm with [math] sample complexity, following the definition of Goldreich and Ron [ACM Trans. Comput. Theory, 8 (2016), 7]. We prove that this transformation is nearly optimal. Our theorem also admits a scheme for constructing privacy-preserving local algorithms. Using the unified view that our structural theorem provides, we obtain results regarding various types of local algorithms, including the following. We strengthen the state-of-the-art lower bound for relaxed locally decodable codes, obtaining an exponential improvement on the dependency in query complexity; this resolves an open problem raised by Gur and Lachish [SIAM J. Comput., 50 (2021), pp. 788–813]. We show that any (constant-query) testable property admits a sample-based tester with sublinear sample complexity; this resolves a problem left open in a work of Fischer, Lachish, and Vasudev [Proceedings of the 56th Annual Symposium on Foundations of Computer Science, IEEE, 2015, pp. 1163–1182], bypassing an exponential blowup caused by previous techniques in the case of adaptive testers. We prove that the known separation between proofs of proximity and testers is essentially maximal; this resolves a problem left open by Gur and Rothblum [Proceedings of the 8th Innovations in Theoretical Computer Science Conference, 2017, pp. 39:1–39:43; Comput. Complexity, 27 (2018), pp. 99–207] regarding sublinear-time delegation of computation. Our techniques strongly rely on relaxed sunflower lemmas and the Hajnal–Szemerédi theorem.
{"title":"A Structural Theorem for Local Algorithms with Applications to Coding, Testing, and Verification","authors":"Marcel Dall’Agnol, Tom Gur, Oded Lachish","doi":"10.1137/21m1422781","DOIUrl":"https://doi.org/10.1137/21m1422781","url":null,"abstract":"SIAM Journal on Computing, Volume 52, Issue 6, Page 1413-1463, December 2023. <br/> Abstract. We prove a general structural theorem for a wide family of local algorithms, which includes property testers, local decoders, and probabilistically checkable proofs of proximity. Namely, we show that the structure of every algorithm that makes [math] adaptive queries and satisfies a natural robustness condition admits a sample-based algorithm with [math] sample complexity, following the definition of Goldreich and Ron [ACM Trans. Comput. Theory, 8 (2016), 7]. We prove that this transformation is nearly optimal. Our theorem also admits a scheme for constructing privacy-preserving local algorithms. Using the unified view that our structural theorem provides, we obtain results regarding various types of local algorithms, including the following. We strengthen the state-of-the-art lower bound for relaxed locally decodable codes, obtaining an exponential improvement on the dependency in query complexity; this resolves an open problem raised by Gur and Lachish [SIAM J. Comput., 50 (2021), pp. 788–813]. We show that any (constant-query) testable property admits a sample-based tester with sublinear sample complexity; this resolves a problem left open in a work of Fischer, Lachish, and Vasudev [Proceedings of the 56th Annual Symposium on Foundations of Computer Science, IEEE, 2015, pp. 1163–1182], bypassing an exponential blowup caused by previous techniques in the case of adaptive testers. We prove that the known separation between proofs of proximity and testers is essentially maximal; this resolves a problem left open by Gur and Rothblum [Proceedings of the 8th Innovations in Theoretical Computer Science Conference, 2017, pp. 39:1–39:43; Comput. Complexity, 27 (2018), pp. 99–207] regarding sublinear-time delegation of computation. Our techniques strongly rely on relaxed sunflower lemmas and the Hajnal–Szemerédi theorem.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"20 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138580485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Faster Exponential Time Algorithm for Bin Packing With a Constant Number of Bins via Additive Combinatorics
Jesper Nederlof, Jakub Pawlewicz, Céline M. F. Swennenhuis, Karol Węgrzycki
SIAM Journal on Computing, Volume 52, Issue 6, Page 1369-1412, December 2023. DOI: 10.1137/22m1478112. Published November 29, 2023.
Abstract. In the Bin Packing problem one is given [math] items with weights [math] and [math] bins with capacities [math]. The goal is to partition the items into sets [math] such that [math] for every bin [math], where [math] denotes [math]. Björklund, Husfeldt, and Koivisto [SIAM J. Comput., 39 (2009), pp. 546–563] presented an [math] time algorithm for Bin Packing (the [math] notation omits factors polynomial in the input size). In this paper, we show that for every [math] there exists a constant [math] such that an instance of Bin Packing with [math] bins can be solved in [math] randomized time. Before our work, such improved algorithms were not known even for [math]. A key step in our approach is the following new result in Littlewood–Offord theory on the additive combinatorics of subset sums: For every [math] there exists an [math] such that if [math] for some [math], then [math].
{"title":"A Faster Exponential Time Algorithm for Bin Packing With a Constant Number of Bins via Additive Combinatorics","authors":"Jesper Nederlof, Jakub Pawlewicz, Céline M. F. Swennenhuis, Karol Węgrzycki","doi":"10.1137/22m1478112","DOIUrl":"https://doi.org/10.1137/22m1478112","url":null,"abstract":"SIAM Journal on Computing, Volume 52, Issue 6, Page 1369-1412, December 2023. <br/> Abstract. In the Bin Packing problem one is given [math] items with weights [math] and [math] bins with capacities [math]. The goal is to partition the items into sets [math] such that [math] for every bin [math], where [math] denotes [math]. Björklund, Husfeldt, and Koivisto [SIAM J. Comput., 39 (2009), pp. 546–563] presented an [math] time algorithm for Bin Packing (the [math] notation omits factors polynomial in the input size). In this paper, we show that for every [math] there exists a constant [math] such that an instance of Bin Packing with [math] bins can be solved in [math] randomized time. Before our work, such improved algorithms were not known even for [math]. A key step in our approach is the following new result in Littlewood–Offord theory on the additive combinatorics of subset sums: For every [math] there exists an [math] such that if [math] for some [math], then [math].","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"42 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138520756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}