For a fixed “pattern” graph, the colored subgraph isomorphism problem asks, given a vertex-colored host graph, whether it contains a properly colored copy of the pattern. The complexity of this problem is tied to parameterized versions of several central questions in complexity theory. An overarching goal is to understand the complexity of this problem, under different computational models, in terms of natural invariants of the pattern graph. In this paper, we establish a close relationship between the formula complexity of the problem and an invariant known as tree-depth. The problem is known to be solvable by monotone formulas of size polynomial in the number of vertices of the host graph. Our main result is a corresponding tree-depth-based lower bound for formulas that are monotone or have sublogarithmic depth. This complements a lower bound of Li, Razborov, and Rossman [SIAM J. Comput., 46 (2017), pp. 936–971] relating tree-width and circuit size. As a corollary, it implies a stronger homomorphism preservation theorem for first-order logic on finite structures [B. Rossman, An improved homomorphism preservation theorem from lower bounds in circuit complexity, in 8th Innovations in Theoretical Computer Science Conference, LIPIcs. Leibniz Int. Proc. Inform. 67, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, Germany, 2017, 27]. The technical core of this result is a lower bound for the special case where the pattern is a complete binary tree, which we establish using the pathset framework introduced in B. Rossman [SIAM J. Comput., 47 (2018), pp. 1986–2028]. (The lower bound for general patterns follows via a recent excluded-minor characterization of tree-depth [W. Czerwiński, W. Nadara, and M. Pilipczuk, SIAM J. Discrete Math., 35 (2021), pp. 934–947; K. Kawarabayashi and B. Rossman, A polynomial excluded-minor approximation of treedepth, in Proceedings of the 2018 Annual ACM-SIAM Symposium on Discrete Algorithms, 2018, pp. 234–246].) Additional results of this paper extend the pathset framework and improve upon both the best known upper and lower bounds on the average-case formula size of the problem when the pattern is a path.
{"title":"Tree-Depth and the Formula Complexity of Subgraph Isomorphism","authors":"Deepanshu Kush, Benjamin Rossman","doi":"10.1137/20m1372925","DOIUrl":"https://doi.org/10.1137/20m1372925","url":null,"abstract":"For a fixed “pattern” graph , the colored -subgraph isomorphism problem (denoted by ) asks, given an -vertex graph and a coloring , whether contains a properly colored copy of . The complexity of this problem is tied to parameterized versions of and , among other questions. An overarching goal is to understand the complexity of , under different computational models, in terms of natural invariants of the pattern graph . In this paper, we establish a close relationship between the formula complexity of and an invariant known as tree-depth (denoted by). is known to be solvable by monotone formulas of size . Our main result is an lower bound for formulas that are monotone or have sublogarithmic depth. This complements a lower bound of Li, Razborov, and Rossman [SIAM J. Comput., 46 (2017), pp. 936–971] relating tree-width and circuit size. As a corollary, it implies a stronger homomorphism preservation theorem for first-order logic on finite structures [B. Rossman, An improved homomorphism preservation theorem from lower bounds in circuit complexity, in 8th Innovations in Theoretical Computer Science Conference, LIPIcs. Leibniz Int. Proc. Inform. 67, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, Germany, 2017, 27]. The technical core of this result is an lower bound in the special case where is a complete binary tree of height , which we establish using the pathset framework introduced in B. Rossman [SIAM J. Comput., 47 (2018), pp. 1986–2028]. (The lower bound for general patterns follows via a recent excluded-minor characterization of tree-depth [W. Czerwiński, W. Nadara, and M. Pilipczuk, SIAM J. Discrete Math., 35 (2021), pp. 934–947; K. Kawarabayashi and B. Rossman, A polynomial excluded-minor approximation of treedepth, in Proceedings of the 2018 Annual ACM-SIAM Symposium on Discrete Algorithms, 2018, pp. 234–246]. Additional results of this paper extend the pathset framework and improve upon both the best known upper and lower bounds on the average-case formula size of when is a path.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135533409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A longstanding open problem in algorithmic mechanism design is to design truthful mechanisms that are computationally efficient and (approximately) maximize welfare in combinatorial auctions with submodular bidders. The first such mechanism was obtained by Dobzinski, Nisan, and Schapira [Proceedings of the 37th Annual ACM Symposium on Theory of Computing, Baltimore, MD, ACM, New York, 2005, pp. 610–618], who gave an approximation guarantee expressed in terms of the number of items. This problem has been studied extensively since, culminating in an improved approximation mechanism by Dobzinski [Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, ACM, New York, 2016, pp. 940–948]. We present a computationally efficient truthful mechanism with an approximation ratio that improves upon the state of the art by an exponential factor. In particular, our mechanism achieves its approximation guarantee in expectation, uses only demand queries, and has a universal truthfulness guarantee. This settles, in the negative, an open question of Dobzinski on whether the previously best ratio is the best approximation ratio achievable in this setting.
{"title":"Improved Truthful Mechanisms for Combinatorial Auctions with Submodular Bidders","authors":"Sepehr Assadi, Sahil Singla","doi":"10.1137/20m1316068","DOIUrl":"https://doi.org/10.1137/20m1316068","url":null,"abstract":"A longstanding open problem in algorithmic mechanism design is to design truthful mechanisms that are computationally efficient and (approximately) maximize welfare in combinatorial auctions with submodular bidders. The first such mechanism was obtained by Dobzinski, Nisan, and Schapira [Proceedings of the 37th Annual ACM Symposium on Theory of Computing, Baltimore, MD, ACM, New York, 2005, pp. 610–618] who gave an -approximation, where is the number of items. This problem has been studied extensively since, culminating in an -approximation mechanism by Dobzinski [Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, ACM, New York, 2016, pp. 940–948]. We present a computationally-efficient truthful mechanism with an approximation ratio that improves upon the state-of-the-art by an exponential factor. In particular, our mechanism achieves an -approximation in expectation, uses only demand queries, and has universal truthfulness guarantee. This settles an open question of Dobzinski on whether is the best approximation ratio in this setting in the negative.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"356 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135727871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aris Filos-Ratsikas, Yiannis Giannakopoulos, Alexandros Hollender, Philip Lazos, Diogo Poças
We consider the problem of computing a (pure) Bayes-Nash equilibrium in the first-price auction with continuous value distributions and discrete bidding space. We prove that when bidders have independent subjective prior beliefs about the value distributions of the other bidders, computing an $\varepsilon$-equilibrium of the auction is PPAD-complete, and computing an exact equilibrium is FIXP-complete. We also provide an efficient algorithm for solving a special case of the problem, for a fixed number of bidders and available bids.
{"title":"On the Complexity of Equilibrium Computation in First-Price Auctions","authors":"Aris Filos-Ratsikas, Yiannis Giannakopoulos, Alexandros Hollender, Philip Lazos, Diogo Poças","doi":"10.1137/21m1435823","DOIUrl":"https://doi.org/10.1137/21m1435823","url":null,"abstract":"We consider the problem of computing a (pure) Bayes-Nash equilibrium in the first-price auction with continuous value distributions and discrete bidding space. We prove that when bidders have independent subjective prior beliefs about the value distributions of the other bidders, computing an $varepsilon$-equilibrium of the auction is PPAD-complete, and computing an exact equilibrium is FIXP-complete. We also provide an efficient algorithm for solving a special case of the problem, for a fixed number of bidders and available bids.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135727872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We prove the existence of an oblivious routing scheme that is competitive in terms of both congestion and dilation, thus resolving a well-known question in oblivious routing. Concretely, consider an undirected network and a set of packets each with its own source and destination. The objective is to choose a path for each packet, from its source to its destination, so as to minimize a combination of congestion and dilation, defined as follows: The dilation is the maximum path hop length, and the congestion is the maximum number of paths that include any single edge. The routing scheme obliviously and randomly selects a path for each packet independent of (the existence of) the other packets. Despite this obliviousness, the selected paths have congestion and dilation within a factor of the best possible value. More precisely, for any integer hop constraint, this oblivious routing scheme selects paths whose length is bounded in terms of the hop constraint and is competitive in terms of congestion in comparison to the best possible congestion achievable via paths of at most that many hops. These paths can be sampled in polynomial time. This result can be viewed as an analogue of the celebrated oblivious routing results of Räcke [Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002; Proceedings of the 40th Annual ACM Symposium on Theory of Computing, 2008], which are competitive in terms of congestion but are not competitive in terms of dilation.
{"title":"Hop-Constrained Oblivious Routing","authors":"Mohsen Ghaffari, Bernhard Haeupler, Goran Zuzic","doi":"10.1137/21m1443467","DOIUrl":"https://doi.org/10.1137/21m1443467","url":null,"abstract":"We prove the existence of an oblivious routing scheme that is -competitive in terms of , thus resolving a well-known question in oblivious routing. Concretely, consider an undirected network and a set of packets each with its own source and destination. The objective is to choose a path for each packet, from its source to its destination, so as to minimize , defined as follows: The dilation is the maximum path hop length, and the congestion is the maximum number of paths that include any single edge. The routing scheme obliviously and randomly selects a path for each packet independent of (the existence of) the other packets. Despite this obliviousness, the selected paths have within a factor of the best possible value. More precisely, for any integer hop constraint , this oblivious routing scheme selects paths of length at most and is -competitive in terms of congestion in comparison to the best possible congestion achievable via paths of length at most hops. These paths can be sampled in polynomial time. This result can be viewed as an analogue of the celebrated oblivious routing results of Räcke [Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002; Proceedings of the 40th Annual ACM Symposium on Theory of Computing, 2008], which are -competitive in terms of congestion but are not competitive in terms of dilation.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136252248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sébastien Bubeck, Yin Tat Lee, Yuanzhi Li, Mark Sellke
Consider a family of sets in some metric space. In the corresponding chasing problem, an online algorithm observes a request sequence of sets from the family and responds (online) by giving a sequence of points in these sets. The movement cost is the distance between consecutive such points. The competitive ratio is the worst case ratio (over request sequences) between the total movement of the online algorithm and the smallest movement one could have achieved by knowing in advance the request sequence. The family is said to be chaseable if there exists an online algorithm with finite competitive ratio. In 1991, Linial and Friedman conjectured that the family of convex sets in Euclidean space is chaseable. We prove this conjecture.
{"title":"Competitively Chasing Convex Bodies","authors":"Sébastien Bubeck, Yin Tat Lee, Yuanzhi Li, Mark Sellke","doi":"10.1137/20m1312332","DOIUrl":"https://doi.org/10.1137/20m1312332","url":null,"abstract":"Let be a family of sets in some metric space. In the -chasing problem, an online algorithm observes a request sequence of sets in and responds (online) by giving a sequence of points in these sets. The movement cost is the distance between consecutive such points. The competitive ratio is the worst case ratio (over request sequences) between the total movement of the online algorithm and the smallest movement one could have achieved by knowing in advance the request sequence. The family is said to be chaseable if there exists an online algorithm with finite competitive ratio. In 1991, Linial and Friedman conjectured that the family of convex sets in Euclidean space is chaseable. We prove this conjecture.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136180936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Submodular function minimization (SFM) and matroid intersection are fundamental discrete optimization problems with applications in many fields. It is well known that both of these can be solved by making polynomially many queries to a relevant oracle (evaluation oracle for SFM and rank oracle for matroid intersection), measured in terms of the universe size. However, all known polynomial query algorithms are highly adaptive, requiring many rounds of querying the oracle. A natural question is whether these problems can be efficiently solved in a highly parallel manner, namely, with polynomially many queries using only polylogarithmic rounds of adaptivity. An important step towards understanding the adaptivity needed for efficient parallel SFM was taken recently in the work of Balkanski and Singer, who proved a lower bound on the number of rounds required by any SFM algorithm making polynomially many queries. This left open the possibility of efficient SFM algorithms in polylogarithmic rounds. For matroid intersection, even the possibility of a constant-round, polynomial-query algorithm had not hitherto been ruled out. In this work, we prove that any, possibly randomized, algorithm for submodular function minimization or matroid intersection making polynomially many queries requires polynomially many rounds of adaptivity. In fact, we show a polynomial lower bound on the number of rounds of adaptivity even for algorithms that are allowed a substantially larger query budget, for any fixed constant governing that budget. Therefore, even though SFM and matroid intersection are efficiently solvable, they are not highly parallelizable in the oracle model.
{"title":"A Polynomial Lower Bound on the Number of Rounds for Parallel Submodular Function Minimization and Matroid Intersection","authors":"Deeparnab Chakrabarty, Yu Chen, Sanjeev Khanna","doi":"10.1137/22m147685x","DOIUrl":"https://doi.org/10.1137/22m147685x","url":null,"abstract":"Submodular function minimization (SFM) and matroid intersection are fundamental discrete optimization problems with applications in many fields. It is well known that both of these can be solved making queries to a relevant oracle (evaluation oracle for SFM and rank oracle for matroid intersection), where denotes the universe size. However, all known polynomial query algorithms are highly adaptive, requiring at least rounds of querying the oracle. A natural question is whether these can be efficiently solved in a highly parallel manner, namely, with queries using only polylogarithmic rounds of adaptivity. An important step towards understanding the adaptivity needed for efficient parallel SFM was taken recently in the work of Balkanski and Singer who showed that any SFM algorithm making queries necessarily requires rounds. This left open the possibility of efficient SFM algorithms in polylogarithmic rounds. For matroid intersection, even the possibility of a constant round, query algorithm was not hitherto ruled out. In this work, we prove that any, possibly randomized, algorithm for submodular function minimization or matroid intersection making queries requires (Throughout the paper, we use the usual convention of using to denote and using to denote for some unspecified constant ) rounds of adaptivity. In fact, we show a polynomial lower bound on the number of rounds of adaptivity even for algorithms that make at most queries for any constant . Therefore, even though SFM and matroid intersection are efficiently solvable, they are not highly parallelizable in the oracle model.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136252251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SIAM Journal on Computing, Volume 51, Issue 6, Page 1703-1742, December 2022. Abstract. Graph sparsification underlies a large number of algorithms, ranging from approximation algorithms for cut problems to solvers for linear systems in the graph Laplacian. In its strongest form, “spectral sparsification” reduces the number of edges to near-linear in the number of nodes, while approximately preserving the cut and spectral structure of the graph. In this work we demonstrate a polynomial quantum speedup for spectral sparsification and many of its applications. In particular, we give a quantum algorithm that, given a weighted graph with [math] nodes and [math] edges, outputs a classical description of an [math]-spectral sparsifier in sublinear time [math]. This contrasts with the optimal classical complexity [math]. We also prove that our quantum algorithm is optimal up to polylog-factors. The algorithm builds on a string of existing results on sparsification, graph spanners, quantum algorithms for shortest paths, and efficient constructions for [math]-wise independent random strings. Our algorithm implies a quantum speedup for solving Laplacian systems and for approximating a range of cut problems such as min cut and sparsest cut.
{"title":"Quantum Speedup for Graph Sparsification, Cut Approximation, and Laplacian Solving","authors":"Simon Apers, Ronald de Wolf","doi":"10.1137/21m1391018","DOIUrl":"https://doi.org/10.1137/21m1391018","url":null,"abstract":"SIAM Journal on Computing, Volume 51, Issue 6, Page 1703-1742, December 2022. <br/> Abstract. Graph sparsification underlies a large number of algorithms, ranging from approximation algorithms for cut problems to solvers for linear systems in the graph Laplacian. In its strongest form, “spectral sparsification” reduces the number of edges to near-linear in the number of nodes, while approximately preserving the cut and spectral structure of the graph. In this work we demonstrate a polynomial quantum speedup for spectral sparsification and many of its applications. In particular, we give a quantum algorithm that, given a weighted graph with [math] nodes and [math] edges, outputs a classical description of an [math]-spectral sparsifier in sublinear time [math]. This contrasts with the optimal classical complexity [math]. We also prove that our quantum algorithm is optimal up to polylog-factors. The algorithm builds on a string of existing results on sparsification, graph spanners, quantum algorithms for shortest paths, and efficient constructions for [math]-wise independent random strings. Our algorithm implies a quantum speedup for solving Laplacian systems and for approximating a range of cut problems such as min cut and sparsest cut.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"22 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138520759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fedor V. Fomin, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, Saket Saurabh
SIAM Journal on Computing, Volume 51, Issue 6, Page 1866-1930, December 2022. Abstract. We prove the following theorem. Given a planar graph [math] and an integer [math], it is possible in polynomial time to randomly sample a subset [math] of vertices of [math] with the following properties: [math] induces a subgraph of [math] of treewidth [math], and for every connected subgraph [math] of [math] on at most [math] vertices, the probability that [math] covers the whole vertex set of [math] is at least [math], where [math] is the number of vertices of [math]. Together with standard dynamic programming techniques for graphs of bounded treewidth, this result gives a versatile technique for obtaining (randomized) subexponential-time parameterized algorithms for problems on planar graphs, usually with running time bound [math]. The technique can be applied to problems expressible as searching for a small, connected pattern with a prescribed property in a large host graph; examples of such problems include Directed [math]-Path, Weighted [math]-Path, Vertex Cover Local Search, and Subgraph Isomorphism, among others. Up to this point, it was open whether these problems could be solved in subexponential parameterized time on planar graphs, because they are not amenable to the classic technique of bidimensionality. Furthermore, all our results hold in fact on any class of graphs that exclude a fixed apex graph as a minor, in particular on graphs embeddable in any fixed surface.
{"title":"Subexponential Parameterized Algorithms for Planar and Apex-Minor-Free Graphs via Low Treewidth Pattern Covering","authors":"Fedor V. Fomin, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, Saket Saurabh","doi":"10.1137/19m1262504","DOIUrl":"https://doi.org/10.1137/19m1262504","url":null,"abstract":"SIAM Journal on Computing, Volume 51, Issue 6, Page 1866-1930, December 2022. <br/> Abstract. We prove the following theorem. Given a planar graph [math] and an integer [math], it is possible in polynomial time to randomly sample a subset [math] of vertices of [math] with the following properties: [math] induces a subgraph of [math] of treewidth [math], and for every connected subgraph [math] of [math] on at most [math] vertices, the probability that [math] covers the whole vertex set of [math] is at least [math], where [math] is the number of vertices of [math]. Together with standard dynamic programming techniques for graphs of bounded treewidth, this result gives a versatile technique for obtaining (randomized) subexponential-time parameterized algorithms for problems on planar graphs, usually with running time bound [math]. The technique can be applied to problems expressible as searching for a small, connected pattern with a prescribed property in a large host graph; examples of such problems include Directed [math]-Path, Weighted [math]-Path, Vertex Cover Local Search, and Subgraph Isomorphism, among others. Up to this point, it was open whether these problems could be solved in subexponential parameterized time on planar graphs, because they are not amenable to the classic technique of bidimensionality. Furthermore, all our results hold in fact on any class of graphs that exclude a fixed apex graph as a minor, in particular on graphs embeddable in any fixed surface.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"18 4","pages":""},"PeriodicalIF":1.6,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138520764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ilan Komargodski, Tal Moran, Moni Naor, Rafael Pass, Alon Rosen, Eylon Yogev
SIAM Journal on Computing, Volume 51, Issue 6, Page 1769-1795, December 2022. Abstract. A program obfuscator takes a program and outputs a “scrambled” version of it, where the goal is that the obfuscated program will not reveal much about its structure beyond what is apparent from executing it. There are several ways of formalizing this goal. Specifically, in indistinguishability obfuscation, first defined by Barak et al. [Advances in Cryptology - CRYPTO, 2001, Lect. Notes Comput. Sci. 2139, Springer, Berlin, Heidelberg, pp. 1–18], the requirement is that the results of obfuscating any two functionally equivalent programs (circuits) will be computationally indistinguishable. In 2013, a fascinating candidate construction for indistinguishability obfuscation was proposed by Garg et al. [Proceedings of the Symposium on Theory of Computing Conference, STOC, ACM, 2013, pp. 467–476]. This has led to a flurry of discovery of intriguing constructions of primitives and protocols whose existence was not previously known (for instance, fully deniable encryption by Sahai and Waters [Proceedings of the Symposium on Theory of Computing, 2014, STOC, pp. 475–484]). Most of them explicitly rely on additional hardness assumptions, such as one-way functions. Our goal is to get rid of this extra assumption. We cannot argue that indistinguishability obfuscation of all polynomial-time circuits implies the existence of one-way functions, since if [math], then program obfuscation (under the indistinguishability notion) is possible. Instead, the ultimate goal is to argue that if [math] and program obfuscation is possible, then one-way functions exist. Our main result is that if [math] and there is an efficient (even imperfect) indistinguishability obfuscator, then there are one-way functions. In addition, we show that the existence of an indistinguishability obfuscator implies (unconditionally) the existence of SZK-arguments for [math]. This, in turn, provides an alternative version of our main result, based on the assumption of hard-on-the-average [math] problems. To get some of our results we need obfuscators for simple programs such as [math] circuits.
{"title":"One-Way Functions and (Im)perfect Obfuscation","authors":"Ilan Komargodski, Tal Moran, Moni Naor, Rafael Pass, Alon Rosen, Eylon Yogev","doi":"10.1137/15m1048549","DOIUrl":"https://doi.org/10.1137/15m1048549","url":null,"abstract":"SIAM Journal on Computing, Volume 51, Issue 6, Page 1769-1795, December 2022. <br/> Abstract. A program obfuscator takes a program and outputs a “scrambled” version of it, where the goal is that the obfuscated program will not reveal much about its structure beyond what is apparent from executing it. There are several ways of formalizing this goal. Specifically, in indistinguishability obfuscation, first defined by Barak et al. [Advances in Cryptology - CRYPTO, 2001, Lect. Notes Comput. Sci. 2139, Springer, Berlin, Heidelberg, pp. 1–18], the requirement is that the results of obfuscating any two functionally equivalent programs (circuits) will be computationally indistinguishable. In 2013, a fascinating candidate construction for indistinguishability obfuscation was proposed by Garg et al. [Proceedings of the Symposium on Theory of Computing Conference, STOC, ACM, 2013, pp. 467–476]. This has led to a flurry of discovery of intriguing constructions of primitives and protocols whose existence was not previously known (for instance, fully deniable encryption by Sahai and Waters [Proceedings of the Symposium on Theory of Computing, 2014, STOC, pp. 475–484]). Most of them explicitly rely on additional hardness assumptions, such as one-way functions. Our goal is to get rid of this extra assumption. We cannot argue that indistinguishability obfuscation of all polynomial-time circuits implies the existence of one-way functions, since if [math], then program obfuscation (under the indistinguishability notion) is possible. Instead, the ultimate goal is to argue that if [math] and program obfuscation is possible, then one-way functions exist. Our main result is that if [math] and there is an efficient (even imperfect) indistinguishability obfuscator, then there are one-way functions. In addition, we show that the existence of an indistinguishability obfuscator implies (unconditionally) the existence of SZK-arguments for [math]. This, in turn, provides an alternative version of our main result, based on the assumption of hard-on-the-average [math] problems. To get some of our results we need obfuscators for simple programs such as [math] circuits.","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"8 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138520758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SIAM Journal on Computing, Volume 51, Issue 6, Page 1839-1865, December 2022. Abstract. Locally correctable codes (LCCs) are error correcting codes [math] which admit local algorithms that correct any individual symbol of a corrupted codeword via a minuscule number of queries. For systematic codes, this notion is stronger than that of locally decodable codes (LDCs), where the goal is to only recover individual symbols of the message. One of the central problems in algorithmic coding theory is to construct [math]-query LCCs and LDCs with minimal block length. Alas, the state of the art for such codes requires super-polynomial block length to admit [math]-query algorithms for local correction and decoding, despite much attention during the last two decades. The study of relaxed LCCs and LDCs, which allow the correction algorithm to abort (but not err) on a small fraction of the locations, provides a way to circumvent this barrier. This relaxation turned out to allow constant-query correcting and decoding algorithms for codes with polynomial block length. Focusing on local correction, Gur, Ramnarayan, and Rothblum [Proceedings of the 9th Innovations in Theoretical Computer Science Conference, ITCS’18, 2018, pp. 1–27] showed that there exist [math]-query relaxed LCCs that achieve nearly-quartic block length [math], for an arbitrarily small constant [math]. We construct an [math]-query relaxed LCC with nearly-linear block length [math], for an arbitrarily small constant [math]. This significantly narrows the gap with the known lower bound, which states that there are no [math]-query relaxed LCCs with block length [math]. In particular, our construction matches the parameters achieved by Ben-Sasson et al. [SIAM J. Comput., 36 (2006), pp. 889–974], who constructed relaxed LDCs with the same parameters. This resolves an open problem raised by Gur, Ramnarayan, and Rothblum [Proceedings of the 9th Innovations in Theoretical Computer Science Conference, ITCS’18, 2018, pp. 1–27].
{"title":"Relaxed Locally Correctable Codes with Nearly-Linear Block Length and Constant Query Complexity","authors":"Alessandro Chiesa, Tom Gur, Igor Shinkar","doi":"10.1137/20m135515x","DOIUrl":"https://doi.org/10.1137/20m135515x","url":null,"abstract":"SIAM Journal on Computing, Volume 51, Issue 6, Page 1839-1865, December 2022. <br/> Abstract. Locally correctable codes (LCCs) are error correcting codes [math] which admit local algorithms that correct any individual symbol of a corrupted codeword via a minuscule number of queries. For systematic codes, this notion is stronger than that of locally decodable codes (LDCs), where the goal is to only recover individual symbols of the message. One of the central problems in algorithmic coding theory is to construct [math]-query LCCs and LDCs with minimal block length. Alas, state-of-the-art of such codes requires super-polynomial block length to admit [math]-query algorithms for local correction and decoding, despite much attention during the last two decades. The study of relaxed LCCs and LDCs, which allow the correction algorithm to abort (but not err) on a small fraction of the locations, provides a way to circumvent this barrier. This relaxation turned out to allow constant-query correcting and decoding algorithms for codes with polynomial block length. Focusing on local correction, Gur, Ramnarayan, and Rothblum [Proceedings of the 9th Innovations in Theoretical Computer Science Conference, ITCS’18, 2018, pp. 1–27] showed that there exist [math]-query relaxed LCCs that achieve nearly-quartic block length [math], for an arbitrarily small constant [math]. We construct an [math]-query relaxed LCC with nearly-linear block length [math], for an arbitrarily small constant [math]. This significantly narrows the gap between the lower bound which states that there are no [math]-query relaxed LCCs with block length [math]. In particular, our construction matches the parameters achieved by Ben-Sasson et al. [SIAM J. Comput., 36 (2006), pp. 889–974], who constructed relaxed LDCs with the same parameters. This resolves an open problem raised by Gur, Ramnarayan, and Rothblum [Proceedings of the 9th Innovations in Theoretical Computer Science Conference, ITCS’18, 2018, pp. 1–27].","PeriodicalId":49532,"journal":{"name":"SIAM Journal on Computing","volume":"11 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138520739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}