Pub Date: 2020-09-28 · DOI: 10.1137/1.9781611976496.23
Marcin Pilipczuk, Michał Pilipczuk, Paweł Rzążewski
In a recent breakthrough work, Gartland and Lokshtanov [FOCS 2020] showed a quasi-polynomial-time algorithm for Maximum Weight Independent Set in $P_t$-free graphs, that is, graphs excluding a fixed path as an induced subgraph. Their algorithm runs in time $n^{\mathcal{O}(\log^3 n)}$, where $t$ is assumed to be a constant. Inspired by their ideas, we present an arguably simpler algorithm with an improved running time bound of $n^{\mathcal{O}(\log^2 n)}$. Our main insight is that a connected $P_t$-free graph always contains a vertex $w$ whose neighborhood intersects, for a constant fraction of pairs $\{u,v\} \in \binom{V(G)}{2}$, a constant fraction of induced $u$-$v$ paths. Since a $P_t$-free graph contains $\mathcal{O}(n^{t-1})$ induced paths in total, branching on such a vertex and recursing independently on the connected components leads to a quasi-polynomial running time bound. We also show that the same approach can be used to obtain quasi-polynomial-time algorithms for related problems, including Maximum Weight Induced Matching and 3-Coloring.
"Quasi-polynomial-time algorithm for Independent Set in Pt-free graphs via shrinking the space of induced paths". Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA), pp. 204-209.
Pub Date: 2020-09-06 · DOI: 10.1137/1.9781611976496.24
Karthik C. S., I. Navon
In the $(k,h)$-SetCover problem, we are given a collection $\mathcal{S}$ of sets over a universe $U$, and the goal is to distinguish the case that $\mathcal{S}$ contains $k$ sets which cover $U$ from the case that at least $h$ sets in $\mathcal{S}$ are needed to cover $U$. Lin (ICALP'19) recently showed a gap-creating reduction from the $(k,k+1)$-SetCover problem on a universe of size $O_k(\log |\mathcal{S}|)$ to the $\left(k,\sqrt[k]{\frac{\log|\mathcal{S}|}{\log\log |\mathcal{S}|}}\cdot k\right)$-SetCover problem on a universe of size $|\mathcal{S}|$. In this paper, we prove a more scalable version of his result: given any error-correcting code $C$ over alphabet $[q]$ with rate $\rho$ and relative distance $\delta$, we use $C$ to create a reduction from the $(k,k+1)$-SetCover problem on universe $U$ to the $\left(k,\sqrt[2k]{\frac{2}{1-\delta}}\right)$-SetCover problem on a universe of size $\frac{\log|\mathcal{S}|}{\rho}\cdot|U|^{q^k}$. Lin established his result by composing the input SetCover instance (which has no gap) with a special threshold graph constructed from extremal combinatorial objects called universal sets, resulting in a final SetCover instance with a gap. Our reduction follows along the exact same lines, except that we generate the threshold graphs specified by Lin simply from the basic properties of the error-correcting code $C$. We use the same threshold graphs to prove inapproximability results, under W[1]$\neq$FPT and ETH, for the $k$-MaxCover problem introduced by Chalermsook et al. (SICOMP'20). Our inapproximability results match the bounds obtained by Karthik et al. (JACM'19), although their proof framework is very different and involves a generalization of the distributed PCP framework. Prior to this work, it was not clear how to adapt Lin's proof strategy to prove inapproximability results for $k$-MaxCover.
"On Hardness of Approximation of Parameterized Set Cover and Label Cover: Threshold Graphs from Error Correcting Codes". Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA), pp. 210-223.
Pub Date: 2020-09-01 · DOI: 10.1137/1.9781611976496.22
Georg Anegg, Haris Angelidakis, R. Zenklusen
A well-known conjecture of Füredi, Kahn, and Seymour (1993) on non-uniform hypergraph matching states that for any hypergraph with edge weights $w$, there exists a matching $M$ such that the inequality $\sum_{e\in M} g(e) w(e) \geq \mathrm{OPT}_{\mathrm{LP}}$ holds with $g(e)=|e|-1+\frac{1}{|e|}$, where $\mathrm{OPT}_{\mathrm{LP}}$ denotes the optimal value of the canonical LP relaxation. While the conjecture remains open, the strongest result towards it was obtained very recently by Brubach, Sankararaman, Srinivasan, and Xu (2020)---building on and strengthening prior work by Bansal, Gupta, Li, Mestre, Nagarajan, and Rudra (2012)---showing that the aforementioned inequality holds with $g(e)=|e|+O(|e|\exp(-|e|))$. In fact, their method works in a more general sampling setting where, given a point $x$ of the canonical LP relaxation, the task is to efficiently sample a matching $M$ containing each edge $e$ with probability at least $\frac{x(e)}{g(e)}$. We present simpler, easy-to-analyze procedures leading to improved results. More precisely, for any solution $x$ to the canonical LP, we introduce a simple algorithm based on exponential clocks for Brubach et al.'s sampling setting, achieving $g(e)=|e|-(|e|-1)x(e)$. Apart from the slight improvement in $g$, our technique may open up new ways to attack the original conjecture. Moreover, we provide a short and arguably elegant analysis showing that a natural greedy approach for the original setting of the conjecture yields the inequality for the same $g(e)=|e|-(|e|-1)x(e)$, even for the more general hypergraph $b$-matching problem.
"Simpler and Stronger Approaches for Non-Uniform Hypergraph Matching and the Füredi, Kahn, and Seymour Conjecture". Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA), pp. 196-203.
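The exponential-clocks idea described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (function name and data layout are our own, and the analysis achieving $g(e)=|e|-(|e|-1)x(e)$ is not reproduced here): each edge draws an exponential clock with rate equal to its LP value, and edges are scanned in increasing clock order, kept greedily whenever vertex-disjoint from those already taken.

```python
import random

def sample_matching(edges, x, seed=None):
    """Exponential-clocks sampling sketch for hypergraph matching.

    edges: list of frozensets of vertices; x: list of LP values in (0, 1].
    Each edge e draws a clock T_e ~ Exp(rate=x[e]); edges are processed in
    increasing clock order and kept greedily if vertex-disjoint from the
    edges already selected. Illustration only, not the paper's procedure.
    """
    rng = random.Random(seed)
    clocks = [(rng.expovariate(x[i]), i) for i in range(len(edges))]
    matching, used = [], set()
    for _, i in sorted(clocks):
        if used.isdisjoint(edges[i]):
            matching.append(i)
            used |= edges[i]
    return matching
```

Edges with larger LP value tend to ring earlier and are therefore more likely to enter the matching, which is the intuition the analysis makes precise.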
Pub Date: 2020-08-26 · DOI: 10.1137/1.9781611976496.26
Daniel Gibney, Gary Hoppenworth, Sharma V. Thankachan
The CNF formula satisfiability problem (CNF-SAT) has been reduced to many fundamental problems in P to prove tight lower bounds under the Strong Exponential Time Hypothesis (SETH). Recently, the works of Abboud, Hansen, Vassilevska W., and Williams (STOC'16), and later Abboud and Bringmann (ICALP'18), have proposed basing lower bounds on the hardness of general boolean formula satisfiability (Formula-SAT). Reductions from Formula-SAT have two advantages over the usual reductions from CNF-SAT: (1) conjectures on the hardness of Formula-SAT are arguably much more plausible than those for CNF-SAT, and (2) these reductions give consequences even for logarithmic improvements in a problem's upper bounds. Here we give tight reductions from Formula-SAT to two more problems: pattern matching on labeled graphs (PMLG) and subtree isomorphism. Previous reductions from Formula-SAT were to sequence alignment problems such as Edit Distance, LCS, and Fréchet Distance, and required some technical work. This paper uses ideas similar to those used previously, but in a decidedly simpler setting, helping to illustrate the most salient features of the underlying techniques.
"Simple Reductions from Formula-SAT to Pattern Matching on Labeled Graphs and Subtree Isomorphism". Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA), pp. 232-242.
Pub Date: 2020-08-24 · DOI: 10.1137/1.9781611976496.6
Kyriakos Axiotis, A. Backurs, K. Bringmann, Ce Jin, Vasileios Nakos, Christos Tzamos, Hongxun Wu
We revisit the Subset Sum problem over the finite cyclic group $\mathbb{Z}_m$ for a given integer $m$. A series of recent works has provided asymptotically optimal algorithms for this problem under the Strong Exponential Time Hypothesis. Koiliaris and Xu (SODA'17, TALG'19) gave a deterministic algorithm running in time $\tilde{O}(m^{5/4})$, which was later improved to $O(m \log^7 m)$ randomized time by Axiotis et al. (SODA'19). In this work, we present two simple algorithms for the Modular Subset Sum problem running in near-linear time in $m$, both efficiently implementing Bellman's iteration over $\mathbb{Z}_m$. The first is a randomized algorithm running in time $O(m\log^2 m)$ that is based solely on rolling hash and an elementary data structure for prefix sums; to illustrate its simplicity, we provide a short and efficient implementation of the algorithm in Python. Our second solution is a deterministic algorithm running in time $O(m\,\mathrm{polylog}\,m)$ that uses dynamic data structures for string manipulation. We further show that the techniques developed in this work also lead to simple algorithms for the All Pairs Non-Decreasing Paths problem (APNP) on undirected graphs, matching the asymptotically optimal running time of $\tilde{O}(n^2)$ provided in the recent work of Duan et al. (ICALP'19).
"Fast and Simple Modular Subset Sum". Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA), pp. 57-67.
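Bellman's iteration over $\mathbb{Z}_m$, which both of the paper's algorithms implement efficiently, is easy to state directly. Below is a minimal $O(nm)$ Python sketch of the iteration's semantics only (the near-linear-time data structures from the paper are not reproduced):

```python
def modular_subset_sum(items, m):
    """Return the set of all subset sums of `items` modulo m.

    Direct Bellman iteration: maintain the set of achievable residues and,
    for each new item, add the residues reachable by including that item.
    O(n*m) time; the paper's algorithms realize this iteration in
    near-linear total time.
    """
    sums = {0}  # the empty subset achieves 0
    for a in items:
        sums |= {(s + a) % m for s in sums}
    return sums
```

For example, `modular_subset_sum([3, 5], 7)` yields the residues of the sums 0, 3, 5, and 8, i.e. `{0, 1, 3, 5}`.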
Pub Date: 2020-08-19 · DOI: 10.1137/1.9781611976496.14
A. Bostan, R. Mori
We present a simple and fast algorithm for computing the $N$-th term of a given linearly recurrent sequence. Our new algorithm uses $O(\mathsf{M}(d) \log N)$ arithmetic operations, where $d$ is the order of the recurrence and $\mathsf{M}(d)$ denotes the number of arithmetic operations needed to compute the product of two polynomials of degree $d$. The state-of-the-art algorithm, due to Charles Fiduccia (1985), has the same arithmetic complexity up to a constant factor. Our algorithm is simpler, faster, and obtained by a totally different method. We also discuss several algorithmic applications, notably to polynomial modular exponentiation, powering of matrices, and high-order lifting.
"A Simple and Fast Algorithm for Computing the N-th Term of a Linearly Recurrent Sequence". Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA), pp. 118-132.
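The classical Fiduccia-style approach the abstract compares against computes the $N$-th term via polynomial modular exponentiation: reduce $x^N$ modulo the characteristic polynomial of the recurrence, then combine the resulting coefficients with the initial terms. A minimal Python sketch of that baseline (naive $O(d^2)$ polynomial arithmetic standing in for a fast $\mathsf{M}(d)$ multiplication; this is not the paper's new algorithm):

```python
def nth_term(coeffs, init, N):
    """N-th term of a_n = c1*a_{n-1} + ... + cd*a_{n-d}.

    coeffs = [c1, ..., cd], init = [a_0, ..., a_{d-1}].
    Computes x^N modulo the characteristic polynomial
    x^d - c1*x^{d-1} - ... - cd by repeated squaring, then
    evaluates the residue against the initial terms.
    """
    d = len(coeffs)
    if N < d:
        return init[N]

    def mul_mod(p, q):
        # multiply polynomials, then reduce with x^i = sum_j c_{j+1} x^{i-1-j}
        r = [0] * (len(p) + len(q) - 1)
        for i, pi in enumerate(p):
            for j, qj in enumerate(q):
                r[i + j] += pi * qj
        for i in range(len(r) - 1, d - 1, -1):
            for j in range(d):
                r[i - 1 - j] += coeffs[j] * r[i]
            r[i] = 0
        return r[:d]

    result, base, e = [1], [0, 1], N  # polynomials 1 and x
    while e:
        if e & 1:
            result = mul_mod(result, base)
        base = mul_mod(base, base)
        e >>= 1
    return sum(b * a for b, a in zip(result, init))
```

For the Fibonacci recurrence, `nth_term([1, 1], [0, 1], 10)` returns 55. The total work is $O(\mathsf{M}(d)\log N)$ once `mul_mod` uses fast polynomial multiplication, matching the complexity stated in the abstract.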
Pub Date: 2020-08-19 · DOI: 10.1137/1.9781611976496.9
Thatchaphol Saranurak
We show a deterministic algorithm for computing the edge connectivity of a simple graph with $m$ edges in $m^{1+o(1)}$ time. Although the fastest deterministic algorithm, by Henzinger, Rao, and Wang [SODA'17], has a faster running time of $O(m\log^{2}m\log\log m)$, we believe that our algorithm is conceptually simpler. The key tool for this simplification is the expander decomposition. We exploit it in a very straightforward way compared to how it has previously been used in the literature.
"A Simple Deterministic Algorithm for Edge Connectivity". Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA), pp. 80-85.
Pub Date: 2020-08-12 · DOI: 10.1137/1.9781611976496.2
G. Brodal
Chazelle [JACM'00] introduced the soft heap as a building block for efficient minimum spanning tree algorithms, and recently Kaplan et al. [SOSA'19] showed how soft heaps can be applied to achieve simpler algorithms for various selection problems. A soft heap trades accuracy for efficiency by allowing $\epsilon N$ of the items in a heap to be corrupted after a total of $N$ insertions, where a corrupted item is an item with an artificially increased key and $0 < \epsilon \leq 1/2$ is a fixed error parameter. Chazelle's soft heaps are based on binomial trees and support insertions in amortized $O(\lg(1/\epsilon))$ time and extract-min operations in amortized $O(1)$ time. In this paper we explore the design space of soft heaps. The main contribution of this paper is an alternative soft heap implementation based on merging sorted sequences, with time bounds matching those of Chazelle's soft heaps. We also discuss a variation of the soft heap by Kaplan et al. [SICOMP'13], where we avoid performing insertions lazily. It is based on ternary trees instead of binary trees and matches the time bounds of Kaplan et al., i.e., amortized $O(1)$ insertions and amortized $O(\lg(1/\epsilon))$ extract-min. Both our data structures introduce corruptions only after extract-min operations, which return the set of items corrupted by the operation.
"Soft Sequence Heaps". Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA), pp. 14-24.
Pub Date: 2020-05-17 · DOI: 10.1137/1.9781611977066.9
Kwangjun Ahn, S. Sra
The proximal point method (PPM) is a fundamental method in optimization that is often used as a building block for designing optimization algorithms. In this work, we use the PPM to provide conceptually simple derivations, along with convergence analyses, of different versions of Nesterov's accelerated gradient method (AGM). The key observation is that AGM is a simple approximation of the PPM, which results in an elementary derivation of the update equations and step sizes of AGM. This view also leads to a transparent and conceptually simple analysis of AGM's convergence via the analysis of the PPM. The derivations also naturally extend to the strongly convex case. Ultimately, the results presented in this paper are of both didactic and conceptual value; they unify and explain existing variants of AGM while motivating other accelerated methods for practically relevant settings.
"Understanding Nesterov's Acceleration via Proximal Point Method". Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA), pp. 117-130.
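The relationship between the PPM and explicit gradient steps is easy to see on a one-dimensional quadratic, where the proximal update has a closed form. A minimal sketch (our own toy example, not from the paper; the function $f(x)=x^2/2$ and step size are illustrative choices):

```python
def ppm_step(x, eta):
    # exact proximal point update for f(x) = x^2 / 2:
    # x+ = argmin_y { f(y) + (y - x)^2 / (2 * eta) } = x / (1 + eta)
    return x / (1 + eta)

def gd_step(x, eta):
    # explicit gradient step x+ = x - eta * f'(x); a first-order
    # (forward) approximation of the implicit (backward) PPM update
    return x - eta * x

x_ppm = x_gd = 1.0
for _ in range(50):
    x_ppm = ppm_step(x_ppm, 0.1)
    x_gd = gd_step(x_gd, 0.1)
# both iterates contract geometrically toward the minimizer 0
```

The PPM step solves a small regularized subproblem implicitly at the new point, while gradient descent linearizes it at the current point; viewing AGM as a sharper approximation of the PPM is the paper's central idea.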
Pub Date: 2020-01-01 · DOI: 10.1137/1.9781611976014.11
Kent Quanrud
"Nearly linear time approximations for mixed packing and covering problems without data structures or randomization". Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA), pp. 69-80.