
Latest publications from the Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)

Quasi-polynomial-time algorithm for Independent Set in Pt-free graphs via shrinking the space of induced paths
Pub Date : 2020-09-28 DOI: 10.1137/1.9781611976496.23
Marcin Pilipczuk, Michal Pilipczuk, Paweł Rzążewski
In a recent breakthrough work, Gartland and Lokshtanov [FOCS 2020] showed a quasi-polynomial-time algorithm for Maximum Weight Independent Set in $P_t$-free graphs, that is, graphs excluding a fixed path as an induced subgraph. Their algorithm runs in time $n^{\mathcal{O}(\log^3 n)}$, where $t$ is assumed to be a constant. Inspired by their ideas, we present an arguably simpler algorithm with an improved running time bound of $n^{\mathcal{O}(\log^2 n)}$. Our main insight is that a connected $P_t$-free graph always contains a vertex $w$ whose neighborhood intersects, for a constant fraction of pairs $\{u,v\} \in \binom{V(G)}{2}$, a constant fraction of induced $u$-$v$ paths. Since a $P_t$-free graph contains $\mathcal{O}(n^{t-1})$ induced paths in total, branching on such a vertex and recursing independently on the connected components leads to a quasi-polynomial running time bound. We also show that the same approach can be used to obtain quasi-polynomial-time algorithms for related problems, including Maximum Weight Induced Matching and 3-Coloring.
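To make the recursion concrete, here is a minimal Python sketch of the branch-and-recurse skeleton described above. The helper choose_pivot is hypothetical: in the paper, the pivot is a vertex $w$ whose neighborhood hits a constant fraction of induced paths, and the quasi-polynomial bound rests entirely on the analysis of that choice, which this sketch does not reproduce.

    def induced(adj, keep):
        # Induced subgraph on the vertex set `keep`; `adj` maps vertex -> set of neighbors.
        return {v: adj[v] & keep for v in keep}

    def components(adj):
        # Connected components via iterative depth-first search.
        seen, comps = set(), []
        for s in adj:
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                v = stack.pop()
                if v not in comp:
                    comp.add(v)
                    stack.extend(adj[v] - comp)
            seen |= comp
            comps.append(comp)
        return comps

    def mwis(adj, weight, choose_pivot):
        # Maximum Weight Independent Set by branching on a pivot vertex w:
        # either w is excluded (delete w) or included (delete its closed
        # neighborhood); connected components are solved independently.
        if not adj:
            return 0
        comps = components(adj)
        if len(comps) > 1:
            return sum(mwis(induced(adj, c), weight, choose_pivot) for c in comps)
        w = choose_pivot(adj)
        exclude = mwis(induced(adj, set(adj) - {w}), weight, choose_pivot)
        include = weight[w] + mwis(induced(adj, set(adj) - ({w} | adj[w])), weight, choose_pivot)
        return max(exclude, include)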
Citations: 20
On Hardness of Approximation of Parameterized Set Cover and Label Cover: Threshold Graphs from Error Correcting Codes
Pub Date : 2020-09-06 DOI: 10.1137/1.9781611976496.24
Karthik C. S., I. Navon
In the $(k,h)$-SetCover problem, we are given a collection $\mathcal{S}$ of sets over a universe $U$, and the goal is to distinguish the case that $\mathcal{S}$ contains $k$ sets which cover $U$ from the case that at least $h$ sets in $\mathcal{S}$ are needed to cover $U$. Lin (ICALP'19) recently showed a gap-creating reduction from the $(k,k+1)$-SetCover problem on a universe of size $O_k(\log |\mathcal{S}|)$ to the $\left(k,\sqrt[k]{\frac{\log|\mathcal{S}|}{\log\log |\mathcal{S}|}}\cdot k\right)$-SetCover problem on a universe of size $|\mathcal{S}|$. In this paper, we prove a more scalable version of his result: given any error correcting code $C$ over alphabet $[q]$, rate $\rho$, and relative distance $\delta$, we use $C$ to create a reduction from the $(k,k+1)$-SetCover problem on universe $U$ to the $\left(k,\sqrt[2k]{\frac{2}{1-\delta}}\right)$-SetCover problem on a universe of size $\frac{\log|\mathcal{S}|}{\rho}\cdot|U|^{q^k}$. Lin established his result by composing the input SetCover instance (which has no gap) with a special threshold graph constructed from an extremal combinatorial object called universal sets, resulting in a final SetCover instance with a gap. Our reduction follows along the exact same lines, except that we generate the threshold graphs specified by Lin simply using the basic properties of the error correcting code $C$. We use the same threshold graphs mentioned above to prove inapproximability results, under W[1]$\neq$FPT and ETH, for the $k$-MaxCover problem introduced by Chalermsook et al. (SICOMP'20). Our inapproximability results match the bounds obtained by Karthik et al. (JACM'19), although their proof framework is very different and involves a generalization of the distributed PCP framework. Prior to this work, it was not clear how to adopt the proof strategy of Lin to prove inapproximability results for $k$-MaxCover.
Citations: 8
Simpler and Stronger Approaches for Non-Uniform Hypergraph Matching and the Füredi, Kahn, and Seymour Conjecture
Pub Date : 2020-09-01 DOI: 10.1137/1.9781611976496.22
Georg Anegg, Haris Angelidakis, R. Zenklusen
A well-known conjecture of Füredi, Kahn, and Seymour (1993) on non-uniform hypergraph matching states that for any hypergraph with edge weights $w$, there exists a matching $M$ such that the inequality $\sum_{e\in M} g(e) w(e) \geq \mathrm{OPT}_{\mathrm{LP}}$ holds with $g(e)=|e|-1+\frac{1}{|e|}$, where $\mathrm{OPT}_{\mathrm{LP}}$ denotes the optimal value of the canonical LP relaxation. While the conjecture remains open, the strongest result towards it was very recently obtained by Brubach, Sankararaman, Srinivasan, and Xu (2020)---building on and strengthening prior work by Bansal, Gupta, Li, Mestre, Nagarajan, and Rudra (2012)---showing that the aforementioned inequality holds with $g(e)=|e|+O(|e|\exp(-|e|))$. Actually, their method works in a more general sampling setting, where, given a point $x$ of the canonical LP relaxation, the task is to efficiently sample a matching $M$ containing each edge $e$ with probability at least $\frac{x(e)}{g(e)}$. We present simpler and easy-to-analyze procedures leading to improved results. More precisely, for any solution $x$ to the canonical LP, we introduce a simple algorithm based on exponential clocks for Brubach et al.'s sampling setting achieving $g(e)=|e|-(|e|-1)x(e)$. Apart from the slight improvement in $g$, our technique may open up new ways to attack the original conjecture. Moreover, we provide a short and arguably elegant analysis showing that a natural greedy approach for the original setting of the conjecture shows the inequality for the same $g(e)=|e|-(|e|-1)x(e)$, even for the more general hypergraph $b$-matching problem.
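As an illustration of the sampling setting, the following Python sketch implements one natural reading of an exponential-clock procedure: every hyperedge $e$ with $x(e)>0$ draws an exponential clock of rate $x(e)$, edges are scanned in increasing clock order, and an edge is kept if it is vertex-disjoint from all previously kept edges. The function name and the exact choice of rate are assumptions made for illustration; the guarantee $g(e)=|e|-(|e|-1)x(e)$ comes from the paper's analysis, not from this sketch.

    import random

    def sample_matching(edges, x):
        # `edges` is a list of hyperedges (tuples of vertices); `x` maps each
        # edge to its fractional LP value.  Each edge draws an exponential
        # clock of rate x[e]; the matching is built greedily in clock order.
        clocks = {e: random.expovariate(x[e]) for e in edges if x[e] > 0}
        matching, used = [], set()
        for e in sorted(clocks, key=clocks.get):
            if used.isdisjoint(e):
                matching.append(e)
                used.update(e)
        return matching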
Citations: 4
Simple Reductions from Formula-SAT to Pattern Matching on Labeled Graphs and Subtree Isomorphism
Pub Date : 2020-08-26 DOI: 10.1137/1.9781611976496.26
Daniel Gibney, Gary Hoppenworth, Sharma V. Thankachan
The CNF formula satisfiability problem (CNF-SAT) has been reduced to many fundamental problems in P to prove tight lower bounds under the Strong Exponential Time Hypothesis (SETH). Recently, the works of Abboud, Hansen, Vassilevska W. and Williams (STOC 16), and later, Abboud and Bringmann (ICALP 18) have proposed basing lower bounds on the hardness of general boolean formula satisfiability (Formula-SAT). Reductions from Formula-SAT have two advantages over the usual reductions from CNF-SAT: (1) conjectures on the hardness of Formula-SAT are arguably much more plausible than those of CNF-SAT, and (2) these reductions give consequences even for logarithmic improvements in a problem's upper bounds. Here we give tight reductions from Formula-SAT to two more problems: pattern matching on labeled graphs (PMLG) and subtree isomorphism. Previous reductions from Formula-SAT were to sequence alignment problems such as Edit Distance, LCS, and Fréchet Distance and required some technical work. This paper uses ideas similar to those used previously, but in a decidedly simpler setting, helping to illustrate the most salient features of the underlying techniques.
Citations: 18
Fast and Simple Modular Subset Sum
Pub Date : 2020-08-24 DOI: 10.1137/1.9781611976496.6
Kyriakos Axiotis, A. Backurs, K. Bringmann, Ce Jin, Vasileios Nakos, Christos Tzamos, Hongxun Wu
We revisit the Subset Sum problem over the finite cyclic group $\mathbb{Z}_m$ for some given integer $m$. A series of recent works has provided asymptotically optimal algorithms for this problem under the Strong Exponential Time Hypothesis. Koiliaris and Xu (SODA'17, TALG'19) gave a deterministic algorithm running in time $\tilde{O}(m^{5/4})$, which was later improved to $O(m \log^7 m)$ randomized time by Axiotis et al. (SODA'19). In this work, we present two simple algorithms for the Modular Subset Sum problem running in near-linear time in $m$, both efficiently implementing Bellman's iteration over $\mathbb{Z}_m$. The first one is a randomized algorithm running in time $O(m\log^2 m)$ that is based solely on rolling hash and an elementary data structure for prefix sums; to illustrate its simplicity we provide a short and efficient implementation of the algorithm in Python. Our second solution is a deterministic algorithm running in time $O(m\,\mathrm{polylog}\, m)$ that uses dynamic data structures for string manipulation. We further show that the techniques developed in this work can also lead to simple algorithms for the All Pairs Non-Decreasing Paths Problem (APNP) on undirected graphs, matching the asymptotically optimal running time of $\tilde{O}(n^2)$ provided in the recent work of Duan et al. (ICALP'19).
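For reference, Bellman's iteration over $\mathbb{Z}_m$ that both algorithms implement efficiently is the textbook $O(nm)$ dynamic program below; this sketch is the baseline only and omits the rolling-hash and string-data-structure machinery that yields near-linear time.

    def modular_subset_sums(items, m):
        # Return the set of all residues modulo m attainable as a subset sum.
        reachable = {0}
        for a in items:
            a %= m
            # Bellman's iteration: extend every previously reachable sum by a.
            reachable |= {(s + a) % m for s in reachable}
        return reachable

    # Example: the subsets of {3, 5, 7} reach exactly {0, 2, 3, 5, 7, 8} modulo 10.
    assert modular_subset_sums([3, 5, 7], 10) == {0, 2, 3, 5, 7, 8}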
Citations: 10
A Simple and Fast Algorithm for Computing the N-th Term of a Linearly Recurrent Sequence
Pub Date : 2020-08-19 DOI: 10.1137/1.9781611976496.14
A. Bostan, R. Mori
We present a simple and fast algorithm for computing the $N$-th term of a given linearly recurrent sequence. Our new algorithm uses $O(\mathsf{M}(d) \log N)$ arithmetic operations, where $d$ is the order of the recurrence, and $\mathsf{M}(d)$ denotes the number of arithmetic operations for computing the product of two polynomials of degree $d$. The state-of-the-art algorithm, due to Charles Fiduccia (1985), has the same arithmetic complexity up to a constant factor. Our algorithm is simpler, faster and obtained by a totally different method. We also discuss several algorithmic applications, notably to polynomial modular exponentiation, powering of matrices and high-order lifting.
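To convey the flavor of such an $N$-th coefficient computation, here is a minimal Python sketch that encodes the sequence as a rational power series $P(x)/Q(x)$ and halves $N$ by multiplying numerator and denominator by $Q(-x)$. It uses naive polynomial multiplication, so it illustrates the $O(\mathsf{M}(d)\log N)$ scheme rather than reproducing the authors' exact algorithm; swapping in fast multiplication gives the stated bound.

    def nth_term(init, coeffs, N):
        # u_{n+d} = coeffs[0]*u_{n+d-1} + ... + coeffs[d-1]*u_n, with the first
        # d terms given in `init`.  The generating function is P(x)/Q(x) with
        # Q(x) = 1 - c_1 x - ... - c_d x^d and deg P < d.
        d = len(coeffs)
        if N < d:
            return init[N]

        def mul(a, b):  # naive polynomial product (replace by FFT-based multiplication)
            res = [0] * (len(a) + len(b) - 1)
            for i, ai in enumerate(a):
                for j, bj in enumerate(b):
                    res[i + j] += ai * bj
            return res

        Q = [1] + [-c for c in coeffs]
        P = mul(init[:d], Q)[:d]
        while N > 0:
            U = [(-1) ** i * q for i, q in enumerate(Q)]  # Q(-x)
            A = mul(P, U)
            P = A[N % 2::2]       # keep the coefficients whose parity matches N
            Q = mul(Q, U)[::2]    # Q(x)Q(-x) is even; keep its even part
            N //= 2
        return P[0] if P else 0

    # Example: Fibonacci numbers, u_0 = 0, u_1 = 1, u_{n+2} = u_{n+1} + u_n.
    assert nth_term([0, 1], [1, 1], 10) == 55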
Citations: 4
A Simple Deterministic Algorithm for Edge Connectivity
Pub Date : 2020-08-19 DOI: 10.1137/1.9781611976496.9
Thatchaphol Saranurak
We show a deterministic algorithm for computing the edge connectivity of a simple graph with $m$ edges in $m^{1+o(1)}$ time. Although the fastest deterministic algorithm, by Henzinger, Rao, and Wang [SODA'17], has a faster running time of $O(m\log^{2}m\log\log m)$, we believe that our algorithm is conceptually simpler. The key tool for this simplification is the expander decomposition. We exploit it in a very straightforward way compared to how it has been previously used in the literature.
Citations: 12
Soft Sequence Heaps
Pub Date : 2020-08-12 DOI: 10.1137/1.9781611976496.2
G. Brodal
Chazelle [JACM00] introduced the soft heap as a building block for efficient minimum spanning tree algorithms, and recently Kaplan et al. [SOSA2019] showed how soft heaps can be applied to achieve simpler algorithms for various selection problems. A soft heap trades off accuracy for efficiency, by allowing $\epsilon N$ of the items in a heap to be corrupted after a total of $N$ insertions, where a corrupted item is an item with an artificially increased key and $0 < \epsilon \leq 1/2$ is a fixed error parameter. Chazelle's soft heaps are based on binomial trees and support insertions in amortized $O(\lg(1/\epsilon))$ time and extract-min operations in amortized $O(1)$ time. In this paper we explore the design space of soft heaps. The main contribution of this paper is an alternative soft heap implementation based on merging sorted sequences, with time bounds matching those of Chazelle's soft heaps. We also discuss a variation of the soft heap by Kaplan et al. [SICOMP2013], where we avoid performing insertions lazily. It is based on ternary trees instead of binary trees and matches the time bounds of Kaplan et al., i.e. amortized $O(1)$ insertions and amortized $O(\lg(1/\epsilon))$ extract-min. Both our data structures only introduce corruptions after extract-min operations, which return the set of items corrupted by the operation.
Citations: 1
Understanding Nesterov's Acceleration via Proximal Point Method
Pub Date : 2020-05-17 DOI: 10.1137/1.9781611977066.9
Kwangjun Ahn, S. Sra
The proximal point method (PPM) is a fundamental method in optimization that is often used as a building block for designing optimization algorithms. In this work, we use PPM to provide conceptually simple derivations along with convergence analyses of different versions of Nesterov’s accelerated gradient method (AGM). The key observation is that AGM is a simple approximation of PPM, which results in an elementary derivation of the update equations and step sizes of AGM. This view also leads to a transparent and conceptually simple analysis of AGM’s convergence by using the analysis of PPM. The derivations also naturally extend to the strongly convex case. Ultimately, the results presented in this paper are of both didactic and conceptual value; they unify and explain existing variants of AGM while motivating other accelerated methods for practically relevant settings.
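To keep the object of study in view, here is a minimal Python sketch of one standard form of AGM for an $L$-smooth convex objective. The momentum coefficient $(k-1)/(k+2)$ is a common textbook choice and not necessarily the exact variant whose derivation and analysis the paper presents.

    def nesterov_agm(grad, x0, L, iterations):
        # Accelerated gradient method: take a gradient step at an extrapolated
        # point y, where y adds momentum along the last displacement.
        x_prev, x_curr = list(x0), list(x0)
        for k in range(1, iterations + 1):
            beta = (k - 1) / (k + 2)  # momentum coefficient
            y = [xc + beta * (xc - xp) for xc, xp in zip(x_curr, x_prev)]
            g = grad(y)
            x_next = [yi - gi / L for yi, gi in zip(y, g)]
            x_prev, x_curr = x_curr, x_next
        return x_curr

    # Example: minimize f(x) = 0.5 * ||x||^2 (gradient is x, L = 1), starting from (1, -2).
    x_min = nesterov_agm(lambda x: x, [1.0, -2.0], L=1.0, iterations=50)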
Citations: 3
Nearly linear time approximations for mixed packing and covering problems without data structures or randomization
Pub Date : 2020-01-01 DOI: 10.1137/1.9781611976014.11
Kent Quanrud
{"title":"Nearly linear time approximations for mixed packing and covering problems without data structures or randomization","authors":"Kent Quanrud","doi":"10.1137/1.9781611976014.11","DOIUrl":"https://doi.org/10.1137/1.9781611976014.11","url":null,"abstract":"","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"98 1","pages":"69-80"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75602836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6