Bidder Subset Selection Problem in Auction Design
Pub Date: 2022-11-20 | DOI: 10.48550/arXiv.2211.10969
Xiaohui Bei, N. Gravin, P. Lu, Zhihao Gavin Tang
Motivated by practical concerns in the online advertising industry, we study a bidder subset selection problem in single-item auctions. In this problem, a large pool of candidate bidders have independent values sampled from known prior distributions. The seller needs to pick a subset of bidders and run a given auction format on the selected subset to maximize her expected revenue. We propose two frameworks for the subset restrictions: (i) a capacity constraint on the set of selected bidders; and (ii) incurred costs for the bidders invited to the auction. For the second-price auction with anonymous reserve (SPA-AR), we give constant-approximation polynomial-time algorithms in both frameworks (in the latter framework under mild assumptions about the market). Our results are in stark contrast to the previous work of Mehta, Nadav, Psomas, Rubinstein [NeurIPS 2020], who showed hardness of approximation for the SPA without a reserve price. We also give complementary approximation results for other well-studied auction formats such as anonymous posted pricing and sequential posted pricing. On a technical level, we find that the revenue of SPA-AR as a set function $f(S)$ of its bidders $S$ is fractionally subadditive but not submodular. Our bidder selection problem with invitation costs is a natural question about (approximately) answering a demand oracle for $f(\cdot)$ under a given vector of costs, a common computational assumption in the literature on combinatorial auctions.
{"title":"Bidder Subset Selection Problem in Auction Design","authors":"Xiaohui Bei, N. Gravin, P. Lu, Zhihao Gavin Tang","doi":"10.48550/arXiv.2211.10969","DOIUrl":"https://doi.org/10.48550/arXiv.2211.10969","url":null,"abstract":"Motivated by practical concerns in the online advertising industry, we study a bidder subset selection problem in single-item auctions. In this problem, a large pool of candidate bidders have independent values sampled from known prior distributions. The seller needs to pick a subset of bidders and run a given auction format on the selected subset to maximize her expected revenue. We propose two frameworks for the subset restrictions: (i) capacity constraint on the set of selected bidders; and (ii) incurred costs for the bidders invited to the auction. For the second-price auction with anonymous reserve (SPA-AR), we give constant approximation polynomial time algorithms in both frameworks (in the latter framework under mild assumptions about the market). Our results are in stark contrast to the previous work of Mehta, Nadav, Psomas, Rubinstein [NeurIPS 2020], who showed hardness of approximation for the SPA without a reserve price. We also give complimentary approximation results for other well-studied auction formats such as anonymous posted pricing and sequential posted pricing. On a technical level, we find that the revenue of SPA-AR as a set function $f(S)$ of its bidders $S$ is fractionally-subadditive but not submodular. Our bidder selection problem with invitation costs is a natural question about (approximately) answering a demand oracle for $f(cdot)$ under a given vector of costs, a common computational assumption in the literature on combinatorial auctions.","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"28 1","pages":"3788-3801"},"PeriodicalIF":0.0,"publicationDate":"2022-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90432126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time
Pub Date: 2022-11-18 | DOI: 10.48550/arXiv.2211.09964
Nadiia Chepurko, K. Clarkson, Praneeth Kacham, David P. Woodruff
We study fundamental problems in linear algebra, such as finding a maximal linearly independent subset of rows or columns (a basis), solving linear regression, or computing a subspace embedding. For these problems, we consider input matrices $\mathbf{A}\in\mathbb{R}^{n\times d}$ with $n>d$. The input can be read in $\text{nnz}(\mathbf{A})$ time, which denotes the number of nonzero entries of $\mathbf{A}$. In this paper, we show that beyond the time required to read the input matrix, these fundamental linear algebra problems can be solved in $d^{\omega}$ time, where $\omega \approx 2.37$ is the current matrix-multiplication exponent. To do so, we introduce a constant-factor subspace embedding with the optimal $m=\mathcal{O}(d)$ number of rows, which can be applied in time $\mathcal{O}\left(\frac{\text{nnz}(\mathbf{A})}{\alpha}\right) + d^{2+\alpha}\,\text{poly}(\log d)$ for any trade-off parameter $\alpha>0$, tightening a recent result by Chepurko et al. [SODA 2022] that achieves an $\exp(\text{poly}(\log\log n))$ distortion with $m=d\cdot\text{poly}(\log\log d)$ rows in $\mathcal{O}\left(\frac{\text{nnz}(\mathbf{A})}{\alpha}+d^{2+\alpha+o(1)}\right)$ time. Our subspace embedding uses a recently shown property of stacked Subsampled Randomized Hadamard Transforms (SRHT), which actually increase the input dimension, to "spread" the mass of an input vector among a large number of coordinates, followed by random sampling. To control the effects of random sampling, we use fast semidefinite programming to reweight the rows. We then use our constant-factor subspace embedding to give the first optimal runtime algorithms for finding a maximal linearly independent subset of columns, regression, and leverage score sampling. To do so, we also introduce a novel subroutine that iteratively grows a set of independent rows, which may be of independent interest.
{"title":"Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time","authors":"Nadiia Chepurko, K. Clarkson, Praneeth Kacham, David P. Woodruff","doi":"10.48550/arXiv.2211.09964","DOIUrl":"https://doi.org/10.48550/arXiv.2211.09964","url":null,"abstract":"We study fundamental problems in linear algebra, such as finding a maximal linearly independent subset of rows or columns (a basis), solving linear regression, or computing a subspace embedding. For these problems, we consider input matrices $mathbf{A}inmathbb{R}^{ntimes d}$ with $n>d$. The input can be read in $text{nnz}(mathbf{A})$ time, which denotes the number of nonzero entries of $mathbf{A}$. In this paper, we show that beyond the time required to read the input matrix, these fundamental linear algebra problems can be solved in $d^{omega}$ time, i.e., where $omega approx 2.37$ is the current matrix-multiplication exponent. To do so, we introduce a constant-factor subspace embedding with the optimal $m=mathcal{O}(d)$ number of rows, and which can be applied in time $mathcal{O}left(frac{text{nnz}(mathbf{A})}{alpha}right) + d^{2 + alpha}text{poly}(log d)$ for any trade-off parameter $alpha>0$, tightening a recent result by Chepurko et. al. [SODA 2022] that achieves an $exp(text{poly}(loglog n))$ distortion with $m=dcdottext{poly}(loglog d)$ rows in $mathcal{O}left(frac{text{nnz}(mathbf{A})}{alpha}+d^{2+alpha+o(1)}right)$ time. Our subspace embedding uses a recently shown property of stacked Subsampled Randomized Hadamard Transforms (SRHT), which actually increase the input dimension, to\"spread\"the mass of an input vector among a large number of coordinates, followed by random sampling. To control the effects of random sampling, we use fast semidefinite programming to reweight the rows. We then use our constant-factor subspace embedding to give the first optimal runtime algorithms for finding a maximal linearly independent subset of columns, regression, and leverage score sampling. To do so, we also introduce a novel subroutine that iteratively grows a set of independent rows, which may be of independent interest.","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"8 1","pages":"4026-4049"},"PeriodicalIF":0.0,"publicationDate":"2022-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82132367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved Approximations for Unrelated Machine Scheduling
Pub Date: 2022-11-18 | DOI: 10.48550/arXiv.2211.10398
Sungjin Im, Shi Li
We revisit two well-studied scheduling problems in the unrelated-machines setting, where each job can have a different processing time on each machine. For minimizing total weighted completion time, we give a 1.45-approximation, which improves upon the previous 1.488-approximation [Im and Shadloo, SODA 2020]. The key technical ingredient in this improvement is a new rounding scheme that gives strong negative correlation with fewer restrictions. For minimizing $L_k$-norms of machine loads, inspired by [Kalaitzis et al., SODA 2017], we give better approximation algorithms. In particular, we give a $\sqrt{4/3}$-approximation for the $L_2$-norm, which improves upon the previous $\sqrt{2}$-approximations due to [Azar and Epstein, STOC 2005] and [Kumar et al., JACM 2009].
{"title":"Improved Approximations for Unrelated Machine Scheduling","authors":"Sungjin Im, Shi Li","doi":"10.48550/arXiv.2211.10398","DOIUrl":"https://doi.org/10.48550/arXiv.2211.10398","url":null,"abstract":"We revisit two well-studied scheduling problems in the unrelated machines setting where each job can have a different processing time on each machine. For minimizing total weighted completion time we give a 1.45-approximation, which improves upon the previous 1.488-approximation [Im and Shadloo SODA 2020]. The key technical ingredient in this improvement lies in a new rounding scheme that gives strong negative correlation with less restrictions. For minimizing $L_k$-norms of machine loads, inspired by [Kalaitzis et al. SODA 2017], we give better approximation algorithms. In particular we give a $sqrt {4/3}$-approximation for the $L_2$-norm which improves upon the former $sqrt 2$-approximations due to [Azar-Epstein STOC 2005] and [Kumar et al. JACM 2009].","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"7 1","pages":"2917-2946"},"PeriodicalIF":0.0,"publicationDate":"2022-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82269681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approaching the Soundness Barrier: A Near Optimal Analysis of the Cube versus Cube Test
Pub Date: 2022-11-17 | DOI: 10.48550/arXiv.2211.09341
Dor Minzer, Kai Zheng
The Cube versus Cube test is a variant of the well-known Plane versus Plane test of Raz and Safra, in which to each $3$-dimensional affine subspace (cube) $C$ of $\mathbb{F}_q^n$ a polynomial $T(C)$ of degree at most $d$ is assigned in a somewhat locally consistent manner: taking two cubes $C_1, C_2$ that intersect in a plane uniformly at random, the probability that $T(C_1)$ and $T(C_2)$ agree on $C_1\cap C_2$ is at least some $\epsilon$. A quantity of interest is the soundness threshold of this test, i.e. the smallest value of $\epsilon$ such that this amount of local consistency implies a global structure; namely, that there is a global degree-$d$ function $g$ such that $g|_{C} \equiv T(C)$ for at least an $\Omega(\epsilon)$ fraction of the cubes. We show that the cube versus cube low-degree test has soundness ${\sf poly}(d)/q$. This result achieves the optimal dependence on $q$ for soundness in low-degree testing and improves upon the previous soundness bound of ${\sf poly}(d)/q^{1/2}$ due to Bhangale, Dinur and Navon.
{"title":"Approaching the Soundness Barrier: A Near Optimal Analysis of the Cube versus Cube Test","authors":"Dor Minzer, Kai Zheng","doi":"10.48550/arXiv.2211.09341","DOIUrl":"https://doi.org/10.48550/arXiv.2211.09341","url":null,"abstract":"The Cube versus Cube test is a variant of the well-known Plane versus Plane test of Raz and Safra, in which to each $3$-dimensional affine subspace $C$ of $mathbb{F}_q^n$, a polynomial of degree at most $d$, $T(C)$, is assigned in a somewhat locally consistent manner: taking two cubes $C_1, C_2$ that intersect in a plane uniformly at random, the probability that $T(C_1)$ and $T(C_2)$ agree on $C_1cap C_2$ is at least some $epsilon$. An element of interest is the soundness threshold of this test, i.e. the smallest value of $epsilon$, such that this amount of local consistency implies a global structure; namely, that there is a global degree $d$ function $g$ such that $g|_{C} equiv T(C)$ for at least $Omega(epsilon)$ fraction of the cubes. We show that the cube versus cube low degree test has soundness ${sf poly}(d)/q$. This result achieves the optimal dependence on $q$ for soundness in low degree testing and improves upon previous soundness results of ${sf poly}(d)/q^{1/2}$ due to Bhangale, Dinur and Navon.","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"8 1","pages":"2761-2776"},"PeriodicalIF":0.0,"publicationDate":"2022-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88080564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Exact Bipartite Matching Polytope Has Exponential Extension Complexity
Pub Date: 2022-11-16 | DOI: 10.48550/arXiv.2211.09106
Xinrui Jia, O. Svensson, Weiqiang Yuan
Given a graph with edges colored red or blue and an integer $k$, the exact perfect matching problem asks if there exists a perfect matching with exactly $k$ red edges. There exists a randomized polylogarithmic-time parallel algorithm to solve this problem, dating back to the eighties, but no deterministic polynomial-time algorithm is known, even for bipartite graphs. In this paper we show that there is no sub-exponential-sized linear program that can describe the convex hull of exact matchings in bipartite graphs. In fact, we prove something stronger: there is no sub-exponential-sized linear program describing the convex hull of perfect matchings with an odd number of red edges.
{"title":"The Exact Bipartite Matching Polytope Has Exponential Extension Complexity","authors":"Xinrui Jia, O. Svensson, Weiqiang Yuan","doi":"10.48550/arXiv.2211.09106","DOIUrl":"https://doi.org/10.48550/arXiv.2211.09106","url":null,"abstract":"Given a graph with edges colored red or blue and an integer $k$, the exact perfect matching problem asks if there exists a perfect matching with exactly $k$ red edges. There exists a randomized polylogarithmic-time parallel algorithm to solve this problem, dating back to the eighties, but no deterministic polynomial-time algorithm is known, even for bipartite graphs. In this paper we show that there is no sub-exponential sized linear program that can describe the convex hull of exact matchings in bipartite graphs. In fact, we prove something stronger, that there is no sub-exponential sized linear program to describe the convex hull of perfect matchings with an odd number of red edges.","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"42 1","pages":"1635-1654"},"PeriodicalIF":0.0,"publicationDate":"2022-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91274983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Higher degree sum-of-squares relaxations robust against oblivious outliers
Pub Date: 2022-11-14 | DOI: 10.48550/arXiv.2211.07327
Tommaso d'Orsi, Rajai Nasser, Gleb Novikov, David Steurer
We consider estimation models of the form $Y=X^*+N$, where $X^*$ is some $m$-dimensional signal we wish to recover, and $N$ is symmetrically distributed noise that may be unbounded in all but a small $\alpha$ fraction of the entries. We introduce a family of algorithms that, under mild assumptions, recover the signal $X^*$ in all estimation problems for which there exists a sum-of-squares algorithm that succeeds in recovering the signal $X^*$ when the noise $N$ is Gaussian. This essentially shows that it is enough to design a sum-of-squares algorithm for an estimation problem with Gaussian noise in order to get an algorithm that works with the symmetric noise model. Our framework extends far beyond previous results on symmetric noise models and is even robust to adversarial perturbations. As concrete examples, we investigate two problems for which no efficient algorithms were known to work for heavy-tailed noise: tensor PCA and sparse PCA. For the former, our algorithm recovers the principal component in polynomial time when the signal-to-noise ratio is at least $\tilde{O}(n^{p/4}/\alpha)$, which matches (up to logarithmic factors) the current best known algorithmic guarantees for Gaussian noise. For the latter, our algorithm runs in quasipolynomial time and matches the state-of-the-art guarantees for quasipolynomial-time algorithms in the case of Gaussian noise. Using a reduction from the planted clique problem, we provide evidence that quasipolynomial time is likely to be necessary for sparse PCA with symmetric noise. In our proofs we use bounds on the covering numbers of sets of pseudo-expectations, which we obtain by certifying in sum-of-squares upper bounds on the Gaussian complexities of sets of solutions. This approach to bounding the covering numbers of sets of pseudo-expectations may be interesting in its own right and may find other applications in future work.
{"title":"Higher degree sum-of-squares relaxations robust against oblivious outliers","authors":"Tommaso d'Orsi, Rajai Nasser, Gleb Novikov, David Steurer","doi":"10.48550/arXiv.2211.07327","DOIUrl":"https://doi.org/10.48550/arXiv.2211.07327","url":null,"abstract":"We consider estimation models of the form $Y=X^*+N$, where $X^*$ is some $m$-dimensional signal we wish to recover, and $N$ is symmetrically distributed noise that may be unbounded in all but a small $alpha$ fraction of the entries. We introduce a family of algorithms that under mild assumptions recover the signal $X^*$ in all estimation problems for which there exists a sum-of-squares algorithm that succeeds in recovering the signal $X^*$ when the noise $N$ is Gaussian. This essentially shows that it is enough to design a sum-of-squares algorithm for an estimation problem with Gaussian noise in order to get the algorithm that works with the symmetric noise model. Our framework extends far beyond previous results on symmetric noise models and is even robust to adversarial perturbations. As concrete examples, we investigate two problems for which no efficient algorithms were known to work for heavy-tailed noise: tensor PCA and sparse PCA. For the former, our algorithm recovers the principal component in polynomial time when the signal-to-noise ratio is at least $tilde{O}(n^{p/4}/alpha)$, that matches (up to logarithmic factors) current best known algorithmic guarantees for Gaussian noise. For the latter, our algorithm runs in quasipolynomial time and matches the state-of-the-art guarantees for quasipolynomial time algorithms in the case of Gaussian noise. Using a reduction from the planted clique problem, we provide evidence that the quasipolynomial time is likely to be necessary for sparse PCA with symmetric noise. In our proofs we use bounds on the covering numbers of sets of pseudo-expectations, which we obtain by certifying in sum-of-squares upper bounds on the Gaussian complexities of sets of solutions. This approach for bounding the covering numbers of sets of pseudo-expectations may be interesting in its own right and may find other application in future works.","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"32 1","pages":"3513-3550"},"PeriodicalIF":0.0,"publicationDate":"2022-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77521388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast Distributed Brooks' Theorem
Pub Date: 2022-11-14 | DOI: 10.48550/arXiv.2211.07606
Manuela Fischer, Yannic Maus, Magnús M. Halldórsson
We give a randomized $\Delta$-coloring algorithm in the LOCAL model that runs in $\text{poly}\log\log n$ rounds, where $n$ is the number of nodes of the input graph and $\Delta$ is its maximum degree. This means that randomized $\Delta$-coloring is a rare distributed coloring problem with an upper and lower bound in the same ballpark, $\text{poly}\log\log n$, given the known $\Omega(\log_\Delta\log n)$ lower bound [Brandt et al., STOC '16]. Our main technical contribution is a constant-time reduction to a constant number of $(\text{deg}+1)$-list coloring instances, for $\Delta = \omega(\log^4 n)$, resulting in a $\text{poly}\log\log n$-round CONGEST algorithm for such graphs. This reduction is of independent interest for other settings, including providing a new proof of Brooks' theorem for high-degree graphs and leading to a constant-round Congested Clique algorithm in such graphs. When $\Delta=\omega(\log^{21} n)$, our algorithm even runs in $O(\log^* n)$ rounds, showing that the base in the $\Omega(\log_\Delta\log n)$ lower bound is unavoidable. Previously, the best LOCAL algorithm for all considered settings used a logarithmic number of rounds. Our result is the first CONGEST algorithm for $\Delta$-coloring non-constant-degree graphs.
{"title":"Fast Distributed Brooks' Theorem","authors":"Manuela Fischer, Yannic Maus, Magn'us M. Halld'orsson","doi":"10.48550/arXiv.2211.07606","DOIUrl":"https://doi.org/10.48550/arXiv.2211.07606","url":null,"abstract":"We give a randomized $Delta$-coloring algorithm in the LOCAL model that runs in $text{poly} log log n$ rounds, where $n$ is the number of nodes of the input graph and $Delta$ is its maximum degree. This means that randomized $Delta$-coloring is a rare distributed coloring problem with an upper and lower bound in the same ballpark, $text{poly}loglog n$, given the known $Omega(log_Deltalog n)$ lower bound [Brandt et al., STOC '16]. Our main technical contribution is a constant time reduction to a constant number of $(text{deg}+1)$-list coloring instances, for $Delta = omega(log^4 n)$, resulting in a $text{poly} loglog n$-round CONGEST algorithm for such graphs. This reduction is of independent interest for other settings, including providing a new proof of Brooks' theorem for high degree graphs, and leading to a constant-round Congested Clique algorithm in such graphs. When $Delta=omega(log^{21} n)$, our algorithm even runs in $O(log^* n)$ rounds, showing that the base in the $Omega(log_Deltalog n)$ lower bound is unavoidable. Previously, the best LOCAL algorithm for all considered settings used a logarithmic number of rounds. Our result is the first CONGEST algorithm for $Delta$-coloring non-constant degree graphs.","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"100 1","pages":"2567-2588"},"PeriodicalIF":0.0,"publicationDate":"2022-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75861387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Steiner Connectivity Augmentation and Splitting-off in Poly-logarithmic Maximum Flows
Pub Date: 2022-11-10 | DOI: 10.48550/arXiv.2211.05769
Ruoxu Cen, W. He, Jason Li, Debmalya Panigrahi
We give an almost-linear time algorithm for the Steiner connectivity augmentation problem: given an undirected graph, find a smallest (or minimum-weight) set of edges whose addition makes a given set of terminals $\tau$-connected (for any given $\tau>0$). The running time of our algorithm is dominated by polylogarithmic calls to any maximum flow subroutine; using the recent almost-linear time maximum flow algorithm (Chen et al., FOCS 2022), we get an almost-linear running time for our algorithm as well. This is tight up to the polylogarithmic factor even for just two terminals. Prior to our work, an almost-linear (in fact, near-linear) running time was known only for the special case of global connectivity augmentation, i.e., when all vertices are terminals (Cen et al., STOC 2022). We also extend our algorithm to the closely related Steiner splitting-off problem, where the edges incident on a vertex have to be split off while maintaining the (Steiner) connectivity of a given set of terminals. Prior to our work, a nearly-linear time algorithm was known only for the special case of global connectivity (Cen et al., STOC 2022). The only known generalization beyond global connectivity was to preserve all pairwise connectivities using a much slower algorithm that makes $n$ calls to an all-pairs maximum flow (or Gomory-Hu tree) subroutine (Lau and Yung, SICOMP 2013), as against $\text{polylog}(n)$ calls to a (single-pair) maximum flow subroutine in this work.
{"title":"Steiner Connectivity Augmentation and Splitting-off in Poly-logarithmic Maximum Flows","authors":"Ruoxu Cen, W. He, Jason Li, Debmalya Panigrahi","doi":"10.48550/arXiv.2211.05769","DOIUrl":"https://doi.org/10.48550/arXiv.2211.05769","url":null,"abstract":"We give an almost-linear time algorithm for the Steiner connectivity augmentation problem: given an undirected graph, find a smallest (or minimum weight) set of edges whose addition makes a given set of terminals $tau$-connected (for any given $tau>0$). The running time of our algorithm is dominated by polylogarithmic calls to any maximum flow subroutine; using the recent almost-linear time maximum flow algorithm (Chen et al., FOCS 2022), we get an almost-linear running time for our algorithm as well. This is tight up to the polylogarithmic factor even for just two terminals. Prior to our work, an almost-linear (in fact, near-linear) running time was known only for the special case of global connectivity augmentation, i.e., when all vertices are terminals (Cen et al., STOC 2022). We also extend our algorithm to the closely related Steiner splitting-off problem, where the edges incident on a vertex have to be {em split-off} while maintaining the (Steiner) connectivity of a given set of terminals. Prior to our work, a nearly-linear time algorithm was known only for the special case of global connectivity (Cen et al., STOC 2022). The only known generalization beyond global connectivity was to preserve all pairwise connectivities using a much slower algorithm that makes $n$ calls to an all-pairs maximum flow (or Gomory-Hu tree) subroutine (Lau and Yung, SICOMP 2013), as against polylog(n) calls to a (single-pair) maximum flow subroutine in this work.","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"143 1","pages":"2449-2488"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75854047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discrepancy Minimization via Regularization
Pub Date: 2022-11-10 | DOI: 10.48550/arXiv.2211.05509
L. Pesenti, Adrian Vladu
We introduce a new algorithmic framework for discrepancy minimization based on regularization. We demonstrate how varying the regularizer allows us to re-interpret several breakthrough works in algorithmic discrepancy, ranging from Spencer's theorem [Spencer 1985, Bansal 2010] to Banaszczyk's bounds [Banaszczyk 1998, Bansal-Dadush-Garg 2016]. Using our techniques, we also show that the Beck-Fiala and Komlós conjectures are true in a new regime of pseudorandom instances.
{"title":"Discrepancy Minimization via Regularization","authors":"L. Pesenti, Adrian Vladu","doi":"10.48550/arXiv.2211.05509","DOIUrl":"https://doi.org/10.48550/arXiv.2211.05509","url":null,"abstract":"We introduce a new algorithmic framework for discrepancy minimization based on regularization. We demonstrate how varying the regularizer allows us to re-interpret several breakthrough works in algorithmic discrepancy, ranging from Spencer's theorem [Spencer 1985, Bansal 2010] to Banaszczyk's bounds [Banaszczyk 1998, Bansal-Dadush-Garg 2016]. Using our techniques, we also show that the Beck-Fiala and Koml'os conjectures are true in a new regime of pseudorandom instances.","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"62 1","pages":"1734-1758"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74403808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finding Triangles and Other Small Subgraphs in Geometric Intersection Graphs
Pub Date: 2022-11-10 | DOI: 10.48550/arXiv.2211.05345
Timothy M. Chan
We consider problems related to finding short cycles, small cliques, small independent sets, and small subgraphs in geometric intersection graphs. We obtain a plethora of new results. For example:
* For the intersection graph of $n$ line segments in the plane, we give algorithms to find a 3-cycle in $O(n^{1.408})$ time, a size-3 independent set in $O(n^{1.652})$ time, a 4-clique in near-$O(n^{24/13})$ time, and a $k$-clique (or any $k$-vertex induced subgraph) in $O(n^{0.565k+O(1)})$ time for any constant $k$; we can also compute the girth in near-$O(n^{3/2})$ time.
* For the intersection graph of $n$ axis-aligned boxes in a constant dimension $d$, we give algorithms to find a 3-cycle in $O(n^{1.408})$ time for any $d$, a 4-clique (or any 4-vertex induced subgraph) in $O(n^{1.715})$ time for any $d$, a size-4 independent set in near-$O(n^{3/2})$ time for any $d$, a size-5 independent set in near-$O(n^{4/3})$ time for $d=2$, and a $k$-clique (or any $k$-vertex induced subgraph) in $O(n^{0.429k+O(1)})$ time for any $d$ and any constant $k$.
* For the intersection graph of $n$ fat objects in any constant dimension $d$, we give an algorithm to find any $k$-vertex (non-induced) subgraph in $O(n\log n)$ time for any constant $k$, generalizing a result by Kaplan, Klost, Mulzer, Roditty, Seiferth, and Sharir (1999) for 3-cycles in 2D disk graphs.
A variety of techniques is used, including geometric range searching, biclique covers, "high-low" tricks, graph degeneracy and separators, and shifted quadtrees. We also prove a near-$\Omega(n^{4/3})$ conditional lower bound for finding a size-4 independent set for boxes.
{"title":"Finding Triangles and Other Small Subgraphs in Geometric Intersection Graphs","authors":"Timothy M. Chan","doi":"10.48550/arXiv.2211.05345","DOIUrl":"https://doi.org/10.48550/arXiv.2211.05345","url":null,"abstract":"We consider problems related to finding short cycles, small cliques, small independent sets, and small subgraphs in geometric intersection graphs. We obtain a plethora of new results. For example: * For the intersection graph of $n$ line segments in the plane, we give algorithms to find a 3-cycle in $O(n^{1.408})$ time, a size-3 independent set in $O(n^{1.652})$ time, a 4-clique in near-$O(n^{24/13})$ time, and a $k$-clique (or any $k$-vertex induced subgraph) in $O(n^{0.565k+O(1)})$ time for any constant $k$; we can also compute the girth in near-$O(n^{3/2})$ time. * For the intersection graph of $n$ axis-aligned boxes in a constant dimension $d$, we give algorithms to find a 3-cycle in $O(n^{1.408})$ time for any $d$, a 4-clique (or any 4-vertex induced subgraph) in $O(n^{1.715})$ time for any $d$, a size-4 independent set in near-$O(n^{3/2})$ time for any $d$, a size-5 independent set in near-$O(n^{4/3})$ time for $d=2$, and a $k$-clique (or any $k$-vertex induced subgraph) in $O(n^{0.429k+O(1)})$ time for any $d$ and any constant $k$. * For the intersection graph of $n$ fat objects in any constant dimension $d$, we give an algorithm to find any $k$-vertex (non-induced) subgraph in $O(nlog n)$ time for any constant $k$, generalizing a result by Kaplan, Klost, Mulzer, Roddity, Seiferth, and Sharir (1999) for 3-cycles in 2D disk graphs. A variety of techniques is used, including geometric range searching, biclique covers,\"high-low\"tricks, graph degeneracy and separators, and shifted quadtrees. We also prove a near-$Omega(n^{4/3})$ conditional lower bound for finding a size-4 independent set for boxes.","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"146 1","pages":"1777-1805"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80576452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}