
Latest Publications: Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms

A Polynomial Time Algorithm for Finding a Minimum 4-Partition of a Submodular Function
Tsuyoshi Hirayama, Yuhao Liu, K. Makino, Ke Shi, Chao Xu · DOI: 10.1137/1.9781611977554.ch64 · pp. 1680-1691 · published 2023-12-06
Citations: 1
Player-optimal Stable Regret for Bandit Learning in Matching Markets
The problem of matching markets has been studied for a long time in the literature due to its wide range of applications. Finding a stable matching is a common equilibrium objective in this problem. Since market participants are usually uncertain of their preferences, a rich line of recent works study the online setting where one-side participants (players) learn their unknown preferences from iterative interactions with the other side (arms). Most previous works in this line are only able to derive theoretical guarantees for player-pessimal stable regret, which is defined compared with the players' least-preferred stable matching. However, under the pessimal stable matching, players only obtain the least reward among all stable matchings. To maximize players' profits, player-optimal stable matching would be the most desirable. Though \citet{basu21beyond} successfully bring an upper bound for player-optimal stable regret, their result can be exponentially large if players' preference gap is small. Whether a polynomial guarantee for this regret exists is a significant but still open problem. In this work, we provide a new algorithm named explore-then-Gale-Shapley (ETGS) and show that the optimal stable regret of each player can be upper bounded by $O(K\log T/\Delta^2)$, where $K$ is the number of arms, $T$ is the horizon and $\Delta$ is the players' minimum preference gap among the first $N+1$-ranked arms. This result significantly improves previous works which either have a weaker player-pessimal stable matching objective or apply only to markets with special assumptions. When the preferences of participants satisfy some special conditions, our regret upper bound also matches the previously derived lower bound.
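After its exploration phase, ETGS computes a matching by running player-proposing Gale-Shapley (deferred acceptance) on the learned preference estimates. A minimal, self-contained sketch of that classical step on known preferences (function name and data layout are illustrative, not from the paper):

```python
def gale_shapley(player_prefs, arm_prefs):
    """Player-proposing deferred acceptance.

    player_prefs[p] = list of arms in decreasing preference order.
    arm_prefs[a]    = list of players in decreasing preference order.
    Returns a dict mapping each player to its matched arm; with player
    proposals, this yields the player-optimal stable matching.
    """
    # rank[a][p] = position of player p in arm a's preference list
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in arm_prefs.items()}
    next_choice = {p: 0 for p in player_prefs}   # next arm index to propose to
    engaged = {}                                 # arm -> currently held player
    free = list(player_prefs)
    while free:
        p = free.pop()
        a = player_prefs[p][next_choice[p]]      # best arm not yet tried
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:   # arm prefers the new proposer
            free.append(engaged[a])
            engaged[a] = p
        else:
            free.append(p)                       # rejected; tries next arm later
    return {p: a for a, p in engaged.items()}
```

For example, if both players rank arm `'x'` first but `'x'` prefers player 1, player 0 ends up with `'y'`, which is its best achievable stable partner.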
Fang-yuan Kong, Shuai Li · DOI: 10.1137/1.9781611977554.ch55 · pp. 1512-1522 · published 2023-07-20
Citations: 0
Optimal Square Detection Over General Alphabets
Squares (fragments of the form $xx$, for some string $x$) are arguably the most natural type of repetition in strings. The basic algorithmic question concerning squares is to check if a given string of length $n$ is square-free, that is, does not contain a fragment of such form. Main and Lorentz [J. Algorithms 1984] designed an $\mathcal{O}(n\log n)$ time algorithm for this problem, and proved a matching lower bound assuming the so-called general alphabet, meaning that the algorithm is only allowed to check if two characters are equal. However, their lower bound also assumes that there are $\Omega(n)$ distinct symbols in the string. As an open question, they asked if there is a faster algorithm if one restricts the size of the alphabet. Crochemore [Theor. Comput. Sci. 1986] designed a linear-time algorithm for constant-size alphabets, and combined with more recent results his approach in fact implies such an algorithm for linearly-sortable alphabets. Very recently, Ellert and Fischer [ICALP 2021] significantly relaxed this assumption by designing a linear-time algorithm for general ordered alphabets, that is, assuming a linear order on the characters that permits constant-time order comparisons. However, the open question of Main and Lorentz from 1984 remained unresolved for general (unordered) alphabets. In this paper, we show that testing square-freeness of a length-$n$ string over a general alphabet of size $\sigma$ can be done with $\mathcal{O}(n\log \sigma)$ comparisons, and cannot be done with $o(n\log \sigma)$ comparisons. We complement this result with an $\mathcal{O}(n\log \sigma)$ time algorithm in the Word RAM model. Finally, we extend the algorithm to reporting all the runs (maximal repetitions) in the same complexity.
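To pin down the problem statement, the brute-force test below uses only character-equality comparisons, exactly as the general-alphabet model allows. It is cubic, so it only serves as a baseline for the $\mathcal{O}(n\log\sigma)$ comparison bound, not as an illustration of the paper's algorithm:

```python
def is_square_free(s):
    """Return True iff s contains no fragment of the form xx.

    Brute force (O(n^3) comparisons): for every start position i and
    half-length l, compare s[i..i+l) against s[i+l..i+2l) character by
    character, using equality tests only (general-alphabet model).
    """
    n = len(s)
    for i in range(n):
        for l in range(1, (n - i) // 2 + 1):
            if all(s[i + j] == s[i + l + j] for j in range(l)):
                return False  # found the square s[i:i+l] + s[i+l:i+2l]
    return True
```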
J. Ellert, Paweł Gawrychowski, Garance Gourdel · DOI: 10.1137/1.9781611977554.ch189 · pp. 5220-5242 · published 2023-03-13
Citations: 1
Fully Dynamic Exact Edge Connectivity in Sublinear Time
Given a simple $n$-vertex, $m$-edge graph $G$ undergoing edge insertions and deletions, we give two new fully dynamic algorithms for exactly maintaining the edge connectivity of $G$ in $\tilde{O}(n)$ worst-case update time and $\tilde{O}(m^{1-1/16})$ amortized update time, respectively. Prior to our work, all dynamic edge connectivity algorithms assumed bounded edge connectivity, guaranteed approximate solutions, or were restricted to edge insertions only. Our results answer in the affirmative an open question posed by Thorup [Combinatorica'07].
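For context, a fully dynamic algorithm must beat the naive baseline of recomputing a global minimum cut from scratch after every update. A compact Stoer-Wagner recomputation for unit-weight graphs (where the global min cut value equals the edge connectivity) might look like this; it is an illustrative sketch of the baseline, not the paper's data structure:

```python
def global_min_cut(n, edges):
    """Stoer-Wagner global minimum cut on an undirected graph with
    vertices 0..n-1; edges is a list of (u, v, weight). With unit
    weights the returned value is the edge connectivity."""
    w = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        w[u][v] += c
        w[v][u] += c
    best = float("inf")
    nodes = list(range(n))
    while len(nodes) > 1:
        # Maximum-adjacency ordering ("minimum cut phase").
        a = [nodes[0]]
        rest = nodes[1:]
        weights = {v: w[nodes[0]][v] for v in rest}
        while rest:
            sel = max(rest, key=lambda v: weights[v])
            rest.remove(sel)
            a.append(sel)
            for v in rest:
                weights[v] += w[sel][v]
        t, s = a[-1], a[-2]
        # Cut of the phase: t versus everything else.
        best = min(best, sum(w[t][v] for v in nodes if v != t))
        # Merge t into s and discard t.
        for v in nodes:
            w[s][v] += w[t][v]
            w[v][s] = w[s][v]
        nodes.remove(t)
    return best
```

Rerunning this after each of $m$ updates costs $\Theta(n^3)$ per update, which is exactly the kind of cost the sublinear update times above avoid.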
Gramoz Goranci, M. Henzinger, Danupon Nanongkai, Thatchaphol Saranurak, M. Thorup, Christian Wulff-Nilsen · DOI: 10.1137/1.9781611977554.ch3 · pp. 70-86 · published 2023-02-12
Citations: 1
Maximal k-Edge-Connected Subgraphs in Weighted Graphs via Local Random Contraction
The \emph{maximal $k$-edge-connected subgraphs} problem is a classical graph clustering problem studied since the 70's. Surprisingly, no non-trivial technique for this problem in weighted graphs is known: a very straightforward recursive-mincut algorithm with $\Omega(mn)$ time has remained the fastest algorithm until now. All previous progress gives a speed-up only when the graph is unweighted and $k$ is small enough (e.g., Henzinger et al. (ICALP'15), Chechik et al. (SODA'17), and Forster et al. (SODA'20)). We give the first algorithm that breaks through the long-standing $\tilde{O}(mn)$-time barrier in \emph{weighted undirected} graphs. More specifically, we show a maximal $k$-edge-connected subgraphs algorithm that takes only $\tilde{O}(m\cdot\min\{m^{3/4},n^{4/5}\})$ time. As an immediate application, we can $(1+\epsilon)$-approximate the \emph{strength} of all edges in undirected graphs in the same running time. Our key technique is the first local cut algorithm with \emph{exact} cut-value guarantees whose running time depends only on the output size. All previous local cut algorithms either have running time depending on the cut value of the output, which can be arbitrarily slow in weighted graphs, or have approximate cut guarantees.
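The "very straightforward recursive-mincut algorithm" the abstract refers to can be sketched directly: compute a global minimum cut, and if its value is below $k$, split along it and recurse on both sides. The brute-force min cut helper below is exponential and only suitable for tiny examples; all names are illustrative, not from the paper:

```python
from itertools import combinations

def min_cut_value(nodes, edges):
    """Brute-force global min cut: try every bipartition (tiny graphs only).
    Returns (cut value, one side of the best cut)."""
    nodes = list(nodes)
    best, best_side = float("inf"), None
    for r in range(1, len(nodes)):
        for side in combinations(nodes, r):
            s = set(side)
            cut = sum(w for u, v, w in edges if (u in s) != (v in s))
            if cut < best:
                best, best_side = cut, s
    return best, best_side

def max_k_edge_connected(nodes, edges, k):
    """Recursive-mincut decomposition: split along any cut of value < k
    until every remaining component is k-edge-connected."""
    nodes = set(nodes)
    if len(nodes) == 1:
        return [nodes]
    cut, side = min_cut_value(nodes, edges)
    if cut >= k:
        return [nodes]          # already k-edge-connected
    other = nodes - side
    e1 = [(u, v, w) for u, v, w in edges if u in side and v in side]
    e2 = [(u, v, w) for u, v, w in edges if u in other and v in other]
    return max_k_edge_connected(side, e1, k) + max_k_edge_connected(other, e2, k)
```

With a real min-cut routine, each recursion level costs a min-cut computation, which is where the $\Omega(mn)$ total running time of the classical approach comes from.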
Chaitanya Nalam, Thatchaphol Saranurak · DOI: 10.48550/arXiv.2302.02290 · pp. 183-211 · published 2023-02-05
Citations: 2
Balanced Allocations with Heterogeneous Bins: The Power of Memory
We consider the allocation of $m$ balls (jobs) into $n$ bins (servers). In the standard Two-Choice process, at each step $t=1,2,\ldots,m$ we first sample two bins uniformly at random and place a ball in the least loaded bin. It is well-known that for any $m \geq n$, this results in a gap (difference between the maximum and average load) of $\log_2 \log n + \Theta(1)$ (with high probability). In this work, we consider the Memory process [Mitzenmacher, Prabhakar and Shah 2002] where instead of two choices, we only sample one bin per step but we have access to a cache which can store the location of one bin. Mitzenmacher, Prabhakar and Shah showed that in the lightly loaded case ($m = n$), the Memory process achieves a gap of $\mathcal{O}(\log \log n)$. Extending the setting of Mitzenmacher et al. in two ways, we first allow the number of balls $m$ to be arbitrary, which includes the challenging heavily loaded case where $m \geq n$. Secondly, we follow the heterogeneous bins model of Wieder [Wieder 2007], where the sampling distribution of bins can be biased up to some arbitrary multiplicative constant. Somewhat surprisingly, we prove that even in this setting, the Memory process still achieves an $\mathcal{O}(\log \log n)$ gap bound. This is in stark contrast with the Two-Choice (or any $d$-Choice with $d=\mathcal{O}(1)$) process, where it is known that the gap diverges as $m \rightarrow \infty$ [Wieder 2007]. Further, we show that for any sampling distribution independent of $m$ (but possibly dependent on $n$) the Memory process has a gap that can be bounded independently of $m$. Finally, we prove a tight gap bound of $\mathcal{O}(\log n)$ for Memory in another relaxed setting with heterogeneous (weighted) balls and a cache which can only be maintained for two steps.
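A toy simulation makes the two processes concrete. In the sketch below, "memory" compares one fresh uniform sample against a single cached bin; the cache-update rule is simplified relative to Mitzenmacher, Prabhakar and Shah's process (which caches the lighter of the two bins after placement), and uniform sampling is assumed rather than the heterogeneous bins of the paper:

```python
import random

def simulate(process, m, n, seed=0):
    """Throw m balls into n bins and return the gap (max load - average load).
    process is "two-choice" or "memory" (simplified cache rule, see above)."""
    rng = random.Random(seed)
    loads = [0] * n
    cache = None                       # remembered bin for the Memory process
    for _ in range(m):
        if process == "two-choice":
            i, j = rng.randrange(n), rng.randrange(n)
            k = i if loads[i] <= loads[j] else j
        else:                          # memory: one fresh sample + cached bin
            i = rng.randrange(n)
            k = i if cache is None or loads[i] <= loads[cache] else cache
            cache = k                  # simplified: remember the bin just used
        loads[k] += 1
    return max(loads) - m / n
```

Even this simplified variant keeps the load gap small in the heavily loaded regime ($m \gg n$), which is the regime the paper's analysis addresses rigorously.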
Dimitrios Los, Thomas Sauerwald, John Sylvester · DOI: 10.1137/1.9781611977554.ch169 · pp. 4448-4477 · published 2023-01-24
Citations: 2
Approximating Knapsack and Partition via Dense Subset Sums
Knapsack and Partition are two important additive problems whose fine-grained complexities in the $(1-\varepsilon)$-approximation setting are not yet settled. In this work, we make progress on both problems by giving improved algorithms. - Knapsack can be $(1-\varepsilon)$-approximated in $\tilde O(n + (1/\varepsilon)^{2.2})$ time, improving the previous $\tilde O(n + (1/\varepsilon)^{2.25})$ by Jin (ICALP'19). There is a known conditional lower bound of $(n+1/\varepsilon)^{2-o(1)}$ based on the $(\min,+)$-convolution hypothesis. - Partition can be $(1-\varepsilon)$-approximated in $\tilde O(n + (1/\varepsilon)^{1.25})$ time, improving the previous $\tilde O(n + (1/\varepsilon)^{1.5})$ by Bringmann and Nakos (SODA'21). There is a known conditional lower bound of $(1/\varepsilon)^{1-o(1)}$ based on the Strong Exponential Time Hypothesis. Both of our new algorithms apply the additive combinatorial results on dense subset sums by Galil and Margalit (SICOMP'91) and Bringmann and Wellnitz (SODA'21). Such techniques had not been explored in the context of Knapsack prior to our work. In addition, we design several new methods to speed up the divide-and-conquer steps which naturally arise in solving additive problems.
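For orientation, the textbook $(1-\varepsilon)$-approximation for Knapsack (profit scaling followed by a DP over scaled profit values, roughly $\mathcal{O}(n^2/\varepsilon)$ time) is sketched below. It is the classical baseline only; the paper's much faster algorithm is not reproduced here:

```python
def approx_knapsack(items, capacity, eps):
    """Classic FPTAS for 0/1 knapsack: round profits down to multiples of
    mu = eps * pmax / n, then run a min-weight DP over scaled profits.
    items is a list of (profit, weight). Returns a value that is at least
    (1 - eps) times the optimum and never exceeds it."""
    n = len(items)
    pmax = max(p for p, _ in items)
    mu = eps * pmax / n                          # profit scaling factor
    scaled = [(int(p // mu), w) for p, w in items]
    total = sum(p for p, _ in scaled)
    INF = float("inf")
    # min_weight[q] = least total weight achieving scaled profit exactly q
    min_weight = [0] + [INF] * total
    for p, w in scaled:
        for q in range(total, p - 1, -1):        # 0/1: iterate downwards
            if min_weight[q - p] + w < min_weight[q]:
                min_weight[q] = min_weight[q - p] + w
    best_q = max(q for q in range(total + 1) if min_weight[q] <= capacity)
    return best_q * mu
```

Rounding loses at most $\mu$ profit per item, i.e. at most $\varepsilon \cdot p_{\max} \le \varepsilon \cdot \mathrm{OPT}$ in total, which gives the $(1-\varepsilon)$ guarantee.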
Mingyang Deng, Ce Jin, Xiao Mao · DOI: 10.1137/1.9781611977554.ch113 · pp. 2961-2979 · published 2023-01-23
Citations: 10
An Improved Approximation for Maximum Weighted k-Set Packing
We consider the weighted $k$-set packing problem, in which we are given a collection of weighted sets, each with at most $k$ elements, and must return a collection of pairwise disjoint sets with maximum total weight. For $k = 3$, this problem generalizes the classical 3-dimensional matching problem, listed as one of Karp's original 21 NP-complete problems. We give an algorithm attaining an approximation factor of $1.786$ for weighted 3-set packing, improving on the recent best result of $2-\frac{1}{63{,}700{,}992}$ due to Neuwohner. Our algorithm is based on the local search procedure of Berman that attempts to improve the sum of squared weights rather than the problem's objective. When using exchanges of size at most $k$, this algorithm attains an approximation factor of $\frac{k+1}{2}$. Using exchanges of size $k^2(k-1) + k$, we provide a relatively simple analysis to obtain an approximation factor of 1.811 when $k = 3$. We then show that the tools we develop can be adapted to larger exchanges of size $2k^2(k-1) + k$ to give an approximation factor of 1.786. Although our primary focus is on the case $k = 3$, our approach in fact gives slightly stronger improvements on the factor $\frac{k+1}{2}$ for all $k>3$. As in previous works, our guarantees hold also for the more general problem of finding a maximum weight independent set in a $(k+1)$-claw free graph.
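The squared-weight criterion can be illustrated with the simplest possible exchange rule: bring one new set into the solution whenever its squared weight exceeds the total squared weight of the conflicting sets it evicts. This toy version uses size-1 exchanges only, far weaker than the $k^2(k-1)+k$-sized exchanges analyzed in the paper, and all names are illustrative:

```python
def local_search_packing(sets):
    """sets: list of (weight, frozenset). Returns the indices of a pairwise
    disjoint subcollection found by squared-weight local search with
    single-set exchanges (a drastic simplification of Berman's procedure)."""
    sol = set()
    improved = True
    while improved:
        improved = False
        for i, (w, s) in enumerate(sets):
            if i in sol:
                continue
            conflict = [j for j in sol if sets[j][1] & s]
            # Accept the swap iff it increases the sum of SQUARED weights.
            if w * w > sum(sets[j][0] ** 2 for j in conflict):
                sol -= set(conflict)
                sol.add(i)
                improved = True
    return sol
```

In the test below, the set of weight 3 evicts two disjoint sets of weight 2 because $3^2 = 9 > 2^2 + 2^2 = 8$: the squared-weight potential strictly improves with every accepted swap (guaranteeing termination) even when the plain total weight does not, which is exactly the trade-off behind the $\frac{k+1}{2}$ analysis.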
Theophile Thiery, J. Ward · DOI: 10.48550/arXiv.2301.07537 · pp. 1138-1162 · published 2023-01-18
Citations: 4
Non-Stochastic CDF Estimation Using Threshold Queries
Estimating the empirical distribution of a scalar-valued data set is a basic and fundamental task. In this paper, we tackle the problem of estimating an empirical distribution in a setting with two challenging features. First, the algorithm does not directly observe the data; instead, it only asks a limited number of threshold queries about each sample. Second, the data are not assumed to be independent and identically distributed; instead, we allow for an arbitrary process generating the samples, including an adaptive adversary. These considerations are relevant, for example, when modeling a seller experimenting with posted prices to estimate the distribution of consumers' willingness to pay for a product: offering a price and observing a consumer's purchase decision is equivalent to asking a single threshold query about their value, and the distribution of consumers' values may be non-stationary over time, as early adopters may differ markedly from late adopters. Our main result quantifies, to within a constant factor, the sample complexity of estimating the empirical CDF of a sequence of elements of $[n]$, up to $\varepsilon$ additive error, using one threshold query per sample. The complexity depends only logarithmically on $n$, and our result can be interpreted as extending the existing logarithmic-complexity results for noisy binary search to the more challenging setting where noise is non-stochastic. Along the way to designing our algorithm, we consider a more general model in which the algorithm is allowed to make a limited number of simultaneous threshold queries on each sample. We solve this problem using Blackwell's Approachability Theorem and the exponential weights method. As a side result of independent interest, we characterize the minimum number of simultaneous threshold queries required by deterministic CDF estimation algorithms.
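The query model itself is easy to demonstrate: each sample may be probed at exactly one threshold, and $F(v)$ is estimated from the probes that happened to use threshold $v$. The toy estimator below picks thresholds uniformly at random; it illustrates the interface, not the paper's adversarially robust algorithm:

```python
import random

def estimate_cdf(samples, n, seed=0):
    """Estimate F(v) = fraction of samples <= v, for values in [1, n],
    using ONE threshold query per sample (uniformly random thresholds)."""
    rng = random.Random(seed)
    hits = [0] * (n + 1)    # hits[v]  = yes-answers among queries at threshold v
    asked = [0] * (n + 1)   # asked[v] = number of queries made at threshold v
    for x in samples:
        theta = rng.randint(1, n)      # the only query we may ask about x
        asked[theta] += 1
        hits[theta] += (x <= theta)    # answer to "is x <= theta?"
    return [hits[v] / asked[v] if asked[v] else 0.0 for v in range(n + 1)]
```

With i.i.d. samples this converges to the true CDF at each threshold, but against an adaptive adversary the naive averaging breaks down, which is why the paper resorts to Blackwell approachability and exponential weights.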
{"title":"Non-Stochastic CDF Estimation Using Threshold Queries","authors":"Princewill Okoroafor, Vaishnavi Gupta, Robert D. Kleinberg, Eleanor Goh","doi":"10.48550/arXiv.2301.05682","DOIUrl":"https://doi.org/10.48550/arXiv.2301.05682","url":null,"abstract":"Estimating the empirical distribution of a scalar-valued data set is a basic and fundamental task. In this paper, we tackle the problem of estimating an empirical distribution in a setting with two challenging features. First, the algorithm does not directly observe the data; instead, it only asks a limited number of threshold queries about each sample. Second, the data are not assumed to be independent and identically distributed; instead, we allow for an arbitrary process generating the samples, including an adaptive adversary. These considerations are relevant, for example, when modeling a seller experimenting with posted prices to estimate the distribution of consumers' willingness to pay for a product: offering a price and observing a consumer's purchase decision is equivalent to asking a single threshold query about their value, and the distribution of consumers' values may be non-stationary over time, as early adopters may differ markedly from late adopters. Our main result quantifies, to within a constant factor, the sample complexity of estimating the empirical CDF of a sequence of elements of $[n]$, up to $\varepsilon$ additive error, using one threshold query per sample. The complexity depends only logarithmically on $n$, and our result can be interpreted as extending the existing logarithmic-complexity results for noisy binary search to the more challenging setting where noise is non-stochastic. Along the way to designing our algorithm, we consider a more general model in which the algorithm is allowed to make a limited number of simultaneous threshold queries on each sample. We solve this problem using Blackwell's Approachability Theorem and the exponential weights method. As a side result of independent interest, we characterize the minimum number of simultaneous threshold queries required by deterministic CDF estimation algorithms.","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"24 1","pages":"3551-3572"},"PeriodicalIF":0.0,"publicationDate":"2023-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83522318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
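The access model in the abstract above (each sample is observed only through one binary answer to "is the sample at most $\theta$?") can be illustrated with a naive baseline: pick each query threshold uniformly at random and average the yes/no answers per threshold. This is a sketch of the query model for intuition only, not the paper's adversarially robust algorithm, and `estimate_cdf` with its signature is an assumed illustrative helper:

```python
import random

def estimate_cdf(samples, n):
    """Estimate the empirical CDF of samples drawn from {1, ..., n},
    observing each sample only via one random threshold query."""
    hits = [0] * (n + 1)  # "yes, x <= theta" answers per threshold
    asks = [0] * (n + 1)  # how often each threshold was queried
    for x in samples:
        theta = random.randint(1, n)  # one threshold query per sample
        asks[theta] += 1
        hits[theta] += (x <= theta)   # the single bit we learn about x
    # F_hat(theta) ~ fraction of "yes" answers observed at that threshold
    return [hits[t] / asks[t] if asks[t] else 0.0 for t in range(1, n + 1)]
```

Against a non-stationary or adversarial sample sequence this per-threshold averaging gives no guarantee, which is exactly the gap the paper's logarithmic-sample-complexity algorithm closes.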
Improved girth approximation in weighted undirected graphs
{"title":"Improved girth approximation in weighted undirected graphs","authors":"Avi Kadria, L. Roditty, Aaron Sidford, V. V. Williams, Uri Zwick","doi":"10.1137/1.9781611977554.ch85","DOIUrl":"https://doi.org/10.1137/1.9781611977554.ch85","url":null,"abstract":"","PeriodicalId":92709,"journal":{"name":"Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms","volume":"16 1","pages":"2242-2255"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76971335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1