
ACM Transactions on Algorithms: Latest Articles

Generic Techniques for Building Top-k Structures
IF 1.3 | CAS Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2022-10-10 | DOI: https://dl.acm.org/doi/10.1145/3546074
Saladi Rahul, Yufei Tao

A reporting query returns the objects satisfying a predicate q from an input set. In prioritized reporting, each object carries a real-valued weight (which can be query dependent), and a query returns the objects that satisfy q and have weights at least a threshold τ. A top-k query finds, among all the objects satisfying q, the k ones of the largest weights; a max query is a special instance with k = 1. We want to design data structures of small space to support queries (and possibly updates) efficiently.

Previous work has shown that a top-k structure can also support max and prioritized queries with no performance deterioration. This article explores the opposite direction: do prioritized queries, possibly combined with max queries, imply top-k search? Subject to mild conditions, we provide affirmative answers with two reduction techniques. The first converts a prioritized structure into a static top-k structure with the same space complexity and only a logarithmic blowup in query time. If a max structure is available in addition, our second reduction yields a top-k structure with no degradation in expected performance (this holds for the space, query, and update complexities). Our techniques significantly simplify the design of top-k structures because structures for max and prioritized queries are often easier to obtain. We demonstrate this by developing top-k structures for interval stabbing, 3D dominance, halfspace reporting, linear ranking, and L∞ nearest neighbor search in the RAM and the external memory computation models.
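To make the three query types concrete, here is a minimal brute-force sketch with no preprocessing at all (a linear scan per query); the function names and the representation of objects as (id, weight) pairs are illustrative assumptions, not the paper's interface, whose point is to support the same semantics with small space and fast query time.

```python
from typing import Callable, Iterable, List, Optional, Tuple

Obj = Tuple[int, float]  # (object id, real-valued weight)

def prioritized_report(objs: Iterable[Obj], pred: Callable[[int], bool], tau: float) -> List[int]:
    """Prioritized reporting: ids of objects that satisfy q and have weight >= tau."""
    return [oid for oid, w in objs if pred(oid) and w >= tau]

def top_k(objs: Iterable[Obj], pred: Callable[[int], bool], k: int) -> List[int]:
    """Top-k: the k largest-weight objects among those satisfying q, by decreasing weight."""
    hits = sorted(((w, oid) for oid, w in objs if pred(oid)), reverse=True)
    return [oid for _, oid in hits[:k]]

def max_query(objs: Iterable[Obj], pred: Callable[[int], bool]) -> Optional[int]:
    """Max query: the special case k = 1."""
    res = top_k(objs, pred, 1)
    return res[0] if res else None

data = [(1, 0.9), (2, 0.4), (3, 0.7), (4, 0.1)]
even = lambda oid: oid % 2 == 0          # the predicate q, here: "id is even"
print(prioritized_report(data, even, 0.3), top_k(data, even, 1), max_query(data, even))
```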

Citations: 0
Sticky Brownian Rounding and its Applications to Constraint Satisfaction Problems
IF 1.3 | CAS Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2022-10-10 | DOI: https://dl.acm.org/doi/10.1145/3459096
Sepehr Abbasi-Zadeh, Nikhil Bansal, Guru Guruganesh, Aleksandar Nikolov, Roy Schwartz, Mohit Singh

Semidefinite programming is a powerful tool in the design and analysis of approximation algorithms for combinatorial optimization problems. In particular, the random hyperplane rounding method of Goemans and Williamson [31] has been extensively studied for more than two decades, resulting in various extensions to the original technique and beautiful algorithms for a wide range of applications. Despite the fact that this approach yields tight approximation guarantees for some problems, e.g., Max-Cut, for many others, e.g., Max-SAT and Max-DiCut, the tight approximation ratio is still unknown. One of the main reasons for this is the fact that very few techniques for rounding semi-definite relaxations are known.

In this work, we present a new general and simple method for rounding semi-definite programs, based on Brownian motion. Our approach is inspired by recent results in algorithmic discrepancy theory. We develop and present tools for analyzing our new rounding algorithms, utilizing mathematical machinery from the theory of Brownian motion, complex analysis, and partial differential equations. Focusing on constraint satisfaction problems, we apply our method to several classical problems, including Max-Cut, Max-2SAT, and Max-DiCut, and derive new algorithms that are competitive with the best known results. To illustrate the versatility and general applicability of our approach, we give new approximation algorithms for the Max-Cut problem with side constraints that crucially utilize measure concentration results for the Sticky Brownian Motion, a feature missing from hyperplane rounding and its generalizations.
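As a concrete illustration of the rounding primitive itself (not of the paper's analysis or its side-constraint extension), the sketch below simulates a discretized sticky Brownian rounding for Max-Cut: it assumes an SDP solution is already given as one unit vector per vertex, drives every vertex's scalar coordinate by the same Gaussian increment projected onto its vector, and freezes a coordinate once it hits ±1. The step size, the step budget, and the random unit vectors in the usage lines are arbitrary choices for the sketch.

```python
import numpy as np

def sticky_brownian_maxcut(vectors: np.ndarray, dt: float = 1e-3,
                           max_steps: int = 200_000, seed: int = 0) -> np.ndarray:
    """Round a Max-Cut SDP solution (row i = unit vector v_i) to a +/-1 cut.

    Discretized sticky Brownian motion: each vertex carries a scalar x_i starting at 0;
    in each step every still-free vertex moves by sqrt(dt) * <v_i, g> for one shared
    Gaussian vector g, and a vertex sticks once |x_i| reaches 1.
    """
    rng = np.random.default_rng(seed)
    n, d = vectors.shape
    x = np.zeros(n)
    free = np.ones(n, dtype=bool)
    for _ in range(max_steps):
        if not free.any():
            break
        g = rng.standard_normal(d)
        x[free] += np.sqrt(dt) * (vectors[free] @ g)
        hit = np.abs(x) >= 1.0
        x[hit] = np.sign(x[hit])      # stick at the boundary
        free &= ~hit
    return np.sign(x + 1e-12)         # any vertex still free is assigned by its sign

# toy usage: random unit vectors stand in for an actual SDP solution
V = np.random.default_rng(1).standard_normal((5, 5))
V /= np.linalg.norm(V, axis=1, keepdims=True)
print(sticky_brownian_maxcut(V))
```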

Citations: 0
Tolerant Testers of Image Properties
IF 1.3 | CAS Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2022-10-10 | DOI: https://dl.acm.org/doi/10.1145/3531527
Piotr Berman, Meiram Murzabulatov, Sofya Raskhodnikova

We initiate a systematic study of tolerant testers of image properties or, equivalently, algorithms that approximate the distance from a given image to the desired property. Image processing is a particularly compelling area of applications for sublinear-time algorithms and, specifically, property testing. However, for testing algorithms to reach their full potential in image processing, they have to be tolerant, which allows them to be resilient to noise.

We design efficient approximation algorithms for the following fundamental questions: What fraction of pixels have to be changed in an image so it becomes a half-plane? A representation of a convex object? A representation of a connected object? More precisely, our algorithms approximate the distance to three basic properties (being a half-plane, convexity, and connectedness) within a small additive error ε, after reading poly(1/ε) pixels, independent of the image size. We also design an efficient agnostic proper PAC learner of convex sets (continuous and discrete) in two dimensions under the uniform distribution.

Our algorithms require very simple access to the input: uniform random samples for the half-plane property and convexity, and samples from uniformly random blocks for connectedness. However, the analysis of the algorithms, especially for convexity, requires many geometric and combinatorial insights. For example, in the analysis of the algorithm for convexity, we define a set of reference polygons Pε such that (1) every convex image has a nearby polygon in Pε and (2) one can use dynamic programming to quickly compute the smallest empirical distance to a polygon in Pε.
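As a toy illustration of the access model (uniform random pixel samples) rather than of the paper's algorithm, the sketch below estimates by sampling how far a binary image is from one fixed candidate half-plane and then minimizes over a coarse, hand-picked grid of candidates; the grid resolution and the sample budget are arbitrary assumptions and carry none of the paper's guarantees.

```python
import math, random

def sampled_disagreement(image, halfplane, samples, rng):
    """Estimate, from uniform pixel samples, the fraction of pixels whose value
    disagrees with the half-plane (a, b, c): pixel (x, y) should be 1 iff a*x + b*y <= c."""
    n = len(image)
    a, b, c = halfplane
    bad = 0
    for _ in range(samples):
        x, y = rng.randrange(n), rng.randrange(n)
        want = 1 if a * x + b * y <= c else 0
        bad += int(image[y][x] != want)
    return bad / samples

def approx_distance_to_half_plane(image, eps=0.05, seed=0):
    """Crude estimate of the distance to the half-plane property: minimize the
    sampled disagreement over a coarse grid of directions and offsets."""
    rng = random.Random(seed)
    n = len(image)
    samples = math.ceil(4 / eps ** 2)          # naive Chernoff-style sample budget
    best = 1.0
    for k in range(16):                        # 16 candidate directions
        theta = math.pi * k / 16
        a, b = math.cos(theta), math.sin(theta)
        for c in range(-2 * n, 2 * n + 1, max(1, n // 8)):
            best = min(best, sampled_disagreement(image, (a, b, c), samples, rng))
    return best

img = [[1 if x <= 3 else 0 for x in range(8)] for y in range(8)]  # an exact half-plane
print(approx_distance_to_half_plane(img))                          # should print 0.0
```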

Citations: 0
Exponential Separations in Local Privacy
IF 1.3 | CAS Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2022-10-10 | DOI: https://dl.acm.org/doi/10.1145/3459095
Matthew Joseph, Jieming Mao, Aaron Roth

We prove a general connection between the communication complexity of two-player games and the sample complexity of their multi-player locally private analogues. We use this connection to prove sample complexity lower bounds for locally differentially private protocols as straightforward corollaries of results from communication complexity. In particular, we (1) use a communication lower bound for the hidden layers problem to prove an exponential sample complexity separation between sequentially and fully interactive locally private protocols, and (2) use a communication lower bound for the pointer chasing problem to prove an exponential sample complexity separation between k-round and (k+1)-round sequentially interactive locally private protocols, for every k.
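For readers less familiar with the model, here is the textbook randomized-response mechanism, the simplest (non-interactive) locally differentially private protocol; it is included only as background for the terminology and has nothing to do with the paper's separations, which concern rounds of interaction.

```python
import math, random

def randomized_response(bit: int, eps: float, rng: random.Random) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), flip it otherwise.
    Each user's report on its own is eps-locally differentially private."""
    p_true = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if rng.random() < p_true else 1 - bit

def debiased_mean(reports, eps):
    """Unbiased estimate of the true mean from randomized-response reports."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)

rng = random.Random(0)
data = [1] * 300 + [0] * 700                      # true mean 0.3
reports = [randomized_response(b, eps=1.0, rng=rng) for b in data]
print(debiased_mean(reports, eps=1.0))            # close to 0.3, up to sampling noise
```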

Citations: 0
Ranked Document Retrieval in External Memory
IF 1.3 | CAS Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2022-09-22 | DOI: 10.1145/3559763
R. Shah, Cheng Sheng, Sharma V. Thankachan, J. Vitter
The ranked (or top-k) document retrieval problem is defined as follows: preprocess a collection {T_1, T_2, ..., T_d} of d strings (called documents) of total length n into a data structure, such that for any given query (P, k), where P is a string (called pattern) of length p ≥ 1 and k ∈ [1, d] is an integer, the identifiers of those k documents that are most relevant to P can be reported, ideally in the sorted order of their relevance. The seminal work by Hon et al. [FOCS 2009 and Journal of the ACM 2014] presented an O(n)-space (in words) data structure with O(p + k log k) query time. The query time was later improved to O(p + k) [SODA 2012] and further to O(p/log_σ n + k) [SIAM Journal on Computing 2017] by Navarro and Nekrich, where σ is the alphabet size. We revisit this problem in the external memory model and present three data structures. The first one takes O(n) space and answers queries in O(p/B + log_B n + k/B + log*(n/B)) I/Os, where B is the block size. The second one takes O(n log*(n/B)) space and answers queries in optimal O(p/B + log_B n + k/B) I/Os. In both cases, the answers are reported in the unsorted order of relevance. To handle sorted top-k document retrieval, we present an O(n log(d/B)) space data structure with optimal query cost.
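For contrast with the paper's I/O-efficient structures, here is the naive in-memory baseline with no preprocessing: score every document by its number of occurrences of P (one common relevance measure; the framework allows others) and keep the k best, at a per-query cost linear in the total text length.

```python
import heapq
from typing import List, Tuple

def topk_documents(docs: List[str], pattern: str, k: int) -> List[Tuple[int, int]]:
    """Return up to k (doc_id, score) pairs, sorted by decreasing score,
    where score = number of (possibly overlapping) occurrences of `pattern`."""
    def occurrences(text: str) -> int:
        count, start = 0, 0
        while True:
            pos = text.find(pattern, start)
            if pos == -1:
                return count
            count += 1
            start = pos + 1          # allow overlapping matches
    scored = [(occurrences(t), i) for i, t in enumerate(docs)]
    best = heapq.nlargest(k, scored)             # O(d log k) on top of the linear scan
    return [(i, s) for (s, i) in best if s > 0]

print(topk_documents(["ababab", "baba", "zzz"], "aba", 2))   # [(0, 2), (1, 1)]
```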
Citations: 0
A Cubic Algorithm for Computing the Hermite Normal Form of a Nonsingular Integer Matrix
IF 1.3 | CAS Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2022-09-21 | DOI: 10.1145/3617996
Stavros Birmpilis, G. Labahn, A. Storjohann
A Las Vegas randomized algorithm is given to compute the Hermite normal form of a nonsingular integer matrix A of dimension n. The algorithm uses quadratic integer multiplication and cubic matrix multiplication and has running time bounded by O(n^3 (log n + log ||A||)^2 (log n)^2) bit operations, where ||A|| = max_{i,j} |A_{ij}| denotes the largest entry of A in absolute value. A variant of the algorithm that uses pseudo-linear integer multiplication is given that has running time (n^3 log ||A||)^{1 + o(1)} bit operations, where the exponent "+ o(1)" captures additional factors c_1 (log n)^{c_2} (loglog ||A||)^{c_3} for positive real constants c_1, c_2, c_3.
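To pin down the object being computed (this is not the paper's algorithm, whose contribution is precisely to control the bit-length of intermediate entries), here is a naive row-style Hermite normal form via extended-gcd row operations; intermediate entries in this sketch can grow exponentially.

```python
def ext_gcd(a: int, b: int):
    """Return (g, x, y) with g = gcd(a, b) >= 0 and a*x + b*y = g."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def hnf(A):
    """Naive row-style Hermite normal form of a nonsingular integer matrix:
    H = U*A with U unimodular, H upper triangular with positive diagonal, and
    each entry above a pivot reduced into [0, pivot). No bit-length control."""
    H = [row[:] for row in A]
    n = len(H)
    for j in range(n):
        for i in range(j + 1, n):            # zero out column j below the pivot
            if H[i][j] == 0:
                continue
            a, b = H[j][j], H[i][j]
            g, x, y = ext_gcd(a, b)
            # the 2x2 row transform [[x, y], [-b//g, a//g]] has determinant 1
            H[j], H[i] = ([x * H[j][c] + y * H[i][c] for c in range(n)],
                          [(a // g) * H[i][c] - (b // g) * H[j][c] for c in range(n)])
        if H[j][j] < 0:                      # make the pivot positive
            H[j] = [-v for v in H[j]]
        for i in range(j):                   # reduce entries above the pivot
            q = H[i][j] // H[j][j]
            H[i] = [H[i][c] - q * H[j][c] for c in range(n)]
    return H

print(hnf([[2, 0], [1, 3]]))                 # [[1, 3], [0, 6]]
```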
Citations: 0
A Polynomial-Time Algorithm for 1/3-Approximate Nash Equilibria in Bimatrix Games
IF 1.3 | CAS Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2022-04-25 | DOI: 10.1145/3606697
Argyrios Deligkas, M. Fasoulakis, E. Markakis
Since the celebrated PPAD-completeness result for Nash equilibria in bimatrix games, a long line of research has focused on polynomial-time algorithms that compute ε-approximate Nash equilibria. Finding the best possible approximation guarantee achievable in polynomial time has been a fundamental and non-trivial pursuit in settling the complexity of approximate equilibria. Despite a significant amount of effort, the algorithm of Tsaknakis and Spirakis [38], with an approximation guarantee of (0.3393 + δ), has remained the state of the art over the last 15 years. In this paper, we propose a new refinement of the Tsaknakis-Spirakis algorithm, resulting in a polynomial-time algorithm that computes a (1/3 + δ)-Nash equilibrium, for any constant δ > 0. The main idea of our approach is to go beyond the use of convex combinations of primal and dual strategies, as defined in the optimization framework of [38], and enrich the pool of strategies from which we build the strategy profiles that we output in certain bottleneck cases of the algorithm.
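The sketch below is not the (1/3 + δ) algorithm (which refines the Tsaknakis-Spirakis descent framework); it is the much simpler classical 1/2-approximation of Daskalakis, Mehta, and Papadimitriou, shown only to make concrete what an approximate equilibrium of a bimatrix game (R, C) with payoffs in [0, 1] looks like: neither player can gain more than 1/2 by deviating.

```python
import numpy as np

def half_approx_nash(R: np.ndarray, C: np.ndarray):
    """Classical 1/2-approximate Nash equilibrium for a bimatrix game with payoffs
    in [0, 1]: row i is arbitrary, j is the column player's best response to i,
    i2 is the row player's best response to j; the row player mixes i and i2
    half-and-half, the column player plays j."""
    m, n = R.shape
    i = 0                               # arbitrary starting row
    j = int(np.argmax(C[i]))            # column best response to e_i
    i2 = int(np.argmax(R[:, j]))        # row best response to e_j
    x = np.zeros(m)
    x[i] = x[i2] = 0.5
    if i == i2:
        x[i] = 1.0
    y = np.zeros(n)
    y[j] = 1.0
    return x, y

# matching pennies: the returned pair has regret at most 1/2 for each player
R = np.array([[1.0, 0.0], [0.0, 1.0]])
C = 1.0 - R
print(half_approx_nash(R, C))
```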
Citations: 9
A PTAS for Capacitated Vehicle Routing on Trees
IF 1.3 | CAS Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2021-11-05 | DOI: 10.1145/3575799
Claire Mathieu, Hang Zhou
We give a polynomial time approximation scheme (PTAS) for the unit demand capacitated vehicle routing problem (CVRP) on trees, for the entire range of the tour capacity. The result extends to the splittable CVRP.
Citations: 14
Hopcroft’s Problem, Log-Star Shaving, 2D Fractional Cascading, and Decision Trees
IF 1.3 | CAS Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2021-11-05 | DOI: 10.1137/1.9781611977073.10
Timothy M. Chan, D. Zheng
We revisit Hopcroft’s problem and related fundamental problems about geometric range searching. Given n points and n lines in the plane, we show how to count the number of point-line incidence pairs or the number of point-above-line pairs in O(n^{4/3}) time, which matches the conjectured lower bound and improves the best previous time bound of n^{4/3} 2^{O(log* n)} obtained almost 30 years ago by Matoušek. We describe two interesting and different ways to achieve the result: the first is randomized and uses a new 2D version of fractional cascading for arrangements of lines; the second is deterministic and uses decision trees in a manner inspired by the sorting technique of Fredman (1976). The second approach extends to any constant dimension. Many consequences follow from these new ideas: for example, we obtain an O(n^{4/3})-time algorithm for line segment intersection counting in the plane, O(n^{4/3})-time randomized algorithms for distance selection in the plane and bichromatic closest pair and Euclidean minimum spanning tree in three or four dimensions, and a randomized data structure for halfplane range counting in the plane with O(n^{4/3}) preprocessing time and space and O(n^{1/3}) query time.
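For reference, here is the problem statement in code: the naive O(n^2) count of point-line incidences and of point-above-line pairs that the paper speeds up to O(n^{4/3}); integer coefficients are assumed so the tests are exact, and the orientation convention for "above" is a choice made for the sketch.

```python
from typing import List, Tuple

Point = Tuple[int, int]            # (x, y)
Line = Tuple[int, int, int]        # the line a*x + b*y + c = 0 with integer coefficients

def incidence_counts(points: List[Point], lines: List[Line]) -> Tuple[int, int]:
    """Naive O(n^2) counts: (#point-line incidences, #point-above-line pairs).
    "Above" means a*x + b*y + c > 0 after fixing one orientation per line."""
    on = above = 0
    for (a, b, c) in lines:
        if b < 0 or (b == 0 and a < 0):
            a, b, c = -a, -b, -c       # normalize orientation so "above" is well defined
        for (x, y) in points:
            s = a * x + b * y + c
            if s == 0:
                on += 1
            elif s > 0:
                above += 1
    return on, above

print(incidence_counts([(0, 0), (1, 1), (2, 5)], [(1, -1, 0), (0, 1, -1)]))   # (3, 2)
```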
Citations: 12
An Improved Algorithm for The k-Dyck Edit Distance Problem
IF 1.3 | CAS Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2021-11-03 | DOI: 10.1137/1.9781611977073.144
Dvir Fried, Shay Golan, T. Kociumaka, T. Kopelowitz, E. Porat, Tatiana Starikovskaya
A Dyck sequence is a sequence of opening and closing parentheses (of various types) that is balanced. The Dyck edit distance of a given sequence of parentheses S is the smallest number of edit operations (insertions, deletions, and substitutions) needed to transform S into a Dyck sequence. We consider the threshold Dyck edit distance problem, where the input is a sequence of parentheses S and a positive integer k, and the goal is to compute the Dyck edit distance of S only if the distance is at most k, and otherwise report that the distance is larger than k. Backurs and Onak [PODS’16] showed that the threshold Dyck edit distance problem can be solved in O(n + k^{16}) time. In this work, we design new algorithms for the threshold Dyck edit distance problem which cost O(n + k^{4.544184}) time with high probability or O(n + k^{4.853059}) deterministically. Our algorithms combine several new structural properties of the Dyck edit distance problem, a refined algorithm for fast (min, +) matrix product, and a careful modification of ideas used in Valiant’s parsing algorithm.
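For comparison with the parameterized bounds above, here is the textbook cubic-time interval dynamic program for the plain (unthresholded) Dyck edit distance; it relies on the standard observation that insertions can be dropped, since deleting the symbol an inserted parenthesis would have matched is never more expensive.

```python
from functools import lru_cache

OPEN = {"(": ")", "[": "]", "{": "}"}
CLOSE = set(OPEN.values())

def dyck_edit_distance(s: str) -> int:
    """O(n^3) interval DP: d(i, j) = deletions/substitutions needed to balance s[i:j]."""
    @lru_cache(maxsize=None)
    def d(i: int, j: int) -> int:
        if j - i == 0:
            return 0
        if j - i == 1:
            return 1                      # a lone parenthesis always costs one edit
        a, b = s[i], s[j - 1]
        if a in OPEN and b in CLOSE:      # cost of turning (a, b) into a matched pair
            pair = 0 if OPEN[a] == b else 1
        elif a in OPEN or b in CLOSE:
            pair = 1
        else:
            pair = 2
        best = pair + d(i + 1, j - 1)
        for m in range(i + 1, j):         # or split s[i:j] into two balanced parts
            best = min(best, d(i, m) + d(m, j))
        return best

    return d(0, len(s))

print(dyck_edit_distance("(()"), dyck_edit_distance("([)]"), dyck_edit_distance(")("))  # 1 2 2
```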
Citations: 7