
Journal of the ACM: Latest Articles

Relative Error Streaming Quantiles
Computer Science (Q2) · Pub Date: 2023-10-16 · DOI: 10.1145/3617891
Graham Cormode, Zohar Karnin, Edo Liberty, Justin Thaler, Pavel Veselý
Estimating ranks, quantiles, and distributions over streaming data is a central task in data analysis and monitoring. Given a stream of n items from a data universe equipped with a total order, the task is to compute a sketch (data structure) of size polylogarithmic in n. Given the sketch and a query item y, one should be able to approximate its rank in the stream, i.e., the number of stream elements smaller than or equal to y. Most works to date focused on additive εn error approximation, culminating in the KLL sketch that achieved optimal asymptotic behavior. This article investigates multiplicative (1 ± ε)-error approximations to the rank. Practical motivation for multiplicative error stems from demands to understand the tails of distributions, and hence for sketches to be more accurate near extreme values. The most space-efficient algorithms due to prior work store either O(log(ε²n)/ε²) or O(log³(εn)/ε) universe items. We present a randomized sketch storing O(log^{1.5}(εn)/ε) items that can (1 ± ε)-approximate the rank of each universe item with high constant probability; this space bound is within an O(√(log(εn))) factor of optimal. Our algorithm does not require prior knowledge of the stream length and is fully mergeable, rendering it suitable for parallel and distributed computing environments.
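The guarantee at stake is easiest to see on a concrete stream. The sketch below is not the paper's data structure; it is a minimal Python illustration (with hypothetical toy parameters) of what a rank query is and why an additive εn bound is much weaker than a multiplicative (1 ± ε) bound for items deep in the tail.

```python
def true_rank(stream, y):
    """Rank of y: the number of stream elements smaller than or equal to y."""
    return sum(1 for x in stream if x <= y)

def within_multiplicative_error(estimate, rank, eps):
    """The (1 ± eps)-relative guarantee targeted by the paper's sketch."""
    return (1 - eps) * rank <= estimate <= (1 + eps) * rank

# Hypothetical toy stream and query: at the tail, an additive eps*n slack of 500
# is useless for a rank of 20, while a multiplicative slack stays proportional.
stream = list(range(1, 10_001))
eps, n = 0.05, len(stream)
tail_query = 20
r = true_rank(stream, tail_query)               # r = 20
print("additive slack:", eps * n)               # 500.0
print("multiplicative slack:", eps * r)         # 1.0
print(within_multiplicative_error(19, r, eps))  # True: 19 lies within (1 ± eps) * 20
```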
Citations: 0
First Price Auction is 1 − 1/e² Efficient
Computer Science (Q2) · Pub Date: 2023-10-14 · DOI: 10.1145/3617902
Yaonan Jin, Pinyan Lu
We prove that the PoA of First Price Auctions is 1 − 1/e² ≈ 0.8647, closing the gap between the best known bounds [0.7430, 0.8689].
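As a quick sanity check on the stated constant (not the proof, which is the paper's contribution), the following few lines of Python evaluate 1 − 1/e² and confirm that it falls inside the previously known interval [0.7430, 0.8689].

```python
import math

poa = 1 - 1 / math.e**2
print(f"1 - 1/e^2 = {poa:.4f}")              # 0.8647
prior_lower, prior_upper = 0.7430, 0.8689    # previously best known bounds
assert prior_lower <= poa <= prior_upper     # the new tight value sits in that gap
```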
Citations: 0
Proximity Gaps for Reed–Solomon Codes
Computer Science (Q2) · Pub Date: 2023-10-11 · DOI: 10.1145/3614423
Eli Ben-Sasson, Dan Carmon, Yuval Ishai, Swastik Kopparty, Shubhangi Saraf
A collection of sets displays a proximity gap with respect to some property if for every set in the collection, either (i) all members are δ-close to the property in relative Hamming distance or (ii) only a tiny fraction of members are δ-close to the property. In particular, no set in the collection has roughly half of its members δ-close to the property and the others δ-far from it. We show that the collection of affine spaces displays a proximity gap with respect to Reed–Solomon (RS) codes, even over small fields, of size polynomial in the dimension of the code, and the gap applies to any δ smaller than the Johnson/Guruswami–Sudan list-decoding bound of the RS code. We also show near-optimal gap results, over fields of (at least) linear size in the RS code dimension, for δ smaller than the unique decoding radius. Concretely, if δ is smaller than half the minimal distance of an RS code V ⊂ 𝔽_q^n, then every affine space is either entirely δ-close to the code or, alternatively, at most an (n/q)-fraction of it is δ-close to the code. Finally, we discuss several applications of our proximity gap results to distributed storage, multi-party cryptographic protocols, and concretely efficient proof systems. We prove the proximity gap results by analyzing the execution of classical algebraic decoding algorithms for Reed–Solomon codes (due to Berlekamp–Welch and Guruswami–Sudan) on a formal element of an affine space. This involves working with Reed–Solomon codes whose base field is an (infinite) rational function field. Our proofs are obtained by developing an extension (to function fields) of a strategy of Arora and Sudan for analyzing low-degree tests.
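To make δ-closeness concrete, here is a hedged toy example in Python: it builds a tiny Reed–Solomon code over 𝔽₇ by brute force (the parameters q = 7, k = 2 are hypothetical, chosen only so the whole code fits in memory) and measures the relative Hamming distance of a word to the code. The paper's results concern collections of affine spaces of such words, which this snippet does not attempt to reproduce.

```python
import itertools

q, k = 7, 2                    # hypothetical toy parameters: field F_7, degree < 2
xs = list(range(q))            # evaluation points: all of F_7, so block length n = 7

def encode(coeffs):
    """Evaluate the polynomial with the given coefficients at every point of F_q."""
    return tuple(sum(c * x**i for i, c in enumerate(coeffs)) % q for x in xs)

code = {encode(c) for c in itertools.product(range(q), repeat=k)}   # the RS code V

def relative_distance(word):
    """Relative Hamming distance from `word` to the nearest codeword of V."""
    n = len(word)
    return min(sum(a != b for a, b in zip(word, cw)) / n for cw in code)

word = (0, 1, 2, 3, 4, 5, 0)   # codeword of f(x) = x with its last symbol corrupted
print(relative_distance(word)) # 1/7, so the word is δ-close for any δ ≥ 1/7
```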
Citations: 0
Exponentially Faster Massively Parallel Maximal Matching
Computer Science (Q2) · Pub Date: 2023-10-11 · DOI: 10.1145/3617360
Soheil Behnezhad, MohammadTaghi Hajiaghayi, David G. Harris
The study of approximate matching in the Massively Parallel Computations (MPC) model has recently seen a burst of breakthroughs. Despite this progress, we still have a limited understanding of maximal matching, which is one of the central problems of parallel and distributed computing. All known MPC algorithms for maximal matching either take polylogarithmic time, which is considered inefficient, or require a strictly super-linear space of n^{1+Ω(1)} per machine. In this work, we close this gap by providing a novel analysis of an extremely simple algorithm, which is a variant of an algorithm conjectured to work by Czumaj, Lacki, Madry, Mitrovic, Onak, and Sankowski [15]. The algorithm edge-samples the graph, randomly partitions the vertices, and finds a random greedy maximal matching within each partition. We show that this algorithm drastically reduces the vertex degrees. This, among other results, leads to an O(log log Δ)-round algorithm for maximal matching with O(n) space (or even mildly sublinear in n using standard techniques). As an immediate corollary, we get a 2-approximate minimum vertex cover in essentially the same rounds and space, which is the optimal approximation factor under standard assumptions. We also get an improved O(log log Δ)-round algorithm for (1 + ε)-approximate matching. All these results can also be implemented in the congested clique model in the same number of rounds.
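The one-round subroutine described above (edge-sample the graph, randomly partition the vertices, run random greedy maximal matching inside each part) is simple enough to simulate sequentially. The sketch below is a hedged toy simulation in Python, not an MPC implementation; the sampling probability p, the number of parts, and the random graph are all hypothetical choices for illustration.

```python
import random

def greedy_maximal_matching(part, edges):
    """Random greedy maximal matching on the subgraph induced by `part`."""
    induced = [e for e in edges if e[0] in part and e[1] in part]
    random.shuffle(induced)
    matched, matching = set(), []
    for u, v in induced:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def one_round(n, edges, p=0.5, parts=2):
    """Edge-sample with probability p, randomly partition the vertices,
    and match greedily inside each part (a sequential toy simulation)."""
    sampled = [e for e in edges if random.random() < p]
    part_of = {v: random.randrange(parts) for v in range(n)}
    matching = []
    for i in range(parts):
        part = {v for v in range(n) if part_of[v] == i}
        matching += greedy_maximal_matching(part, sampled)
    return matching

# Hypothetical random graph; the paper's analysis shows a round like this
# drastically reduces the degrees of the still-unmatched vertices.
random.seed(0)
n = 200
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if random.random() < 0.05]
print(len(one_round(n, edges)), "edges matched in one simulated round")
```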
Citations: 2
Pliability and Approximating Max-CSPs
Computer Science (Q2) · Pub Date: 2023-10-06 · DOI: 10.1145/3626515
Miguel Romero, Marcin Wrochna, Stanislav Živný
We identify a sufficient condition, treewidth-pliability, that gives a polynomial-time algorithm for an arbitrarily good approximation of the optimal value in a large class of Max-2-CSPs parameterised by the class of allowed constraint graphs (with arbitrary constraints on an unbounded alphabet). Our result applies more generally to the maximum homomorphism problem between two rational-valued structures. The condition unifies the two main approaches for designing a polynomial-time approximation scheme. One is Baker's layering technique, which applies to sparse graphs such as planar or excluded-minor graphs. The other is based on Szemerédi's regularity lemma and applies to dense graphs. We extend the applicability of both techniques to new classes of Max-CSPs. On the other hand, we prove that the condition cannot be used to find solutions (as opposed to approximating the optimal value) in general. Treewidth-pliability turns out to be a robust notion that can be defined in several equivalent ways, including characterisations via size, treedepth, or the Hadwiger number. We show connections to the notions of fractional-treewidth-fragility from structural graph theory, hyperfiniteness from the area of property testing, and regularity partitions from the theory of dense graph limits. These may be of independent interest. In particular, we show that a monotone class of graphs is hyperfinite if and only if it is fractionally-treewidth-fragile and has bounded degree.
Citations: 10
Towards a Better Understanding of Randomized Greedy Matching
Computer Science (Q2) · Pub Date: 2023-10-06 · DOI: 10.1145/3614318
Zhihao Gavin Tang, Xiaowei Wu, Yuhao Zhang
There has been a long history of studying randomized greedy matching algorithms since the work by Dyer and Frieze (RSA 1991). We follow this trend and consider the problem formulated in the oblivious setting, in which the vertex set of a graph is known to the algorithm, but not the edge set. The algorithm can query whether an edge exists between any pair of vertices, but must include the edge in the matching if it exists, i.e., as in the query-commit model by Gamlath et al. (SODA 2019). We revisit the Modified Randomized Greedy (MRG) algorithm by Aronson et al. (RSA 1995), which is proved to achieve a (0.5 + ε)-approximation. In each step of the algorithm, an unmatched vertex is chosen uniformly at random and matched to a randomly chosen neighbor (if one exists). We study a weaker version of the algorithm, named Random Decision Order (RDO), that in each step randomly picks an unmatched vertex and matches it to an arbitrary neighbor (if one exists). We prove that the RDO algorithm provides a 0.639-approximation for bipartite graphs and a 0.531-approximation for general graphs. As a corollary, we substantially improve the approximation ratio of MRG. Furthermore, we generalize the RDO algorithm to the edge-weighted case and prove that it achieves a 0.501 approximation ratio. This result settles the open question of Chan et al. (SICOMP 2018) and Gamlath et al. (SODA 2019) about the existence of an algorithm that beats greedy in edge-weighted general graphs, where the greedy algorithm probes the edges in descending order of edge weights. We also present a variant of the algorithm that achieves a (1 − 1/e)-approximation for edge-weighted bipartite graphs. This generalizes the (1 − 1/e) approximation ratio of Gamlath et al. (SODA 2019) from the stochastic setting, in which each pair of vertices has a known probability of having an edge between them when the pair is probed, to the case where the realizations of edges are arbitrarily correlated.
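The RDO process itself is only a few lines. The following Python sketch is a plain sequential rendering of it (process unmatched vertices in random order, match each to an arbitrary available neighbor); it ignores the oblivious/query-commit bookkeeping, and the adjacency list at the end is a made-up toy graph.

```python
import random

def rdo_matching(adj):
    """Random Decision Order: visit vertices in random order and match each
    still-unmatched vertex to an arbitrary still-unmatched neighbor, if any."""
    matched, matching = set(), []
    order = list(adj)
    random.shuffle(order)            # the only randomness RDO uses
    for v in order:
        if v in matched:
            continue
        for u in adj[v]:             # "arbitrary" neighbor: take the first available
            if u not in matched:
                matching.append((v, u))
                matched.update((v, u))
                break
    return matching

# Made-up toy graph (adjacency lists); the paper proves RDO achieves a
# 0.639-approximation on bipartite graphs and 0.531 on general graphs.
adj = {0: [4, 5], 1: [4], 2: [5, 6], 3: [6],
       4: [0, 1], 5: [0, 2], 6: [2, 3]}
print(rdo_matching(adj))
```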
Citations: 1
Iceberg Hashing: Optimizing Many Hash-Table Criteria at Once
Computer Science (Q2) · Pub Date: 2023-10-02 · DOI: 10.1145/3625817
Michael A. Bender, Alex Conway, Martín Farach-Colton, William Kuszmaul, Guido Tagliavini
Despite being one of the oldest data structures in computer science, hash tables continue to be the focus of a great deal of both theoretical and empirical research. A central reason for this is that many of the fundamental properties that one desires from a hash table are difficult to achieve simultaneously; thus many variants offering different trade-offs have been proposed. This paper introduces Iceberg hashing, a hash table that simultaneously offers the strongest known guarantees on a large number of core properties. Iceberg hashing supports constant-time operations while improving on the state of the art for space efficiency, cache efficiency, and low failure probability. Iceberg hashing is also the first hash table to support a load factor of up to 1 − o(1) while being stable, meaning that the position where an element is stored only ever changes when resizes occur. In fact, in the setting where keys are Θ(log n) bits, the space guarantee that Iceberg hashing offers, namely that it uses at most log(|U| choose n) + O(n log log n) bits to store n items from a universe U, matches a lower bound by Demaine et al. that applies to any stable hash table. Iceberg hashing introduces new general-purpose techniques for some of the most basic aspects of hash-table design. Notably, our indirection-free technique for dynamic resizing, which we call waterfall addressing, and our techniques for achieving stability and very-high-probability guarantees, can be applied to any hash table that makes use of the front-yard/backyard paradigm for hash-table design.
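The front-yard/backyard paradigm mentioned at the end can be illustrated with a deliberately oversimplified toy: fixed-capacity front-yard buckets plus a shared backyard for the rare overflows. This is only meant to show the shape of that design; it is emphatically not Iceberg hashing, offers none of its guarantees, and all names and sizes below are arbitrary.

```python
class ToyFrontBackTable:
    """An oversimplified front-yard/backyard table: each key hashes to a
    fixed-capacity front-yard bucket, and the rare overflows spill into a
    shared backyard dictionary. Illustration only, not Iceberg hashing."""

    def __init__(self, num_buckets=8, bucket_cap=4):
        self.front = [dict() for _ in range(num_buckets)]
        self.backyard = {}
        self.bucket_cap = bucket_cap

    def _bucket(self, key):
        return self.front[hash(key) % len(self.front)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        if key in bucket or len(bucket) < self.bucket_cap:
            bucket[key] = value           # the common case: stay in the front yard
        else:
            self.backyard[key] = value    # overflow: fall back to the backyard

    def lookup(self, key):
        bucket = self._bucket(key)
        return bucket.get(key, self.backyard.get(key))

table = ToyFrontBackTable()
for i in range(40):
    table.insert(f"k{i}", i)
assert all(table.lookup(f"k{i}") == i for i in range(40))
```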
Citations: 11
Balanced Allocations with the Choice of Noise
Computer Science (Q2) · Pub Date: 2023-09-27 · DOI: 10.1145/3625386
Dimitrios Los, Thomas Sauerwald
We consider the allocation of m balls (jobs) into n bins (servers). In the standard Two-Choice process, at each step t = 1, 2, …, m we first sample two randomly chosen bins, compare their two loads and then place a ball in the least loaded bin. It is well-known that for any m ≥ n, this results in a gap (difference between the maximum and average load) of log₂ log n + Θ(1) (with high probability). In this work, we consider Two-Choice in different settings with noisy load comparisons. One key setting involves an adaptive adversary whose power is limited by some threshold g ∈ ℕ. In each step, such an adversary can determine the result of any load comparison between two bins whose loads differ by at most g, while if the load difference is greater than g, the comparison is correct. For this adversarial setting, we first prove that for any m ≥ n the gap is O(g + log n) with high probability. Then, through a refined analysis, we prove that if g ≤ log n, then for any m ≥ n the gap is O((g / log g) · log log n). For constant values of g, this generalizes the heavily loaded analysis of [19, 61] for the Two-Choice process, and establishes that asymptotically the same gap bound holds even if load comparisons among “similarly loaded” bins are wrong. Finally, we complement these upper bounds with tight lower bounds, which establish an interesting phase transition on how the parameter g impacts the gap. The analysis also applies to settings with outdated and delayed information. For example, for the setting of [18] where balls are allocated in consecutive batches of size b = n, we present an improved and tight gap bound of Θ(log n / log log n). This bound also extends to a range of values of b and applies to a relaxed setting where the reported load of a bin can be any load value from the last b steps.
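The noisy-comparison model is easy to simulate. The Python sketch below runs Two-Choice where every comparison between bins whose loads differ by at most g is answered wrongly (a simple stand-in for the adaptive adversary analyzed in the paper, not the worst case itself) and reports the resulting gap; n, m, and the values of g are arbitrary toy parameters.

```python
import random

def noisy_two_choice_gap(n, m, g):
    """Run Two-Choice for m balls and n bins where any comparison between bins
    whose loads differ by at most g is answered wrongly (a crude stand-in for
    the g-bounded adversary); return the gap = max load - average load."""
    loads = [0] * n
    for _ in range(m):
        i, j = random.randrange(n), random.randrange(n)
        if abs(loads[i] - loads[j]) <= g:
            pick = i if loads[i] >= loads[j] else j  # adversary: heavier bin reported lighter
        else:
            pick = i if loads[i] < loads[j] else j   # difference > g: comparison is correct
        loads[pick] += 1
    return max(loads) - m / n

random.seed(0)
n = 1_000                                            # hypothetical toy parameters
for g in (0, 2, 8):
    print(g, round(noisy_two_choice_gap(n, 20 * n, g), 2))   # the gap grows with g
```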
Citations: 0
Fast, Algebraic Multivariate Multipoint Evaluation in Small Characteristic and Applications
Computer Science (Q2) · Pub Date: 2023-09-22 · DOI: 10.1145/3625226
Vishwas Bhargava, Sumanta Ghosh, Mrinal Kumar, Chandra Kanta Mohapatra
Multipoint evaluation is the computational task of evaluating a polynomial, given as a list of coefficients, at a given set of inputs. Besides being a natural and fundamental question in computer algebra on its own, fast algorithms for this problem are also closely related to fast algorithms for other natural algebraic questions like polynomial factorization and modular composition. And while nearly linear time algorithms have been known for the univariate instance of multipoint evaluation for close to five decades due to a work of Borodin and Moenck [7], fast algorithms for the multivariate version have been much harder to come by. In a significant improvement to the state of the art for this problem, Umans [25] and Kedlaya & Umans [16] gave nearly linear time algorithms for this problem over fields of small characteristic and over all finite fields, respectively, provided that the number of variables n is at most d^{o(1)}, where the degree of the input polynomial in every variable is less than d. They also stated the question of designing fast algorithms for the large-variable case (i.e., when n is not d^{o(1)}) as an open problem. In this work, we show that there is a deterministic algorithm for multivariate multipoint evaluation over a field 𝔽_q of characteristic p which evaluates an n-variate polynomial of degree less than d in each variable on N inputs in time (N + dⁿ)^{1+o(1)} · poly(log q, d, n, p), provided that p is at most d^{o(1)} and q is at most exp(exp(exp(⋯(exp(d))))), where the height of this tower of exponentials is fixed. When the number of variables is large (e.g., when n is not d^{o(1)}), this is the first nearly linear time algorithm for this problem over any (large enough) field. Our algorithm is based on elementary algebraic ideas, and this algebraic structure naturally leads to the following two independently interesting applications. • We show that there is an algebraic data structure for univariate polynomial evaluation with nearly linear space complexity and sublinear time complexity over finite fields of small characteristic and quasipolynomially bounded size. This provides a counterexample to a conjecture of Miltersen [21], who conjectured that over small finite fields, any algebraic data structure for polynomial evaluation using polynomial space must have linear query complexity. • We also show that over finite fields of small characteristic and quasipolynomially bounded size, Vandermonde matrices are not rigid enough to yield size-depth tradeoffs for linear circuits via the current quantitative bounds in Valiant's program [26]. More precisely, for every fixed prime p, we show that for every constant ε > 0 and large enough n, the rank of any n × n Vandermonde matrix V over the field 𝔽_{p^a} can be reduced to n / exp(Ω(poly(ε) log^{0.53} n)) by changing at most n^{Θ(ε)} entries in every row of V, provided a ≤ poly(log n). Prior to this work, similar upper bounds on rigidity were known only for special Vandermonde matrices, such as the discrete Fourier transform matrices and Vandermonde matrices whose generators form a geometric progression [9].
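For orientation, the baseline the result is measured against is the naive algorithm, which spends roughly N · dⁿ field operations: for each of the N points, sum over all (at most dⁿ) monomials. The Python sketch below implements that baseline over a toy prime field (the modulus, degrees, and polynomial are hypothetical); the paper's algorithm achieves the much better (N + dⁿ)^{1+o(1)} · poly(log q, d, n, p) bound.

```python
import itertools

def naive_multipoint_eval(coeffs, points, p):
    """Evaluate an n-variate polynomial over F_p at every given point.
    `coeffs` maps exponent tuples (e_1, ..., e_n), each e_i < d, to coefficients;
    the cost is roughly N * d^n operations, the baseline the paper improves on."""
    results = []
    for pt in points:
        val = 0
        for exps, c in coeffs.items():
            term = c
            for x, e in zip(pt, exps):
                term = term * pow(x, e, p) % p
            val = (val + term) % p
        results.append(val)
    return results

# Hypothetical toy instance over F_101: f(x, y) = 3*x^2*y + 5*y^2 + 7, so d = 3, n = 2.
p = 101
coeffs = {(2, 1): 3, (0, 2): 5, (0, 0): 7}
points = list(itertools.product(range(5), repeat=2))   # N = 25 evaluation points
print(naive_multipoint_eval(coeffs, points, p)[:5])    # values at (0,0), (0,1), ...
```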
Citations: 0
Cerise: Program Verification on a Capability Machine in the Presence of Untrusted Code
Computer Science (Q2) · Pub Date: 2023-09-14 · DOI: 10.1145/3623510
Aïna Linn Georges, Armaël Guéneau, Thomas Van Strydonck, Amin Timany, Alix Trieu, Dominique Devriese, Lars Birkedal
A capability machine is a type of CPU allowing fine-grained privilege separation using capabilities, machine words that represent certain kinds of authority. We present a mathematical model and accompanying proof methods that can be used for formal verification of functional correctness of programs running on a capability machine, even when they invoke and are invoked by unknown (and possibly malicious) code. We use a program logic called Cerise for reasoning about known code, and an associated logical relation for reasoning about unknown code. The logical relation formally captures the capability safety guarantees provided by the capability machine. The Cerise program logic, logical relation, and all the examples considered in the paper have been mechanized using the Iris program logic framework in the Coq proof assistant. The methodology we present underlies recent work of the authors on formal reasoning about capability machines [15, 33, 37], but was left somewhat implicit in those publications. In this paper we present a pedagogical introduction to the methodology, in a simpler setting (no exotic capabilities), and starting from minimal examples. We work our way up to new results about a heap-based calling convention and implementations of sophisticated object-capability patterns of the kind previously studied for high-level languages with object-capabilities, demonstrating that the methodology scales to such reasoning.
Citations: 3