
Latest Publications from the Journal of the ACM (JACM)

Engineering with Logic
Pub Date : 2018-12-12 DOI: 10.1145/3243650
S. Bishop, M. Fairbairn, Hannes Mehnert, Michael Norrish, T. Ridge, Peter Sewell, Michael Smith, Keith Wansbrough
Conventional computer engineering relies on test-and-debug development processes, with the behavior of common interfaces described (at best) with prose specification documents. But prose specifications cannot be used in test-and-debug development in any automated way, and prose is a poor medium for expressing complex (and loose) specifications. The TCP/IP protocols and Sockets API are a good example of this: they play a vital role in modern communication and computation, and interoperability between implementations is essential. But what exactly they are is surprisingly obscure: their original development focused on “rough consensus and running code,” augmented by prose RFC specifications that do not precisely define what it means for an implementation to be correct. Ultimately, the actual standard is the de facto one of the common implementations, including, for example, the 15,000 to 20,000 lines of the BSD implementation: optimized and multithreaded C code, time dependent, with asynchronous event handlers, intertwined with the operating system, and security critical. This article reports on work done in the Netsem project to develop lightweight mathematically rigorous techniques that can be applied to such systems: to specify their behavior precisely (but loosely enough to permit the required implementation variation) and to test whether these specifications and the implementations correspond, with specifications that are executable as test oracles. We developed post hoc specifications of TCP, UDP, and the Sockets API, both of the service that they provide to applications (in terms of TCP bidirectional stream connections) and of the internal operation of the protocol (in terms of TCP segments and UDP datagrams), together with a testable abstraction function relating the two. These specifications are rigorous, detailed, readable, with broad coverage, and rather accurate. Working within a general-purpose proof assistant (HOL4), we developed language idioms (within higher-order logic) in which to write the specifications: operational semantics with nondeterminism, time, system calls, monadic relational programming, and so forth. We followed an experimental semantics approach, validating the specifications against several thousand traces captured from three implementations (FreeBSD, Linux, and WinXP). Many differences between these were identified, as were a number of bugs. Validation was done using a special-purpose symbolic model checker programmed above HOL4. Having demonstrated that our logic-based engineering techniques suffice for handling real-world protocols, we argue that similar techniques could be applied to future critical software infrastructure at design time, leading to cleaner designs and (via specification-based testing) more robust and predictable implementations. In cases where specification looseness can be controlled, this should be possible with lightweight techniques, without the need for a general-purpose proof assistant, at relatively low cost.
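The role of a specification that is executable as a test oracle can be illustrated with a small sketch (plain Python with entirely hypothetical names, not the article's HOL4 development): the specification is a nondeterministic labelled transition relation, and a captured trace of observable events is accepted exactly when some run of the specification explains it.

```python
# Toy trace-checking oracle: a specification given as a nondeterministic
# labelled transition relation admits a trace iff some run matches it.
# Illustration of the idea only, not the NetSem/HOL4 specification.
from typing import Dict, FrozenSet, List, Tuple

State = str
Event = str

# Hypothetical mini-spec of a connection lifecycle; the nondeterminism on
# "close" models implementation freedom the specification deliberately allows.
SPEC: Dict[Tuple[State, Event], FrozenSet[State]] = {
    ("CLOSED", "connect"): frozenset({"SYN_SENT"}),
    ("SYN_SENT", "synack"): frozenset({"ESTABLISHED"}),
    ("ESTABLISHED", "send"): frozenset({"ESTABLISHED"}),
    ("ESTABLISHED", "close"): frozenset({"FIN_WAIT", "CLOSED"}),
    ("FIN_WAIT", "ack"): frozenset({"CLOSED"}),
}

def admits(trace: List[Event], start: State = "CLOSED") -> bool:
    """Symbolically track every state the spec could be in after each event."""
    possible = {start}
    for ev in trace:
        possible = {s2 for s1 in possible
                    for s2 in SPEC.get((s1, ev), frozenset())}
        if not possible:            # no run of the spec explains the trace
            return False
    return True

# Captured traces from an "implementation": accepted or rejected by the oracle.
print(admits(["connect", "synack", "send", "close", "ack"]))  # True
print(admits(["connect", "send"]))                            # False
```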
Citations: 9
Parallel Metric Tree Embedding Based on an Algebraic View on Moore-Bellman-Ford
Pub Date : 2018-11-22 DOI: 10.1145/3231591
Stephan Friedrichs, C. Lenzen
A metric tree embedding of expected stretch α ≥ 1 maps a weighted n-node graph G = (V, E, ω) to a weighted tree T = (V_T, E_T, ω_T) with V ⊆ V_T such that, for all v, w ∈ V, dist(v, w, G) ≤ dist(v, w, T) and E[dist(v, w, T)] ≤ α · dist(v, w, G). Such embeddings are highly useful for designing fast approximation algorithms, as many hard problems are easy to solve on tree instances. However, to date, the best parallel polylog(n)-depth algorithm that achieves an asymptotically optimal expected stretch of α ∈ O(log n) requires Ω(n^2) work and a metric as input. In this article, we show how to achieve the same guarantees using polylog(n) depth and Õ(m^(1+ε)) work, where m = |E| and ε > 0 is an arbitrarily small constant. Moreover, one may further reduce the work to Õ(m + n^(1+ε)) at the expense of increasing the expected stretch to O(ε^(-1) log n). Our main tool in deriving these parallel algorithms is an algebraic characterization of a generalization of the classic Moore-Bellman-Ford algorithm. We consider this framework, which subsumes a variety of previous “Moore-Bellman-Ford-like” algorithms, to be of independent interest and discuss it in depth. In our tree embedding algorithm, we leverage it to provide efficient query access to an approximate metric that allows sampling the tree using polylog(n) depth and Õ(m) work. We illustrate the generality and versatility of our techniques by various examples and a number of additional results. Specifically, we (1) improve the state of the art for determining metric tree embeddings in the Congest model, (2) determine a (1 + ε̂)-approximate metric regarding the distances in a graph G in polylogarithmic depth and Õ(n(m + n^(1+ε))) work, and (3) improve upon the state of the art regarding the k-median and the buy-at-bulk network design problems.
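The two defining inequalities can be checked directly on a small instance. The sketch below is an illustrative check only (not the article's parallel algorithm): it computes shortest-path distances in a tiny weighted graph and in one fixed spanning tree on the same vertices, verifies that tree distances dominate graph distances, and reports the worst-case ratio; the expected stretch in the definition is over a random choice of tree, which a single fixed tree cannot exhibit.

```python
# Check the defining inequalities of a tree embedding on a toy instance:
# dist_G(v, w) <= dist_T(v, w) for all pairs, and report max dist_T / dist_G.
# The article's contribution is computing such embeddings in polylog(n) depth
# and near-linear work, which this sketch does not attempt.
import heapq
from typing import Dict, List, Tuple

Graph = Dict[str, List[Tuple[str, float]]]

def dijkstra(g: Graph, src: str) -> Dict[str, float]:
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in g.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

G: Graph = {  # a weighted 4-cycle
    "a": [("b", 1), ("d", 1)],
    "b": [("a", 1), ("c", 1)],
    "c": [("b", 1), ("d", 1)],
    "d": [("c", 1), ("a", 1)],
}
T: Graph = {  # one spanning tree of G (edge a-d removed)
    "a": [("b", 1)],
    "b": [("a", 1), ("c", 1)],
    "c": [("b", 1), ("d", 1)],
    "d": [("c", 1)],
}

worst = 1.0
for v in G:
    dg, dt = dijkstra(G, v), dijkstra(T, v)
    for w in G:
        if v == w:
            continue
        assert dt[w] >= dg[w] - 1e-9          # tree distances dominate
        worst = max(worst, dt[w] / dg[w])
print("worst-case stretch of this tree:", worst)   # 3.0, attained by the pair (a, d)
```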
Citations: 7
Shuffles and Circuits (On Lower Bounds for Modern Parallel Computation)
Pub Date : 2018-11-22 DOI: 10.1145/3232536
T. Roughgarden, Sergei Vassilvitskii, Joshua R. Wang
The goal of this article is to identify fundamental limitations on how efficiently algorithms implemented on platforms such as MapReduce and Hadoop can compute the central problems in motivating application domains, such as graph connectivity problems. We introduce an abstract model of massively parallel computation, where essentially the only restrictions are that the “fan-in” of each machine is limited to s bits, where s is smaller than the input size n, and that computation proceeds in synchronized rounds, with no communication between different machines within a round. Lower bounds on the round complexity of a problem in this model apply to every computing platform that shares the most basic design principles of MapReduce-type systems. We prove that computations in our model that use few rounds can be represented as low-degree polynomials over the reals. This connection allows us to translate a lower bound on the (approximate) polynomial degree of a Boolean function to a lower bound on the round complexity of every (randomized) massively parallel computation of that function. These lower bounds apply even in the “unbounded width” version of our model, where the number of machines can be arbitrarily large. As one example of our general results, computing any nontrivial monotone graph property—such as connectivity—requires a super-constant number of rounds when every machine receives only a subpolynomial (in n) number of input bits s. Finally, we prove that, in two senses, our lower bounds are the best one could hope for. For the unbounded-width model, we prove a matching upper bound. Restricting to a polynomial number of machines, we show that asymptotically better lower bounds would separate P from NC^1.
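A toy simulation of the model's round structure (not the article's lower-bound machinery; the parameters below are illustrative) shows how bounded fan-in forces multiple rounds: with s-bit fan-in, an aggregation tree computes the OR of n input bits in ceil(log_s n) synchronized rounds.

```python
# Round-by-round aggregation in a toy "fan-in s" model: each machine reads at
# most s bits per round and emits 1 bit; rounds proceed synchronously.
# Illustrates why round complexity grows when s << n; not the paper's proof.
def rounds_to_or(bits, s):
    values = list(bits)
    rounds = 0
    while len(values) > 1:
        # each machine ORs a block of at most s bits from the previous round
        values = [int(any(values[i:i + s])) for i in range(0, len(values), s)]
        rounds += 1
    return values[0], rounds

n, s = 4096, 16
result, r = rounds_to_or([0] * (n - 1) + [1], s)
print(result, r)   # 1 3  (3 rounds = ceil(log_s n) for n = 4096, s = 16)
```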
Citations: 23
Invited Article Foreword
Pub Date : 2018-11-22 DOI: 10.1145/3241947
É. Tardos
The Invited Article section of this issue consists of the article, “Settling the query complexity of non-adaptive junta testing,” by Xi Chen, Rocco A. Servedio, Li-Yang Tan, Erik Waingarten, and Jinyu Xie, which won the best paper award at the 2017 Computational Complexity Conference (CCC’17). We want to thank the CCC’17 Program Committee and the PC chair Ryan O’Donnell for their help in selecting this invited article, and editor Irit Dinur for handling the article.
Citations: 0
Matroid Secretary Problems
Pub Date : 2018-11-19 DOI: 10.1145/3212512
Moshe Babaioff, Nicole Immorlica, D. Kempe, Robert D. Kleinberg
We define a generalization of the classical secretary problem called the matroid secretary problem. In this problem, the elements of a matroid are presented to an online algorithm in uniformly random order. When an element arrives, the algorithm observes its value and must make an irrevocable decision whether or not to accept it. The accepted elements must form an independent set, and the objective is to maximize the combined value of these elements. We present an O(log k)-competitive algorithm for general matroids (where k is the rank of the matroid), and constant-competitive algorithms for several special cases including graphic matroids, truncated partition matroids, and bounded degree transversal matroids. We leave as an open question the existence of constant-competitive algorithms for general matroids. Our results have applications in welfare-maximizing online mechanism design for domains in which the sets of simultaneously satisfiable agents form a matroid.
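For intuition, here is the classical (rank-1) special case that the matroid secretary problem generalizes: observe roughly a 1/e fraction of the randomly ordered elements, then accept the first one that beats everything seen so far. This sketch is only the textbook baseline, not the article's O(log k)-competitive algorithm for general matroids.

```python
# Classical secretary rule: skip the first ~n/e elements, then take the first
# element better than all of them. Succeeds with probability about 1/e.
# Rank-1 warm-up for the matroid secretary problem; illustration only.
import math
import random

def classical_secretary(values):
    n = len(values)
    order = values[:]
    random.shuffle(order)                     # uniformly random arrival order
    cutoff = max(1, round(n / math.e))
    threshold = max(order[:cutoff])           # best of the observation phase
    for v in order[cutoff:]:
        if v > threshold:                     # irrevocable acceptance
            return v
    return order[-1]                          # forced to take the last arrival

values = list(range(1, 101))                  # distinct values; the best is 100
wins = sum(classical_secretary(values) == 100 for _ in range(10_000))
print(wins / 10_000)                          # roughly 0.37, i.e., about 1/e
```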
Citations: 30
Unifying Concurrent Objects and Distributed Tasks
Pub Date : 2018-11-19 DOI: 10.1145/3266457
Armando Castañeda, S. Rajsbaum, M. Raynal
Tasks and objects are two predominant ways of specifying distributed problems where processes should compute outputs based on their inputs. Roughly speaking, a task specifies, for each set of processes and each possible assignment of input values, their valid outputs. In contrast, an object is defined by a sequential specification. Also, an object can be invoked multiple times by each process, while a task is a one-shot problem. Each one requires its own implementation notion, stating when an execution satisfies the specification. For objects, linearizability is commonly used, while task implementation notions are less explored. The article introduces the notion of interval-sequential object, and the corresponding implementation notion of interval-linearizability, to encompass many problems that have no sequential specification as objects. It is shown that interval-sequential specifications are local, namely, one can consider interval-linearizable object implementations in isolation and compose them for free, without sacrificing interval-linearizability of the whole system. The article also introduces the notion of refined tasks and its corresponding satisfiability notion. In contrast to a task, a refined task can be invoked multiple times by each process. Also, objects that cannot be defined using tasks can be defined using refined tasks. In fact, a main result of the article is that interval-sequential objects and refined tasks have the same expressive power and both are complete in the sense that they are able to specify any prefix-closed set of well-formed executions. Interval-linearizability and refined tasks go beyond unifying objects and tasks; they shed new light on both of them. On the one hand, interval-linearizability brings to tasks the following benefits: an explicit operational semantics, a more precise implementation notion, a notion of state, and a locality property. On the other hand, refined tasks open new possibilities of applying topological techniques to objects.
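A one-shot task in the sense used above can be written down very concretely: it maps each assignment of inputs to the set of valid output assignments. The sketch below is an illustrative toy (binary consensus for two processes, not an example from the article): it specifies the task as such a map and checks a proposed execution against it; an object, by contrast, would be specified by sequences of invocations and responses.

```python
# A task as a map: input assignment -> set of valid output assignments.
# Toy example: binary consensus for two processes (agreement + validity).
# Illustration of the "task" style of specification only.
def consensus_task(inputs):
    """Valid outputs: both processes decide the same value, and that value
    must be one of the proposed inputs."""
    return {(d, d) for d in set(inputs)}

def satisfies(inputs, outputs):
    return tuple(outputs) in consensus_task(tuple(inputs))

print(satisfies((0, 1), (1, 1)))   # True: both decide 1, which was proposed
print(satisfies((0, 0), (1, 1)))   # False: 1 was never proposed
print(satisfies((0, 1), (0, 1)))   # False: the processes disagree
```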
Citations: 27
On the Complexity of Cache Analysis for Different Replacement Policies
Pub Date : 2018-11-05 DOI: 10.1145/3366018
D. Monniaux, Valentin Touzeau
Modern processors use cache memory: a memory access that “hits” the cache returns early, while a “miss” takes more time. Given a memory access in a program, cache analysis consists in deciding whether this access is always a hit, always a miss, or is a hit or a miss depending on execution. Such an analysis is of high importance for bounding the worst-case execution time of safety-critical real-time programs. There exist multiple possible policies for evicting old data from the cache when new data are brought in, and different policies, though apparently similar in goals and performance, may be very different from the analysis point of view. In this article, we explore these differences from a complexity-theoretical point of view. Specifically, we show that, among the common replacement policies, Least Recently Used is the only one whose analysis is NP-complete, whereas the analysis problems for the other policies are PSPACE-complete.
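As a concrete baseline for what such an analysis classifies, the sketch below simulates a single LRU cache set on one concrete access sequence and labels each access as a hit or a miss; the associativity and trace are hypothetical. Static cache analysis must decide always-hit / always-miss over every execution path and every initial cache state, which is exactly where the complexity results above apply.

```python
# Simulate one LRU cache set and label each access as hit or miss.
# Static cache analysis has to produce such classifications over *all*
# executions; this sketch handles one concrete trace only (illustration).
from collections import OrderedDict

def lru_trace(accesses, ways):
    cache = OrderedDict()                    # most recently used = last
    labels = []
    for block in accesses:
        if block in cache:
            cache.move_to_end(block)         # refresh its age
            labels.append((block, "hit"))
        else:
            if len(cache) == ways:
                cache.popitem(last=False)    # evict the least recently used
            cache[block] = True
            labels.append((block, "miss"))
    return labels

print(lru_trace(["a", "b", "c", "a", "d", "b"], ways=3))
# [('a','miss'), ('b','miss'), ('c','miss'), ('a','hit'), ('d','miss'), ('b','miss')]
```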
Citations: 3
Uniform, Integral, and Feasible Proofs for the Determinant Identities
Pub Date : 2018-11-01 DOI: 10.1145/3431922
Iddo Tzameret, S. Cook
Aiming to provide the weakest possible axiomatic assumptions in which one can develop basic linear algebra, we give a uniform and integral version of the short propositional proofs for the determinant identities demonstrated over GF(2) in Hrubeš-Tzameret [15]. Specifically, we show that the multiplicativity of the determinant function and the Cayley-Hamilton theorem over the integers are provable in the bounded arithmetic theory VNC^2; the latter is a first-order theory corresponding to the complexity class NC^2, consisting of problems solvable by uniform families of polynomial-size, O(log^2 n)-depth circuits. This also establishes the existence of uniform polynomial-size propositional proofs operating with NC^2-circuits of the basic determinant identities over the integers (previous propositional proofs hold only over the two-element field).
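The identities in question are concrete statements about integer matrices. The sketch below checks det(AB) = det(A)det(B) and the Cayley-Hamilton identity A^2 - tr(A)A + det(A)I = 0 for random 2x2 integer matrices using plain integer arithmetic; this is only a numerical sanity check of the statements, of course, not a feasible proof in bounded arithmetic.

```python
# Check the two determinant identities from the abstract on 2x2 integer
# matrices: multiplicativity of det, and Cayley-Hamilton. Sanity check only.
import random

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def check(a, b):
    assert det(mul(a, b)) == det(a) * det(b)          # det(AB) = det(A)det(B)
    t, d = a[0][0] + a[1][1], det(a)                  # trace and determinant
    a2 = mul(a, a)
    for i in range(2):
        for j in range(2):
            ident = 1 if i == j else 0
            assert a2[i][j] - t * a[i][j] + d * ident == 0   # Cayley-Hamilton

for _ in range(1000):
    A = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    B = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    check(A, B)
print("both identities hold on all sampled 2x2 integer matrices")
```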
Citations: 1
Parallelism in Randomized Incremental Algorithms
Pub Date : 2018-10-12 DOI: 10.1145/3402819
G. Blelloch, Yan Gu, Julian Shun, Yihan Sun
In this article, we show that many sequential randomized incremental algorithms are in fact parallel. We consider algorithms for several problems, including Delaunay triangulation, linear programming, closest pair, smallest enclosing disk, least-element lists, and strongly connected components. We analyze the dependencies between iterations in an algorithm and show that the dependence structure is shallow with high probability or that, by violating some dependencies, the structure is shallow and the work is not increased significantly. We identify three types of algorithms based on their dependencies and present a framework for analyzing each type. Using the framework gives work-efficient polylogarithmic-depth parallel algorithms for most of the problems that we study. This article shows the first incremental Delaunay triangulation algorithm with optimal work and polylogarithmic depth. This result is important, since most implementations of parallel Delaunay triangulation use the incremental approach. Our results also improve bounds on strongly connected components and least-element lists and significantly simplify parallel algorithms for several problems.
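One of the listed problems, smallest enclosing disk, has a classic sequential randomized incremental algorithm (Welzl-style) that makes the iteration structure concrete: points are inserted in random order, and an iteration does real work only when the new point falls outside the current disk. The sketch below is that sequential baseline under a general-position assumption, for illustration; the article's contribution is showing that the dependencies between such iterations are shallow enough to parallelize, which this code does not do.

```python
# Sequential randomized incremental smallest enclosing disk (Welzl-style).
# Points are inserted in random order; most iterations are cheap "inside"
# checks, and only violating points trigger recomputation.
# Sequential baseline only; assumes points in general position.
import math
import random

def circle2(p, q):
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    return (cx, cy, math.dist(p, q) / 2)

def circle3(a, b, c):
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), a))

def inside(disk, p, eps=1e-9):
    return disk is not None and math.dist((disk[0], disk[1]), p) <= disk[2] + eps

def smallest_enclosing_disk(points):
    pts = points[:]
    random.shuffle(pts)                          # the random insertion order
    disk = None
    for i, p in enumerate(pts):
        if inside(disk, p):
            continue                             # cheap iteration, no new work
        disk = (p[0], p[1], 0.0)                 # p must lie on the boundary
        for j in range(i):
            if inside(disk, pts[j]):
                continue
            disk = circle2(p, pts[j])            # p and pts[j] on the boundary
            for k in range(j):
                if not inside(disk, pts[k]):
                    disk = circle3(p, pts[j], pts[k])
    return disk

pts = [(random.random(), random.random()) for _ in range(200)]
print(smallest_enclosing_disk(pts))              # (center_x, center_y, radius)
```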
Citations: 41
The Reachability Problem for Petri Nets Is Not Elementary
Pub Date : 2018-09-19 DOI: 10.1145/3422822
Wojciech Czerwinski, S. Lasota, R. Lazic, Jérôme Leroux, Filip Mazowiecki
Petri nets, also known as vector addition systems, are a long established model of concurrency with extensive applications in modeling and analysis of hardware, software, and database systems, as well as chemical, biological, and business processes. The central algorithmic problem for Petri nets is reachability: whether from the given initial configuration there exists a sequence of valid execution steps that reaches the given final configuration. The complexity of the problem has remained unsettled since the 1960s, and it is one of the most prominent open questions in the theory of verification. Decidability was proved by Mayr in his seminal STOC 1981 work, and, currently, the best published upper bound is the non-primitive recursive Ackermannian bound of Leroux and Schmitz from the Symposium on Logic in Computer Science 2019. We establish a non-elementary lower bound, i.e., that the reachability problem needs a tower of exponentials of time and space. Until this work, the best lower bound had been exponential space, due to Lipton in 1976. The new lower bound is a major breakthrough for several reasons. Firstly, it shows that the reachability problem is much harder than the coverability (i.e., state reachability) problem, which is also ubiquitous but has been known to be complete for exponential space since the late 1970s. Secondly, it implies that a plethora of problems from formal languages, logic, concurrent systems, process calculi, and other areas, which are known to admit reductions from the Petri nets reachability problem, are also not elementary. Thirdly, it makes obsolete the current best lower bounds for the reachability problems for two key extensions of Petri nets: with branching and with a pushdown stack. We develop a construction that uses arbitrarily large pairs of values with ratio R to provide zero-testable counters that are bounded by R. At the heart of our proof is then a novel gadget, the so-called factorial amplifier that, assuming availability of counters that are zero-testable and bounded by k, guarantees to produce arbitrarily large pairs of values whose ratio is exactly the factorial of k. Repeatedly composing the factorial amplifier with itself by means of the former construction enables us to compute, in linear time, Petri nets that simulate Minsky machines whose counters are bounded by a tower of exponentials, which yields the non-elementary lower bound. By refining this scheme further, we, in fact, already establish hardness for h-exponential space for Petri nets with h + 13 counters.
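The reachability question itself is easy to state operationally, as the sketch below shows for a hypothetical two-place net: a transition fires when its preconditions are met, and we search over markings reached by valid firing sequences. A bounded breadth-first search like this one decides reachability only for nets whose reachable markings happen to stay small; the article's point is precisely that no algorithm can be elementary in general.

```python
# A Petri net as a list of transitions (pre, post) over place-indexed vectors;
# a transition fires from marking m if m >= pre componentwise, producing
# m - pre + post. Bounded BFS over reachable markings (illustration only).
from collections import deque

def fire(marking, pre, post):
    if all(m >= p for m, p in zip(marking, pre)):
        return tuple(m - p + q for m, p, q in zip(marking, pre, post))
    return None

def reachable(initial, target, transitions, max_markings=10_000):
    seen, queue = {initial}, deque([initial])
    while queue and len(seen) <= max_markings:
        m = queue.popleft()
        if m == target:
            return True
        for pre, post in transitions:
            m2 = fire(m, pre, post)
            if m2 is not None and m2 not in seen:
                seen.add(m2)
                queue.append(m2)
    return False   # not found within the explored bound

# Hypothetical net with places (p0, p1): t1 moves a token p0 -> p1,
# t2 consumes one token from each place.
transitions = [((1, 0), (0, 1)), ((1, 1), (0, 0))]
print(reachable((2, 0), (0, 2), transitions))   # True: fire t1 twice
print(reachable((2, 0), (0, 3), transitions))   # False: the token count never grows
```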
Citations: 118