
Latest articles from the Journal of the ACM

Whole-grain Petri Nets and Processes
IF 2.5 · CAS Tier 2, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-12-19 · DOI: https://dl.acm.org/doi/10.1145/3559103
Joachim Kock

We present a formalism for Petri nets based on polynomial-style finite-set configurations and étale maps. The formalism supports both a geometric semantics in the style of Goltz and Reisig (processes are étale maps from graphs) and an algebraic semantics in the style of Meseguer and Montanari, in terms of free coloured props, and allows the following unification: for P a Petri net, the Segal space of P-processes is shown to be the free coloured prop-in-groupoids on P. There is also an unfolding semantics à la Winskel, which bypasses the classical symmetry problems: with the new formalism, every Petri net admits a universal unfolding, which in turn has an associated event structure and Scott domain. Since everything is encoded with explicit sets, Petri nets and their processes have elements. In particular, individual-token semantics is native. (Collective-token semantics emerges from rather drastic quotient constructions à la Best–Devillers, involving taking π0 of the groupoids of states.)
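The abstract's point that nets encoded with explicit sets have elements can be illustrated with a toy marking-and-firing sketch. This is our own minimal multiset encoding, not the paper's polynomial/étale formalism; all names here are hypothetical:

```python
from collections import Counter

# Toy Petri net with multiset markings. Markings are explicit multisets
# of places, so tokens are concrete countable entries rather than an
# abstract state; this loosely mirrors "everything is encoded with
# explicit sets" but is far simpler than the paper's formalism.
class PetriNet:
    def __init__(self, transitions):
        # transitions: name -> (input multiset, output multiset) over places
        self.transitions = transitions

    def enabled(self, marking, t):
        pre, _ = self.transitions[t]
        return all(marking[p] >= k for p, k in pre.items())

    def fire(self, marking, t):
        assert self.enabled(marking, t), f"transition {t} is not enabled"
        pre, post = self.transitions[t]
        new = Counter(marking)
        new.subtract(pre)   # consume input tokens
        new.update(post)    # produce output tokens
        return +new         # unary + drops places with zero tokens

# A tiny producer/consumer net.
net = PetriNet({
    "produce": (Counter(), Counter({"buffer": 1})),
    "consume": (Counter({"buffer": 1}), Counter({"consumed": 1})),
})
m = Counter()  # empty initial marking
m = net.fire(m, "produce")
m = net.fire(m, "consume")
print(dict(m))  # → {'consumed': 1}
```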

Citations: 0
OptORAMa: Optimal Oblivious RAM
IF 2.5 · CAS Tier 2, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-12-19 · DOI: https://dl.acm.org/doi/10.1145/3566049
Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Kartik Nayak, Enoch Peserico, Elaine Shi

Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC ’87 and J. ACM ’96), is a technique for provably obfuscating programs’ access patterns, such that the access patterns leak no information about the programs’ secret inputs. To compile a general program to an oblivious counterpart, it is well-known that Ω(log N) amortized blowup in memory accesses is necessary, where N is the size of the logical memory. This was shown in Goldreich and Ostrovsky’s original ORAM work for statistical security and in a somewhat restricted model (the so-called balls-and-bins model), and recently by Larsen and Nielsen (CRYPTO ’18) for computational security.

A long-standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this article, we resolve this problem and present the first secure ORAM with O(log N) amortized blowup, assuming one-way functions. Our result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS ’18) who gave a construction with O(log N⋅ log log N) amortized blowup, assuming one-way functions.

One of our building blocks of independent interest is a linear-time deterministic oblivious algorithm for tight compaction: Given an array of n elements where some elements are marked, we permute the elements in the array so that all marked elements end up in the front of the array. Our O(n) algorithm improves the previously best-known deterministic or randomized algorithms whose running time is O(n ⋅ log n) or O(n ⋅ log log n), respectively.
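For intuition, the input/output contract of tight compaction is easy to state in code. The sketch below is only the non-oblivious specification, a plain linear scan; it is not the paper's deterministic oblivious O(n) algorithm:

```python
def tight_compaction_spec(items, marked):
    """Reference behavior of tight compaction: permute the array so that
    every marked element precedes every unmarked one. This linear scan
    only pins down the required output; it is NOT oblivious, because its
    memory accesses depend on the marks. The paper's contribution is
    achieving such a permutation obliviously in deterministic O(n) time."""
    front = [x for x, m in zip(items, marked) if m]
    back = [x for x, m in zip(items, marked) if not m]
    return front + back

result = tight_compaction_spec([5, 2, 9, 1], [False, True, False, True])
print(result)  # → [2, 1, 5, 9]
```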

Citations: 0
Properly Learning Decision Trees in almost Polynomial Time
IF 2.5 · CAS Tier 2, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-11-24 · DOI: https://dl.acm.org/doi/10.1145/3561047
Guy Blanc, Jane Lange, Mingda Qiao, Li-Yang Tan

We give an n^{O(log log n)}-time membership query algorithm for properly and agnostically learning decision trees under the uniform distribution over {±1}^n. Even in the realizable setting, the previous fastest runtime was n^{O(log n)}, a consequence of a classic algorithm of Ehrenfeucht and Haussler.

Our algorithm shares similarities with practical heuristics for learning decision trees, which we augment with additional ideas to circumvent known lower bounds against these heuristics. To analyze our algorithm, we prove a new structural result for decision trees that strengthens a theorem of O’Donnell, Saks, Schramm, and Servedio. While the OSSS theorem says that every decision tree has an influential variable, we show how every decision tree can be “pruned” so that every variable in the resulting tree is influential.
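The notion of an influential variable in the OSSS theorem can be made concrete: the influence of coordinate i is the probability, over a uniform input, that flipping that coordinate flips the function's value. A brute-force sketch, using our own toy tree encoding:

```python
from itertools import product

# Toy decision tree over {±1}^n: a leaf is ±1, and an internal node is
# (variable index, subtree for x_i = -1, subtree for x_i = +1).
def evaluate(tree, x):
    while isinstance(tree, tuple):
        i, lo, hi = tree
        tree = hi if x[i] == 1 else lo
    return tree

def influence(tree, i, n):
    """Inf_i(f) = Pr_x[f(x) != f(x with coordinate i flipped)] under the
    uniform distribution, computed by brute force over all 2^n inputs."""
    changed = 0
    for x in product([-1, 1], repeat=n):
        y = list(x)
        y[i] = -y[i]
        changed += evaluate(tree, x) != evaluate(tree, tuple(y))
    return changed / 2 ** n

# f(x) = +1 iff x_0 = x_1 = +1 (an AND, read as a depth-2 tree).
tree = (0, -1, (1, -1, 1))
print(influence(tree, 0, 2), influence(tree, 1, 2))  # → 0.5 0.5
```

Both variables of this tree are influential, so it already satisfies the conclusion of the pruning result; a tree with a redundant test would show influence 0 on that variable.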

Citations: 0
Adversarially Robust Streaming Algorithms via Differential Privacy
IF 2.5 · CAS Tier 2, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-11-24 · DOI: https://dl.acm.org/doi/10.1145/3556972
Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Yossi Matias, Uri Stemmer

A streaming algorithm is said to be adversarially robust if its accuracy guarantees are maintained even when the data stream is chosen maliciously, by an adaptive adversary. We establish a connection between adversarial robustness of streaming algorithms and the notion of differential privacy. This connection allows us to design new adversarially robust streaming algorithms that outperform the current state-of-the-art constructions for many interesting regimes of parameters.

Citations: 0
On the Need for Large Quantum Depth
IF 2.5 · CAS Tier 2, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-11-23 · DOI: 10.1145/3570637
Nai-Hui Chia, Kai-Min Chung, C. Lai
Near-term quantum computers are likely to have small depths due to short coherence time and noisy gates. A natural approach to leverage these quantum computers is interleaving them with classical computers. Understanding the capabilities and limits of this hybrid approach is an essential topic in quantum computation. Most notably, the quantum Fourier transform can be implemented by a hybrid of logarithmic-depth quantum circuits and a classical polynomial-time algorithm. Therefore, it seems possible that quantum polylogarithmic depth is as powerful as quantum polynomial depth in the presence of classical computation. Indeed, Jozsa conjectured that “Any quantum polynomial-time algorithm can be implemented with only O(log n) quantum depth interspersed with polynomial-time classical computations.” This can be formalized as asserting the equivalence of BQP and “BQNC^BPP.” However, Aaronson conjectured that “there exists an oracle separation between BQP and BPP^BQNC.” BQNC^BPP and BPP^BQNC are two natural and seemingly incomparable ways of hybrid classical-quantum computation. In this work, we manage to prove Aaronson’s conjecture and in the meantime prove that Jozsa’s conjecture, relative to an oracle, is false. In fact, we prove a stronger statement that for any depth parameter d, there exists an oracle that separates quantum depth d and 2d+1 in the presence of classical computation. Thus, our results show that relative to oracles, doubling the quantum circuit depth does make the hybrid model more powerful, and this cannot be traded by classical computation.
Citations: 0
Simple Uncoupled No-regret Learning Dynamics for Extensive-form Correlated Equilibrium
IF 2.5 · CAS Tier 2, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-11-18 · DOI: https://dl.acm.org/doi/10.1145/3563772
Gabriele Farina, Andrea Celli, Alberto Marchesi, Nicola Gatti

The existence of simple uncoupled no-regret learning dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as imperfect information. Because of the sequential nature and presence of private information in the game, correlation in extensive-form games possesses significantly different properties than in normal-form games, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to the classical notion of correlated equilibrium in normal-form games. Compared to the latter, the constraints that define the set of EFCEs are significantly more complex, as the correlation device (a.k.a. mediator) must take into account the evolution of beliefs of each player as they make observations throughout the game. Due to that significant added complexity, the existence of uncoupled learning dynamics leading to an EFCE has remained a challenging open research question for a long time. In this article, we settle that question by giving the first uncoupled no-regret dynamics that converge to the set of EFCEs in n-player general-sum extensive-form games with perfect recall. We show that each iterate can be computed in time polynomial in the size of the game tree, and that, when all players play repeatedly according to our learning dynamics, the empirical frequency of play after T game repetitions is proven to be an O(1/√T)-approximate EFCE with high probability, and an EFCE almost surely in the limit.
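The normal-form result the abstract opens with can be sketched with Hart and Mas-Colell's regret matching. The version below tracks external regret only (the correlated-equilibrium convergence fact needs the internal-regret variant, and the paper's EFCE dynamics are more involved); the game and parameters are illustrative:

```python
import random

def regret_matching_probs(cum_regret):
    """Play each action with probability proportional to its positive
    cumulative regret; fall back to uniform when no regret is positive."""
    pos = [max(r, 0.0) for r in cum_regret]
    s = sum(pos)
    return [p / s for p in pos] if s > 0 else [1.0 / len(pos)] * len(pos)

def run(payoff, T=5000, seed=0):
    """Row player uses regret matching against a uniformly random
    opponent; returns average positive regret per action, which the
    no-regret guarantee drives toward 0 as T grows."""
    rng = random.Random(seed)
    n, m = len(payoff), len(payoff[0])
    cum_regret = [0.0] * n
    for _ in range(T):
        a = rng.choices(range(n), regret_matching_probs(cum_regret))[0]
        b = rng.randrange(m)
        for alt in range(n):  # regret vs. having always played `alt`
            cum_regret[alt] += payoff[alt][b] - payoff[a][b]
    return [max(r, 0.0) / T for r in cum_regret]

# Matching pennies, row player's payoffs.
avg_regret = run([[1, -1], [-1, 1]])
print(max(avg_regret))  # small: average regret vanishes at rate O(1/sqrt(T))
```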

Citations: 0
Edge-Weighted Online Bipartite Matching
IF 2.5 · CAS Tier 2, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-11-17 · DOI: https://dl.acm.org/doi/10.1145/3556971
Matthew Fahrbach, Zhiyi Huang, Runzhou Tao, Morteza Zadimoghaddam

Online bipartite matching is one of the most fundamental problems in the online algorithms literature. Karp, Vazirani, and Vazirani (STOC 1990) gave an elegant algorithm for unweighted bipartite matching that achieves an optimal competitive ratio of 1-1/e. Aggarwal et al. (SODA 2011) later generalized their algorithm and analysis to the vertex-weighted case. Little is known, however, about the most general edge-weighted problem aside from the trivial 1/2-competitive greedy algorithm. In this article, we present the first online algorithm that breaks the long-standing 1/2 barrier and achieves a competitive ratio of at least 0.5086. In light of the hardness result of Kapralov, Post, and Vondrák (SODA 2013), which restricts beating a 1/2 competitive ratio for the more general monotone submodular welfare maximization problem, our result can be seen as strong evidence that edge-weighted bipartite matching is strictly easier than submodular welfare maximization in an online setting.

The main ingredient in our online matching algorithm is a novel subroutine called online correlated selection (OCS), which takes a sequence of pairs of vertices as input and selects one vertex from each pair. Instead of using a fresh random bit to choose a vertex from each pair, the OCS negatively correlates decisions across different pairs and provides a quantitative measure on the level of correlation. We believe our OCS technique is of independent interest and will find further applications in other online optimization problems.
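For context, the Karp, Vazirani, and Vazirani algorithm cited above is short enough to sketch. This is the unweighted 1-1/e baseline, not the paper's OCS-based edge-weighted algorithm, and the instance below is illustrative:

```python
import random

def ranking(offline, arrivals, neighbors, seed=0):
    """KVV Ranking for unweighted online bipartite matching: each offline
    vertex draws a random rank once, up front; each arriving online
    vertex is matched to its lowest-ranked still-free neighbor, if any.
    The single shared ranking (rather than a fresh coin per arrival) is
    what yields the 1-1/e competitive ratio in expectation."""
    rng = random.Random(seed)
    rank = {v: rng.random() for v in offline}
    match = {}   # online vertex -> offline vertex
    used = set()
    for u in arrivals:
        free = [v for v in neighbors.get(u, []) if v not in used]
        if free:
            v = min(free, key=rank.get)
            match[u] = v
            used.add(v)
    return match

m = ranking(offline=["a", "b"], arrivals=[1, 2],
            neighbors={1: ["a", "b"], 2: ["a"]})
print(m)
```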

Citations: 0
Adversarial Bandits with Knapsacks
IF 2.5 · CAS Tier 2, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-11-17 · DOI: https://dl.acm.org/doi/10.1145/3557045
Nicole Immorlica, Karthik Sankararaman, Robert Schapire, Aleksandrs Slivkins

We consider Bandits with Knapsacks (henceforth, BwK), a general model for multi-armed bandits under supply/budget constraints. In particular, a bandit algorithm needs to solve a well-known knapsack problem: find an optimal packing of items into a limited-size knapsack. The BwK problem is a common generalization of numerous motivating examples, which range from dynamic pricing to repeated auctions to dynamic ad allocation to network routing and scheduling. While the prior work on BwK focused on the stochastic version, we pioneer the other extreme in which the outcomes can be chosen adversarially. This is a considerably harder problem, compared to both the stochastic version and the “classic” adversarial bandits, in that regret minimization is no longer feasible. Instead, the objective is to minimize the competitive ratio: the ratio of the benchmark reward to algorithm’s reward.

We design an algorithm with competitive ratio O(log T) relative to the best fixed distribution over actions, where T is the time horizon; we also prove a matching lower bound. The key conceptual contribution is a new perspective on the stochastic version of the problem. We suggest a new algorithm for the stochastic version, which builds on the framework of regret minimization in repeated games and admits a substantially simpler analysis compared to prior work. We then analyze this algorithm for the adversarial version, and use it as a subroutine to solve the latter.

Our algorithm is the first “black-box reduction” from bandits to BwK: it takes an arbitrary bandit algorithm and uses it as a subroutine. We use this reduction to derive several extensions.
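A natural choice of bandit subroutine for such a black-box reduction is EXP3 (Auer et al.), the classic adversarial-bandit algorithm. The sketch below shows EXP3 alone with illustrative parameters and rewards; it is not the paper's BwK reduction:

```python
import math
import random

def exp3(n_arms, reward, T=10000, gamma=0.1, seed=0):
    """EXP3 for adversarial bandits: exponential weights over arms mixed
    with gamma-uniform exploration, updated with importance-weighted
    reward estimates. `reward(t, a)` is the adversary's chosen reward
    in [0, 1] for arm a at round t. Returns average realized reward."""
    rng = random.Random(seed)
    w = [1.0] * n_arms
    total = 0.0
    for t in range(T):
        s = sum(w)
        probs = [(1 - gamma) * wi / s + gamma / n_arms for wi in w]
        a = rng.choices(range(n_arms), probs)[0]
        r = reward(t, a)
        total += r
        w[a] *= math.exp(gamma * r / (probs[a] * n_arms))
        top = max(w)                  # renormalize: probabilities are
        w = [wi / top for wi in w]    # scale-invariant, avoids overflow
    return total / T

# Illustrative rewards: arm 1 pays 1 except every third round (mean 2/3);
# arm 0 always pays 0.5. EXP3 should shift its play toward arm 1.
avg = exp3(2, lambda t, a: 0.5 if a == 0 else (1.0 if t % 3 else 0.0))
print(avg > 0.55)  # beats the uniform-play baseline
```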

{"title":"Adversarial Bandits with Knapsacks","authors":"Nicole Immorlica, Karthik Sankararaman, Robert Schapire, Aleksandrs Slivkins","doi":"https://dl.acm.org/doi/10.1145/3557045","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3557045","url":null,"abstract":"<p>We consider <b><i>Bandits with Knapsacks</i></b> (henceforth, <i><b>BwK</b></i>), a general model for multi-armed bandits under supply/budget constraints. In particular, a bandit algorithm needs to solve a well-known <i>knapsack problem</i>: find an optimal packing of items into a limited-size knapsack. The BwK problem is a common generalization of numerous motivating examples, which range from dynamic pricing to repeated auctions to dynamic ad allocation to network routing and scheduling. While the prior work on BwK focused on the stochastic version, we pioneer the other extreme in which the outcomes can be chosen adversarially. This is a considerably harder problem, compared to both the stochastic version and the “classic” adversarial bandits, in that regret minimization is no longer feasible. Instead, the objective is to minimize the <i>competitive ratio</i>: the ratio of the benchmark reward to algorithm’s reward.</p><p>We design an algorithm with competitive ratio <i>O</i>(log <i>T</i>) relative to the best fixed distribution over actions, where <i>T</i> is the time horizon; we also prove a matching lower bound. The key conceptual contribution is a new perspective on the stochastic version of the problem. We suggest a new algorithm for the stochastic version, which builds on the framework of regret minimization in repeated games and admits a substantially simpler analysis compared to prior work. We then analyze this algorithm for the adversarial version, and use it as a subroutine to solve the latter.</p><p>Our algorithm is the first “black-box reduction” from bandits to BwK: it takes an arbitrary bandit algorithm and uses it as a subroutine. 
We use this reduction to derive several extensions.</p>","PeriodicalId":50022,"journal":{"name":"Journal of the ACM","volume":"15 2","pages":""},"PeriodicalIF":2.5,"publicationDate":"2022-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138525679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
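The knapsack subproblem the abstract alludes to can be illustrated with the textbook 0/1 dynamic program. This is a generic sketch with made-up item data, not the paper's BwK algorithm:

```python
# Illustrative only: the classical 0/1 knapsack problem mentioned in the
# abstract, solved by standard dynamic programming. Item values/weights
# below are toy data, unrelated to the paper.

def knapsack(values, weights, capacity):
    """Return the maximum total value packable into a knapsack of `capacity`."""
    # best[c] = best value achievable with total weight at most c
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # → 220
```

In the bandit setting the items (arms) and their values are not known up front, which is why BwK algorithms must learn the packing online rather than compute it by this offline DP.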
Citations: 0
Nearly Optimal Pseudorandomness from Hardness
IF 2.5 CAS Zone 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-11-17 DOI: https://dl.acm.org/doi/10.1145/3555307
Dean Doron, Dana Moshkovitz, Justin Oh, David Zuckerman

Existing proofs that deduce BPP = P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms into deterministic ones with little slowdown. Specifically, assuming exponential lower bounds against randomized NP ∩ coNP circuits, formally known as randomized SVN circuits, we convert any randomized algorithm over inputs of length n running in time t ≥ n into a deterministic one running in time t^{2+α} for an arbitrarily small constant α > 0. Such a slowdown is nearly optimal for t close to n, since under standard complexity-theoretic assumptions, there are problems with an inherent quadratic derandomization slowdown. We also convert any randomized algorithm that errs rarely into a deterministic algorithm having a similar running time (with pre-processing). The latter derandomization result holds under weaker assumptions, of exponential lower bounds against deterministic SVN circuits.

Our results follow from a new, nearly optimal, explicit pseudorandom generator fooling circuits of size s with seed length (1+α)log s, under the assumption that there exists a function f ∈ E that requires randomized SVN circuits of size at least 2^{(1-α′)n}, where α = O(α′). The construction uses, among other ideas, a new connection between pseudoentropy generators and locally list recoverable codes.
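The standard way a short-seed PRG yields derandomization can be sketched generically: enumerate every seed, run the randomized algorithm on each pseudorandom string, and take a majority vote, so a seed length of (1+α)log s costs only a factor s^{1+α}. The `prg` below is a toy stand-in that just hashes the seed; it is not the paper's construction and has no hardness guarantee:

```python
# Generic seed-enumeration derandomization: with seed length ell, running the
# randomized algorithm on all 2**ell pseudorandom strings and taking the
# majority vote gives a deterministic algorithm with a 2**ell overhead.

import hashlib

def prg(seed: int, out_bits: int) -> str:
    # Toy expander: deterministic bits derived from the seed via SHA-256.
    # NOT a provable PRG; a real construction (as in the paper) is far deeper.
    h = hashlib.sha256(seed.to_bytes(8, "big")).hexdigest()
    return bin(int(h, 16))[2:].zfill(256)[:out_bits]

def derandomize(randomized_alg, seed_len: int, out_bits: int) -> bool:
    # Majority vote over all 2**seed_len seeds -> a deterministic answer.
    votes = sum(randomized_alg(prg(s, out_bits)) for s in range(2 ** seed_len))
    return votes * 2 > 2 ** seed_len

# Toy randomized "algorithm": accepts iff its first random bit is 1.
alg = lambda r: r[0] == "1"
print(derandomize(alg, seed_len=8, out_bits=16))
```

If the PRG genuinely fools the algorithm, the majority vote agrees with the algorithm's high-probability answer on truly random bits; the paper's contribution is making the seed short enough that this enumeration costs only t^{2+α} overall.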

Citations: 0
Generative Datalog with Continuous Distributions
IF 2.5 CAS Zone 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-08-30 DOI: 10.1145/3559102
Martin Grohe, Benjamin Lucien Kaminski, J. Katoen, P. Lindner
Arguing for the need to combine declarative and probabilistic programming, Bárány et al. (TODS 2017) recently introduced a probabilistic extension of Datalog as a “purely declarative probabilistic programming language.” We revisit this language and propose a more principled approach towards defining its semantics based on stochastic kernels and Markov processes—standard notions from probability theory. This allows us to extend the semantics to continuous probability distributions, thereby settling an open problem posed by Bárány et al. We show that our semantics is fairly robust, allowing both parallel execution and arbitrary chase orders when evaluating a program. We cast our semantics in the framework of infinite probabilistic databases (Grohe and Lindner, LMCS 2022) and show that the semantics remains meaningful even when the input of a probabilistic Datalog program is an arbitrary probabilistic database.
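The flavor of a generative Datalog rule that samples from a continuous distribution can be sketched in plain Python. The rule syntax in the comment and the house/temperature data are illustrative inventions, not taken from the paper:

```python
# Toy, hand-rolled evaluation of one "generative Datalog"-style rule with a
# continuous distribution, in the spirit of the language discussed above.

import random

# EDB facts: house(id, city)
houses = [("h1", "aachen"), ("h2", "aachen"), ("h3", "oxford")]

# Generative rule (hypothetical syntax):
#   temperature(H, T) :- house(H, _), T ~ Normal(10, 3).
# Each derivation samples T from a continuous distribution, so the program
# defines a probability distribution over possible output databases rather
# than a single fixed database.
def derive_temperatures(seed=0):
    rng = random.Random(seed)
    return {h: rng.gauss(10.0, 3.0) for h, _city in houses}

temps = derive_temperatures(seed=42)
for house, t in sorted(temps.items()):
    print(f"temperature({house}, {t:.2f})")
```

Fixing the seed picks out one sample of the output database; the semantics discussed in the paper instead assigns probabilities to measurable sets of such outcomes via stochastic kernels.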
Citations: 1
Copyright © 2023 Book学术 All rights reserved.