
arXiv - CS - Computational Complexity: Latest Publications

New Direct Sum Tests
Pub Date: 2024-09-16 | arXiv: 2409.10464
Alek Westover, Edward Yu, Kai Zheng
A function $f:[n]^{d} \to \mathbb{F}_2$ is a \emph{direct sum} if there are functions $L_i:[n] \to \mathbb{F}_2$ such that $f(x) = \sum_{i} L_i(x_i)$. In this work we give multiple results related to the property testing of direct sums. Our first result concerns a test proposed by Dinur and Golubev in 2019. We call their test the Diamond test and show that it is indeed a direct sum tester. More specifically, we show that if a function $f$ is $\epsilon$-far from being a direct sum function, then the Diamond test rejects $f$ with probability at least $\Omega_{n,\epsilon}(1)$. Even in the case of $n = 2$, the Diamond test is, to the best of our knowledge, novel and yields a new tester for the classic property of affinity. Apart from the Diamond test, we also analyze a broad family of direct sum tests which, at a high level, run an arbitrary affinity test on the restriction of $f$ to a random hypercube inside of $[n]^d$. This family of tests includes the direct sum test analyzed in \cite{di19}, but does not include the Diamond test. As an application of our result, we obtain a direct sum test which works in the online adversary model of \cite{KRV}. Finally, we also discuss a Fourier-analytic interpretation of the Diamond tester in the $n=2$ case, as well as prove local correction results for direct sum as conjectured by Dinur and Golubev.
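To make the shape of the test concrete, here is a minimal Python sketch of a diamond-style four-point check: sample two points and a random coordinate subset, form the two hybrids, and verify that the four values XOR to zero. The sampling distribution and trial count are our illustrative choices rather than the paper's exact test; note that any genuine direct sum passes every such check, since each coordinate contributes each of $L_i(x_i)$ and $L_i(y_i)$ exactly twice, so the four values cancel over $\mathbb{F}_2$.

```python
import random

def diamond_test(f, n, d, trials=200):
    """Illustrative diamond-style tester for f: [n]^d -> F_2 (values 0/1).
    Rejects as soon as one sampled 'diamond' of four points is inconsistent."""
    for _ in range(trials):
        x = [random.randrange(n) for _ in range(d)]
        y = [random.randrange(n) for _ in range(d)]
        keep = [random.random() < 0.5 for _ in range(d)]  # random subset S of coordinates
        z = [a if s else b for a, b, s in zip(x, y, keep)]  # hybrid: x on S, y off S
        w = [b if s else a for a, b, s in zip(x, y, keep)]  # hybrid: y on S, x off S
        if f(x) ^ f(y) ^ f(z) ^ f(w):
            return False  # inconsistent diamond: f is not a direct sum
    return True

# Sanity check: a random direct sum f(v) = sum_i L_i(v_i) mod 2 always passes.
L = [[random.randrange(2) for _ in range(5)] for _ in range(8)]
assert diamond_test(lambda v: sum(t[c] for t, c in zip(L, v)) % 2, n=5, d=8)
```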
Citations: 0
Complexity and algorithms for Swap median and relation to other consensus problems
Pub Date: 2024-09-15 | arXiv: 2409.09734
Luís Cunha, Thiago Lopes, Arnaud Mary
Genome rearrangements are events in which large blocks of DNA exchange pieces during evolution. The analysis of such events is a tool for understanding evolutionary genomics, based on finding the minimum number of rearrangements to transform one genome into another. In a general scenario, more than two genomes are considered, and we have new challenges. The {\sc Median} problem consists in finding, given three permutations and a distance metric, a permutation $s$ that minimizes the sum of the distances between $s$ and each input. We study the {\sc Median} problem over \emph{swap} distances in permutations, for which the computational complexity has been open for almost 20 years (Eriksen, \emph{Theor. Comput. Sci.}, 2007). We consider this problem through several branches. We associate median solutions and interval convex sets, where the concept of graph convexity inspires the following investigation: Does a median permutation belong to every shortest path between one of the pairs of input permutations? We are able to partially answer this question, and as a by-product we solve a long-open problem by proving that the {\sc Swap Median} problem is NP-hard. Furthermore, using a similar approach, we show that the {\sc Closest} problem, which seeks to minimize the maximum distance between the solution and the input permutations, is NP-hard even when considering three input permutations. This gives a sharp dichotomy between the P and NP-hard cases, since with two input permutations the problem is easily solvable, while with an arbitrary number of input permutations it has been known to be NP-hard since 2007 (Popov, \emph{Theor. Comput. Sci.}, 2007). In addition, we show that {\sc Swap Median} and {\sc Swap Closest} are APX-hard problems.
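For intuition, the swap distance itself is easy: transforming $p$ into $q$ by arbitrary transpositions takes exactly $n$ minus the number of cycles of $q \circ p^{-1}$, so the hardness proved here lies entirely in the aggregation step. Below is a brute-force sketch of both (our illustration; the exhaustive median search is exponential by design, consistent with the NP-hardness result).

```python
from itertools import permutations

def swap_distance(p, q):
    """Minimum number of arbitrary transpositions turning p into q:
    n minus the number of cycles of sigma = q o p^{-1}."""
    n = len(p)
    inv_p = [0] * n
    for i, v in enumerate(p):
        inv_p[v] = i                       # inv_p = p^{-1}
    sigma = [q[inv_p[i]] for i in range(n)]
    seen, cycles = [False] * n, 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j], j = True, sigma[j]
    return n - cycles

def swap_median_bruteforce(perms):
    """Exhaustive search over all n! candidate medians; toy sizes only."""
    n = len(perms[0])
    return min(permutations(range(n)),
               key=lambda s: sum(swap_distance(s, p) for p in perms))

print(swap_median_bruteforce([(0, 1, 2, 3), (1, 0, 3, 2), (1, 0, 2, 3)]))
```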
Citations: 0
Journalists, Emotions, and the Introduction of Generative AI Chatbots: A Large-Scale Analysis of Tweets Before and After the Launch of ChatGPT
Pub Date: 2024-09-13 | arXiv: 2409.08761
Seth C. Lewis, David M. Markowitz, Jon Benedik Bunquin
As part of a broader look at the impact of generative AI, this study investigated the emotional responses of journalists to the release of ChatGPT at the time of its launch. By analyzing nearly 1 million Tweets from journalists at major U.S. news outlets, we tracked changes in emotional tone and sentiment before and after the introduction of ChatGPT in November 2022. Using various computational and natural language processing techniques to measure emotional shifts in response to ChatGPT's release, we found an increase in positive emotion and a more favorable tone post-launch, suggesting initial optimism toward AI's potential. This research underscores the pivotal role of journalists as interpreters of technological innovation and disruption, highlighting how their emotional reactions may shape public narratives around emerging technologies. The study contributes to understanding the intersection of journalism, emotion, and AI, offering insights into the broader societal impact of generative AI tools.
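The paper's measurement pipeline is not reproduced here, but the before/after comparison it describes has a simple shape. A sketch under stated assumptions: tweets arrive as (date, text) pairs, and NLTK's off-the-shelf VADER compound score stands in for whichever emotion measures the authors actually used.

```python
from datetime import date
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download('vader_lexicon')

def mean_tone_shift(tweets, cutoff=date(2022, 11, 30)):
    """Mean sentiment after minus before the ChatGPT launch date;
    a positive value indicates a more favorable post-launch tone."""
    sia = SentimentIntensityAnalyzer()
    pre = [sia.polarity_scores(text)["compound"] for d, text in tweets if d < cutoff]
    post = [sia.polarity_scores(text)["compound"] for d, text in tweets if d >= cutoff]
    return sum(post) / len(post) - sum(pre) / len(pre)
```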
Citations: 0
Almost-catalytic Computation
Pub Date: 2024-09-11 | arXiv: 2409.07208
Sagar Bisoyi, Krishnamoorthy Dinesh, Bhabya Deep Rai, Jayalal Sarma
Designing algorithms for space-bounded models with restoration requirements on the space used by the algorithm is an important challenge posed by the catalytic computation model introduced by Buhrman et al. (2014). Motivated by scenarios where we do not need to restore unless it is useful, we define $ACL(A)$ to be the class of languages that can be accepted by almost-catalytic Turing machines with respect to $A$ (which we call the catalytic set), using at most $c \log n$ work space and $n^c$ catalytic space. We show that if there are almost-catalytic algorithms for a problem with catalytic set $A \subseteq \Sigma^*$ and its complement respectively, then the problem can be solved by a ZPP algorithm. Using this, we derive that to design catalytic algorithms, it suffices to design almost-catalytic algorithms where the catalytic set is the set of strings of odd weight ($PARITY$). Towards this, we consider two complexity measures of the set $A$ which are maximized for $PARITY$: random projection complexity (${\cal R}(A)$) and subcube partition complexity (${\cal P}(A)$). By making use of error-correcting codes, we show that for all $k \ge 1$, there is a language $A_k \subseteq \Sigma^*$ such that $DSPACE(n^k) \subseteq ACL(A_k)$, where for every $m \ge 1$, $\mathcal{R}(A_k \cap \{0,1\}^m) \ge \frac{m}{4}$ and $\mathcal{P}(A_k \cap \{0,1\}^m) = 2^{m/4}$. This contrasts with the catalytic machine model, where it is unclear whether it can accept all languages in $DSPACE(\log^{1+\epsilon} n)$ for any $\epsilon > 0$. Improving the partition complexity of the catalytic set $A$ further, we show that for all $k \ge 1$, there is an $A_k \subseteq \{0,1\}^*$ such that $\mathsf{DSPACE}(\log^k n) \subseteq ACL(A_k)$, where for every $m \ge 1$, $\mathcal{R}(A_k \cap \{0,1\}^m) \ge \frac{m}{4}$ and $\mathcal{P}(A_k \cap \{0,1\}^m) = 2^{m/4+\Omega(\log m)}$.
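A quick sanity check on why $PARITY$ maximizes subcube partition complexity: flipping any free coordinate of a subcube flips the parity of the point, so a subcube on which $PARITY$ is constant must fix all $m$ coordinates, forcing the full partition into $2^m$ singletons. A brute-force verification for small $m$ (our illustration, not from the paper):

```python
from itertools import product

def parity_forces_singletons(m):
    """Flipping any single bit flips parity, so no subcube with a free
    coordinate is parity-monochromatic."""
    for x in product([0, 1], repeat=m):
        for i in range(m):
            y = list(x)
            y[i] ^= 1
            if sum(x) % 2 == sum(y) % 2:
                return False
    return True

assert parity_forces_singletons(6)  # hence P(PARITY on {0,1}^6) = 2^6
```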
Citations: 0
Fast Simulation of Cellular Automata by Self-Composition
Pub Date: 2024-09-11 | arXiv: 2409.07065
Joseph Natal, Oleksiy Al-saadi
It is shown that computing the configuration of any one-dimensional cellular automaton at generation $n$ can be accelerated by constructing and running a composite automaton with a radius proportional to $\log n$. The new automaton is the original automaton whose local rule function is composed with itself. The asymptotic time complexity to compute the configuration of generation $n$ is reduced from $O(n^2)$ operations to $O(n^2 / \log n)$ on a given machine, with $O(n^2)$ memory usage. Experimental results are given in the case of Rule 30.
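The construction is easy to phrase in code. A minimal sketch (our rendering; the paper's experimental setup and cost model may differ in detail): composing a radius-$r$ rule with itself gives a radius-$2r$ rule whose single step equals two steps of the original, and iterating the composition $\Theta(\log n)$ times yields the stated speedup.

```python
from itertools import product

def make_rule(number, r=1):
    """Binary CA rule of radius r as a table from neighborhood tuples to bits
    (Wolfram numbering when r = 1, e.g. Rule 30)."""
    return {nbhd: (number >> i) & 1
            for i, nbhd in enumerate(product([0, 1], repeat=2 * r + 1))}

def self_compose(rule, r):
    """Radius-2r rule whose one step equals two steps of the radius-r rule."""
    k = 2 * r + 1
    composed = {}
    for nbhd in product([0, 1], repeat=4 * r + 1):
        mid = tuple(rule[nbhd[i:i + k]] for i in range(k))  # intermediate layer
        composed[nbhd] = rule[mid]
    return composed

def step(state, rule, r):
    """One synchronous update on a cyclic configuration."""
    n = len(state)
    return [rule[tuple(state[(i + j) % n] for j in range(-r, r + 1))]
            for i in range(n)]

rule30 = make_rule(30)
rule30_squared = self_compose(rule30, 1)
s = [0] * 16
s[8] = 1
# Two steps of Rule 30 equal one step of its self-composition.
assert step(step(s, rule30, 1), rule30, 1) == step(s, rule30_squared, 2)
```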
Citations: 0
Fully Characterizing Lossy Catalytic Computation
Pub Date: 2024-09-08 | arXiv: 2409.05046
Marten Folkertsma, Ian Mertz, Florian Speelman, Quinten Tupker
A catalytic machine is a model of computation where a traditional space-bounded machine is augmented with an additional, significantly larger, "catalytic" tape, which, while available as a work tape, has the caveat of being initialized with an arbitrary string that must be preserved at the end of the computation. Despite this restriction, catalytic machines have been shown to have surprising additional power; a logspace machine with a polynomial-length catalytic tape, known as catalytic logspace ($CL$), can compute problems which are believed to be impossible for $L$. A fundamental question about the model is whether the catalytic condition, of leaving the catalytic tape in its exact original configuration, is robust to minor deviations. This study was initiated by Gupta et al. (2024), who defined lossy catalytic logspace ($LCL[e]$) as a variant of $CL$ where we allow up to $e$ errors when resetting the catalytic tape. They showed that $LCL[e] = CL$ for any $e = O(1)$, which remains the frontier of our understanding. In this work we completely characterize lossy catalytic space ($LCSPACE[s,c,e]$) in terms of ordinary catalytic space ($CSPACE[s,c]$). We show that $$LCSPACE[s,c,e] = CSPACE[\Theta(s + e \log c), \Theta(c)].$$ In other words, allowing $e$ errors on a catalytic tape of length $c$ is equivalent, up to a constant stretch, to an errorless catalytic machine with an additional $e \log c$ bits of ordinary working memory. As a consequence, we show that for any $e$, $LCL[e] = CL$ implies $SPACE[e \log n] \subseteq ZPP$, thus giving a barrier to any improvement beyond $LCL[O(1)] = CL$. We also show equivalent results for non-deterministic and randomized catalytic space.
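A back-of-the-envelope reading of the $e \log c$ term (our gloss, not the paper's proof): describing which of at most $e$ cells of a length-$c$ tape were left corrupted, together with their original contents, is information-theoretically cheap.

```latex
% Counting damage patterns: choose at most e of the c cells and their bits.
\[
  \log_2\Bigl(\sum_{i=0}^{e} \binom{c}{i}\,2^{i}\Bigr) \;\le\; e\log_2 c + O(e),
\]
% so roughly e*log(c) extra bits of ordinary, errorless memory are enough to
% record (and in principle later undo) any damage a lossy machine may leave --
% the exchange rate in LCSPACE[s,c,e] = CSPACE[Theta(s + e log c), Theta(c)].
```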
Citations: 0
A Quantum Pigeonhole Principle and Two Semidefinite Relaxations of Communication Complexity
Pub Date: 2024-09-06 | arXiv: 2409.04592
Pavel Dvořák, Bruno Loff, Suhail Sherif
We study semidefinite relaxations of $\Pi_1$ combinatorial statements. By relaxing the pigeonhole principle, we obtain a new "quantum" pigeonhole principle which is a stronger statement. By relaxing statements of the form "the communication complexity of $f$ is $> k$", we obtain new communication models, which we call "$\gamma_2$ communication" and "quantum-lab protocols". We prove, via an argument from proof complexity, that any natural model obtained by such a relaxation must solve all Karchmer--Wigderson games efficiently. However, the argument is not constructive, so we work to explicitly construct such protocols in these two models.
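For context on the "$\gamma_2$ communication" model: the $\gamma_2$ (factorization) norm it is named after has a standard semidefinite-programming formulation, sketched below with cvxpy. This shows the classical norm only, as an illustration, not the paper's new relaxations.

```python
import cvxpy as cp
import numpy as np

def gamma_2(M):
    """gamma_2(M) = min t such that [[X, M], [M^T, Y]] is PSD with every
    diagonal entry at most t (the standard SDP for the factorization norm)."""
    m, n = M.shape
    Z = cp.Variable((m + n, m + n), PSD=True)  # the full block matrix
    t = cp.Variable()
    constraints = [Z[:m, m:] == M, cp.diag(Z) <= t]
    cp.Problem(cp.Minimize(t), constraints).solve()
    return t.value

print(round(gamma_2(np.eye(3)), 3))  # the identity matrix has gamma_2 = 1
```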
Citations: 0
Two-Sided Lossless Expanders in the Unbalanced Setting
Pub Date: 2024-09-06 | arXiv: 2409.04549
Eshan Chattopadhyay, Mohit Gurumukhani, Noam Ringach, Yunya Zhao
We present the first explicit construction of two-sided lossless expanders in the unbalanced setting (bipartite graphs that have many more nodes on the left than on the right). Prior to our work, all known explicit constructions in the unbalanced setting achieved only one-sided lossless expansion. Specifically, we show that the one-sided lossless expanders constructed by Kalev and Ta-Shma (RANDOM'22) -- which are based on the multiplicity codes introduced by Kopparty, Saraf, and Yekhanin (STOC'11) -- are, in fact, two-sided lossless expanders. Using our unbalanced bipartite expander, we easily obtain lossless (non-bipartite) expander graphs with high degree and a free group action. As far as we know, this is the first explicit construction of lossless (non-bipartite) expanders with $N$ vertices and degree $\ll N$.
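As a reminder of the object being constructed: a left-$d$-regular bipartite graph is a one-sided $(k,\epsilon)$-lossless expander if every left set $S$ with $|S| \le k$ has at least $(1-\epsilon) d |S|$ distinct neighbors, and "two-sided" asks the same of small right sets. A brute-force checker for toy instances (illustrative only; the point of explicit constructions is to avoid such exhaustive checks):

```python
from itertools import combinations

def one_sided_lossless(adj, k, eps):
    """adj[u] lists the right-neighbors of left vertex u (left-regular graph).
    Returns True iff every left set S with |S| <= k has at least
    (1 - eps) * d * |S| distinct neighbors. Exponential in k: toy sizes only."""
    d = len(adj[0])
    for s in range(1, k + 1):
        for S in combinations(range(len(adj)), s):
            neighbors = set()
            for u in S:
                neighbors.update(adj[u])
            if len(neighbors) < (1 - eps) * d * s:
                return False
    return True
```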
Citations: 0
Query complexity lower bounds for local list-decoding and hard-core predicates (even for small rate and huge lists)
Pub Date: 2024-09-03 | arXiv: 2409.01708
Noga Ron-Zewi, Ronen Shaltiel, Nithin Varma
A binary code Enc$:\{0,1\}^k \to \{0,1\}^n$ is $(0.5-\epsilon,L)$-list decodable if for all $w \in \{0,1\}^n$, the set List$(w)$ of all messages $m \in \{0,1\}^k$ such that the relative Hamming distance between Enc$(m)$ and $w$ is at most $0.5-\epsilon$, has size at most $L$. Informally, a $q$-query local list-decoder for Enc is a randomized procedure Dec$:[k] \times [L] \to \{0,1\}$ that, when given oracle access to a string $w$, makes at most $q$ oracle calls, and for every message $m \in \text{List}(w)$, with high probability, there exists $j \in [L]$ such that for every $i \in [k]$, with high probability, Dec$^w(i,j)=m_i$. We prove lower bounds on $q$ that apply even if $L$ is huge (say $L=2^{k^{0.9}}$) and the rate of Enc is small (meaning that $n \ge 2^{k}$): 1. For $\epsilon \ge 1/k^{\nu}$ for some universal constant $0 < \nu < 1$, we prove a lower bound of $q=\Omega(\frac{\log(1/\delta)}{\epsilon^2})$, where $\delta$ is the error probability of the local list-decoder. This bound is tight, as there is a matching upper bound by Goldreich and Levin (STOC 1989) of $q=O(\frac{\log(1/\delta)}{\epsilon^2})$ for the Hadamard code (which has $n=2^k$). This bound extends an earlier work of Grinberg, Shaltiel and Viola (FOCS 2018) which only works if $n \le 2^{k^{\gamma}}$ for some universal constant $0<\gamma<1$ and the number of coins tossed by Dec is small (and therefore does not apply to the Hadamard code, or other codes with low rate). 2. For smaller $\epsilon$, we prove a lower bound of roughly $q = \Omega(\frac{1}{\sqrt{\epsilon}})$. To the best of our knowledge, this is the first lower bound on the number of queries of local list-decoders that gives $q \ge k$ for small $\epsilon$. We also prove black-box limitations for improving some of the parameters of the Goldreich-Levin hard-core predicate construction.
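For orientation on the matching upper bound: the classical local corrector for the Hadamard code Enc$(m)[x] = \langle m, x \rangle \bmod 2$ recovers one message bit with two queries per trial and a majority vote, and $O(\log(1/\delta))$ trials drive the error below $\delta$. A sketch of this unique-decoding warm-up (the Goldreich-Levin list decoder itself is more involved):

```python
import random

def hadamard_decode_bit(w, i, k, trials=101):
    """Recover m_i given oracle access to w: {0,1}^k -> {0,1} (bitmask input)
    close to the Hadamard codeword of m. Since <m,x> + <m, x ^ e_i> = m_i,
    a trial errs only if w disagrees with the codeword on x or on x ^ e_i,
    so majority voting succeeds when w is within relative distance < 1/4."""
    votes = sum(w(x) ^ w(x ^ (1 << i))
                for x in (random.getrandbits(k) for _ in range(trials)))
    return int(votes > trials // 2)
```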
Citations: 0
Partial and weighted matrix multiplication
Pub Date: 2024-08-28 | arXiv: 2408.15728
Péter Vrana
In a paper published in 1981, Schönhage showed that large total matrix multiplications can be reduced to powers of partial matrix multiplication tensors, which correspond to the bilinear computation task of multiplying matrices with some of the entries fixed to be zero. It was left as an open problem to generalize the method to the case when the multiplication is also partial in the sense that only a subset of the entries needs to be computed. We prove a variant of a more general case: reducing large weighted matrix multiplications to tensor powers of a partial matrix multiplication, in the sense that every entry of the result is a partial version of the inner product of the corresponding row and column of the factors that would appear in the usual matrix product. The implication is that support rank upper bounds on partial matrix multiplication tensors in this general sense give upper bounds on the support rank exponent of matrix multiplication.
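To fix the terminology with a toy example: "partial" means some factor entries are promised to be zero and, in the generalization considered here, only a subset of the output entries must be produced, each as a partial inner product. A small numpy illustration of the bilinear map in question (the paper's results concern the support rank of such maps as tensors, not their direct evaluation):

```python
import numpy as np

def partial_matmul(A, B, A_support, B_support, out_support):
    """Evaluate a partial matrix multiplication: factor entries outside their
    boolean support masks are fixed to zero, and only output entries in
    out_support are requested (the rest are reported as zero)."""
    A = np.where(A_support, A, 0.0)
    B = np.where(B_support, B, 0.0)
    return np.where(out_support, A @ B, 0.0)
```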
Citations: 0