Pub Date: 2024-07-09 | DOI: 10.4230/LIPIcs.ICALP.2024.132
Dmitry Chistikov, Alessio Mansutti, Mikhail R. Starchak
This paper provides an NP procedure that decides whether a linear-exponential system of constraints has an integer solution. Linear-exponential systems extend standard integer linear programs with exponential terms $2^x$ and remainder terms $(x \bmod 2^y)$. Our result implies that the existential theory of the structure $(\mathbb{N},0,1,+,2^{(\cdot)},V_2(\cdot,\cdot),\leq)$ has an NP-complete satisfiability problem, thus improving upon a recent EXPSPACE upper bound. This theory extends the existential fragment of Presburger arithmetic with the exponentiation function $x \mapsto 2^x$ and the binary predicate $V_2(x,y)$ that is true whenever $y \geq 1$ is the largest power of $2$ dividing $x$. Our procedure for solving linear-exponential systems uses the method of quantifier elimination. As a by-product, we modify the classical Gaussian variable elimination into a non-deterministic polynomial-time procedure for integer linear programming (or: existential Presburger arithmetic).
Title: "Integer Linear-Exponential Programming in NP by Quantifier Elimination" (ICALP 2024, pages 132:1-132:20)
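The semantics of the $V_2$ predicate used in the abstract can be made concrete with a small sketch (this is only an illustration of the predicate, not the paper's NP procedure):

```python
# Illustrative sketch: the V_2(x, y) predicate from the structure above,
# true iff y >= 1 is the largest power of 2 dividing x.

def v2(x: int, y: int) -> bool:
    if y < 1 or x <= 0:
        return False
    # y must itself be a power of 2 ...
    if y & (y - 1) != 0:
        return False
    # ... that divides x, while 2*y does not.
    return x % y == 0 and x % (2 * y) != 0

assert v2(12, 4)       # 12 = 4 * 3, and 8 does not divide 12
assert not v2(12, 2)   # 2 divides 12 but is not the largest such power
```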
Pub Date: 2023-07-01 | DOI: 10.4230/LIPIcs.ICALP.2023.73
I. Haviv
A subset of $[n] = \{1,2,\ldots,n\}$ is called stable if it forms an independent set in the cycle on the vertex set $[n]$. In 1978, Schrijver proved via a topological argument that for all integers $n$ and $k$ with $n \geq 2k$, the family of stable $k$-subsets of $[n]$ cannot be covered by $n-2k+1$ intersecting families. We study two total search problems whose totality relies on this result. In the first problem, denoted by $\mathsf{Schrijver}(n,k,m)$, we are given access to a coloring of the stable $k$-subsets of $[n]$ with $m = m(n,k)$ colors, where $m \leq n-2k+1$, and the goal is to find a pair of disjoint subsets that are assigned the same color. While for $m = n-2k+1$ the problem is known to be $\mathsf{PPA}$-complete, we prove that for $m < d \cdot \lfloor \frac{n}{2k+d-2} \rfloor$, with $d$ being any fixed constant, the problem admits an efficient algorithm. For $m = \lfloor n/2 \rfloor - 2k+1$, we prove that the problem is efficiently reducible to the $\mathsf{Kneser}$ problem. Motivated by the relation between the problems, we investigate the family of unstable $k$-subsets of $[n]$, which might be of independent interest. In the second problem, called Unfair Independent Set in Cycle, we are given $\ell$ subsets $V_1, \ldots, V_\ell$ of $[n]$, where $\ell \leq n-2k+1$ and $|V_i| \geq 2$ for all $i \in [\ell]$, and the goal is to find a stable $k$-subset $S$ of $[n]$ satisfying the constraints $|S \cap V_i| \leq |V_i|/2$ for $i \in [\ell]$. We prove that the problem is $\mathsf{PPA}$-complete and that its restriction to instances with $n=3k$ is at least as hard as the Cycle plus Triangles problem, for which no efficient algorithm is known. In contrast, we prove that there exists a constant $c$ for which the restriction of the problem to instances with $n \geq c \cdot k$ can be solved in polynomial time.
Title: "On Finding Constrained Independent Sets in Cycles" (ICALP 2023)
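The combinatorial objects in this abstract are easy to enumerate directly for small parameters (a toy sketch, unrelated to the search-problem algorithms in the paper):

```python
# A k-subset of [n] = {1,...,n} is stable if it is independent in the n-cycle,
# i.e. it contains no two cyclically adjacent elements.

from itertools import combinations

def is_stable(subset, n):
    s = set(subset)
    # Check every cycle edge {i, i+1} (with n wrapping to 1).
    return all(not (i in s and (i % n) + 1 in s) for i in range(1, n + 1))

def stable_k_subsets(n, k):
    return [c for c in combinations(range(1, n + 1), k) if is_stable(c, n)]

# For n = 2k the only stable k-subsets are the odd and the even positions.
assert stable_k_subsets(4, 2) == [(1, 3), (2, 4)]
```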
Pub Date: 2023-06-22 | DOI: 10.48550/arXiv.2306.13058
Pascal Baumann, Moses Ganardi, R. Majumdar, R. Thinniyam, Georg Zetzsche
In the language-theoretic approach to refinement verification, we check that every trace of an implementation belongs to the language of a specification. We consider the refinement verification problem for asynchronous programs against specifications given by a Dyck language. We show that this problem is EXPSPACE-complete -- the same complexity as language emptiness and as refinement verification against a regular specification. Our algorithm uses several technical ingredients. First, we show that checking if the coverability language of a succinctly described vector addition system with states (VASS) is contained in a Dyck language is EXPSPACE-complete. Second, in the more technical part of the proof, we define an ordering on words and show a downward closure construction that allows replacing the (context-free) language of each task in an asynchronous program by a regular language. Unlike downward closure operations usually considered in infinite-state verification, our ordering is not a well-quasi-ordering, and we have to construct the regular language ab initio. Once the tasks can be replaced, we show a reduction to an appropriate VASS and use our first ingredient. In addition to its inherent theoretical interest, refinement verification with Dyck specifications captures common practical resource-usage patterns based on reference counting, for which few algorithmic techniques were known.
Title: "Checking Refinement of Asynchronous Programs against Context-Free Specifications" (ICALP 2023)
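Membership in a Dyck language, the specification class used above, is easy to state concretely (this sketch checks single words only; it says nothing about the EXPSPACE refinement problem itself):

```python
# A Dyck language consists of well-nested words over bracket pairs; e.g. an
# acquire/release trace in reference counting is valid iff it is well-nested.

def in_dyck(word: str, pairs=(("(", ")"), ("[", "]"))) -> bool:
    opens = {o: c for o, c in pairs}   # open bracket -> expected close
    closes = {c for _, c in pairs}
    stack = []
    for ch in word:
        if ch in opens:
            stack.append(opens[ch])
        elif ch in closes:
            if not stack or stack.pop() != ch:
                return False
    return not stack                   # everything opened must be closed

assert in_dyck("([])()")
assert not in_dyck("([)]")
```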
Pub Date: 2023-05-30 | DOI: 10.48550/arXiv.2305.18861
I. Cohen, Debmalya Panigrahi
Online allocation is a broad class of problems in which items arriving online must be allocated to agents, each of whom has a fixed utility/cost for every assigned item, so as to maximize/minimize some objective. This framework captures a broad range of fundamental problems such as the Santa Claus problem (maximizing the minimum utility), Nash welfare maximization (maximizing the geometric mean of utilities), makespan minimization (minimizing the maximum cost), minimization of $\ell_p$-norms, and so on. We focus on divisible items (i.e., fractional allocations) in this paper. Even for divisible items, these problems are characterized by strong super-constant lower bounds in the classical worst-case online model. In this paper, we study online allocation in the \emph{learning-augmented} setting, i.e., where the algorithm has access to some additional (machine-learned) information about the problem instance. We introduce a \emph{general} algorithmic framework for learning-augmented online allocation that produces nearly optimal solutions for this broad range of maximization and minimization objectives using only a single learned parameter for every agent. As corollaries of our general framework, we improve prior results of Lattanzi et al. (SODA 2020) and Li and Xian (ICML 2021) for learning-augmented makespan minimization, and obtain the first learning-augmented nearly-optimal algorithms for the other objectives such as Santa Claus, Nash welfare, $\ell_p$-minimization, etc. We also give tight bounds on the resilience of our algorithms to errors in the learned parameters, and study the learnability of these parameters.
Title: "A General Framework for Learning-Augmented Online Allocation" (ICALP 2023)
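A deliberately crude caricature of the "single learned parameter per agent" idea: give each agent $i$ a learned weight $w_i$ and split every arriving divisible item in proportion to the weights. The function name, signature, and the proportional rule are all hypothetical illustrations, not the paper's algorithm:

```python
# Hypothetical sketch: fractional allocation driven by one learned weight per
# agent. utilities[i][j] is agent i's utility per unit of item j.

def allocate(items, utilities, w):
    n = len(w)
    total = [0.0] * n
    for j in items:
        z = sum(w)                      # normalize weights to shares
        for i in range(n):
            total[i] += (w[i] / z) * utilities[i][j]
    return total

# Two agents with equal learned weights split a single item 50/50.
assert allocate([0], [[2.0], [4.0]], [1.0, 1.0]) == [1.0, 2.0]
```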
Pub Date: 2023-05-24 | DOI: 10.48550/arXiv.2305.15489
Bader Abu Radi, O. Kupferman
A nondeterministic automaton is semantically deterministic (SD) if different nondeterministic choices in the automaton lead to equivalent states. Semantic determinism is interesting both as a natural relaxation of determinism and because some applications of deterministic automata in formal methods can actually use automata with some level of nondeterminism, tightly related to semantic determinism. In the context of finite words, semantic determinism coincides with determinism, in the sense that every pruning of an SD automaton to a deterministic one results in an equivalent automaton. We study SD automata on infinite words, focusing on Büchi, co-Büchi, and weak automata. We show that, while semantic determinism does not increase the expressive power there, the combinatorial and computational properties of SD automata are very different from those of deterministic automata. In particular, SD Büchi and co-Büchi automata are exponentially more succinct than deterministic ones (in fact, also exponentially more succinct than history-deterministic automata), their complementation involves an exponential blow-up, and decision procedures for them, such as universality and minimization, are PSPACE-complete. For weak automata, we show that although an SD weak automaton cannot necessarily be pruned to an equivalent deterministic one, it can be determinized to an equivalent deterministic weak automaton with the same state space, which also implies efficient complementation and decision procedures for SD weak automata.
Title: "On Semantically-Deterministic Automata" (ICALP 2023)
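For the finite-word case mentioned above, the SD condition can be checked directly: for every state and letter, all nondeterministic successors must accept the same language. A toy sketch for small NFAs (assumed representation: a transition dict, a set of accepting states):

```python
# Semantic determinism for NFAs over finite words: every pair of a-successors
# of a state must be language-equivalent. Equivalence is checked with a
# product subset construction (fine for toy automata).

from itertools import product as iproduct

def equivalent(delta, accepting, alphabet, p, q):
    """Do NFA states p and q accept the same finite-word language?"""
    seen, frontier = set(), [(frozenset([p]), frozenset([q]))]
    while frontier:
        A, B = frontier.pop()
        if (A, B) in seen:
            continue
        seen.add((A, B))
        if bool(A & accepting) != bool(B & accepting):
            return False               # one accepts a word the other rejects
        for a in alphabet:
            A2 = frozenset(s for u in A for s in delta.get((u, a), ()))
            B2 = frozenset(s for u in B for s in delta.get((u, a), ()))
            frontier.append((A2, B2))
    return True

def is_sd(delta, accepting, alphabet, states):
    return all(equivalent(delta, accepting, alphabet, s1, s2)
               for (q, a) in iproduct(states, alphabet)
               for succ in [delta.get((q, a), ())]
               for s1 in succ for s2 in succ)

# State 0 branches to 1 and 2; the NFA is SD iff 1 and 2 are equivalent.
assert is_sd({(0, 'a'): (1, 2)}, {1, 2}, ['a'], [0, 1, 2])
assert not is_sd({(0, 'a'): (1, 2)}, {1}, ['a'], [0, 1, 2])
```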
Pub Date: 2023-05-22 | DOI: 10.48550/arXiv.2305.13089
Pan Peng, Yuyang Wang
We revisit the relation between two fundamental property testing models for bounded-degree directed graphs: the bidirectional model, in which the algorithms are allowed to query both the outgoing and the incoming edges of a vertex, and the unidirectional model, in which only queries to the outgoing edges are allowed. Czumaj, Peng and Sohler [STOC 2016] showed that for directed graphs with both maximum indegree and maximum outdegree bounded by $d$, any property that can be tested with query complexity $O_{\varepsilon,d}(1)$ in the bidirectional model can be tested with $n^{1-\Omega_{\varepsilon,d}(1)}$ queries in the unidirectional model. In particular, as the proximity parameter $\varepsilon$ approaches $0$, the query complexity of the transformed tester in the unidirectional model approaches $n$. It was left open whether this transformation can be further improved, and whether any property exhibits such an extreme separation. We prove that testing subgraph-freeness, where the subgraph contains $k$ source components, requires $\Omega(n^{1-\frac{1}{k}})$ queries in the unidirectional model. This directly gives the first explicit properties that exhibit an $O_{\varepsilon,d}(1)$ vs. $\Omega(n^{1-f(\varepsilon,d)})$ separation of the query complexities between the bidirectional and unidirectional models, where $f(\varepsilon,d)$ is a function that approaches $0$ as $\varepsilon$ approaches $0$. Furthermore, our lower bound also resolves a conjecture by Hellweg and Sohler [ESA 2012] on the query complexity of testing $k$-star-freeness.
Title: "An Optimal Separation between Two Property Testing Models for Bounded Degree Directed Graphs" (ICALP 2023)
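To make the unidirectional query model concrete: a tester may only ask for the out-edges of vertices it samples, so an in-star can be detected only through collisions among sampled sources. This is a hypothetical illustration of the access model, not the paper's tester or lower-bound construction:

```python
# Unidirectional access sketch: sample vertices, query their out-edges only,
# and report whether some vertex collects k in-edges among the samples
# (a witness for a k-star).

import random
from collections import Counter

def find_k_star(out_edges, k, samples, rng=random.Random(0)):
    hits = Counter()
    for u in rng.sample(list(out_edges), samples):   # out-edge queries only
        for v in out_edges[u]:
            hits[v] += 1
    return any(c >= k for c in hits.values())

# Vertices 1, 2, 3 all point at 0, forming a 3-star centered at 0.
g = {1: [0], 2: [0], 3: [0], 0: []}
assert find_k_star(g, 3, samples=4)        # sampling everything finds it
assert not find_k_star(g, 4, samples=4)    # no 4-star exists
```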
Pub Date: 2023-05-05 | DOI: 10.4230/LIPIcs.ICALP.2023.129
Diptarka Chakraborty, Sourav Chakraborty, G. Kumar, Kuldeep S. Meel
Given a Boolean formula $\phi$ over $n$ variables, the problem of model counting is to compute the number of solutions of $\phi$. Model counting is a fundamental problem in computer science with wide-ranging applications. Owing to the #P-hardness of the problem, Stockmeyer initiated the study of the complexity of approximate counting, and showed that $\log n$ calls to an NP oracle are necessary and sufficient to achieve $(\varepsilon,\delta)$ guarantees. The hashing-based framework proposed by Stockmeyer has been very influential in designing practical counters over the past decade, wherein a SAT solver substitutes for the NP oracle calls in practice. It is well known that an NP oracle does not fully capture the behavior of SAT solvers, as SAT solvers are also designed to provide satisfying assignments when a formula is satisfiable, without additional overhead. Accordingly, the notion of a SAT oracle has been proposed to capture the behavior of SAT solvers: given a Boolean formula, a SAT oracle returns a satisfying assignment if the formula is satisfiable, and returns unsatisfiable otherwise. Since practical state-of-the-art approximate counting techniques use SAT solvers, a natural question is whether a SAT oracle is more powerful than an NP oracle in the context of approximate model counting. The primary contribution of this work is to study the relative power of the NP oracle and the SAT oracle in this context. The techniques previously developed for the NP oracle are too weak to provide strong bounds for the SAT oracle since, in contrast to an NP oracle, which provides only one bit of information per call, a SAT oracle can provide $n$ bits of information. We therefore develop a new methodology to achieve the main result: a SAT oracle is no more powerful than an NP oracle in the context of approximate model counting.
Title: "Approximate Model Counting: Is SAT Oracle More Powerful than NP Oracle?" (ICALP 2023; the scraped record mislabels this entry as "Regular Methods for Operator Precedence Languages", a different ICALP 2023 paper)
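The hashing idea behind Stockmeyer's framework can be sketched in miniature: conjoin random XOR (parity) constraints onto the formula and ask a satisfiability oracle whether it survives; the number of constraints a formula typically survives estimates $\log_2$ of its model count. A brute-force enumerator stands in for the oracle here; this is an illustration of the general framework, not this paper's construction:

```python
# Toy Stockmeyer-style hashing: each random XOR constraint roughly halves the
# solution set, so survival under m constraints suggests >= 2^m models.

import random
from itertools import product

def models(f, n):
    """All satisfying assignments of f over n Boolean variables."""
    return [v for v in product([0, 1], repeat=n) if f(v)]

def survives(f, n, m, rng):
    """Is f still satisfiable after conjoining m random parity constraints?"""
    cons = [([rng.randrange(2) for _ in range(n)], rng.randrange(2))
            for _ in range(m)]
    def g(v):
        return f(v) and all(sum(a * x for a, x in zip(c, v)) % 2 == b
                            for c, b in cons)
    return bool(models(g, n))          # brute-force stand-in for the oracle

f = lambda v: v[0] == 1                # 2^(n-1) = 32 models over n = 6 vars
assert len(models(f, 6)) == 32
assert survives(f, 6, 0, random.Random(0))   # zero constraints: satisfiable
```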
Pub Date: 2023-05-05 | DOI: 10.48550/arXiv.2305.03697
Davide Bilò, Keerti Choudhary, S. Cohen, T. Friedrich, Simon Krogmann, Martin Schirneck
We study the problem of estimating the $ST$-diameter of a graph that is subject to a bounded number of edge failures. An $f$-edge fault-tolerant $ST$-diameter oracle ($f$-FDO-$ST$) is a data structure that preprocesses a given graph $G$, two sets of vertices $S,T$, and a positive integer $f$. When queried with a set $F$ of at most $f$ edges, the oracle returns an estimate $\widehat{D}$ of the $ST$-diameter $\operatorname{diam}(G-F,S,T)$, the maximum distance between vertices in $S$ and $T$ in $G-F$. The oracle has stretch $\sigma \geq 1$ if $\operatorname{diam}(G-F,S,T) \leq \widehat{D} \leq \sigma \operatorname{diam}(G-F,S,T)$. If $S$ and $T$ both contain all vertices, the data structure is called an $f$-edge fault-tolerant diameter oracle ($f$-FDO). An $f$-edge fault-tolerant distance sensitivity oracle ($f$-DSO) estimates the pairwise graph distances under up to $f$ failures. We design new $f$-FDOs and $f$-FDO-$ST$s by reducing their construction to that of all-pairs and single-source $f$-DSOs. By combining our black-box reductions with known results from the literature, we obtain several new tradeoffs between the size of the data structure, the stretch guarantee, and the query and preprocessing times of diameter oracles. We also provide an information-theoretic lower bound on the space requirement of approximate $f$-FDOs. We show that there exists a family of graphs for which any $f$-FDO with sensitivity $f \geq 2$ and stretch less than $5/3$ requires $\Omega(n^{3/2})$ bits of space, regardless of the query time.
Title: "Fault-Tolerant ST-Diameter Oracles" (ICALP 2023)
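The quantity $\operatorname{diam}(G-F,S,T)$ that these oracles estimate can be computed directly by BFS after deleting $F$; a sketch follows (the point of the paper's data structures is precisely to avoid this per-query recomputation):

```python
# Direct computation of diam(G - F, S, T): the maximum distance from a vertex
# of S to a vertex of T in the undirected graph G with edge set F removed.

from collections import deque

def st_diameter(n, edges, S, T, F=frozenset()):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        if (u, v) not in F and (v, u) not in F:   # drop failed edges
            adj[u].append(v)
            adj[v].append(u)
    def bfs(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist
    best = 0
    for s in S:
        d = bfs(s)
        for t in T:
            best = max(best, d.get(t, float("inf")))  # inf if disconnected
    return best

# 4-cycle 0-1-2-3-0: deleting edge (1,2) stretches the 1-to-2 distance to 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert st_diameter(4, edges, {1}, {2}) == 1
assert st_diameter(4, edges, {1}, {2}, F={(1, 2)}) == 3
```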
Pub Date: 2023-05-04 | DOI: 10.48550/arXiv.2305.02508
Sharat Ibrahimpur, Manish Purohit, Zoya Svitkina, Erik Vee, Joshua R. Wang
Online caching is among the most fundamental and well-studied problems in the area of online algorithms. Innovative algorithmic ideas and analysis -- including potential functions and primal-dual techniques -- give insight into this still-growing area. Here, we introduce a new analysis technique that first uses a potential function to upper bound the cost of an online algorithm and then pairs that with a new dual-fitting strategy to lower bound the cost of an offline optimal algorithm. We apply these techniques to the Caching with Reserves problem recently introduced by Ibrahimpur et al. [10] and give an O(log k)-competitive fractional online algorithm via a marking strategy, where k denotes the size of the cache. We also design a new online rounding algorithm that runs in polynomial time to obtain an O(log k)-competitive randomized integral algorithm. Additionally, we provide a new, simple proof for randomized marking for the classical unweighted paging problem.
{"title":"Efficient Caching with Reserves via Marking","authors":"Sharat Ibrahimpur, Manish Purohit, Zoya Svitkina, Erik Vee, Joshua R. Wang","doi":"10.48550/arXiv.2305.02508","DOIUrl":"https://doi.org/10.48550/arXiv.2305.02508","url":null,"abstract":"Online caching is among the most fundamental and well-studied problems in the area of online algorithms. Innovative algorithmic ideas and analysis -- including potential functions and primal-dual techniques -- give insight into this still-growing area. Here, we introduce a new analysis technique that first uses a potential function to upper bound the cost of an online algorithm and then pairs that with a new dual-fitting strategy to lower bound the cost of an offline optimal algorithm. We apply these techniques to the Caching with Reserves problem recently introduced by Ibrahimpur et al. [10] and give an O(log k)-competitive fractional online algorithm via a marking strategy, where k denotes the size of the cache. We also design a new online rounding algorithm that runs in polynomial time to obtain an O(log k)-competitive randomized integral algorithm. Additionally, we provide a new, simple proof for randomized marking for the classical unweighted paging problem.","PeriodicalId":266158,"journal":{"name":"International Colloquium on Automata, Languages and Programming","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130588071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-05-04DOI: 10.48550/arXiv.2305.02566
Ruizhe Zhang, Xinzhi Zhang
In 2013, Marcus, Spielman, and Srivastava resolved the famous Kadison-Singer conjecture. It states that for $n$ independent random vectors $v_1,\ldots,v_n$ that have expected squared norm bounded by $\epsilon$ and are isotropic in expectation, there is a positive probability that the roots of the determinant polynomial $\det(xI - \sum_{i=1}^n v_iv_i^\top)$ are bounded by $(1+\sqrt{\epsilon})^2$. One interpretation of the Kadison-Singer theorem is that the vectors $v_1,\ldots,v_n$ can always be partitioned into two sets with low discrepancy in spectral norm (a statement read off from the determinant polynomial). In this paper, we prove two results for a broader class of polynomials, the hyperbolic polynomials, in two generalized settings: (1) the Kadison-Singer result holds under the weaker assumption that the vectors have a bounded sum of hyperbolic norms; (2) the distribution assumption of the Kadison-Singer result can be relaxed to Strongly Rayleigh distributions. To the best of our knowledge, previous results support only determinant polynomials [Anari and Oveis Gharan'14; Kyng, Luh and Song'20], and it is unclear whether they generalize to broader classes of polynomials. In addition, we provide a sub-exponential time algorithm for constructing our results.
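The discrepancy reading of the theorem can be made concrete in a toy setting: for two-dimensional vectors, brute-force the signing $\epsilon \in \{\pm 1\}^n$ that minimises the spectral norm of $\sum_i \epsilon_i v_iv_i^\top$. This is purely illustrative (the paper's sub-exponential algorithm and hyperbolic-polynomial setting are something else entirely); the closed-form 2x2 eigenvalue formula is standard.

```python
import itertools, math

def spectral_norm_2x2(a, b, c):
    # Spectral norm of the symmetric matrix [[a, b], [b, c]]:
    # eigenvalues are (a+c)/2 +- sqrt(((a-c)/2)^2 + b^2).
    m, r = (a + c) / 2, math.hypot((a - c) / 2, b)
    return max(abs(m + r), abs(m - r))

def best_partition(vs):
    """Brute-force the signing eps in {+1,-1}^n minimising
    || sum_i eps_i * v_i v_i^T ||, for 2-dimensional vectors vs."""
    best = (float("inf"), None)
    for eps in itertools.product((1, -1), repeat=len(vs)):
        a = sum(e * x * x for e, (x, y) in zip(eps, vs))
        b = sum(e * x * y for e, (x, y) in zip(eps, vs))
        c = sum(e * y * y for e, (x, y) in zip(eps, vs))
        best = min(best, (spectral_norm_2x2(a, b, c), eps))
    return best

# Four vectors of squared norm ~1/2 summing to ~identity (isotropic):
# a perfect split into two balanced halves achieves zero discrepancy.
s = 0.7071
norm, eps = best_partition([(s, 0), (0, s), (s, 0), (0, s)])
print(norm, eps)
```

Kadison-Singer guarantees that some signing keeps this norm small (of order $\sqrt{\epsilon}$ times the total), which is exactly what the root bound on the determinant polynomial encodes.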
{"title":"A Hyperbolic Extension of Kadison-Singer Type Results","authors":"Ruizhe Zhang, Xinzhi Zhang","doi":"10.48550/arXiv.2305.02566","DOIUrl":"https://doi.org/10.48550/arXiv.2305.02566","url":null,"abstract":"In 2013, Marcus, Spielman, and Srivastava resolved the famous Kadison-Singer conjecture. It states that for $n$ independent random vectors $v_1,cdots, v_n$ that have expected squared norm bounded by $epsilon$ and are in the isotropic position in expectation, there is a positive probability that the determinant polynomial $det(xI - sum_{i=1}^n v_iv_i^top)$ has roots bounded by $(1 + sqrt{epsilon})^2$. An interpretation of the Kadison-Singer theorem is that we can always find a partition of the vectors $v_1,cdots,v_n$ into two sets with a low discrepancy in terms of the spectral norm (in other words, rely on the determinant polynomial). In this paper, we provide two results for a broader class of polynomials, the hyperbolic polynomials. Furthermore, our results are in two generalized settings: $bullet$ The first one shows that the Kadison-Singer result requires a weaker assumption that the vectors have a bounded sum of hyperbolic norms. $bullet$ The second one relaxes the Kadison-Singer result's distribution assumption to the Strongly Rayleigh distribution. To the best of our knowledge, the previous results only support determinant polynomials [Anari and Oveis Gharan'14, Kyng, Luh and Song'20]. It is unclear whether they can be generalized to a broader class of polynomials. 
In addition, we also provide a sub-exponential time algorithm for constructing our results.","PeriodicalId":266158,"journal":{"name":"International Colloquium on Automata, Languages and Programming","volume":"25 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130928204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}