We prove near-optimal trade-offs for quantifier depth (also called quantifier rank) versus number of variables in first-order logic by exhibiting pairs of $n$-element structures that can be distinguished by a $k$-variable first-order sentence but where every such sentence requires quantifier depth at least $n^{\Omega(k/\log k)}$. Our trade-offs also apply to first-order counting logic, and by the known connection to the $k$-dimensional Weisfeiler–Leman algorithm imply near-optimal lower bounds on the number of refinement iterations.
A key component in our proof is the hardness condensation technique introduced by [Razborov ’16] in the context of proof complexity. We apply this method to reduce the domain size of relational structures while maintaining the minimal quantifier depth needed to distinguish them in finite-variable logics.
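As a point of reference for the Weisfeiler–Leman connection above, here is a minimal sketch (our own illustration, not code from the paper; the function and variable names are ours) of the 1-dimensional case, color refinement, together with its iteration count. The paper's lower bounds concern this iteration count for the $k$-dimensional generalization.

```python
def color_refinement(adj):
    """1-dimensional Weisfeiler-Leman (color refinement) on a graph
    given as an adjacency dict {vertex: set_of_neighbors}. Returns
    the stable coloring and the number of refinement iterations.
    Illustrative sketch only."""
    colors = {v: 0 for v in adj}  # start from the uniform coloring
    iterations = 0
    while True:
        # New color of v = old color of v plus the multiset of its
        # neighbors' colors, canonically renamed to small integers.
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        renaming = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_colors = {v: renaming[sig[v]] for v in adj}
        if new_colors == colors:  # the partition is stable
            return colors, iterations
        colors, iterations = new_colors, iterations + 1

# Classic example: a 6-cycle and two disjoint triangles are both
# 2-regular, so color refinement (1-WL) cannot distinguish them.
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
assert color_refinement(cycle6)[0] == {v: 0 for v in cycle6}
assert color_refinement(triangles)[0] == {v: 0 for v in triangles}
```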
The states of a finite-state automaton $\mathcal{N}$ can be identified with collections of words in the prefix closure of the regular language accepted by $\mathcal{N}$. But words can be ordered, and among the many possible orders a very natural one is the co-lexicographic order. Such naturalness stems from the fact that it suggests transferring the order from words to the automaton’s states. This suggestion is, in fact, concrete, and in a number of papers automata admitting a total co-lexicographic (co-lex, for brevity) ordering of their states have been proposed and studied. This class of ordered automata — Wheeler automata — turned out to require just a constant number of bits per transition to be represented and to support regular expression matching queries in constant time per matched character.
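To make the order concrete: comparing two words co-lexicographically amounts to reading them right to left, i.e., lexicographically comparing their reversals. A tiny illustration (ours, not the paper's):

```python
def colex_key(word: str) -> str:
    # Co-lex order reads words right to left, so it coincides with
    # the lexicographic order of the reversed words.
    return word[::-1]

words = ["ab", "ba", "aa", "b", "a"]
print(sorted(words))                 # lexicographic: ['a', 'aa', 'ab', 'b', 'ba']
print(sorted(words, key=colex_key))  # co-lex:        ['a', 'aa', 'ba', 'ab', 'b']
```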
Unfortunately, not all automata can be totally ordered as previously outlined. In the present work, we lay out a new theory showing that all automata can always be partially ordered, and an intrinsic measure of their complexity can be defined and effectively determined, namely, the minimum width $p$ of one of their admissible co-lex partial orders—dubbed here the automaton’s co-lex width. We first show that this new measure captures at once the complexity of several seemingly unrelated hard problems on automata. Any NFA of co-lex width $p$: (i) has an equivalent powerset DFA whose size is exponential in $p$ rather than (as a classic analysis shows) in the NFA’s size; (ii) can be encoded using just $\Theta(\log p)$ bits per transition; (iii) admits a linear-space data structure solving regular expression matching queries in time proportional to $p^2$ per matched character. Some consequences of this new parameterization of automata are that PSPACE-hard problems such as NFA equivalence are FPT in $p$, and quadratic lower bounds for the regular expression matching problem do not hold for sufficiently small $p$.
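For intuition about the parameter itself: once a co-lex partial order on the states is fixed, its width is the minimum number of chains covering it, which by Dilworth's theorem can be computed via bipartite matching. The sketch below is our own (the name `order_width` and the input format are hypothetical); it only computes the width of a given, transitively closed order and says nothing about the harder task of searching over admissible co-lex orders.

```python
def order_width(n, above):
    """Width of a partial order on elements 0..n-1, where above[u] is
    the set of elements strictly greater than u (transitively closed).
    By Dilworth's theorem, width = minimum number of covering chains
    = n - size of a maximum matching in the split bipartite graph
    with an edge (u, v') whenever u < v. Illustrative sketch."""
    match = {}  # right vertex v' -> left vertex currently matched to it

    def augment(u, seen):
        for v in above[u]:
            if v not in seen:
                seen.add(v)
                # Take v if it is free, or if its partner can be rematched.
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    matched = sum(augment(u, set()) for u in range(n))
    return n - matched

assert order_width(3, [set(), set(), set()]) == 3  # antichain: width 3
assert order_width(3, [{1, 2}, {2}, set()]) == 1   # chain 0 < 1 < 2: width 1
```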
Having established that the co-lex width of an automaton is a fundamental complexity measure, we proceed by (i) determining its computational complexity and (ii) extending this notion from automata to regular languages by studying their smallest-width accepting NFAs and DFAs. In this work we focus on the deterministic case and prove that a canonical minimum-width DFA accepting a language $\mathcal{L}$—dubbed the Hasse automaton $\mathcal{H}$ of $\mathcal{L}$—can be exhibited. $\mathcal{H}$ provides, in a precise sense, the best possible way to (partially) order the states of any DFA accepting $\mathcal{L}$, as long as we want to maintain an operational link with the (co-lexicographic) order of $\mathcal{L}$’s prefixes. Finally, we explore the relationship between two conflicting objectives: minimizing the width and minimizing the number of states of a DFA. In this context, we provide an analogue of the Myhill-Nerode Theorem for co-lexicographically ordered regular languages.
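For contrast with the width-based analogue mentioned above, the classical Myhill-Nerode construction can be sketched as partition refinement (Moore's algorithm); this is the state-minimization baseline that the Hasse automaton trades off against. The sketch assumes a total transition function and all states reachable; the names are ours, not the paper's.

```python
def myhill_nerode_classes(states, alphabet, delta, accepting):
    """Moore-style partition refinement: returns, for every state, the
    index of its Myhill-Nerode equivalence class. delta[(q, a)] is the
    successor of state q on letter a (assumed total). Illustrative."""
    # Start from the accepting / non-accepting split, then refine by
    # where each letter sends the current blocks.
    part = {q: int(q in accepting) for q in states}
    while True:
        sig = {q: (part[q], tuple(part[delta[(q, a)]] for a in alphabet))
               for q in states}
        blocks = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_part = {q: blocks[sig[q]] for q in states}
        if new_part == part:  # no block was split: partition is stable
            return part
        part = new_part
```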
We consider the problem of deciding the existence of real roots of real-valued exponential polynomials with algebraic coefficients. Such functions arise as solutions of linear differential equations with real algebraic coefficients. We focus on two problems: the Zero Problem, which asks whether an exponential polynomial has a real root, and the Infinite Zeros Problem, which asks whether such a function has infinitely many real roots. Our main result is that for differential equations of order at most 8 the Zero Problem is decidable, subject to Schanuel’s Conjecture, whilst the Infinite Zeros Problem is decidable unconditionally. We show moreover that a decision procedure for the Infinite Zeros Problem at order 9 would yield an algorithm for computing the Lagrange constant of any given real algebraic number to arbitrary precision, indicating that it will be very difficult to extend our decidability results to higher orders.
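As a purely numerical illustration of the objects involved (emphatically not the paper's decision procedure, which must decide the existence of roots exactly): an exponential polynomial is a finite sum of terms $P(t)e^{\lambda t}$, and a sign change certifies a real root. The example function and all names below are our own.

```python
import math

def f(t):
    # An illustrative exponential polynomial: e^t - 2e^(-t) + t*cos(t).
    # (t*cos(t) qualifies, since cos(t) = (e^{it} + e^{-it}) / 2.)
    return math.exp(t) - 2 * math.exp(-t) + t * math.cos(t)

def sign_change_root(f, lo, hi, steps=10_000, tol=1e-12):
    """Scan [lo, hi] for a sign change, then bisect. Numerics only:
    this can miss roots and can never prove that none exist, which is
    exactly why the Zero Problem is delicate."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    for a, b in zip(xs, xs[1:]):
        if f(a) * f(b) <= 0:  # sign change (or an exact zero) in [a, b]
            while b - a > tol:
                m = (a + b) / 2
                a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
            return (a + b) / 2
    return None

print(sign_change_root(f, -5.0, 5.0))  # prints a real root of f
```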
Large-scale, two-sided matching platforms must find market outcomes that align with user preferences while simultaneously learning these preferences from data. Classical notions of stability (Gale and Shapley, 1962; Shapley and Shubik, 1971) are, unfortunately, of limited value in the learning setting, given that preferences are inherently uncertain and destabilizing while they are being learned. To bridge this gap, we develop a framework and algorithms for learning stable market outcomes under uncertainty. Our primary setting is matching with transferable utilities, where the platform both matches agents and sets monetary transfers between them. We design an incentive-aware learning objective that captures the distance of a market outcome from equilibrium. Using this objective, we analyze the complexity of learning as a function of preference structure, casting learning as a stochastic multi-armed bandit problem. Algorithmically, we show that “optimism in the face of uncertainty,” the principle underlying many bandit algorithms, applies to a primal-dual formulation of matching with transfers and leads to near-optimal regret bounds. Our work takes a first step toward elucidating when and how stable matchings arise in large, data-driven marketplaces.
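A minimal sketch of the optimism principle in this setting (our own illustration, not the paper's primal-dual algorithm or its incentive-aware objective; `ucb_matching` and its parameters are hypothetical): maintain upper confidence bounds on each pair's match utility, commit each round to the max-weight matching under the optimistic estimates, and update only from the noisy feedback on matched pairs.

```python
import itertools, math, random

def ucb_matching(true_utility, horizon, c=2.0, noise=0.1):
    """Optimism in the face of uncertainty for matching with unknown
    utilities: each round, build UCBs for every (i, j) pair, play the
    max-weight matching under them (brute force, so keep n small),
    then observe noisy utilities for the matched pairs. Sketch only."""
    n = len(true_utility)
    counts = [[0] * n for _ in range(n)]
    means = [[0.0] * n for _ in range(n)]
    for t in range(1, horizon + 1):
        # Optimistic estimate = empirical mean + confidence radius;
        # unexplored pairs get an infinite bonus, forcing exploration.
        ucb = [[math.inf if counts[i][j] == 0 else
                means[i][j] + c * math.sqrt(math.log(t) / counts[i][j])
                for j in range(n)] for i in range(n)]
        best = max(itertools.permutations(range(n)),
                   key=lambda p: sum(ucb[i][p[i]] for i in range(n)))
        for i, j in enumerate(best):
            reward = true_utility[i][j] + random.gauss(0.0, noise)
            counts[i][j] += 1
            means[i][j] += (reward - means[i][j]) / counts[i][j]
    return means

estimates = ucb_matching([[1.0, 0.2], [0.3, 0.8]], horizon=2000)
```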