We prove near-optimal trade-offs for quantifier depth (also called quantifier rank) versus number of variables in first-order logic by exhibiting pairs of n-element structures that can be distinguished by a k-variable first-order sentence but where every such sentence requires quantifier depth at least n^{Ω(k/log k)}. Our trade-offs also apply to first-order counting logic and, by the known connection to the k-dimensional Weisfeiler–Leman algorithm, imply near-optimal lower bounds on the number of refinement iterations.
A key component in our proof is the hardness condensation technique introduced by [Razborov ’16] in the context of proof complexity. We apply this method to reduce the domain size of relational structures while maintaining the minimal quantifier depth needed to distinguish them in finite-variable logics.
We consider the problem of deciding the existence of real roots of real-valued exponential polynomials with algebraic coefficients. Such functions arise as solutions of linear differential equations with real algebraic coefficients. We focus on two problems: the Zero Problem, which asks whether an exponential polynomial has a real root, and the Infinite Zeros Problem, which asks whether such a function has infinitely many real roots. Our main result is that for differential equations of order at most 8, the Zero Problem is decidable, subject to Schanuel’s Conjecture, whilst the Infinite Zeros Problem is decidable unconditionally. We show moreover that a decision procedure for the Infinite Zeros Problem at order 9 would yield an algorithm for computing the Lagrange constant of any given real algebraic number to arbitrary precision, indicating that it will be very difficult to extend our decidability results to higher orders.
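For orientation, a standard normal form (our gloss, not quoted from the paper): a real-valued exponential polynomial of this kind can be written as

f(t) = Σ_j e^{a_j t} ( P_j(t) cos(b_j t) + Q_j(t) sin(b_j t) ),

where the a_j and b_j are real algebraic numbers, with a_j + i·b_j ranging over the roots of the characteristic polynomial of the differential equation, and the P_j, Q_j are polynomials with real algebraic coefficients.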
Large-scale, two-sided matching platforms must find market outcomes that align with user preferences while simultaneously learning these preferences from data. Classical notions of stability (Gale and Shapley, 1962; Shapley and Shubik, 1971) are, unfortunately, of limited value in the learning setting, given that preferences are inherently uncertain and destabilizing while they are being learned. To bridge this gap, we develop a framework and algorithms for learning stable market outcomes under uncertainty. Our primary setting is matching with transferable utilities, where the platform both matches agents and sets monetary transfers between them. We design an incentive-aware learning objective that captures the distance of a market outcome from equilibrium. Using this objective, we analyze the complexity of learning as a function of preference structure, casting learning as a stochastic multi-armed bandit problem. Algorithmically, we show that “optimism in the face of uncertainty,” the principle underlying many bandit algorithms, applies to a primal-dual formulation of matching with transfers and leads to near-optimal regret bounds. Our work takes a first step toward elucidating when and how stable matchings arise in large, data-driven marketplaces.
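For context, a minimal sketch of the classical primal-dual view that underlies this approach (following Shapley and Shubik, 1971, rather than the paper’s exact notation): with utility u_{ij} for matching agent i on one side to agent j on the other, the primal program maximizes Σ_{i,j} u_{ij} x_{ij} subject to Σ_j x_{ij} ≤ 1 for all i, Σ_i x_{ij} ≤ 1 for all j, and x_{ij} ≥ 0, while the dual assigns agent utilities p_i, q_j minimizing Σ_i p_i + Σ_j q_j subject to p_i + q_j ≥ u_{ij}. Stable outcomes with transfers correspond exactly to pairs of optimal primal and dual solutions, which is what makes optimistic (upper-confidence) estimates of the unknown u_{ij} a natural fit for a primal-dual learning algorithm.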
We introduce the abstract notion of a chain: a sequence of n points in the plane, ordered by x-coordinates, such that the edge between any two consecutive points is unavoidable, i.e., appears in every triangulation. A general theory of the structural properties of chains is developed, alongside a general understanding of their number of triangulations.
We also describe an intriguing new and concrete configuration, which we call the Koch chain due to its similarities to the Koch curve. A specific construction based on Koch chains is then shown to have Ω(9.08^n) triangulations. This is a significant improvement over the previous, long-standing lower bound of Ω(8.65^n) for the maximum number of triangulations of planar point sets.
Characterizing the algorithmic behaviors that are in principle realizable in a chemical system is necessary for a rigorous understanding of the design principles of biological regulatory networks. Further, advances in synthetic biology herald the time when we will be able to rationally engineer complex chemical systems and when idealized formal models will become blueprints for engineering.
Coupled chemical interactions in a well-mixed solution are commonly formalized as chemical reaction networks (CRNs). However, despite the widespread use of CRNs in the natural sciences, the range of computational behaviors they exhibit is not well understood. Here, we study the following problem: what functions f : ℝ^k → ℝ can be computed by a CRN that eventually produces the correct amount of the “output” molecule, no matter the rates at which its reactions proceed? This captures a previously unexplored but very natural class of computations: for example, the reaction X1 + X2 → Y can be thought of as computing the function y = min(x1, x2). Such a CRN is robust in the sense that it is correct whether its evolution is governed by the standard model of mass-action kinetics, by alternatives such as Hill-function or Michaelis-Menten kinetics, or by other arbitrary models of chemistry that respect the (fundamentally digital) stoichiometric constraints (what are the reactants and products?).
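A minimal simulation sketch (ours, not code from the paper; the function name and step rule are illustrative) of why this example is rate-independent: we fix only the stoichiometry of X1 + X2 → Y and let the reaction advance by arbitrary amounts over time, yet the output still converges to min(x1, x2).

```python
import random

def simulate_min(x1, x2, steps=10_000):
    """Drive X1 + X2 -> Y under an arbitrary (adversarially random) rate schedule.

    Only the stoichiometry is fixed: each firing consumes equal amounts of
    X1 and X2 and produces the same amount of Y. The random step sizes
    below stand in for an arbitrary rate law varying over time.
    """
    y = 0.0
    for _ in range(steps):
        # Advance the reaction by an arbitrary fraction of the remaining
        # possible extent, which is limited by the scarcer reactant.
        extent = random.random() * min(x1, x2)
        x1 -= extent
        x2 -= extent
        y += extent
    return y

if __name__ == "__main__":
    print(simulate_min(3.0, 5.0))  # approaches min(3.0, 5.0) = 3.0
```

The only assumption is fairness: the reaction keeps making progress while both reactants remain, mirroring the convergence requirement placed on generalized rate laws.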
We develop a reachability relation based on a broad notion of “what could happen” if reaction rates can vary arbitrarily over time. Using reachability, we define stable computation analogously to probability-1 computation in distributed computing and connect it with a seemingly stronger notion of rate-independent computation based on convergence in the limit t → ∞ under a wide class of generalized rate laws. Besides the direct mapping of a concentration to a nonnegative analog value, we also consider the “dual-rail representation,” which can represent negative values as the difference of two concentrations and allows the composition of CRN modules. We prove that a function is rate-independently computable if and only if it is piecewise linear with rational coefficients and, in addition, continuous (dual-rail representation) or non-negative with discontinuities occurring only when some input switches from zero to positive (direct representation). The many contexts in which continuous piecewise linear functions are powerful targets for implementation, combined with the systematic construction we develop for computing these functions, demonstrate the potential of rate-independent chemical computation.
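As a concrete instance of this characterization (a standard construction in this line of work, sketched here rather than taken verbatim from the paper): max(x1, x2) is continuous, non-negative, and piecewise linear, and it is computed rate-independently by the reactions

X1 → Y + Z1
X2 → Y + Z2
Z1 + Z2 → K
K + Y → ∅

Here Y first accumulates x1 + x2, the species Z1 and Z2 record copies of the inputs, K converges to min(x1, x2), and the final reaction subtracts it, so the output converges to x1 + x2 − min(x1, x2) = max(x1, x2).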