We introduce proper display calculi for intuitionistic, bi-intuitionistic, and classical linear logics with exponentials, which are sound, complete, and conservative, and enjoy cut elimination and the subformula property. Based on the same design, we introduce a variant of the Lambek calculus with exponentials, aimed at capturing the controlled application of exchange and associativity. Properness (i.e., closure under uniform substitution of all parametric parts in rules) is the main technical novelty of the present proposal, allowing both for a particularly smooth proof of cut elimination and for the development of an overarching and modular treatment of a vast class of axiomatic extensions and expansions of intuitionistic, bi-intuitionistic, and classical linear logics with exponentials. Our proposal builds on an algebraic and order-theoretic analysis of linear logic and applies the guidelines of the multi-type methodology in the design of display calculi.
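For orientation, here is an illustrative sketch (our own, not a rule taken from the paper) of the display mechanism at work. Display calculi manipulate sequents $X \vdash Y$ whose sides are structures, and display postulates allow any substructure to be isolated as the whole of one side; in a non-commutative multiplicative setting such as the Lambek calculus, the residuation postulates typically take the form
\[
X \,; Y \vdash Z \quad\Longleftrightarrow\quad X \vdash Z / Y \quad\Longleftrightarrow\quad Y \vdash X \backslash Z,
\]
where $;$ is the structural counterpart of fusion and $/$, $\backslash$ those of the two directed implications. Properness then demands that every rule remain sound under uniform substitution of arbitrary structures for its parametric parts.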
The existing testing theories for CSP cater for the verification of interaction patterns (traces) and deadlocks, but not time. We address here refinement and testing based on a dialect of CSP, called tock-CSP, which can capture discrete-time properties. This version of CSP has been of widespread interest for decades; recently, it has been given a denotational semantics, and model checking has become possible using a well-established tool. Here, we first equip tock-CSP with a novel semantics for testing, which distinguishes input and output events: the standard models of (tock-)CSP do not differentiate them, but for testing this distinction is essential. We then present a new testing theory for timewise refinement, based on novel definitions of test and test execution. Finally, we reconcile refinement and testing by relating timed ioco testing and refinement in tock-CSP with inputs and outputs. With these results, this paper provides, for the first time, a systematic theory in which both timed testing and timed refinement can be expressed. An important practical consequence is that the notion of correctness used by developers guarantees that tests pass when applied to a correct system and, in addition, that faults identified during testing correspond to development mistakes.
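As a hedged reminder of the general shape of such a conformance relation (the paper's precise definition may differ), timed ioco-style relations compare the outputs, including the passage of time, that an implementation and a specification allow after each observable trace; schematically, treating tock as observable,
\[
i \;\mathbin{\mathrm{tioco}}\; s \quad\text{iff}\quad \forall \sigma \in \mathit{ttraces}(s):\ \mathit{out}(i \text{ after } \sigma) \subseteq \mathit{out}(s \text{ after } \sigma),
\]
where $\mathit{out}(p \text{ after } \sigma)$ collects the outputs and tock events that $p$ can perform after the trace $\sigma$.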
We provide a tight characterisation of proof size in resolution for quantified Boolean formulas (QBF) via circuit complexity. Such a characterisation was previously obtained for a hierarchy of QBF Frege systems [16], but it left open the most important case of QBF resolution. Unlike the Frege case, our characterisation uses a new version of decision lists as its circuit model, which is stronger than the CNFs the system works with. Our decision list model is well suited to computing countermodels for QBFs. Our characterisation works for both Q-Resolution and QU-Resolution.
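To fix intuitions, recall the standard notion (the paper's "new version" refines it): a decision list over variables $x_1, \ldots, x_n$ is a sequence
\[
L = (t_1, c_1), (t_2, c_2), \ldots, (t_m, c_m),
\]
where each $t_i$ is a term (a conjunction of literals), $t_m$ is the constant true, and $L(\alpha) = c_j$ for the least $j$ with $t_j(\alpha) = 1$; for QBF countermodels, the outputs $c_j$ can be read as responses for the universal variables. Even this simple model is stronger than CNF.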
Using our characterisation, we obtain a size–width relation for QBF resolution in the spirit of the celebrated result for propositional resolution [4]. However, our result is not just a replication of the propositional relation (which, intriguingly, previous research ruled out for QBF [12]), but exhibits a different dependence between size, width, and quantifier complexity. An essential ingredient is an improved relation between the size and width of term decision lists, which may be of independent interest.
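For comparison, the celebrated propositional relation [4] (due to Ben-Sasson and Wigderson) states that any resolution refutation of an unsatisfiable CNF $F$ over $n$ variables of size $S$ can be turned into one of width
\[
w(F \vdash \bot) \;\le\; w(F) + O\!\left(\sqrt{n \log S}\right),
\]
so short refutations can be made narrow. The QBF analogue established here necessarily also involves quantifier complexity, which is precisely where it departs from this propositional pattern.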
We demonstrate that our new technique elegantly reproves known QBF hardness results and unifies previous lower-bound techniques in the QBF domain.
Parikh proposed his relevance-sensitive axiom to remedy the weakness of the classical AGM paradigm in addressing relevant change. An insufficiency of Parikh's criterion, however, is its dependence on the contingent beliefs of the belief set to be revised, since it only constrains the revision of splittable theories (i.e., theories that can be divided into mutually disjoint compartments). Arbitrary non-splittable belief sets remain out of the scope of Parikh's approach. On that premise, we generalize Parikh's criterion, introducing (both axiomatically and semantically) a new notion of relevance, which we call relevance at the sentential level. We show that the proposed notion of relevance is universal (as it is applicable to arbitrary belief sets) and acts in a more refined way than Parikh's proposal; as we illustrate, this latter feature of relevance at the sentential level potentially leads to a significant drop in the computational resources required for implementing belief revision. Furthermore, we prove that Dalal's popular revision operator respects, to a certain extent, relevance at the sentential level. Last but not least, we point out the tight relation between local and relevance-sensitive revision.
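Two standard reference points, sketched in one common rendering that may differ in detail from the formulations used in the article: Parikh's axiom (P) and Dalal's operator. Axiom (P) says that if $K = \mathit{Cn}(x \wedge y)$ with $\mathcal{L}(x) \cap \mathcal{L}(y) = \emptyset$ and the epistemic input $\varphi \in \mathcal{L}(x)$, then the $y$-compartment is untouched:
\[
K \ast \varphi = \mathit{Cn}\big((x \circ \varphi) \wedge y\big),
\]
where $\circ$ revises the $x$-compartment within its own sublanguage. Dalal's operator selects the models of the input at minimal Hamming distance $d_H$ from the models of $K$:
\[
\mathit{Mod}(K \ast_D \varphi) = \Big\{\, w \in \mathit{Mod}(\varphi) \;\Big|\; \min_{v \in \mathit{Mod}(K)} d_H(w, v) \text{ is minimal among } \mathit{Mod}(\varphi) \,\Big\}.
\]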
In this article, we consider Answer Set Programming (ASP), a declarative problem-solving paradigm in which a problem is encoded as a logic program whose answer sets correspond to the solutions of the problem. It has been widely applied in various domains in AI and beyond. Given that answer sets are supposed to yield solutions to the original problem, the question of why a set of atoms is an answer set becomes important for both understanding the semantics and debugging programs. This question has been well investigated for normal logic programs. However, for the class of disjunctive logic programs, a substantial extension of normal logic programs, it has received much less attention. In this article, we propose a notion of reduct for disjunctive logic programs and show how it can answer the question above. First, we show that for each answer set, its reduct provides a resolution proof for each atom in it. We then consider minimal sets of rules that suffice to provide resolution proofs for sets of atoms. Such sets of rules are called witnesses and are the focus of this article. We study the complexity of computing various witnesses and provide algorithms for computing them. In particular, we show that the problem is tractable for normal and head-cycle-free disjunctive logic programs, but intractable for general disjunctive logic programs. We also conducted experiments and found that, for many well-known ASP and SAT benchmarks, computing a minimal witness for an atom of an answer set is often feasible.
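For orientation, a hedged reminder of the classical Gelfond–Lifschitz construction (the reduct proposed in this article is a different, proof-oriented notion): given a program $P$ and a set of atoms $X$, the reduct $P^X$ drops every rule whose body contains $\mathit{not}\; a$ for some $a \in X$ and deletes the remaining negative body literals; $X$ is an answer set iff $X$ is a minimal model of $P^X$. A tiny disjunctive example:
\[
P = \{\; p \lor q \leftarrow\,;\;\; r \leftarrow p \;\} \qquad\Longrightarrow\qquad \text{answer sets } \{p, r\} \text{ and } \{q\}.
\]
Since $P$ is negation-free, $P^X = P$ for every $X$, and the answer sets are exactly the minimal models of $P$.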
The article discusses temporal information systems (TISs), which add the dimension of time to complete or incomplete information systems. TISs accommodate the possibility that domains or attribute values of objects change with time, or that currently missing information becomes available over time. Different patterns of information flow give rise to different TISs. The corresponding logics, with sound and complete axiomatizations, are presented.