Maude is a specification language based on rewriting logic whose programs can be executed, model checked, and analyzed with other automated techniques, but are not easily amenable to theorem proving. Lean, on the other hand, is a modern proof assistant based on the calculus of inductive constructions, with a wide library of reusable proofs and definitions. This paper presents a translation from the first formalism to the second, and the maude2lean tool, which predictably derives a Lean program from a Maude module. Hence, theorems about Maude specifications can be proved in Lean.
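To make the idea of such a translation concrete, the following is a hedged sketch of how a Maude functional module for natural numbers might correspond to a Lean definition. The Maude module, the Lean names (`MNat`, `MNat.add`), and the rendering are our own illustration, not the actual output of maude2lean.

```lean
-- Illustrative only: a Maude functional module such as
--   fmod NAT is
--     sort Nat .
--     op zero : -> Nat [ctor] .
--     op s_   : Nat -> Nat [ctor] .
--     op _+_  : Nat Nat -> Nat .
--     vars N M : Nat .
--     eq zero + N = N .
--     eq s N + M = s (N + M) .
--   endfm
-- could plausibly be rendered in Lean as an inductive sort whose
-- equations become defining clauses, about which one can then prove theorems.
inductive MNat where
  | zero : MNat
  | s    : MNat → MNat

def MNat.add : MNat → MNat → MNat
  | .zero, n => n
  | .s m,  n => .s (MNat.add m n)

-- The first Maude equation holds definitionally in the Lean rendering.
theorem add_zero_left (n : MNat) : MNat.add .zero n = n := rfl
```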
Session types are a typed approach to message-passing concurrency, where types describe sequences of intended exchanges over channels. Session type systems have been given strong logical foundations via Curry-Howard correspondences with linear logic, a resource-aware logic that naturally captures structured interactions. These logical foundations provide an elegant framework to specify and (statically) verify message-passing processes.
In this paper, we rigorously compare different type systems for concurrency derived from the Curry-Howard correspondence between linear logic and session types. We address the main divide between these type systems: the classical and intuitionistic presentations of linear logic. Over the years, these presentations have given rise to separate research strands on logical foundations for concurrency; the differences between their derived type systems have only been addressed informally.
To formally assess these differences, we develop a session type system that encompasses type systems derived from classical and intuitionistic interpretations of linear logic. Based on a fragment of Girard's Logic of Unity, it provides a basic reference framework: we compare existing session type systems by characterizing fragments of our system that coincide with the classical and intuitionistic formulations. We analyze the significance of our characterizations by considering the locality principle (enforced by intuitionistic interpretations but not by classical ones) and the forms of process composition induced by each interpretation.
In this paper, we show how to handle a new normal form, called terse normal form (TNF), which is crucial to the development of a novel tableau method that solves realizability and synthesis for specifications expressed in a safety fragment of LTL. The construction of these tableaux is based on the conversion of formulas into TNF, which is one of the most computationally expensive parts of the method. We explain how to efficiently extract the information required by the tableaux without having to compute the entire TNF of a safety formula, and we present a correct algorithm for this task together with its implementation.
We study the realizability and strong satisfiability problems for Safety LTL, a syntactic fragment of Linear Temporal Logic (LTL). We revisit the problem of the exact classification of the complexity of realizability for this fragment. Furthermore, we revisit the techniques used in the purported proof of this classification.
Protocols are typically specified in an operational manner, by describing the communication patterns among the principals involved. However, many properties of interest are epistemic in nature, e.g., what each principal believes after having seen a run of the protocol. We elaborate on a unified algebraic framework suitable for epistemic reasoning about operational protocols. This reasoning framework is based on a logic of beliefs and allows for the operational specification of untruthful communications. The information recorded in the semantic models to support reasoning about the interaction between the operational and epistemic aspects intensifies the state-space explosion. We propose an efficient on-the-fly reduction for this unifying framework by providing a set of operational rules. These rules automatically generate efficient reduced semantics for a class of epistemic properties, specified in a rich extension of the modal μ-calculus with past and belief modalities, and can potentially reduce an infinite state space to a finite one. We reformulate and prove criteria that guarantee belief consistency for credulous agents, i.e., agents that are ready to believe what they are told unless it is logically inconsistent. We adjust our reduction so that the belief consistency of the original model is preserved. Finally, we prove soundness and completeness results for the specified class of properties.
Addressing the problem of fairness is crucial to safely using machine learning algorithms to support decisions that have a critical impact on people's lives, such as job hiring, child maltreatment, disease diagnosis, loan granting, etc. Several notions of fairness have been defined and examined in the past decade, such as statistical parity and equalized odds. However, the most recent notions of fairness are causal-based and reflect the now widely accepted idea that using causality is necessary to appropriately address the problem of fairness. This paper examines an exhaustive list of causal-based fairness notions and studies their applicability in real-world scenarios. As most causal-based fairness notions are defined in terms of non-observable quantities (e.g., interventions and counterfactuals), their deployment in practice requires computing or estimating those quantities using observational data. This paper offers a comprehensive report of the different approaches to infer causal quantities from observational data, including identifiability (Pearl's SCM framework) and estimation (potential outcome framework). The main contributions of this survey paper are (1) a guideline to help select a suitable causal fairness notion given a specific real-world scenario and (2) a ranking of the fairness notions according to Pearl's causation ladder, indicating how difficult it is to deploy each notion in practice.
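Among the notions mentioned above, statistical parity is the simplest purely observational one: the rate of positive decisions should be (near-)equal across protected groups. The sketch below shows the standard computation; the toy data and function name are illustrative only and not taken from the paper.

```python
# Statistical parity difference: P(decision=1 | group=a) - P(decision=1 | group=b).
# A value of 0 means the two groups receive positive decisions at equal rates.

def statistical_parity_difference(decisions, groups):
    """Difference in positive-decision rates between the two groups present."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = sum(decisions[i] for i in idx) / len(idx)
    a, b = sorted(rates)  # deterministic order of the two group labels
    return rates[a] - rates[b]

# Toy hiring data: group "a" hired 2 of 4, group "b" hired 1 of 4.
decisions = [1, 0, 1, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_difference(decisions, groups))  # 0.5 - 0.25 = 0.25
```

Causal notions refine this picture by asking whether the group attribute itself, rather than correlated covariates, is responsible for such a gap, which is why they require the interventional and counterfactual machinery discussed above.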
Partial functions are a key concept in programming. Without partiality, a programming language has limited expressiveness: it is not Turing-complete and hence excludes constructs such as while-loops. In functional programming languages, partiality mostly originates from the non-termination of recursive functions. Corecursive functions are another source of partiality: here, the issue is not termination, but the inability to produce arbitrarily large, finite approximations of a theoretically infinite output.
Partial functions have been formally studied in the branch of theoretical computer science called domain theory. In this paper we propose to step up the level of formality by using the Coq proof assistant. The main difficulty is that Coq requires all functions to be total, since partiality would break the soundness of its underlying logic. We propose practical solutions for this issue, and for others that appear when one attempts to define and reason about partial (co)recursive functions in a total functional language.
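The corecursive partiality described above can be seen in a classic example: filtering an infinite stream. The filter is "productive" only as long as matching elements keep occurring; if the predicate never holds again, no next element is ever produced, even though every internal step terminates. The sketch below illustrates this in a language with unrestricted recursion (unlike Coq, which would reject such a definition without extra machinery).

```python
from itertools import count, islice

def stream_filter(pred, stream):
    """Corecursively yield the elements of an infinite stream satisfying pred."""
    for x in stream:
        if pred(x):
            yield x  # productive step: one more piece of the infinite output

# Productive case: there are infinitely many even numbers, so any finite
# approximation of the output can be computed.
evens = stream_filter(lambda n: n % 2 == 0, count())
print(list(islice(evens, 5)))  # [0, 2, 4, 6, 8]

# Partial case: no natural number is negative, so requesting even one element
# of stream_filter(lambda n: n < 0, count()) would loop forever -- the
# function is partial despite never "recursing without progress" internally.
```

This is exactly the behavior a total language must rule out or encode explicitly, which motivates the solutions proposed in the paper.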

