This article gives an in-depth presentation of the omni-big-step and omni-small-step styles of semantic judgments. These styles describe operational semantics by relating starting states to sets of outcomes rather than to individual outcomes. A single derivation of these semantics for a particular starting state and program describes all possible nondeterministic executions (hence the name omni), whereas in traditional small-step and big-step semantics each derivation describes only a single execution. This restructuring allows for straightforward modeling of both nondeterminism and undefined behavior as commonly encountered in sequential functional and imperative programs. Specifically, omnisemantics inherently assert safety (i.e., they guarantee that no execution branch gets stuck), whereas traditional semantics need either a separate judgment or additional error markers to specify safety in the presence of nondeterminism.
Omnisemantics can be understood as an inductively defined weakest-precondition semantics (or, more generally, a predicate-transformer semantics) that does not involve invariants for loops and recursion but instead uses unrolling rules as in traditional small-step and big-step semantics. Omnisemantics have been described previously in association with several projects, but we believe the technique has been underappreciated and deserves a well-motivated, extensive, and pedagogical presentation of its benefits. We also explore several novel aspects of these semantics, in particular their use in type-safety proofs for lambda calculi, in partial-correctness reasoning, and in forward proofs of compiler correctness for terminating but potentially nondeterministic programs compiled to nondeterministic target languages. All results in this article are formalized in Coq.
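To make the shape of the judgment concrete, here is a minimal OCaml sketch (not the article's Coq development) of an executable omni-big-step evaluator for a toy nondeterministic language; the language, its single-cell state, and the `eval` function are all illustrative assumptions.

```ocaml
(* A minimal sketch, not the article's formalization: an executable
   omni-big-step judgment for a toy nondeterministic language over a
   single integer cell. [eval c s post] holds iff every execution of
   [c] from state [s] ends in an outcome satisfying [post]. *)

type state = int

type cmd =
  | Skip
  | Add of int              (* add a constant to the cell *)
  | Choice of cmd * cmd     (* nondeterministic choice *)
  | Seq of cmd * cmd

let rec eval (c : cmd) (s : state) (post : state -> bool) : bool =
  match c with
  | Skip -> post s
  | Add n -> post (s + n)
  | Choice (c1, c2) ->
      (* A single "derivation" covers all branches: both must satisfy [post]. *)
      eval c1 s post && eval c2 s post
  | Seq (c1, c2) ->
      (* Sequencing composes like a weakest precondition. *)
      eval c1 s (fun s' -> eval c2 s' post)

let () =
  (* Both nondeterministic branches end on an even number. *)
  assert (eval (Seq (Choice (Add 1, Add 3), Add 1)) 0 (fun s -> s mod 2 = 0))
```

Note how `Choice` demands the postcondition on both branches: this is how omnisemantics bake safety of every execution branch into a single derivation.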
The literature presents many strategies for enforcing the integrity of types when typed code interacts with untyped code. This article presents a uniform evaluation framework that characterizes the differences among some major existing semantics for typed–untyped interaction. Type system designers can use this framework to analyze the guarantees of their own dynamic semantics.
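As one concrete point in the design space such a framework compares, consider eager checks at first-order types combined with wrapper-based checks at higher-order types. The OCaml sketch below is a hypothetical illustration, not the article's framework; the `dyn` type and the `import_*` functions are invented for the example.

```ocaml
(* A hypothetical sketch of one enforcement strategy at a typed-untyped
   boundary: first-order values are checked eagerly, while functions are
   wrapped so that their results are checked at each call. *)

type dyn = Int of int | Bool of bool | Fun of (dyn -> dyn)

exception Boundary_error of string

(* Import an untyped value at type [int]: an eager, first-order check. *)
let import_int (v : dyn) : int =
  match v with
  | Int n -> n
  | _ -> raise (Boundary_error "expected an int")

(* Import an untyped function at type [int -> int]: the check is
   deferred and repeated on every call through the wrapper. *)
let import_int_fun (v : dyn) : int -> int =
  match v with
  | Fun f -> fun (x : int) -> import_int (f (Int x))
  | _ -> raise (Boundary_error "expected a function")

let () =
  let untyped_succ = Fun (function Int n -> Int (n + 1) | v -> v) in
  assert (import_int_fun untyped_succ 41 = 42)
```

Whether such checks run eagerly, lazily, or not at all is precisely the kind of difference an evaluation framework for typed-untyped interaction must make comparable.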
Security-critical software applications contain confidential information that must be protected from leaking to unauthorized systems. Language-based techniques can enforce the confidentiality of such applications; one example is type systems that enforce an information-flow policy through their typing rules. The precision of such type systems, especially in object-oriented languages, is an area of active research: an appropriate system should not reject too many secure programs while still soundly enforcing noninterference. In this work, we introduce the language SIFO, which supports information flow control for an object-oriented language with type modifiers. Type modifiers increase the precision of the type system by exploiting immutability and uniqueness properties of objects to detect information leaks. We present SIFO informally through examples that demonstrate the applicability of the language, formalize its type system, prove noninterference, implement SIFO as a pluggable type system in the programming language L42, and evaluate it with a feasibility study and a benchmark.
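The core mechanism can be previewed with plain phantom types. The OCaml sketch below is hypothetical and far weaker than SIFO's type modifiers, but it shows how placing security labels in types lets the type checker reject an explicit leak.

```ocaml
(* A hypothetical sketch, much simpler than SIFO: a two-point security
   lattice encoded with phantom types, so that high (secret) data
   cannot reach a low (public) sink. *)

type high   (* secret level: phantom type, never inhabited *)
type low    (* public level: phantom type, never inhabited *)

type 'lvl labeled = Labeled of string   (* payload tagged with a level *)

let secret : high labeled = Labeled "PIN 1234"
let public : low labeled = Labeled "hello"

(* A public sink accepts only low-labeled data. *)
let print_low (Labeled s : low labeled) : unit = print_endline s

let () = print_low public
(* [print_low secret] is rejected by the type checker:
   [high labeled] is not compatible with [low labeled]. *)
```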
The end of Moore’s Law has ushered in a diversity of hardware not seen in decades. Portability of operating systems (OSes) and other system software is accordingly becoming increasingly critical. Simultaneously, there has been tremendous progress in program synthesis. We set out to explore the feasibility of using modern program synthesis to generate the machine-dependent parts of an operating system. Our ultimate goal is to generate new ports automatically from descriptions of new machines.
One of the issues involved is writing specifications, both for machine-dependent operating system functionality and for instruction set architectures. We designed two domain-specific languages: Alewife for machine-independent specifications of machine-dependent operating system functionality and Cassiopea for describing instruction set architecture semantics. Automated porting also requires an implementation. We developed a toolchain that, given an Alewife specification and a Cassiopea machine description, specializes the machine-independent specification to the target instruction set architecture and synthesizes an implementation in assembly language with a customized symbolic execution engine. Using this approach, we demonstrate the successful synthesis of a total of 140 OS components from two pre-existing OSes for four real hardware platforms. We also developed several optimization methods for OS-related assembly synthesis to improve scalability.
The effectiveness of our languages and our ability to synthesize code for all 140 specifications are evidence of the feasibility of program synthesis for machine-dependent OS code. However, many research challenges remain; we also discuss the benefits and limitations of our synthesis-based approach to automated OS porting.
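To convey the flavor of the synthesis step without the real toolchain, here is a deliberately tiny OCaml sketch. The toy machine, instruction pool, and specification are all invented for illustration, and unlike Cassiopea's engine it tests candidates concretely rather than executing them symbolically.

```ocaml
(* A hypothetical sketch, not Alewife/Cassiopea: enumerate candidate
   instruction sequences over a toy two-register machine and keep the
   first one that meets the specification on a set of test states.
   A real engine replaces the concrete tests with symbolic execution
   and a solver-backed verification query. *)

type reg = R0 | R1
type insn = Mov of reg * reg | AddI of reg * int

type machine = { r0 : int; r1 : int }

let step (m : machine) (i : insn) : machine =
  match i with
  | Mov (R0, R1) -> { m with r0 = m.r1 }
  | Mov (R1, R0) -> { m with r1 = m.r0 }
  | Mov _ -> m                          (* moves to self are no-ops *)
  | AddI (R0, k) -> { m with r0 = m.r0 + k }
  | AddI (R1, k) -> { m with r1 = m.r1 + k }

let run (prog : insn list) (m : machine) : machine =
  List.fold_left step m prog

(* Specification: r0 must end up equal to the old r1 plus one. *)
let spec (m : machine) (m' : machine) : bool = m'.r0 = m.r1 + 1

(* All programs of length <= 2 over a small instruction pool. *)
let candidates =
  let pool = [ Mov (R0, R1); Mov (R1, R0); AddI (R0, 1); AddI (R1, 1) ] in
  List.map (fun i -> [ i ]) pool
  @ List.concat_map (fun i -> List.map (fun j -> [ i; j ]) pool) pool

let tests = [ { r0 = 0; r1 = 0 }; { r0 = 5; r1 = 3 }; { r0 = -1; r1 = 7 } ]

let () =
  let p =
    List.find
      (fun p -> List.for_all (fun m -> spec m (run p m)) tests)
      candidates
  in
  (* The synthesized program also generalizes to a fresh state. *)
  assert (spec { r0 = 9; r1 = 10 } (run p { r0 = 9; r1 = 10 }))
```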
In the tortoise-and-hare algorithm, when the fast pointer reaches the end of a finite list, the slow pointer points to the middle of this list. In the early 2000s, this property was found to make it possible to program a palindrome detector for immutable lists that operates in one recursive traversal of the given list and performs the smallest possible number of comparisons, using the “There And Back Again” (TABA) recursion pattern. In this article, this palindrome detector is reconstructed in OCaml, formalized with the Coq Proof Assistant, and proved to be correct. More broadly, this article presents a compositional account of the tortoise-and-hare algorithm for finite lists. Concretely, compositionality means that programs that use a fast and a slow pointer can be expressed with an ordinary fold function for lists and reasoned about using ordinary structural induction on the given list. This article also contains a dozen new applications of the TABA recursion pattern and of its tail-recursive variant, “There and Forth Again”.
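For readers unfamiliar with the pattern, the following OCaml sketch gives a simplified TABA-style palindrome detector (a rendition for illustration, not the article's exact code): the recursion descends with a slow and a fast pointer and performs its comparisons as the call stack unwinds.

```ocaml
(* A simplified TABA-style palindrome detector (an illustration, not
   the article's exact code). [walk slow fast] advances [slow] by one
   and [fast] by two; when [fast] runs out, [slow] has reached the
   middle, and the comparisons happen on the way back as the recursion
   unwinds. It returns the verdict so far and the yet-unmatched suffix. *)

let is_palindrome (xs : 'a list) : bool =
  let rec walk slow fast =
    match slow, fast with
    | _, [] -> (true, slow)                (* even length: at the middle *)
    | _ :: slow', [ _ ] -> (true, slow')   (* odd length: skip the middle *)
    | x :: slow', _ :: _ :: fast' ->
        let ok, rest = walk slow' fast' in
        (match rest with
         | y :: rest' -> (ok && x = y, rest')
         | [] -> (false, []))
    | [], _ -> (false, [])                 (* unreachable: slow trails fast *)
  in
  fst (walk xs xs)

let () =
  assert (is_palindrome [ 1; 2; 3; 2; 1 ]);
  assert (is_palindrome [ 1; 2; 2; 1 ]);
  assert (not (is_palindrome [ 1; 2; 3 ]))
```

Because `&&` short-circuits in OCaml, no element comparisons are performed after the first mismatch on the way back.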
Automatically verifying multi-threaded programs is difficult because of the vast number of thread interleavings, a problem aggravated by weak memory consistency. Partial orders can help with verification because they can represent many thread interleavings concisely. However, there is no dedicated decision procedure for solving partial-order constraints.
In this article, we propose a novel ordering consistency theory for concurrent program verification that is applicable not only under sequential consistency, but also under the TSO and PSO weak memory models. We further develop an efficient theory solver, which checks consistency incrementally, generates minimal conflict clauses, and includes a custom propagation procedure. We have implemented our approach in a tool, called Zord, and have conducted extensive experiments on the SV-COMP 2020 ConcurrencySafety benchmarks. Our experimental results show a significant improvement over the state of the art.
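To see why a dedicated theory helps, note that a conjunction of ordering literals is consistent exactly when the induced constraint graph is acyclic. The OCaml sketch below is a hypothetical illustration of that reduction; Zord's solver additionally works incrementally and extracts minimal cycles as conflict clauses.

```ocaml
(* A hypothetical sketch, not Zord's solver: a conjunction of ordering
   literals a < b is consistent iff the induced graph is acyclic.
   A real theory solver checks this incrementally and returns a
   minimal cycle as the conflict clause; here we just run a DFS. *)

module SMap = Map.Make (String)

let graph_of edges =
  List.fold_left
    (fun g (a, b) ->
      let succ = Option.value (SMap.find_opt a g) ~default:[] in
      SMap.add a (b :: succ) g)
    SMap.empty edges

(* [consistent edges] is true iff the constraint graph has no cycle. *)
let consistent (edges : (string * string) list) : bool =
  let g = graph_of edges in
  let visiting = Hashtbl.create 16 in
  let finished = Hashtbl.create 16 in
  let rec dfs n =
    if Hashtbl.mem finished n then true
    else if Hashtbl.mem visiting n then false   (* back edge: cycle *)
    else begin
      Hashtbl.add visiting n ();
      let succ = Option.value (SMap.find_opt n g) ~default:[] in
      let ok = List.for_all dfs succ in
      Hashtbl.remove visiting n;
      Hashtbl.add finished n ();
      ok
    end
  in
  List.for_all (fun (a, _) -> dfs a) edges

let () =
  (* w1 ordered before r1, r1 before w2: consistent. *)
  assert (consistent [ ("w1", "r1"); ("r1", "w2") ]);
  (* The cycle w1 < r1 < w1 admits no linearization: inconsistent. *)
  assert (not (consistent [ ("w1", "r1"); ("r1", "w1") ]))
```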