StkTokens: Enforcing well-bracketed control flow and stack encapsulation using linear capabilities
Pub Date: 2021-04-15 | DOI: 10.1017/s095679682100006x
LAU SKORSTENGAARD, DOMINIQUE DEVRIESE, LARS BIRKEDAL
We propose and study StkTokens: a new calling convention that provably enforces well-bracketed control flow and local state encapsulation on a capability machine. The calling convention is based on linear capabilities: a type of capability that the hardware prevents from being duplicated. In addition to designing and formalizing this new calling convention, we contribute a new way to formalize and prove that it effectively enforces well-bracketed control flow and local state encapsulation, using what we call a fully abstract overlay semantics.
Cubical Agda: A dependently typed programming language with univalence and higher inductive types
Pub Date: 2021-04-06 | DOI: 10.1017/s0956796821000034
ANDREA VEZZOSI, ANDERS MÖRTBERG, ANDREAS ABEL
Proof assistants based on dependent type theory provide expressive languages for both programming and proving within the same system. However, all of the major implementations lack powerful extensionality principles for reasoning about equality, such as function and propositional extensionality. These principles are typically added axiomatically, which disrupts the constructive properties of these systems. Cubical type theory provides a solution by giving computational meaning to Homotopy Type Theory and Univalent Foundations, in particular to the univalence axiom and higher inductive types (HITs). This paper describes an extension of the dependently typed functional programming language Agda with cubical primitives, making it into a full-blown proof assistant with native support for univalence and a general schema of HITs. These new primitives allow the direct definition of function and propositional extensionality as well as quotient types, all with computational content. Additionally, thanks also to copatterns, bisimilarity is equivalent to equality for coinductive types. The adoption of cubical type theory extends Agda with support for a wide range of extensionality principles, without sacrificing type checking and constructivity.
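For reference, the two principles named above can be written in standard type-theoretic notation; this is the usual textbook statement, not the paper's primitive cubical formulation:

```latex
% Function extensionality: pointwise-equal functions are equal.
\mathsf{funExt} \;:\; \Bigl(\textstyle\prod_{x:A} f\,x = g\,x\Bigr) \to f = g

% Univalence: equivalences between types correspond to equalities of types.
\mathsf{ua} \;:\; (A \simeq B) \simeq (A = B)
```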
Lambda calculus with algebraic simplification for reduction parallelisation: Extended study
Pub Date: 2021-04-05 | DOI: 10.1017/S0956796821000058
Akimasa Morihata
Parallel reduction is a major component of parallel programming and is widely used for summarisation and aggregation. It is not well understood, however, what sorts of non-trivial summarisations can be implemented as parallel reductions. This paper develops a calculus named λAS, a simply typed lambda calculus with algebraic simplification. This calculus provides a foundation for studying the parallelisation of complex reductions by equational reasoning. Its key feature is δ abstraction. A δ abstraction is observationally equivalent to the standard λ abstraction, but its body is simplified before the arrival of its arguments, using algebraic properties such as associativity and commutativity. In addition, the type system of λAS guarantees that simplifications due to δ abstractions do not lead to serious overheads. The usefulness of λAS is demonstrated on examples of developing complex parallel reductions, including those containing more than one reduction operator, loops with conditional jumps, prefix sum patterns, and even tree manipulations.
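As a much smaller illustration of why algebraic properties matter for reduction parallelisation (this is not the paper's λAS calculus or its δ abstraction, just the underlying principle): a divide-and-conquer reduction is only equivalent to a sequential fold when the operator is assumed associative with the given unit, and that assumption is exactly what licenses evaluating the two halves independently.

```haskell
-- A minimal sketch, assuming `op` is associative and `e` is its unit
-- (i.e. they form a monoid). Under that assumption the tree-shaped reduction
-- below computes the same result as `foldr op e xs`, and the two recursive
-- calls could be evaluated in parallel.
reduceTree :: (a -> a -> a) -> a -> [a] -> a
reduceTree _  e []  = e
reduceTree _  _ [x] = x
reduceTree op e xs  = reduceTree op e ls `op` reduceTree op e rs
  where (ls, rs) = splitAt (length xs `div` 2) xs

-- e.g. reduceTree (+) 0 [1 .. 100] == sum [1 .. 100]
```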
Linear capabilities for fully abstract compilation of separation-logic-verified code
Pub Date: 2021-03-30 | DOI: 10.1017/s0956796821000022
THOMAS VAN STRYDONCK, FRANK PIESSENS, DOMINIQUE DEVRIESE
Separation logic is a powerful program logic for the static modular verification of imperative programs. However, dynamic checking of separation logic contracts on the boundaries between verified and untrusted modules is hard because it requires one to enforce (among other things) that outcalls from a verified to an untrusted module do not access memory resources currently owned by the verified module. This paper proposes an approach to dynamic contract checking by relying on support for capabilities, a well-studied form of unforgeable memory pointers that enables fine-grained, efficient memory access control. More specifically, we rely on a form of capabilities called linear capabilities for which the hardware enforces that they cannot be copied. We formalize our approach as a fully abstract compiler from a statically verified source language to an unverified target language with support for linear capabilities. The key insight behind our compiler is that memory resources described by spatial separation logic predicates can be represented at run time by linear capabilities. The compiler is separation-logic-proof-directed: it uses the separation logic proof of the source program to determine how memory accesses in the source program should be compiled to linear capability accesses in the target program. The full abstraction property of the compiler essentially guarantees that compiled verified modules can interact with untrusted target language modules as if they were compiled from verified code as well. This article is an extended version of one that was presented at ICFP 2019 (Van Strydonck et al., 2019).
Classical (co)recursion: Mechanics
Pub Date: 2021-03-15 | DOI: 10.1017/S0956796822000168
P. Downen, Z. M. Ariola
Recursion is a mature, well-understood topic in the theory and practice of programming. Yet its dual, corecursion, is underappreciated and still seen as exotic. We aim to put them both on equal footing by giving a foundation for primitive corecursion based on computation: a terminating calculus analogous to the original computational foundation of recursion. We show how the implementation details in an abstract machine strengthen their connection, syntactically deriving corecursion from recursion via logical duality. We also observe the impact of evaluation strategy on the computational complexity of primitive (co)recursive combinators: call-by-name allows for more efficient recursion, but call-by-value allows for more efficient corecursion.
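A small generic Haskell illustration of the duality in question (not the paper's abstract machine or calculus): recursion consumes a finite, inductive input, while corecursion produces a possibly infinite, coinductive output one step at a time.

```haskell
import Data.List (unfoldr)

-- Recursion: the input drives the computation; it terminates (for n >= 0)
-- because the argument shrinks at every call.
sumDown :: Int -> Int
sumDown 0 = 0
sumDown n = n + sumDown (n - 1)

-- Corecursion: each step yields one piece of output plus a new seed; the
-- potentially infinite result is only unfolded as far as it is demanded.
countUp :: Int -> [Int]
countUp = unfoldr (\n -> Just (n, n + 1))

-- e.g. take 5 (countUp 0) == [0,1,2,3,4]
```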
Ready, Set, Verify! Applying hs-to-coq to real-world Haskell code
Pub Date: 2021-02-26 | DOI: 10.1017/s0956796820000283
JOACHIM BREITNER, ANTAL SPECTOR-ZABUSKY, YAO LI, CHRISTINE RIZKALLAH, JOHN WIEGLEY, JOSHUA COHEN, STEPHANIE WEIRICH
Good tools can bring mechanical verification to programs written in mainstream functional languages. We use hs-to-coq to translate significant portions of Haskell’s containers library into Coq, and verify it against specifications that we derive from a variety of sources including type class laws, the library’s test suite, and interfaces from Coq’s standard library. Our work shows that it is feasible to verify mature, widely used, highly optimized, and unmodified Haskell code. We also learn more about the theory of weight-balanced trees, extend hs-to-coq to handle partiality, and – since we found no bugs – attest to the superb quality of well-tested functional code.
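As a hypothetical taste of the kind of specification involved (the paper states and proves such properties as Coq theorems; the property name and formulation below are ours, written as a QuickCheck property purely for illustration):

```haskell
import Test.QuickCheck
import qualified Data.Map.Strict as M

-- A containers-style specification: looking up a key right after inserting it
-- returns the inserted value. The map is built from an arbitrary association
-- list so that only stock Arbitrary instances are needed.
prop_insertLookup :: Int -> Int -> [(Int, Int)] -> Bool
prop_insertLookup k v kvs =
  M.lookup k (M.insert k v (M.fromList kvs)) == Just v

main :: IO ()
main = quickCheck prop_insertLookup
```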
Integrating region memory management and tag-free generational garbage collection
Pub Date: 2021-02-22 | DOI: 10.1017/S0956796821000010
M. Elsman, Niels Hallenberg
We present a region-based memory management scheme with support for generational garbage collection. The scheme features a compile-time region inference algorithm, which associates values with logical regions, and builds on a region type system that deploys region types at runtime to avoid the overhead of write barriers and to support partly tag-free garbage collection. The scheme is implemented in the MLKit Standard ML compiler, which generates native x64 machine code. Besides demonstrating a number of important formal properties of the scheme, we measure the scheme’s characteristics on a number of benchmarks and compare the performance of the generated executables with that of executables generated with the state-of-the-art MLton Standard ML compiler and with configurations of the MLKit with and without region inference and generational garbage collection enabled. Although region inference often serves the purpose of generations, combining region inference with generational garbage collection often turns out to be superior to combining it with non-generational collection, despite the overhead of the additional memory waste caused by region fragmentation.
Protocol combinators for modeling, testing, and execution of distributed systems
Pub Date: 2021-02-15 | DOI: 10.1017/S095679682000026X
K. Andersen, Ilya Sergey
Distributed systems are hard to get right, model, test, debug, and teach. Their textbook definitions, typically given in the form of replicated state machines, are concise, yet prone to introducing programming errors if naïvely translated into runnable implementations. In this work, we present Distributed Protocol Combinators (DPC), a declarative programming framework that aims to bridge the gap between specifications and runnable implementations of distributed systems, and to facilitate their modeling, testing, and execution. DPC builds on ideas from the state-of-the-art logics for compositional systems verification. The contribution of DPC is a novel family of program-level primitives, which facilitates the construction of larger distributed systems from smaller components, streamlines the usage of the most common asynchronous message-passing communication patterns, and provides machinery for testing and user-friendly dynamic verification of systems. This paper describes the main ideas behind the design of the framework and presents its implementation in Haskell. We introduce DPC through a series of characteristic examples and showcase it on a number of distributed protocols from the literature. This paper extends our preceding conference publication (Andersen & Sergey, 2019a) with an exploration of randomized testing for protocols and their implementations, and an additional case study demonstrating bounded model checking of protocols.
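The combinators themselves are described in the paper; as a hypothetical sketch of the "replicated state machine" view they build on (all names below are ours, not DPC's API), a node can be modelled as a pure step function from a state and an incoming message to a new state and outgoing messages, which already makes a protocol testable without any network:

```haskell
type NodeId = Int

data Msg = Request NodeId Int | Reply NodeId Int
  deriving (Show, Eq)

-- A toy accumulator protocol: the server keeps a running sum and answers
-- every request with the new total.
serverStep :: Int -> Msg -> (Int, [Msg])
serverStep total (Request client n) = (total + n, [Reply client (total + n)])
serverStep total _                  = (total, [])

-- Replaying a message trace through the step function is a simple form of
-- testing: no sockets, threads, or scheduling involved.
runTrace :: [Msg] -> (Int, [Msg])
runTrace = foldl step (0, [])
  where step (st, out) m = case serverStep st m of
          (st', out') -> (st', out ++ out')
```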
Not by equations alone
Pub Date: 2021-01-27 | DOI: 10.1017/S0956796820000271
O. Kiselyov, Shin-Cheng Mu, A. Sabry
The challenge of reasoning about programs with (multiple) effects such as mutation, jumps, or IO dates back to the inception of program semantics in the works of Strachey and Landin. Using monads to represent individual effects and the associated equational laws to reason about them proved exceptionally effective. Even then it is not always clear what laws are to be associated with a monad—for a good reason, as we show for non-determinism. Combining expressions using different effects brings challenges not just for monads, which do not compose, but also for equational reasoning: the interaction of effects may invalidate their individual laws, as well as induce emergent properties that are not apparent in the semantics of individual effects. Overall, the problems are judging the adequacy of a law; determining if or when a law continues to hold upon addition of new effects; and obtaining and easily verifying emergent laws. We present a solution relying on the framework of (algebraic, extensible) effects, which already proved itself for writing programs with multiple effects. Equipped with a fairly conventional denotational semantics, this framework turns out, as we demonstrate, to be useful also for reasoning about and optimizing programs with multiple interacting effects. Unlike the conventional approach, equational laws are not imposed on programs/effect handlers, but induced from them: our starting point hence is a program (model), whose denotational semantics, besides being used directly, suggests and justifies equational laws and clarifies side conditions. The main technical result is the introduction of the notion of equivalence modulo handlers (“modulo observation”) or a particular combination of handlers—and proving it to be a congruence. It is hence usable for reasoning in any context, not just evaluation contexts—provided particular conditions are met. Concretely, we describe several realistic handlers for non-determinism and elucidate their laws (some of which hold in the presence of any other effect). We demonstrate appropriate equational laws of non-determinism in the presence of global state, which have been a challenge to state and prove before.
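A tiny, self-contained example of the point that the laws of non-determinism depend on the handler (this is folklore, not an example taken from the paper): idempotence of choice holds under a "first answer" interpretation but fails under the "all answers" list interpretation.

```haskell
import Control.Applicative ((<|>))

-- Under the Maybe handler (keep the first answer), choice is idempotent.
maybeIdem :: Bool
maybeIdem = (Just 1 <|> Just 1) == Just (1 :: Int)   -- True

-- Under the list handler (collect all answers), it is not: the duplicated
-- branch is observable.
listIdem :: Bool
listIdem = ([1] <|> [1]) == ([1] :: [Int])           -- False: [1] <|> [1] == [1,1]
```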
Longest segment of balanced parentheses: an exercise in program inversion in a segment problem
Pub Date: 2021-01-24 | DOI: 10.1017/S0956796821000253
Shin-Cheng Mu, Tsung-Ju Chiang
Given a string of parentheses, the task is to find the longest consecutive segment that is balanced, in linear time. We find this problem interesting because it involves a combination of techniques: the usual approach for solving segment problems and a theorem for constructing the inverse of a function—through which we derive an instance of shift-reduce parsing.
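For comparison, a direct linear-time solution (the standard index-stack scan, not the shift-reduce parser the paper derives by program inversion); it returns the length of the longest balanced segment:

```haskell
-- Single left-to-right pass keeping a stack of indices of unmatched
-- positions, seeded with a sentinel base index of -1. Assumes the input
-- contains only '(' and ')'.
longestBalanced :: String -> Int
longestBalanced s = go 0 [-1] (zip [0 ..] s)
  where
    go best _     []              = best
    go best stack ((i, c) : rest)
      | c == '('  = go best (i : stack) rest
      | otherwise = case tail stack of             -- c == ')': pop one index
          []       -> go best [i] rest             -- no match left: new base
          (t : ts) -> go (max best (i - t)) (t : ts) rest

-- e.g. longestBalanced ")()())" == 4
```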