We improve the backward compatibility of stableKanren so that it can run standard miniKanren programs. stableKanren is a miniKanren extension capable of non-monotonic reasoning through stable model semantics. However, standard miniKanren programs that produce infinite results do not run as expected in stableKanren. Under stable model semantics, contradictions are created by negations. Standard miniKanren relations do not involve negation, and stableKanren's coarse-grained contradiction handling causes this compatibility issue. Therefore, we provide fine-grained contradiction handling that restricts the checking scope. As a result, standard miniKanren relations can produce answers. We also add a ``run-partial'' interface so that standard miniKanren relations implemented with ``define''/``defineo'' can generate answers even if they coexist with non-terminating or unsatisfiable stableKanren relations in the same environment. The ``run-partial'' interface also runs stratified negation programs faster by skipping the global check for unavoidable contradictions. In the future, a dependency graph analysis can be applied to the input query so that the ``run'' interface can implicitly decide whether to perform unavoidable-contradiction checking, improving usability.
{"title":"Improving stableKanren's Backward Compatibility","authors":"Xiangyu Guo, Ajay Bansal","doi":"arxiv-2408.16257","DOIUrl":"https://doi.org/arxiv-2408.16257","url":null,"abstract":"We improve the backward compatibility of stableKanren to run miniKanren\u0000programs. stableKanren is a miniKanren extension capable of non-monotonic\u0000reasoning through stable model semantics. However, standard miniKanren programs\u0000that produce infinite results do not run as expected in stableKanren. According\u0000to stable model semantics, the contradictions are created by negations. A\u0000standard miniKanren's relations do not involve negation, and the coarse\u0000contradictions handling in stableKanren causes this compatibility issue.\u0000Therefore, we provide a find-grinded contradiction handling to restrict the\u0000checking scope. As a result, standard miniKanren relations can produce answers.\u0000We also add a ``run-partial'' interface so that standard miniKanren's relations\u0000implemented with ``define''/``defineo'' can generate answers even if they\u0000coexist with non-terminating or unsatisfiable stableKanren relations in the\u0000same environment. The ``run-partial'' interface also supports running\u0000stratified negation programs faster without checking global unavoidable\u0000contradictions. A dependency graph analysis can be applied to the input query\u0000in the future, so the ``run'' interface can implicitly decide whether to\u0000perform unavoidable contradictions checking to improve usability.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"30 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We are currently developing an innovative and visually driven programming language called Omega. Although Omega code is stored in text files, these files are not intended for manual editing or traditional printing. Furthermore, parsing these files with a context-free grammar is not possible. Both parsing the code and supporting user-friendly manual editing require global knowledge of the codebase. Strictly speaking, code visualization is not an integral part of the Omega language; instead, this task is delegated to the editing tools. Thanks to this global knowledge of the code, the editing process becomes remarkably straightforward, with numerous automatic completion features that enhance usability. Omega leverages a visual-oriented approach to encompass all fundamental aspects of software engineering. It offers robust features, including safe static typing, design by contracts, rules for accessing slots, operator definitions, and more, all presented in an intuitive and visually comprehensible manner, eliminating the need for obscure syntax.
{"title":"Omega: The Power of Visual Simplicity","authors":"Benoit SonntagLSIIT, Dominique ColnetKIWI","doi":"arxiv-2408.15631","DOIUrl":"https://doi.org/arxiv-2408.15631","url":null,"abstract":"We are currently developing an innovative and visually-driven programming\u0000language called Omega.Although the Omega code is stored in text files, these\u0000files are not intended for manual editing or traditional printing.Furthermore,\u0000parsing these files using a context-free grammar is not possible.The parsing of\u0000the code and the facilitation of user-friendly manual editing both necessitate\u0000a global knowledge of the codebase.Strictly speaking, code visualization is not\u0000an integral part of the Omega language; instead, this task is delegated to the\u0000editing tools.Thanks to the global knowledge of the code, the editing process\u0000becomes remarkably straightforward, with numerous automatic completion features\u0000that enhance usability.Omega leverages a visual-oriented approach to encompass\u0000all fundamental aspects of software engineering.It offers robust features,\u0000including safe static typing, design by contracts, rules for accessing slots,\u0000operator definitions, and more,all presented in an intuitively and visually\u0000comprehensible manner, eliminating the need for obscure syntax.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"62 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Charlie Murphy, Keith Johnson, Thomas Reps, Loris D'Antoni
Semantics-Guided Synthesis (SemGuS) provides a framework to specify synthesis problems in a solver-agnostic and domain-agnostic way, by allowing a user to provide both the syntax and semantics of the language in which the desired program should be synthesized. Because synthesis and verification are closely intertwined, the SemGuS framework raises the problem of how to verify programs in a solver- and domain-agnostic way. We prove that the problem of verifying whether a program is a valid solution to a SemGuS problem can be reduced to proving validity of a query in the `CLP calculus, a fixed-point logic that generalizes Constrained Horn Clauses and co-Constrained Horn Clauses. Our encoding into `CLP allows us to further classify SemGuS verification problems into ones that are reducible to validity of (i) first-order-logic formulas, (ii) Constrained Horn Clauses, (iii) co-Constrained Horn Clauses, and (iv) `CLP queries. Furthermore, our encoding sheds light on some limitations of the SemGuS framework, such as its inability to model nondeterminism and reactive synthesis. We thus propose a modification to SemGuS that makes it more expressive, and for which verifying solutions is exactly equivalent to proving validity of a query in the `CLP calculus. Our implementation of SemGuS verifiers based on the above encoding can verify instances that were not even encodable in previous work. Furthermore, we use our SemGuS verifiers within an enumeration-based SemGuS solver to correctly synthesize solutions to SemGuS problems that no previous SemGuS synthesizer could solve.
{"title":"Verifying Solutions to Semantics-Guided Synthesis Problems","authors":"Charlie Murphy, Keith Johnson, Thomas Reps, Loris D'Antoni","doi":"arxiv-2408.15475","DOIUrl":"https://doi.org/arxiv-2408.15475","url":null,"abstract":"Semantics-Guided Synthesis (SemGuS) provides a framework to specify synthesis\u0000problems in a solver-agnostic and domain-agnostic way, by allowing a user to\u0000provide both the syntax and semantics of the language in which the desired\u0000program should be synthesized. Because synthesis and verification are closely\u0000intertwined, the SemGuS framework raises the problem of how to verify programs\u0000in a solver and domain-agnostic way. We prove that the problem of verifying whether a program is a valid solution\u0000to a SemGuS problem can be reduced to proving validity of a query in the `CLP\u0000calculus, a fixed-point logic that generalizes Constrained Horn Clauses and\u0000co-Constrained Horn Clauses. Our encoding into `CLP allows us to further\u0000classify the SemGuS verification problems into ones that are reducible to\u0000validity of (i) first-order-logic formulas, (ii) Constrained Horn Clauses,\u0000(iii) co-Constrained Horn Clauses, and (iv) `CLP queries. Furthermore, our\u0000encoding shines light on some limitations of the SemGuS framework, such as its\u0000inability to model nondeterminism and reactive synthesis. We thus propose a\u0000modification to SemGuS that makes it more expressive, and for which verifying\u0000solutions is exactly equivalent to proving validity of a query in the `CLP\u0000calculus. Our implementation of SemGuS verifiers based on the above encoding\u0000can verify instances that were not even encodable in previous work.\u0000Furthermore, we use our SemGuS verifiers within an enumeration-based SemGuS\u0000solver to correctly synthesize solutions to SemGuS problems that no previous\u0000SemGuS synthesizer could solve.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keith J. C. Johnson, Rahul Krishnan, Thomas Reps, Loris D'Antoni
In top-down enumeration for program synthesis, abstraction-based pruning uses an abstract domain to approximate the set of possible values that a partial program, when completed, can output on a given input. If the set does not contain the desired output, the partial program and all its possible completions can be pruned. In its general form, abstraction-based pruning requires manually designed, domain-specific abstract domains and semantics, and thus has only been used in domain-specific synthesizers. This paper provides sufficient conditions under which a form of abstraction-based pruning can be automated for arbitrary synthesis problems in the general-purpose Semantics-Guided Synthesis (SemGuS) framework without requiring manually defined abstract domains. We show that if the semantics of the language for which we are synthesizing programs exhibits some monotonicity properties, one can obtain an abstract interval-based semantics for free from the concrete semantics of the programming language, and use such semantics to effectively prune the search space. We also identify a condition that ensures such abstract semantics can be used to compute a precise abstraction of the set of values that a program derivable from a given hole in a partial program can produce. These precise abstractions make abstraction-based pruning more effective. We implement our approach in a tool, Moito, which can tackle synthesis problems defined in the SemGuS framework. Moito can automate interval-based pruning without any a priori knowledge of the problem domain, and solve synthesis problems that previously required domain-specific, abstraction-based synthesizers -- e.g., synthesis of regular expressions, CSV file schemas, and imperative programs from examples.
{"title":"Automating Pruning in Top-Down Enumeration for Program Synthesis Problems with Monotonic Semantics","authors":"Keith J. C. Johnson, Rahul Krishnan, Thomas Reps, Loris D'Antoni","doi":"arxiv-2408.15822","DOIUrl":"https://doi.org/arxiv-2408.15822","url":null,"abstract":"In top-down enumeration for program synthesis, abstraction-based pruning uses\u0000an abstract domain to approximate the set of possible values that a partial\u0000program, when completed, can output on a given input. If the set does not\u0000contain the desired output, the partial program and all its possible\u0000completions can be pruned. In its general form, abstraction-based pruning\u0000requires manually designed, domain-specific abstract domains and semantics, and\u0000thus has only been used in domain-specific synthesizers. This paper provides sufficient conditions under which a form of\u0000abstraction-based pruning can be automated for arbitrary synthesis problems in\u0000the general-purpose Semantics-Guided Synthesis (SemGuS) framework without\u0000requiring manually-defined abstract domains. We show that if the semantics of\u0000the language for which we are synthesizing programs exhibits some monotonicity\u0000properties, one can obtain an abstract interval-based semantics for free from\u0000the concrete semantics of the programming language, and use such semantics to\u0000effectively prune the search space. We also identify a condition that ensures\u0000such abstract semantics can be used to compute a precise abstraction of the set\u0000of values that a program derivable from a given hole in a partial program can\u0000produce. These precise abstractions make abstraction-based pruning more\u0000effective. We implement our approach in a tool, Moito, which can tackle synthesis\u0000problems defined in the SemGuS framework. Moito can automate interval-based\u0000pruning without any a-priori knowledge of the problem domain, and solve\u0000synthesis problems that previously required domain-specific, abstraction-based\u0000synthesizers -- e.g., synthesis of regular expressions, CSV file schema, and\u0000imperative programs from examples.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kyle Deeds, Willow Ahrens, Magda Balazinska, Dan Suciu
The tensor programming abstraction has become the key framework for expressing bulk computation: it allows users to write high-performance programs through a high-level imperative interface. Recent work has extended this paradigm to sparse tensors (i.e., tensors where most entries are not explicitly represented) with the use of sparse tensor compilers. These systems excel at producing efficient code for computation over sparse tensors, which may be stored in a wide variety of formats. However, they require the user to manually choose the order of operations and the data formats at every step. Unfortunately, these decisions are both highly impactful and complicated, requiring significant effort to optimize manually. In this work, we present Galley, a system for declarative sparse tensor programming. Galley performs cost-based optimization to lower these programs first to a logical plan and then to a physical plan. It then leverages sparse tensor compilers to execute the physical plan efficiently. We show that Galley achieves high performance on a wide variety of problems, including machine learning algorithms, subgraph counting, and iterative graph algorithms.
{"title":"Galley: Modern Query Optimization for Sparse Tensor Programs","authors":"Kyle Deeds, Willow Ahrens, Magda Balazinska, Dan Suciu","doi":"arxiv-2408.14706","DOIUrl":"https://doi.org/arxiv-2408.14706","url":null,"abstract":"The tensor programming abstraction has become the key . This framework allows\u0000users to write high performance programs for bulk computation via a high-level\u0000imperative interface. Recent work has extended this paradigm to sparse tensors\u0000(i.e. tensors where most entries are not explicitly represented) with the use\u0000of sparse tensor compilers. These systems excel at producing efficient code for\u0000computation over sparse tensors, which may be stored in a wide variety of\u0000formats. However, they require the user to manually choose the order of\u0000operations and the data formats at every step. Unfortunately, these decisions\u0000are both highly impactful and complicated, requiring significant effort to\u0000manually optimize. In this work, we present Galley, a system for declarative\u0000sparse tensor programming. Galley performs cost-based optimization to lower\u0000these programs to a logical plan then to a physical plan. It then leverages\u0000sparse tensor compilers to execute the physical plan efficiently. We show that\u0000Galley achieves high performance on a wide variety of problems including\u0000machine learning algorithms, subgraph counting, and iterative graph algorithms.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compilers convert between representations -- usually, from higher-level, human-writable code to lower-level, machine-readable code. A compiler backend is the portion of the compiler containing optimizations and code generation routines for a specific hardware target. In this dissertation, I advocate for a specific way of building compiler backends: namely, by automatically generating them from explicit, formal models of hardware using automated reasoning algorithms. I describe how automatically generating compilers from formal models of hardware leads to increased optimization ability, stronger correctness guarantees, and reduced development time for compiler backends. As evidence, I present two case studies: first, Glenside, which uses equality saturation to increase the 3LA compiler's ability to offload operations to machine learning accelerators, and second, Lakeroad, a technology mapper for FPGAs that uses program synthesis and semantics extracted from Verilog to map hardware designs to complex, programmable hardware primitives.
{"title":"Generation of Compiler Backends from Formal Models of Hardware","authors":"Gus Henry Smith","doi":"arxiv-2408.15429","DOIUrl":"https://doi.org/arxiv-2408.15429","url":null,"abstract":"Compilers convert between representations -- usually, from higher-level,\u0000human writable code to lower-level, machine-readable code. A compiler backend\u0000is the portion of the compiler containing optimizations and code generation\u0000routines for a specific hardware target. In this dissertation, I advocate for a\u0000specific way of building compiler backends: namely, by automatically generating\u0000them from explicit, formal models of hardware using automated reasoning\u0000algorithms. I describe how automatically generating compilers from formal\u0000models of hardware leads to increased optimization ability, stronger\u0000correctness guarantees, and reduced development time for compiler backends. As\u0000evidence, I present two case studies: first, Glenside, which uses equality\u0000saturation to increase the 3LA compiler's ability to offload operations to\u0000machine learning accelerators, and second, Lakeroad, a technology mapper for\u0000FPGAs which uses program synthesis and semantics extracted from Verilog to map\u0000hardware designs to complex, programmable hardware primitives.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"41 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jiangyi Liu, Charlie Murphy, Anvay Grover, Keith J. C. Johnson, Thomas Reps, Loris D'Antoni
Program verification and synthesis frameworks that allow one to customize the language in which one is interested typically require the user to provide a formally defined semantics for the language. Because writing a formal semantics can be a daunting and error-prone task, this requirement stands in the way of such frameworks being adopted by non-expert users. We present an algorithm that can automatically synthesize inductively defined syntax-directed semantics when given (i) a grammar describing the syntax of a language and (ii) an executable (closed-box) interpreter for computing the semantics of programs in the language of the grammar. Our algorithm synthesizes the semantics in the form of Constrained Horn Clauses (CHCs), a natural, extensible, and formal logical framework for specifying inductively defined relations that has recently received widespread adoption in program verification and synthesis. The key innovation of our synthesis algorithm is a Counterexample-Guided Inductive Synthesis (CEGIS) approach that breaks the hard problem of synthesizing a set of constrained Horn clauses into small, tractable expression-synthesis problems that can be dispatched to existing SyGuS synthesizers. Our tool Synantic synthesized inductively defined formal semantics from 14 interpreters for languages used in program-synthesis applications. When synthesizing formal semantics for one of our benchmarks, Synantic unveiled an inconsistency in the semantics computed by the interpreter for a language of regular expressions; fixing the inconsistency resulted in a more efficient semantics and, for some cases, in a 1.2x speedup for a synthesizer solving synthesis problems over such a language.
{"title":"Synthesizing Formal Semantics from Executable Interpreters","authors":"Jiangyi Liu, Charlie Murphy, Anvay Grover, Keith J. C. Johnson, Thomas Reps, Loris D'Antoni","doi":"arxiv-2408.14668","DOIUrl":"https://doi.org/arxiv-2408.14668","url":null,"abstract":"Program verification and synthesis frameworks that allow one to customize the\u0000language in which one is interested typically require the user to provide a\u0000formally defined semantics for the language. Because writing a formal semantics can be a daunting and error-prone task,\u0000this requirement stands in the way of such frameworks being adopted by\u0000non-expert users. We present an algorithm that can automatically synthesize inductively defined\u0000syntax-directed semantics when given (i) a grammar describing the syntax of a\u0000language and (ii) an executable (closed-box) interpreter for computing the\u0000semantics of programs in the language of the grammar. Our algorithm synthesizes the semantics in the form of Constrained-Horn\u0000Clauses (CHCs), a natural, extensible, and formal logical framework for\u0000specifying inductively defined relations that has recently received widespread\u0000adoption in program verification and synthesis. The key innovation of our synthesis algorithm is a Counterexample-Guided\u0000Synthesis (CEGIS) approach that breaks the hard problem of synthesizing a set\u0000of constrained Horn clauses into small, tractable expression-synthesis problems\u0000that can be dispatched to existing SyGuS synthesizers. Our tool Synantic synthesized inductively-defined formal semantics from 14\u0000interpreters for languages used in program-synthesis applications. When synthesizing formal semantics for one of our benchmarks, Synantic\u0000unveiled an inconsistency in the semantics computed by the interpreter for a\u0000language of regular expressions; fixing the inconsistency resulted in a more\u0000efficient semantics and, for some cases, in a 1.2x speedup for a synthesizer\u0000solving synthesis problems over such a language.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"109 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The development of software applications using multiple programming languages has increased in recent years, as it allows the selection of the most suitable language and runtime for each component of the system and the integration of third-party libraries. However, this practice involves complexity and error-proneness, due to the absence of an adequate system for the interoperability of multiple programming languages. Developers are compelled to resort to workarounds, such as library reimplementation or language-specific wrappers, which are often dependent on C as the common denominator for interoperability. These challenges render the use of multiple programming languages a burdensome and demanding task that necessitates highly skilled developers for implementation, debugging, and maintenance, and raise doubts about the benefits of interoperability. To overcome these challenges, we propose MetaFFI, a pluggable in-process indirect-interoperability system that allows the loading and utilization of entities from multiple programming languages. This is achieved by exploiting the less restrictive shallow binding mechanisms (e.g., Foreign Function Interface) to offer deep binding features (e.g., object creation, methods, fields). MetaFFI provides a runtime-independent framework to load and xcall (cross-call) foreign entities (e.g., functions, objects). MetaFFI uses Common Data Types (CDTs) to pass parameters and return values, including objects and complex types, and even cross-language callbacks. The indirect interoperability approach of MetaFFI has the significant advantage of requiring only 2n mechanisms to support n languages, as opposed to direct interoperability approaches that need n^2 mechanisms. We have successfully tested the binding between Go, Python3.11, and Java in a proof-of-concept on Windows and Ubuntu.
{"title":"MetaFFI -- Multilingual Indirect Interoperability System","authors":"Tsvi Cherny-Shahar, Amiram Yehudai","doi":"arxiv-2408.14175","DOIUrl":"https://doi.org/arxiv-2408.14175","url":null,"abstract":"The development of software applications using multiple programming languages\u0000has increased in recent years, as it allows the selection of the most suitable\u0000language and runtime for each component of the system and the integration of\u0000third-party libraries. However, this practice involves complexity and error\u0000proneness, due to the absence of an adequate system for the interoperability of\u0000multiple programming languages. Developers are compelled to resort to\u0000workarounds, such as library reimplementation or language-specific wrappers,\u0000which are often dependent on C as the common denominator for interoperability.\u0000These challenges render the use of multiple programming languages a burdensome\u0000and demanding task that necessitates highly skilled developers for\u0000implementation, debugging, and maintenance, and raise doubts about the benefits\u0000of interoperability. To overcome these challenges, we propose MetaFFI, a\u0000pluggable in-process indirect-interoperability system that allows the loading\u0000and utilization of entities from multiple programming languages. This is\u0000achieved by exploiting the less restrictive shallow binding mechanisms (e.g.,\u0000Foreign Function Interface) to offer deep binding features (e.g., object\u0000creation, methods, fields). MetaFFI provides a runtime-independent framework to\u0000load and emph{xcall} (Cross-Call) foreign entities (e.g., functions, objects).\u0000MetaFFI uses Common Data Types (CDTs) to pass parameters and return values,\u0000including objects and complex types, and even cross-language callbacks. The\u0000indirect interoperability approach of MetaFFI has the significant advantage of\u0000requiring only $2n$ mechanisms to support $n$ languages, as opposed to the\u0000direct interoperability approaches that need $n^2$ mechanisms. We have\u0000successfully tested the binding between Go, Python3.11, and Java in a\u0000proof-of-concept on Windows and Ubuntu.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Typestate systems are notoriously complex as they require sophisticated machinery for tracking aliasing. We propose a new, transition-oriented foundation for typestate in the setting of impure functional programming. Our approach relies on ordered types for simple alias tracking and its formalization draws on work on bunched implications. Yet, we support a flexible notion of borrowing in the presence of typestate. Our core calculus comes with a notion of resource types indexed by an ordered partial monoid that models abstract state transitions. We prove syntactic type soundness with respect to a resource-instrumented semantics. We give an algorithmic version of our type system and prove its soundness. Algorithmic typing facilitates a simple surface language that does not expose tedious details of ordered types. We implemented a typechecker for the surface language along with an interpreter for the core language.
{"title":"Law and Order for Typestate with Borrowing","authors":"Hannes Saffrich, Yuki Nishida, Peter Thiemann","doi":"arxiv-2408.14031","DOIUrl":"https://doi.org/arxiv-2408.14031","url":null,"abstract":"Typestate systems are notoriously complex as they require sophisticated\u0000machinery for tracking aliasing. We propose a new, transition-oriented\u0000foundation for typestate in the setting of impure functional programming. Our\u0000approach relies on ordered types for simple alias tracking and its\u0000formalization draws on work on bunched implications. Yet, we support a flexible\u0000notion of borrowing in the presence of typestate. Our core calculus comes with a notion of resource types indexed by an ordered\u0000partial monoid that models abstract state transitions. We prove syntactic type\u0000soundness with respect to a resource-instrumented semantics. We give an\u0000algorithmic version of our type system and prove its soundness. Algorithmic\u0000typing facilitates a simple surface language that does not expose tedious\u0000details of ordered types. We implemented a typechecker for the surface language\u0000along with an interpreter for the core language.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aaron Bembenek (University of Melbourne), Michael Greenberg (Stevens Institute of Technology), Stephen Chong (Harvard University)
By combining Datalog, SMT solving, and functional programming, the language Formulog provides an appealing mix of features for implementing SMT-based static analyses (e.g., refinement type checking, symbolic execution) in a natural, declarative way. At the same time, the performance of its custom Datalog solver can be an impediment to using Formulog beyond prototyping -- a common problem for Datalog variants that aspire to solve large problem instances. In this work we speed up Formulog evaluation, with surprising results: while 2.2x speedups are obtained by using the conventional techniques for high-performance Datalog (e.g., compilation, specialized data structures), the big wins come by abandoning the central assumption in modern performant Datalog engines, semi-naive Datalog evaluation. In its place, we develop eager evaluation, a concurrent Datalog evaluation algorithm that explores the logical inference space via a depth-first traversal order. In practice, eager evaluation leads to an advantageous distribution of Formulog's SMT workload to external SMT solvers and improved SMT solving times: our eager evaluation extensions to the Formulog interpreter and Soufflé's code generator achieve mean 5.2x and 7.6x speedups, respectively, over the optimized code generated by off-the-shelf Soufflé on SMT-heavy Formulog benchmarks. Using compilation and eager evaluation, Formulog implementations of refinement type checking, bottom-up pointer analysis, and symbolic execution achieve speedups on 20 out of 23 benchmarks over previously published, hand-tuned analyses written in F#, Java, and C++, providing strong evidence that Formulog can be the basis of a realistic platform for SMT-based static analysis. Moreover, our experience adds nuance to the conventional wisdom that semi-naive evaluation is the one-size-fits-all best Datalog evaluation algorithm for static analysis workloads.
{"title":"Making Formulog Fast: An Argument for Unconventional Datalog Evaluation (Extended Version)","authors":"Aaron BembenekUniversity of Melbourne, Michael GreenbergStevens Institute of Technology, Stephen ChongHarvard University","doi":"arxiv-2408.14017","DOIUrl":"https://doi.org/arxiv-2408.14017","url":null,"abstract":"By combining Datalog, SMT solving, and functional programming, the language\u0000Formulog provides an appealing mix of features for implementing SMT-based\u0000static analyses (e.g., refinement type checking, symbolic execution) in a\u0000natural, declarative way. At the same time, the performance of its custom\u0000Datalog solver can be an impediment to using Formulog beyond prototyping -- a\u0000common problem for Datalog variants that aspire to solve large problem\u0000instances. In this work we speed up Formulog evaluation, with surprising\u0000results: while 2.2x speedups are obtained by using the conventional techniques\u0000for high-performance Datalog (e.g., compilation, specialized data structures),\u0000the big wins come by abandoning the central assumption in modern performant\u0000Datalog engines, semi-naive Datalog evaluation. In its place, we develop eager\u0000evaluation, a concurrent Datalog evaluation algorithm that explores the logical\u0000inference space via a depth-first traversal order. In practice, eager\u0000evaluation leads to an advantageous distribution of Formulog's SMT workload to\u0000external SMT solvers and improved SMT solving times: our eager evaluation\u0000extensions to the Formulog interpreter and Souffl'e's code generator achieve\u0000mean 5.2x and 7.6x speedups, respectively, over the optimized code generated by\u0000off-the-shelf Souffl'e on SMT-heavy Formulog benchmarks. Using compilation and eager evaluation, Formulog implementations of\u0000refinement type checking, bottom-up pointer analysis, and symbolic execution\u0000achieve speedups on 20 out of 23 benchmarks over previously published,\u0000hand-tuned analyses written in F#, Java, and C++, providing strong evidence\u0000that Formulog can be the basis of a realistic platform for SMT-based static\u0000analysis. Moreover, our experience adds nuance to the conventional wisdom that\u0000semi-naive evaluation is the one-size-fits-all best Datalog evaluation\u0000algorithm for static analysis workloads.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142227598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}