Emanuele De Angelis (IASI-CNR, Rome, Italy), Fabio Fioravanti (DEc, University 'G. d'Annunzio', Chieti-Pescara, Italy), Alberto Pettorossi (DICII, University of Rome 'Tor Vergata', Italy), Maurizio Proietti (IASI-CNR, Rome, Italy)
Catamorphisms are functions that are recursively defined on lists, trees and, more generally, on Algebraic Data Types (ADTs), and are often used to compute suitable abstractions of programs that manipulate ADTs. Examples of catamorphisms include functions that compute the size of a list, the orderedness of a list, and the height of a tree. It is well known that program properties specified through catamorphisms can be proved by showing the satisfiability of suitable sets of Constrained Horn Clauses (CHCs). We address the problem of checking the satisfiability of those sets of CHCs, and we propose a method for transforming sets of CHCs into equisatisfiable sets in which catamorphisms no longer occur. As a consequence, clauses with catamorphisms can be handled without extending the satisfiability algorithms used by existing CHC solvers. Through an experimental evaluation on a non-trivial benchmark consisting of many list and tree processing algorithms expressed as sets of CHCs, we show that our technique is indeed effective and significantly enhances the performance of state-of-the-art CHC solvers.
"Catamorphic Abstractions for Constrained Horn Clause Satisfiability", arXiv:2408.06988, arXiv - CS - Programming Languages, 2024-08-13.
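To make the notion concrete, a catamorphism is just a fold. The following sketch is illustrative only (it is not the paper's CHC encoding): it defines the size and orderedness abstractions mentioned in the abstract as folds over Python lists.

```python
# Illustrative sketch: catamorphisms as folds over lists, computing the
# abstractions named in the abstract (size and orderedness).

def foldr(f, acc, xs):
    """Generic list catamorphism: replace cons with f and nil with acc."""
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

def size(xs):
    # Catamorphism computing the size of a list.
    return foldr(lambda _, n: n + 1, 0, xs)

def ordered(xs):
    # Catamorphism computing orderedness; the accumulator carries
    # (sorted_so_far, smallest_element_seen_from_the_right).
    def step(x, acc):
        ok, lo = acc
        return (ok and (lo is None or x <= lo), x)
    return foldr(step, (True, None), xs)[0]

print(size([3, 1, 2]))     # 3
print(ordered([1, 2, 3]))  # True
print(ordered([3, 1, 2]))  # False
```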
Nikolaj S. Bjørner (Microsoft Research), Ashley J. Chen (New York University Shanghai), Shuo Chen (Microsoft Research), Yang Chen (Microsoft Research), Zhongxin Guo (Microsoft Research), Tzu-Han Hsu (Michigan State University), Peng Liu (Pennsylvania State University), Nanqing Luo (Pennsylvania State University)
Security bugs and trapdoors in smart contracts have been impacting the Ethereum community since its inception. Conceptually, the 1.45 million contracts on Ethereum form a single "gigantic program" whose behavior is determined by the complex reference topology among the contracts. Can the Ethereum community be assured that this gigantic program conforms to its design-level safety properties, despite unforeseeable code-level intricacies? Static code verification is inadequate due to the program's gigantic scale and high polymorphism. In this paper, we present a viable technological roadmap for the community toward this ambitious goal. Our technology, called Theorem-Carrying-Transaction (TCT), combines the benefits of concrete execution and symbolic proofs. Under the TCT protocol, every transaction carries a theorem that proves its adherence to the specified properties in the invoked contracts, and the runtime system checks the theorem before executing the transaction. Once a property is specified in a contract, it can be treated confidently as an unconditional guarantee made by the contract. As case studies, we demonstrate that TCT secures token contracts without having to foresee code-level intricacies such as integer overflow and reentrancy. TCT has also been successfully applied to a Uniswap codebase, showcasing a complex decentralized finance (DeFi) scenario. Our prototype incurs a negligible runtime overhead, two orders of magnitude lower than that of a state-of-the-art approach.
"Theorem-Carrying-Transaction: Runtime Certification to Ensure Safety for Smart Contract Transactions", arXiv:2408.06478, 2024-08-12.
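The check-then-execute protocol can be conveyed with a toy model. All names and the certificate format below are hypothetical; real TCT theorems are symbolic proofs, not executable predicates, so this sketch only illustrates the control flow of "check the theorem, then run the transaction".

```python
# Hypothetical toy model of a check-then-execute protocol: a transaction
# carries a claimed property plus a checkable certificate, and the runtime
# refuses to execute transactions whose certificate does not validate.

class Transaction:
    def __init__(self, effect, theorem):
        self.effect = effect    # state -> state
        self.theorem = theorem  # (claimed property, checker predicate)

def run(state, tx):
    prop, check = tx.theorem
    if not check(state):        # runtime certification step
        raise ValueError(f"theorem rejected: {prop}")
    return tx.effect(state)

# A token transfer claiming the recipient's balance cannot overflow.
def transfer(state):
    state = dict(state)
    state["alice"] -= 10
    state["bob"] += 10
    return state

MAX = 2**256 - 1
tx = Transaction(transfer, ("no overflow", lambda s: s["bob"] + 10 <= MAX))

print(run({"alice": 100, "bob": 5}, tx))  # {'alice': 90, 'bob': 15}
```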
We study the problem of automatically repairing infinite-state software programs with respect to temporal hyperproperties. As a first step, we present a repair approach for the temporal logic HyperLTL based on symbolic execution, constraint generation, and syntax-guided synthesis of repair expressions (SyGuS). To improve the repair quality, we introduce the notion of a transparent repair, which aims to find a patch that is as close as possible to the original program. As a practical realization, we develop an iterative repair approach: we search for a sequence of repairs that come closer and closer to the original program's behavior. We implement our method in a prototype and report encouraging experimental results using off-the-shelf SyGuS solvers.
"Syntax-Guided Automated Program Repair for Hyperproperties", Raven Beutner, Tzu-Han Hsu, Borzoo Bonakdarpour, Bernd Finkbeiner, arXiv:2408.06035, 2024-08-12.
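The iterative repair loop admits a compact sketch: enumerate candidate patches in order of distance from the original program and return the first one the property checker accepts. The candidates, distance metric, and checker below are hypothetical stand-ins for what a SyGuS solver and a HyperLTL verifier would provide.

```python
# Hedged sketch of iterative repair: try candidates closest to the
# original program first, so an accepted patch is as transparent as the
# search allows. Naive enumeration stands in for a real SyGuS solver.

def iterative_repair(original, candidates, distance, satisfies):
    for patch in sorted(candidates, key=lambda c: distance(original, c)):
        if satisfies(patch):
            return patch
    return None

# Toy instance: repair an expression so its output no longer depends on a
# secret (a simple noninterference-flavored requirement).
orig = ("x + secret",)
cands = [("x + secret",), ("x + 1",), ("0",)]
dist = lambda a, b: sum(u != v for u, v in zip(a, b))  # hypothetical metric
ok = lambda c: "secret" not in c[0]                    # hypothetical checker

print(iterative_repair(orig, cands, dist, ok))  # ('x + 1',)
```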
Philipp Jan Andries Stassen, Rasmus Ejlers Møgelberg, Maaike Zwart, Alejandro Aguirre, Lars Birkedal
Constructive type theory combines logic and programming in one language. This is useful both for reasoning about programs written in type theory and for reasoning about other programming languages inside type theory. It is well known that it is challenging to extend these applications to languages with recursion and computational effects such as probabilistic choice, because these features are not easily represented in constructive type theory. We show how to define and reason about a programming language with probabilistic choice and recursive types in guarded type theory. We use higher inductive types to represent finite distributions and guarded recursion to model recursion. We define both operational and denotational semantics, as well as a relation between the two. The relation can be used to prove adequacy, but we also show how to use it to reason about programs up to contextual equivalence. To the best of our knowledge, this is the first model of a programming language with probabilistic choice and recursive types in a constructive type theory.
"Modelling Probabilistic FPC in Guarded Type Theory", arXiv:2408.04455, 2024-08-08.
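As a rough analogue of the finite-distribution construction, one can model finite distributions as weight maps with monadic return and bind. The paper builds them as higher inductive types in guarded type theory; this Python sketch only conveys the interface (Dirac distribution, bind, and binary probabilistic choice).

```python
# Finite distributions as weight maps, with return (dirac), bind, and
# probabilistic choice. Exact arithmetic via Fraction keeps weights precise.
from fractions import Fraction

def dirac(x):
    return {x: Fraction(1)}

def bind(dist, f):
    out = {}
    for x, p in dist.items():
        for y, q in f(x).items():
            out[y] = out.get(y, Fraction(0)) + p * q
    return out

def choice(p, d1, d2):
    # d1 with probability p, otherwise d2.
    out = {x: p * w for x, w in d1.items()}
    for x, w in d2.items():
        out[x] = out.get(x, Fraction(0)) + (1 - p) * w
    return out

coin = choice(Fraction(1, 2), dirac(0), dirac(1))
two_flips = bind(coin, lambda a: bind(coin, lambda b: dirac(a + b)))
print({k: str(v) for k, v in two_flips.items()})  # {0: '1/4', 1: '1/2', 2: '1/4'}
```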
Deep Learning models have experienced exponential growth in complexity and resource demands in recent years. Accelerating these models for efficient execution on resource-constrained devices has become more crucial than ever. Two notable techniques employed to achieve this goal are Hardware-aware Neural Architecture Search (HW-NAS) and Automatic Code Optimization (ACO). HW-NAS automatically designs accurate yet hardware-friendly neural networks, while ACO involves searching for the best compiler optimizations to apply on neural networks for efficient mapping and inference on the target hardware. This survey explores recent works that combine these two techniques within a single framework. We present the fundamental principles of both domains and demonstrate their sub-optimality when performed independently. We then investigate their integration into a joint optimization process that we call Hardware Aware-Neural Architecture and Compiler Optimizations co-Search (NACOS).
"Combining Neural Architecture Search and Automatic Code Optimization: A Survey", Inas Bachiri, Hadjer Benmeziane, Smail Niar, Riyadh Baghdadi, Hamza Ouarnoughi, Abdelkrime Aries, arXiv:2408.04116, 2024-08-07.
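A minimal sketch of the joint search space, with entirely hypothetical cost numbers: the latency of an (architecture, schedule) pair depends on the interaction between the two choices, which is why searching the dimensions jointly can beat composing two independent searches.

```python
# Illustrative joint search over architecture and compiler-schedule choices
# (all names and numbers hypothetical). The cost model couples the two
# dimensions, so neither can be optimized in isolation.
import itertools

archs = {"small": 1.0, "wide": 3.0}        # hypothetical compute demands
schedules = {"tile8": 1.0, "tile32": 2.0}  # hypothetical throughput factors

def latency(arch, sched):
    # Toy interaction: the wide network only maps well onto large tiles.
    penalty = 5.0 if (arch == "wide" and sched == "tile8") else 0.0
    return archs[arch] / schedules[sched] + penalty

def joint_search():
    # Exhaustive joint enumeration; real NACOS frameworks use smarter search.
    return min(itertools.product(archs, schedules),
               key=lambda pair: latency(*pair))

print(joint_search())  # ('small', 'tile32')
```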
Recently, we showed how to apply program-synthesis techniques to create abstract transformers in a user-provided domain-specific language (DSL) $\mathcal{L}$ (i.e., "$\mathcal{L}$-transformers"). However, we found that the algorithm of Kalita et al. does not succeed when applied to reduced-product domains: the need to synthesize transformers for all of the domains simultaneously blows up the search space. Because reduced-product domains are an important device for improving the precision of abstract interpretation, in this paper we propose an algorithm to synthesize reduced $\mathcal{L}$-transformers $\langle f_1^{\sharp R}, f_2^{\sharp R}, \ldots, f_n^{\sharp R} \rangle$ for a product domain $A_1 \times A_2 \times \ldots \times A_n$, using multiple DSLs: $\mathcal{L} = \langle \mathcal{L}_1, \mathcal{L}_2, \ldots, \mathcal{L}_n \rangle$. Synthesis of reduced-product transformers is quite challenging: first, the synthesis task has to tackle an increased "feature set", because each component transformer now has access to the abstract inputs from all component domains in the product. Second, to ensure that the product transformer is maximally precise, the synthesis task needs to arrange for the component transformers to cooperate with each other. We implemented our algorithm in a tool, Amurth2, and used it to synthesize abstract transformers for two product domains -- SAFE and JSAI -- available within the SAFEstr framework for JavaScript program analysis. For four of the six operations supported by SAFEstr, Amurth2 synthesizes more precise abstract transformers than the manually written ones available in SAFEstr.
"Synthesizing Abstract Transformers for Reduced-Product Domains", Pankaj Kumar Kalita, Thomas Reps, Subhajit Roy, arXiv:2408.04040, 2024-08-07.
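The cooperation between component domains can be illustrated with a tiny reduced product of intervals and parities (not the SAFE/JSAI domains of the paper): the reduction step lets each component sharpen the other, which is exactly the kind of information flow a maximally precise product transformer must exploit.

```python
# Illustrative reduced product of two small abstract domains: intervals
# [lo, hi] and parity (0 = even, 1 = odd, None = unknown).

def reduce_pair(lo, hi, parity):
    """Tighten an (interval, parity) pair so both components agree."""
    if parity is not None:
        # Shrink the interval to its outermost values of the right parity.
        while lo <= hi and lo % 2 != parity:
            lo += 1
        while hi >= lo and hi % 2 != parity:
            hi -= 1
    if lo == hi:
        parity = lo % 2  # a singleton interval determines the parity
    return lo, hi, parity

# [3, 9] with parity "even" tightens to [4, 8].
print(reduce_pair(3, 9, 0))     # (4, 8, 0)
# [7, 7] with unknown parity learns parity "odd".
print(reduce_pair(7, 7, None))  # (7, 7, 1)
```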
Polynomial Horn clauses with existentially and universally quantified variables arise in many problems of verification and program analysis. We present PolyHorn, a tool for solving polynomial Horn clauses in which the variables on both sides of the implication are real-valued. Our tool provides a unified framework for the polynomial Horn clause solving problems that arise in several papers in the literature. Our experimental evaluation over a wide range of benchmarks shows the applicability of the tool, as well as its benefits compared to simply using existing SMT solvers to solve such constraints.
"PolyHorn: A Polynomial Horn Clause Solver", Krishnendu Chatterjee, Amir Kafshdar Goharshady, Ehsan Kafshdar Goharshady, Mehrdad Karrabi, Milad Saadat, Đorđe Žikelić, arXiv:2408.03796, 2024-08-07.
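To fix the shape of the problem, here is a polynomial Horn clause over the reals together with a naive sampling-based spot-check of a candidate interpretation. PolyHorn itself works symbolically, so this sketch only illustrates the input format, not the tool's method.

```python
# A polynomial Horn clause over real-valued variables:
#   forall x, y in R.  P(x) and x <= y  =>  P(y)
# A candidate model interprets the unknown predicate P(v) as v >= 0.
# Spot-checking samples can falsify a candidate but never proves it;
# a real solver discharges the quantifiers symbolically.

def P(v):
    return v >= 0  # candidate interpretation of the unknown predicate

def clause_holds(x, y):
    # Implication body => head, with reals on both sides.
    return not (P(x) and x <= y) or P(y)

samples = [(-2.0, -1.0), (-1.0, 3.0), (0.0, 0.5), (2.5, 7.0), (4.0, 2.0)]
print(all(clause_holds(x, y) for x, y in samples))  # True
```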
Hardware accelerators, in particular accelerators for tensor processing, have many potential application domains. However, they currently lack the software infrastructure to support the majority of domains outside of deep learning. Furthermore, a compiler that can easily be updated to reflect changes at both application and hardware levels would enable more agile development and design space exploration of accelerators, allowing hardware designers to realize closer-to-optimal performance. In this work, we discuss how large language models (LLMs) could be leveraged to build such a compiler. Specifically, we demonstrate the ability of GPT-4 to achieve high pass rates in translating code to the Gemmini accelerator, and prototype a technique for decomposing translation into smaller, more LLM-friendly steps. Additionally, we propose a 2-phase workflow for utilizing LLMs to generate hardware-optimized code.
"LLM-Aided Compilation for Tensor Accelerators", Charles Hong, Sahil Bhatia, Altan Haan, Shengjun Kris Dong, Dima Nikiforov, Alvin Cheung, Yakun Sophia Shao, arXiv:2408.03408, 2024-08-06.
We present a statically typed embedding of relational programming (specifically, a dialect of miniKanren with disequality constraints) in Haskell. Apart from handling types, our dialect extends the standard relational combinator repertoire with a variation of relational matching that supports static exhaustiveness checks. To hide the boilerplate definitions and support comfortable logic programming with user-defined data types, we use generic programming via GHC.Generics as well as metaprogramming via Template Haskell. We demonstrate our dialect on several examples and compare its performance against some other known implementations of miniKanren.
"typedKanren: Statically Typed Relational Programming with Exhaustive Matching in Haskell", Nikolai Kudasov, Artem Starikov, arXiv:2408.03170, 2024-08-06.
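The relational core being embedded can be conveyed by a very small untyped unification sketch; typedKanren itself is a statically typed Haskell library with exhaustiveness checking, none of which this toy has.

```python
# A minimal unification core in the spirit of miniKanren: logic variables,
# substitution walking, and structural unification over tuples.

class Var:
    def __init__(self, name):
        self.name = name

def walk(t, s):
    # Follow variable bindings in substitution s until a non-bound term.
    while isinstance(t, Var) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a is b or (a == b and not isinstance(a, Var)):
        return s
    if isinstance(a, Var):
        return {**s, a: b}
    if isinstance(b, Var):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None  # terms clash: no unifier

q, r = Var("q"), Var("r")
s = unify((q, 2), (1, r), {})
print(walk(q, s), walk(r, s))  # 1 2
```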
Mihai Nicola, Chaitanya Agarwal, Eric Koskinen, Thomas Wies
Many temporal safety properties of higher-order programs go beyond simple event sequencing and require an automaton register (or "accumulator") to express, such as input-dependency, event summation, resource usage, equality of event magnitudes, computation cost, etc. Some steps have been made towards verifying more basic temporal event sequences via reductions to fair termination [Murase et al. 2016] or some input-dependent properties through deductive proof systems [Nanjo et al. 2018]. However, there are currently no automated techniques to verify the more general class of register-automaton safety properties of higher-order programs. We introduce an abstract interpretation-based analysis to compute dependent, register-automata effects of recursive, higher-order programs. We capture properties of a program's effects in terms of automata that summarize the history of observed effects using an accumulator register. The key novelty is a new abstract domain for context-dependent effects, capable of abstracting relations between the program environment, the automaton control state, and the accumulator value. The upshot is a dataflow type and effect system that computes context-sensitive effect summaries. We demonstrate our work via a prototype implementation that computes dependent effect summaries (and validates assertions) for OCaml-like recursive higher-order programs. As a basis of comparison, we describe reductions to assertion checking for effect-free programs, and demonstrate that our approach outperforms the prior tools Drift and RCaml/PCSat. Overall, across a set of 21 new benchmarks, RCaml/PCSat could not verify any, Drift verified 9 benchmarks, and evDrift verified 19; evDrift also achieved a 30.5x speedup over Drift on those benchmarks that both tools could solve.
"Inferring Accumulative Effects of Higher Order Programs", arXiv:2408.02791, 2024-08-05.
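A toy version of an accumulator-register automaton makes the target properties concrete: the monitor below tracks event summation against a resource bound at runtime, whereas the paper's analysis establishes such bounds statically, without running the program.

```python
# Toy accumulator-register automaton for an event-summation property:
# the register sums observed event magnitudes, and the control state
# records whether the resource budget has been exceeded.

class SumAutomaton:
    def __init__(self, bound):
        self.acc = 0       # the accumulator register
        self.bound = bound
        self.ok = True     # control state: still within budget?

    def observe(self, magnitude):
        self.acc += magnitude
        if self.acc > self.bound:
            self.ok = False
        return self.ok

# A recursive, higher-order program emitting effects through the monitor.
def iterate(f, n, emit):
    if n == 0:
        return
    emit(f(n))
    iterate(f, n - 1, emit)

m = SumAutomaton(bound=10)
iterate(lambda k: k, 4, m.observe)  # emits magnitudes 4 + 3 + 2 + 1 = 10
print(m.ok, m.acc)                  # True 10
```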