The Semantics of Metaprogramming in Prolog
David S. Warren
arXiv-2408.07652 (2024-08-14)

This paper describes a semantics for pure Prolog programs with negation that provides meaning to metaprograms. Metaprograms are programs that construct and use data structures as programs. In Prolog, a primary metaprogramming construct is the use of a variable as a literal in the body of a clause. The traditional three-line Prolog metainterpreter is another example of a metaprogram. The account given here also supplies a meaning for clauses that have a variable as head, even though most Prolog systems do not support such clauses; this semantics naturally includes such programs, giving them their intuitive meaning. Ideas from M. Denecker and his colleagues form the basis of this approach. The key idea is that if we give meanings to all propositional programs and treat Prolog rules with variables as the sets of their ground instances, then we can give meanings to all programs. We must treat Prolog rules (which may be metarules) as templates for generating ground propositional rules, and not as first-order formulas, which they may not be. We use parameterized inductive definitions to give propositional models to Prolog programs, in which the propositions are expressions. The set of expressions of a propositional model then determines a first-order Herbrand model, providing a first-order logical semantics for all (pure) Prolog programs, including metaprograms. We give examples to show the applicability of this theory and demonstrate how it makes proofs of some important properties of metaprograms very straightforward.
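The template view of rules can be made concrete in a small sketch: ground a rule set over its constants, then compute the least model of the resulting propositional program by iterating the immediate-consequence operator. This is an illustrative Python toy; the encoding and function names are ours, not the paper's formalism.

```python
from itertools import product

def ground(template_rules, constants):
    """Expand each rule template over all bindings of its variables."""
    rules = []
    for head, body, variables in template_rules:
        for binding in product(constants, repeat=len(variables)):
            env = dict(zip(variables, binding))
            inst = lambda atom: tuple(env.get(t, t) for t in atom)
            rules.append((inst(head), [inst(b) for b in body]))
    return rules

def least_model(rules):
    """Iterate T_P to a fixpoint: add a head once its whole body is derived."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# edge(a,b). edge(b,c).
# path(X,Y) :- edge(X,Y).   path(X,Z) :- edge(X,Y), path(Y,Z).
templates = [
    (("edge", "a", "b"), [], ()),
    (("edge", "b", "c"), [], ()),
    (("path", "X", "Y"), [("edge", "X", "Y")], ("X", "Y")),
    (("path", "X", "Z"), [("edge", "X", "Y"), ("path", "Y", "Z")], ("X", "Y", "Z")),
]
m = least_model(ground(templates, ["a", "b", "c"]))
```

The grounded program is purely propositional, which is exactly the level at which the paper assigns meaning before lifting to a Herbrand model.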
Composing Automatic Differentiation with Custom Derivatives of Higher-Order Functions
Sam Estep
arXiv-2408.07683 (2024-08-14)

Recent theoretical work on automatic differentiation (autodiff) has focused on characteristics such as correctness and efficiency while assuming that all derivatives are automatically generated by autodiff using program transformation, with the exception of a fixed set of derivatives for primitive operations. However, in practice this assumption is insufficient: the programmer often needs to provide custom derivatives for composite functions to achieve efficiency and numerical stability. In this work, we start from the untyped lambda calculus with a reverse-mode autodiff operator, extend it with an operator to attach manual derivatives, and demonstrate its utility via several examples.
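A minimal sketch of what "attaching a manual derivative" can look like in reverse-mode autodiff, written in Python rather than the paper's lambda calculus; the `Var`/`custom` API here is our invention for illustration, not the paper's operator.

```python
import math

class Var:
    """A node in the computation graph, with parents and a vjp rule."""
    def __init__(self, value, parents=(), vjp=None):
        self.value, self.parents, self.vjp = value, parents, vjp
        self.grad = 0.0

def mul(a, b):
    return Var(a.value * b.value, (a, b),
               lambda g: (g * b.value, g * a.value))

def custom(f, f_vjp):
    """Attach a manual derivative f_vjp to a scalar function f."""
    def wrapped(x):
        return Var(f(x.value), (x,), lambda g: (g * f_vjp(x.value),))
    return wrapped

def backward(out):
    """Reverse-mode sweep in topological order, accumulating gradients."""
    order, seen = [], set()
    def topo(v):
        if id(v) not in seen:
            seen.add(id(v))
            for p in v.parents:
                topo(p)
            order.append(v)
    topo(out)
    out.grad = 1.0
    for node in reversed(order):
        if node.vjp is not None:
            for parent, g in zip(node.parents, node.vjp(node.grad)):
                parent.grad += g

# A custom derivative chosen for numerical stability: d/dx log(1+x) = 1/(1+x)
log1p = custom(math.log1p, lambda x: 1.0 / (1.0 + x))

x = Var(1.0)
y = mul(x, log1p(x))  # y = x * log(1+x)
backward(y)
# dy/dx = log(1+x) + x/(1+x), i.e. log(2) + 0.5 at x = 1
```

The point mirrors the abstract: `log1p` behaves like a primitive with a hand-supplied derivative, yet composes freely with automatically differentiated code such as `mul`.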
Catamorphic Abstractions for Constrained Horn Clause Satisfiability
Emanuele De Angelis (IASI-CNR, Rome, Italy), Fabio Fioravanti (DEc, University 'G. d'Annunzio', Chieti-Pescara, Italy), Alberto Pettorossi (DICII, University of Rome 'Tor Vergata', Italy), Maurizio Proietti (IASI-CNR, Rome, Italy)
arXiv-2408.06988 (2024-08-13)

Catamorphisms are functions that are recursively defined on lists and trees and, in general, on Algebraic Data Types (ADTs), and are often used to compute suitable abstractions of programs that manipulate ADTs. Examples of catamorphisms include functions that compute the size of a list, the orderedness of a list, and the height of a tree. It is well known that program properties specified through catamorphisms can be proved by showing the satisfiability of suitable sets of Constrained Horn Clauses (CHCs). We address the problem of checking the satisfiability of those sets of CHCs, and we propose a method for transforming sets of CHCs into equisatisfiable sets in which catamorphisms are no longer present. As a consequence, clauses with catamorphisms can be handled without extending the satisfiability algorithms used by existing CHC solvers. Through an experimental evaluation on a non-trivial benchmark consisting of many list and tree processing algorithms expressed as sets of CHCs, we show that our technique is indeed effective and significantly enhances the performance of state-of-the-art CHC solvers.
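The three catamorphisms named in the abstract (size, orderedness, height) are all folds that replace an ADT's constructors with functions. A small illustrative Python sketch (our encoding, not the paper's CHC formulation):

```python
from functools import reduce

def list_cata(nil, cons, xs):
    """Fold over a list: replace [] by nil and (x : rest) by cons(x, rest)."""
    return reduce(lambda acc, x: cons(x, acc), reversed(xs), nil)

size = lambda xs: list_cata(0, lambda _, n: n + 1, xs)

def ordered(xs):
    # The abstraction carried up the fold: (is_sorted_so_far, smallest_seen)
    def cons(x, acc):
        ok, lo = acc
        return (ok and (lo is None or x <= lo), x)
    return list_cata((True, None), cons, xs)[0]

# Binary trees as nested tuples: None (leaf) or (left, value, right)
def tree_cata(leaf, node, t):
    if t is None:
        return leaf
    l, v, r = t
    return node(tree_cata(leaf, node, l), v, tree_cata(leaf, node, r))

height = lambda t: tree_cata(0, lambda l, _, r: 1 + max(l, r), t)
```

Each fold computes an abstraction of the data rather than the data itself, which is what makes catamorphisms useful as program abstractions in the CHC setting.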
Theorem-Carrying-Transaction: Runtime Certification to Ensure Safety for Smart Contract Transactions
Nikolaj S. Bjørner (Microsoft Research), Ashley J. Chen (New York University Shanghai), Shuo Chen (Microsoft Research), Yang Chen (Microsoft Research), Zhongxin Guo (Microsoft Research), Tzu-Han Hsu (Michigan State University), Peng Liu (Pennsylvania State University), Nanqing Luo (Pennsylvania State University)
arXiv-2408.06478 (2024-08-12)

Security bugs and trapdoors in smart contracts have been impacting the Ethereum community since its inception. Conceptually, Ethereum's 1.45 million contracts form a single "gigantic program" whose behaviors are determined by the complex reference topology between the contracts. Can the Ethereum community be assured that this gigantic program conforms to its design-level safety properties, despite unforeseeable code-level intricacies? Static code verification is inadequate due to the program's gigantic scale and high polymorphism. In this paper, we present a viable technological roadmap for the community toward this ambitious goal. Our technology, called Theorem-Carrying-Transaction (TCT), combines the benefits of concrete execution and symbolic proofs. Under the TCT protocol, every transaction carries a theorem that proves its adherence to the specified properties in the invoked contracts, and the runtime system checks the theorem before executing the transaction. Once a property is specified in a contract, it can be treated confidently as an unconditional guarantee made by the contract. As case studies, we demonstrate that TCT secures token contracts without having to foresee code-level intricacies such as integer overflow and reentrancy. TCT is also successfully applied to a Uniswap codebase, showcasing a complex decentralized finance (DeFi) scenario. Our prototype incurs a negligible runtime overhead, two orders of magnitude lower than that of a state-of-the-art approach.
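The check-before-execute discipline can be illustrated with a drastically simplified toy: a runtime that applies a token transfer only if the post-state satisfies the contract's declared invariants. This is only a concrete-execution analogue of the idea; TCT itself checks a carried symbolic theorem, and all names below are ours.

```python
# Toy stand-in for runtime certification: the runtime refuses any
# transaction whose effect would violate the specified invariants.

def transfer(balances, sender, receiver, amount):
    new = dict(balances)
    new[sender] = new.get(sender, 0) - amount
    new[receiver] = new.get(receiver, 0) + amount
    return new

def invariants_hold(old, new):
    supply_conserved = sum(old.values()) == sum(new.values())
    no_negative = all(v >= 0 for v in new.values())
    return supply_conserved and no_negative

def execute(balances, tx):
    """Apply tx only if the proposed post-state passes the checks."""
    proposed = transfer(balances, *tx)
    if not invariants_hold(balances, proposed):
        return balances  # reject the transaction, state unchanged
    return proposed

state = {"alice": 10, "bob": 5}
state = execute(state, ("alice", "bob", 3))    # accepted
state = execute(state, ("alice", "bob", 100))  # rejected: would go negative
```

In the real protocol the expensive reasoning is done once, off-chain, and the runtime merely validates the attached theorem, which is why the overhead can stay negligible.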
Syntax-Guided Automated Program Repair for Hyperproperties
Raven Beutner, Tzu-Han Hsu, Borzoo Bonakdarpour, Bernd Finkbeiner
arXiv-2408.06035 (2024-08-12)

We study the problem of automatically repairing infinite-state software programs with respect to temporal hyperproperties. As a first step, we present a repair approach for the temporal logic HyperLTL based on symbolic execution, constraint generation, and syntax-guided synthesis of repair expressions (SyGuS). To improve the repair quality, we introduce the notion of a transparent repair, which aims to find a patch that is as close as possible to the original program. As a practical realization, we develop an iterative repair approach that searches for a sequence of repairs that are closer and closer to the original program's behavior. We implement our method in a prototype and report encouraging experimental results using off-the-shelf SyGuS solvers.
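The two ingredients, grammar-guided enumeration and a transparency bias toward the original program, can be sketched in a toy Python form (our encoding; the actual approach uses HyperLTL constraints and SyGuS solvers, not enumeration over a hard-coded list):

```python
# Original (buggy) body of f(x); the intended spec is f(x) == abs(x).
original = ("x",)
# Candidate repair grammar, encoded as small term tuples.
candidates = [("x",), ("-", "x"), ("abs", "x"), ("+", "x", "1")]

def evaluate(term, x):
    if term == ("x",):
        return x
    op = term[0]
    if op == "-":
        return -x
    if op == "abs":
        return abs(x)
    if op == "+":
        return x + 1

def satisfies_spec(term, tests):
    return all(evaluate(term, x) == abs(x) for x in tests)

def distance(a, b):
    """Crude syntactic distance: 0 if identical, else symbol difference."""
    return 0 if a == b else len(set(a) ^ set(b))

def repair(original, candidates, tests):
    """Keep the spec-satisfying candidate closest to the original."""
    viable = [c for c in candidates if satisfies_spec(c, tests)]
    return min(viable, key=lambda c: distance(c, original)) if viable else None

patch = repair(original, candidates, tests=[-2, -1, 0, 1, 2])
```

The `min` over a syntactic distance is the toy analogue of transparency: among all correct patches, prefer the one that perturbs the original program least.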
Modelling Probabilistic FPC in Guarded Type Theory
Philipp Jan Andries Stassen, Rasmus Ejlers Møgelberg, Maaike Zwart, Alejandro Aguirre, Lars Birkedal
arXiv-2408.04455 (2024-08-08)

Constructive type theory combines logic and programming in one language. This is useful both for reasoning about programs written in type theory and for reasoning about other programming languages inside type theory. It is well known that it is challenging to extend these applications to languages with recursion and computational effects such as probabilistic choice, because these features are not easily represented in constructive type theory. We show how to define and reason about a programming language with probabilistic choice and recursive types in guarded type theory. We use higher inductive types to represent finite distributions and guarded recursion to model recursion. We define both operational and denotational semantics, as well as a relation between the two. The relation can be used to prove adequacy, but we also show how to use it to reason about programs up to contextual equivalence. To the best of our knowledge, this is the first model of a programming language with probabilistic choice and recursive types in a constructive type theory.
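Finite distributions with a monadic bind are the semantic workhorse here. A toy Python rendering of that structure (dicts from outcome to probability; the paper uses higher inductive types, so this is only an extensional analogue):

```python
def dirac(x):
    """Point distribution: all mass on one outcome."""
    return {x: 1.0}

def bind(dist, f):
    """Sequence a probabilistic computation: push each outcome's mass through f."""
    out = {}
    for x, p in dist.items():
        for y, q in f(x).items():
            out[y] = out.get(y, 0.0) + p * q
    return out

def coin(p=0.5):
    return {True: p, False: 1.0 - p}

# Two independent fair coin flips; count the heads.
two_flips = bind(coin(), lambda a:
             bind(coin(), lambda b: dirac(int(a) + int(b))))
```

`dirac` and `bind` satisfy the monad laws up to floating-point arithmetic, which is the algebraic structure the higher-inductive-type representation captures exactly.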
Combining Neural Architecture Search and Automatic Code Optimization: A Survey
Inas Bachiri, Hadjer Benmeziane, Smail Niar, Riyadh Baghdadi, Hamza Ouarnoughi, Abdelkrime Aries
arXiv-2408.04116 (2024-08-07)

Deep learning models have experienced exponential growth in complexity and resource demands in recent years. Accelerating these models for efficient execution on resource-constrained devices has become more crucial than ever. Two notable techniques employed to achieve this goal are Hardware-aware Neural Architecture Search (HW-NAS) and Automatic Code Optimization (ACO). HW-NAS automatically designs accurate yet hardware-friendly neural networks, while ACO searches for the best compiler optimizations to apply to neural networks for efficient mapping and inference on the target hardware. This survey explores recent works that combine these two techniques within a single framework. We present the fundamental principles of both domains and demonstrate their sub-optimality when performed independently. We then investigate their integration into a joint optimization process that we call Hardware-Aware Neural Architecture and Compiler Optimizations co-Search (NACOS).
Synthesizing Abstract Transformers for Reduced-Product Domains
Pankaj Kumar Kalita, Thomas Reps, Subhajit Roy
arXiv-2408.04040 (2024-08-07)

Recently, we showed how to apply program-synthesis techniques to create abstract transformers in a user-provided domain-specific language (DSL) L (i.e., "L-transformers"). However, we found that the algorithm of Kalita et al. does not succeed when applied to reduced-product domains: the need to synthesize transformers for all of the domains simultaneously blows up the search space. Because reduced-product domains are an important device for improving the precision of abstract interpretation, in this paper we propose an algorithm to synthesize reduced L-transformers $\langle f_1^{\sharp R}, f_2^{\sharp R}, \ldots, f_n^{\sharp R}\rangle$ for a product domain $A_1 \times A_2 \times \ldots \times A_n$, using multiple DSLs: $\mathcal{L} = \langle \mathcal{L}_1, \mathcal{L}_2, \ldots, \mathcal{L}_n \rangle$. Synthesis of reduced-product transformers is quite challenging: first, the synthesis task has to tackle an increased "feature set" because each component transformer now has access to the abstract inputs from all component domains in the product. Second, to ensure that the product transformer is maximally precise, the synthesis task needs to arrange for the component transformers to cooperate with each other. We implemented our algorithm in a tool, Amurth2, and used it to synthesize abstract transformers for two product domains -- SAFE and JSAI -- available within the SAFEstr framework for JavaScript program analysis. For four of the six operations supported by SAFEstr, Amurth2 synthesizes more precise abstract transformers than the manually written ones available in SAFEstr.
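What "cooperating component transformers" means is easiest to see on a textbook reduced product, intervals × parity: each component's output is tightened using the other's. A hand-written Python sketch for illustration (not Amurth2's synthesized transformers, and not the SAFE/JSAI domains):

```python
EVEN, ODD, TOP = "even", "odd", "top"

def reduce_pair(lo, hi, parity):
    """Reduction operator: tighten interval bounds using parity, and back."""
    if parity == EVEN:
        if lo % 2: lo += 1
        if hi % 2: hi -= 1
    elif parity == ODD:
        if lo % 2 == 0: lo += 1
        if hi % 2 == 0: hi -= 1
    if lo == hi:  # singleton interval pins down the parity
        parity = EVEN if lo % 2 == 0 else ODD
    return lo, hi, parity

def incr(lo, hi, parity):
    """Product transformer for x+1: each component steps, then they reduce."""
    flip = {EVEN: ODD, ODD: EVEN, TOP: TOP}
    return reduce_pair(lo + 1, hi + 1, flip[parity])
```

A transformer that ignored the sibling domain (e.g. kept the raw bounds) would still be sound but strictly less precise, which is exactly the gap the synthesis task must close.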
PolyHorn: A Polynomial Horn Clause Solver
Krishnendu Chatterjee, Amir Kafshdar Goharshady, Ehsan Kafshdar Goharshady, Mehrdad Karrabi, Milad Saadat, Đorđe Žikelić
arXiv-2408.03796 (2024-08-07)

Polynomial Horn clauses with existentially and universally quantified variables arise in many problems of verification and program analysis. We present PolyHorn, a tool for solving polynomial Horn clauses in which the variables on both sides of the implication are real-valued. Our tool provides a unified framework for the polynomial Horn clause solving problems that arise in several papers in the literature. Our experimental evaluation over a wide range of benchmarks shows the applicability of the tool, as well as its benefits over simply using existing SMT solvers to solve such constraints.
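For concreteness, a polynomial Horn clause over the reals has the shape "forall x: P1(x) >= 0 and ... and Pk(x) >= 0 implies Q(x) >= 0". A real solver discharges such clauses symbolically; the toy Python check below only searches a grid for counterexamples, so it can falsify a clause but never prove one (illustrative only, unrelated to PolyHorn's algorithm):

```python
def holds_at(premises, conclusion, x):
    """Evaluate the clause at one point; vacuously true if a premise fails."""
    if all(p(x) >= 0 for p in premises):
        return conclusion(x) >= 0
    return True

def grid_counterexample(premises, conclusion, lo=-5.0, hi=5.0, steps=101):
    """Return some x falsifying the clause on a uniform grid, else None."""
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        if not holds_at(premises, conclusion, x):
            return x
    return None

# forall x. x >= 0 -> x^2 + x >= 0 : no counterexample found
ok = grid_counterexample([lambda x: x], lambda x: x * x + x)
# forall x. x >= 0 -> x - 1 >= 0 : falsified near x = 0
bad = grid_counterexample([lambda x: x], lambda x: x - 1)
```

The gap between falsification by sampling and actual proof is precisely why dedicated solvers (which reduce validity to SMT-checkable conditions) are needed.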
LLM-Aided Compilation for Tensor Accelerators
Charles Hong, Sahil Bhatia, Altan Haan, Shengjun Kris Dong, Dima Nikiforov, Alvin Cheung, Yakun Sophia Shao
arXiv-2408.03408 (2024-08-06)

Hardware accelerators, in particular accelerators for tensor processing, have many potential application domains. However, they currently lack the software infrastructure to support the majority of domains outside of deep learning. Furthermore, a compiler that can easily be updated to reflect changes at both application and hardware levels would enable more agile development and design space exploration of accelerators, allowing hardware designers to realize closer-to-optimal performance. In this work, we discuss how large language models (LLMs) could be leveraged to build such a compiler. Specifically, we demonstrate the ability of GPT-4 to achieve high pass rates in translating code to the Gemmini accelerator, and prototype a technique for decomposing translation into smaller, more LLM-friendly steps. Additionally, we propose a two-phase workflow for utilizing LLMs to generate hardware-optimized code.