Alejandro L. García Navarro, Nataliia Koneva, Alfonso Sánchez-Macián, José Alberto Hernández
Python has gained widespread popularity in machine learning, artificial intelligence, and data engineering thanks to its effectiveness and extensive libraries. R, for its part, remains a dominant language for statistical analysis and visualization. However, certain R libraries have become outdated, limiting their functionality and performance. By combining these two programming languages, users can pair Python's advanced machine learning and AI capabilities with R's robust statistical packages. This paper explores using R's reticulate package to call Python from R, providing practical examples and highlighting scenarios where this integration enhances productivity and analytical capabilities. With a few hello-world code snippets, we demonstrate how to run Python's scikit-learn, PyTorch, and OpenAI Gym libraries to easily build Machine Learning, Deep Learning, and Reinforcement Learning projects.
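To give a flavour of what such an integration wraps, here is a minimal sketch of the Python side of a scikit-learn hello world; the data, model choice, and test points are illustrative, not taken from the paper. Via reticulate, an R session can drive the same calls (e.g. importing the `sklearn` module and invoking it from R).

```python
# Minimal scikit-learn "hello world": the kind of Python call that
# reticulate lets an R session drive directly. Data and model choice
# are illustrative, not from the paper.
from sklearn.neighbors import KNeighborsClassifier

X = [[0.0], [1.0], [10.0], [11.0]]  # toy 1-D features
y = [0, 0, 1, 1]                    # two well-separated classes

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)
print(clf.predict([[2.0], [9.0]]))  # each test point takes its nearest neighbour's label
```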
"A Comprehensive Guide to Combining R and Python code for Data Science, Machine Learning and Reinforcement Learning" (arXiv-2407.14695, published 2024-07-19, arXiv - CS - Programming Languages)
David Tinoco, Alexandre Madeira, Manuel A. Martins, José Proença
Reactive graphs are transition structures in which edges become active and inactive during the graph's evolution; they were introduced by Dov Gabbay from a mathematical perspective. This paper presents Marge (https://fm-dcc.github.io/MARGe), a web-based tool to visualise and analyse reactive graphs enriched with labels. Marge animates the operational semantics of reactive graphs and offers different graphical views to provide insights into concrete systems. We motivate the applicability of reactive graphs to adaptive systems and to featured transition systems, using Marge to narrow the gap between the existing theoretical models and their use in analysing concrete systems.
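To make the core idea concrete, here is a small sketch (with invented names; Marge itself is a web tool, not this code) of a reactive graph in which traversing an edge can activate or deactivate other edges:

```python
class ReactiveGraph:
    """Toy reactive graph: traversing an edge may toggle other edges.

    All names are illustrative; this only mirrors the informal idea of
    edges becoming active/inactive during the system's evolution.
    """
    def __init__(self, edges, active, effects):
        self.edges = edges      # name -> (source, target)
        self.active = set(active)
        self.effects = effects  # name -> [(edge_to_toggle, new_state)]

    def step(self, state, name):
        src, dst = self.edges[name]
        if name not in self.active or src != state:
            raise ValueError(f"edge {name} is not enabled at {state}")
        for target, on in self.effects.get(name, []):
            (self.active.add if on else self.active.discard)(target)
        return dst

# A two-state system where taking "a" disables itself and enables "b".
g = ReactiveGraph(
    edges={"a": ("s0", "s1"), "b": ("s1", "s0")},
    active={"a"},
    effects={"a": [("a", False), ("b", True)]},
)
state = g.step("s0", "a")
print(state, sorted(g.active))  # s1 ['b']
```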
"Reactive graphs in action (extended version)" (arXiv-2407.14705, published 2024-07-19)
David van Balen, Gabriele Keller, Ivo Gabede Wolff, Trevor L. McDonell
We present an Integer Linear Programming (ILP)-based approach to finding the optimal fusion strategy for combinator-based parallel programs. While combinator-based languages and libraries provide a convenient interface for programming parallel hardware, fusing combinators into more complex operations is essential to achieving the desired performance. Our approach is suitable not only for languages with the usual map, fold, scan, indexing, and scatter operations, but also for gather operations, which access arrays in arbitrary order, and it therefore goes beyond traditional producer-consumer fusion. It can be parametrised with appropriate cost functions, and is fast enough to be suitable for just-in-time compilation.
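As a rough illustration of the optimisation problem (not the paper's actual ILP encoding), one can view each fusible edge as a 0/1 decision: fusing an edge avoids materialising an intermediate array, while some pairs of decisions conflict. A brute-force sketch with invented edge names and costs:

```python
from itertools import combinations

def best_fusion(edges, cost, conflicts):
    """Choose the set of edges to fuse that minimises the total
    materialisation cost paid for unfused edges. Brute force stands in
    for the ILP solve; edge names and costs are illustrative."""
    best_cost, best_set = None, None
    for r in range(len(edges) + 1):
        for fused in map(set, combinations(edges, r)):
            if any(a in fused and b in fused for a, b in conflicts):
                continue  # infeasible: these two fusions conflict
            c = sum(cost[e] for e in edges if e not in fused)
            if best_cost is None or c < best_cost:
                best_cost, best_set = c, fused
    return best_cost, best_set

# Three intermediates; fusing "a" and "b" together is forbidden
# (e.g. a gather consumer constrains its producer).
cost = {"a": 5, "b": 3, "c": 2}
print(best_fusion(["a", "b", "c"], cost, [("a", "b")]))
```

The optimum here fuses "a" and "c" and pays only for materialising "b". An ILP formulation replaces the exponential enumeration with binary variables and linear conflict constraints.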
"Fusing Gathers with Integer Linear Programming" (arXiv-2407.13585, published 2024-07-18)
Mahdi Ghorbani, Emilien Bauer, Tobias Grosser, Amir Shaikhha
Tensor algebra is a crucial component of data-intensive workloads such as machine learning and scientific computing. As the complexity of data grows, scientists often face a dilemma between highly specialized dense tensor algebra and the efficient structure-aware algorithms provided by sparse tensor algebra. In this paper, we introduce DASTAC, a framework that propagates a tensor's captured high-level structure down to low-level code generation by incorporating techniques such as automatic data layout compression, polyhedral analysis, and affine code generation. Our methodology reduces the memory footprint by automatically detecting the best data layout, benefits heavily from polyhedral optimizations, applies further low-level optimizations, and enables parallelization through MLIR. Through extensive experimentation, we show that DASTAC achieves one to two orders of magnitude speedup over TACO, a state-of-the-art sparse tensor compiler, and StructTensor, a state-of-the-art structured tensor algebra compiler, with a significantly lower memory footprint.
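As a tiny illustration of structure-driven layout compression (far simpler than DASTAC's general machinery), an upper-triangular n×n matrix needs only n(n+1)/2 stored entries, with a closed-form index map into the packed buffer:

```python
def tri_pack_index(i, j, n):
    """Packed position of entry (i, j), with i <= j, of an upper-triangular
    n x n matrix stored row by row without the zero block."""
    assert 0 <= i <= j < n
    row_offset = i * n - i * (i - 1) // 2  # entries in rows 0..i-1
    return row_offset + (j - i)

n = 4
dense = [[(i + 1) * 10 + j if i <= j else 0 for j in range(n)]
         for i in range(n)]
packed = [dense[i][j] for i in range(n) for j in range(i, n)]
assert len(packed) == n * (n + 1) // 2  # 10 entries instead of 16

# The index map recovers every nonzero entry from the packed buffer.
assert all(packed[tri_pack_index(i, j, n)] == dense[i][j]
           for i in range(n) for j in range(i, n))
```

A structured tensor compiler generalises this: it derives such index maps automatically from the captured structure and generates affine loops over the compressed layout.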
"Compressing Structured Tensor Algebra" (arXiv-2407.13726, published 2024-07-18)
In December 2023, security agencies from five countries in North America, Europe, and the South Pacific produced a document encouraging senior executives in all software-producing organizations to take responsibility for and oversight of the security of the software their organizations produce. In February 2024, the White House released a cybersecurity outline highlighting the December document. In this work we review the safe languages listed in these documents and compare their safety with that of Erlang and Elixir, two BEAM languages. These security agencies' declaration of some languages as safe is necessary but insufficient for making wise decisions about which language to use when creating code. We propose an additional way of looking at languages and the ease with which unsafe code can be written and used. We call this new perspective "unsafe impedance". We then use unsafe impedance to examine nine languages that are considered to be safe. Finally, we suggest that business processes include what we refer to as an Unsafe Acceptance Process. This Unsafe Acceptance Process can be used as part of the memory-safe roadmaps suggested by these agencies. Unsafe Acceptance Processes can aid organizations in their production of safe-by-design software.
"Unsafe Impedance: Safe Languages and Safe by Design Software", Lee Barney, Adolfo Neto (arXiv-2407.13046, published 2024-07-17)
Hesam Shahrokhi, Amirali Kaboli, Mahdi Ghorbani, Amir Shaikhha
Python data science libraries such as Pandas and NumPy have recently gained immense popularity. Although these libraries are feature-rich and easy to use, their scalability limitations require more robust computational resources. In this paper, we present PyTond, an efficient approach that pushes the processing of data science workloads down into database engines, which are already known for their big-data handling capabilities. Compared to previous work, by introducing TondIR, our approach can capture a more comprehensive set of workloads and data layouts. Moreover, by performing IR-level optimizations, we generate better SQL code that improves query processing in the underlying database engine. Our evaluation shows promising performance improvements over Python and other alternatives for diverse data science workloads.
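To illustrate the general push-down idea (with invented details; this is not PyTond's TondIR or its code generator), a filter-then-aggregate that one might write with Pandas can instead be emitted as a single SQL query and executed inside the engine:

```python
import sqlite3

# Pandas-style intent: df[df.x > 1].groupby("g")["x"].sum()
# Pushed down as one SQL query instead of row-by-row Python work.
query = "SELECT g, SUM(x) FROM t WHERE x > 1 GROUP BY g ORDER BY g"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (g TEXT, x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", 1), ("a", 2), ("b", 3), ("b", 4)])
result = conn.execute(query).fetchall()
print(result)  # [('a', 2), ('b', 7)]
```

The benefit of this style of translation is that filtering and aggregation happen where the data lives, letting the database optimiser and storage layer do the heavy lifting.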
"PyTond: Efficient Python Data Science on the Shoulders of Databases" (arXiv-2407.11616, published 2024-07-16)
Many Haskell textbooks explain the evaluation of pure functional programs as a process of stepwise rewriting using equations. However, usual implementation techniques perform program transformations that make producing the corresponding tracing evaluations difficult. This paper presents a tracing interpreter for a subset of Haskell based on the pattern matching calculus of Kahl. We start from a big-step semantics in the style of Launchbury and develop a small-step semantics in the style of Sestoft's machines. This machine is used in the implementation of a step-by-step educational interpreter. We also discuss some implementation decisions and present illustrative examples.
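The flavour of such a step-by-step interpreter (sketched here for plain arithmetic expressions, far simpler than Haskelite's pattern-matching calculus) is a small-step rewrite loop that records every intermediate expression, just as a textbook derivation would:

```python
def step(e):
    """One small-step rewrite of a nested ('+', l, r) expression;
    plain integers are values."""
    op, l, r = e
    if not isinstance(l, int):
        return (op, step(l), r)   # reduce the left operand first
    if not isinstance(r, int):
        return (op, l, step(r))   # then the right operand
    return l + r                  # both operands are values: apply

def trace(e):
    """Full evaluation trace: every intermediate expression in order."""
    steps = [e]
    while not isinstance(e, int):
        e = step(e)
        steps.append(e)
    return steps

for t in trace(("+", ("+", 1, 2), ("+", 3, 4))):
    print(t)
```

Each printed line corresponds to one equational rewriting step, which is exactly the presentation style the paper aims to reproduce for a subset of Haskell.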
"Haskelite: A Tracing Interpreter Based on a Pattern-Matching Calculus", Pedro Vasconcelos, Rodrigo Marques (arXiv-2407.11831, published 2024-07-16)
Wenhao Tang, Leo White, Stephen Dolan, Daniel Hillerström, Sam Lindley, Anton Lorenzen
We propose a novel type system for effects and handlers using modal types. Conventional effect systems attach effects to function types, which can lead to verbose effect-polymorphic types, especially for higher-order functions. Our modal effect system provides succinct types for higher-order first-class functions without losing modularity and reusability. The core idea is to decouple effects from function types and instead to track effects through relative and absolute modalities, which represent transformations on the ambient effects provided by the context. We formalise the idea of modal effect types in a multimodal System F-style core calculus Met with effects and handlers. Met supports modular effectful programming via modalities without relying on effect variables. We encode a practical fragment of a conventional row-based effect system with effect polymorphism, which captures most common use-cases, into Met in order to formally demonstrate the expressive power of modal effect types. To recover the full power of conventional effect systems beyond this fragment, we seamlessly extend Met to Mete with effect variables. We propose a surface language Metel for Mete with a sound and complete type inference algorithm inspired by FreezeML.
"Modal Effect Types" (arXiv-2407.11816, published 2024-07-16)
George Tsoukalas, Jasper Lee, John Jennings, Jimmy Xin, Michelle Ding, Michael Jennings, Amitayush Thakur, Swarat Chaudhuri
We present PutnamBench, a new multilingual benchmark for evaluating the ability of neural theorem-provers to solve competition mathematics problems. PutnamBench consists of 1697 hand-constructed formalizations of 640 theorems sourced from the William Lowell Putnam Mathematical Competition, the premier undergraduate-level mathematics competition in North America. All the theorems have formalizations in Lean 4 and Isabelle; a substantial subset also has Coq formalizations. Proving the theorems requires significant problem-solving ability and proficiency in a broad range of topics taught in undergraduate mathematics courses. We use PutnamBench to evaluate several established neural and symbolic theorem-provers. These approaches can only solve a handful of the PutnamBench problems, establishing the benchmark as a difficult open challenge for research on neural theorem-proving. PutnamBench is available at https://github.com/trishullab/PutnamBench.
"PutnamBench: Evaluating Neural Theorem-Provers on the Putnam Mathematical Competition" (arXiv-2407.11214, published 2024-07-15)
Andreas Lööw, Daniele Nantes-Sobrinho, Sacha-Élie Ayoun, Caroline Cronjäger, Petar Maksimović, Philippa Gardner
The introduction of separation logic has led to the development of symbolic-execution techniques and tools that are (functionally) compositional, with function specifications that can be used in broader calling contexts. Many of the compositional symbolic-execution tools developed in academia and industry are grounded on a formal foundation, but either the function specifications are not validated against the underlying separation logic of the theory, or there is a large gulf between the theory and the tool implementation. We introduce a formal compositional symbolic-execution engine which creates and uses function specifications from an underlying separation logic, providing a sound theoretical foundation partially inspired by the Gillian symbolic-execution platform. This is achieved by providing an axiomatic interface which describes the properties of the consume and produce operations used in the engine to compositionally update the symbolic state, including when calling function specifications -- a technique used by VeriFast, Viper, and Gillian, but not previously characterised independently of any one tool. We present consume and produce operations, inspired by the Gillian implementation, that satisfy the properties described by our axiomatic interface. A surprising property of our engine semantics is its ability to underpin both correctness and incorrectness reasoning, with the primary distinction being the choice between satisfiability and validity. We use this property to extend the Gillian platform, which previously only supported correctness reasoning, with incorrectness reasoning and automatic true bug-finding using incorrectness bi-abduction. We evaluate our new Gillian platform through instantiation to C. This instantiation is the first tool grounded on a common formal compositional symbolic-execution engine to support both correctness and incorrectness reasoning.
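To convey the shape of that interface (a drastically simplified sketch with invented names, not the paper's formalisation), consume removes an asserted points-to fact from the symbolic state, leaving the frame, and produce adds one; applying a function specification is then a consume of its precondition followed by a produce of its postcondition:

```python
def consume(state, loc, val):
    """Remove the points-to fact loc |-> val; what remains is the frame.
    Fails if the fact is absent, i.e. the precondition does not hold."""
    if state.get(loc) != val:
        raise ValueError(f"cannot consume {loc} |-> {val}")
    frame = dict(state)
    del frame[loc]
    return frame

def produce(state, loc, val):
    """Extend the symbolic state with the fact loc |-> val."""
    new = dict(state)
    new[loc] = val
    return new

def apply_spec(state, pre, post):
    """Use a spec {pre} f() {post}: consume pre, then produce post,
    leaving the frame (here, the fact about "q") untouched."""
    for loc, val in pre.items():
        state = consume(state, loc, val)
    for loc, val in post.items():
        state = produce(state, loc, val)
    return state

# Spec for an increment through pointer "p": {p |-> 1} inc(p) {p |-> 2}
state = {"p": 1, "q": 7}
state = apply_spec(state, pre={"p": 1}, post={"p": 2})
print(state)
```

The real engine works over symbolic (not concrete) values and constraints, and the paper's contribution is precisely an axiomatic characterisation of what such consume and produce operations must satisfy.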
"Compositional Symbolic Execution for Correctness and Incorrectness Reasoning (Extended Version)" (arXiv-2407.10838, published 2024-07-15)