The development of large language models (LLMs), such as GPT, has enabled the construction of several socialbots, like ChatGPT, that are receiving a lot of attention for their ability to simulate a human conversation. However, the conversation is not guided by a goal and is hard to control. In addition, because LLMs rely more on pattern recognition than deductive reasoning, they can give confusing answers and have difficulty integrating multiple topics into a cohesive response. These limitations often lead the LLM to deviate from the main topic to keep the conversation interesting. We propose AutoCompanion, a socialbot that uses an LLM to translate natural language into predicates (and vice versa) and employs commonsense reasoning based on Answer Set Programming (ASP) to hold a social conversation with a human. In particular, we rely on s(CASP), a goal-directed implementation of ASP, as the backend. This paper presents the framework design and how an LLM is used to parse user messages and generate a response from the s(CASP) engine output. To validate our proposal, we describe (real) conversations in which the chatbot's goal is to keep the user entertained by talking about movies and books, and s(CASP) ensures (i) correctness of answers, (ii) coherence (and precision) during the conversation, which it dynamically regulates to achieve its specific purpose, and (iii) no deviation from the main topic.
"A Reliable Common-Sense Reasoning Socialbot Built Using LLMs and Goal-Directed ASP" by Yankai Zeng, Abhiramon Rajashekharan, Kinjal Basu, Huaduo Wang, Joaquín Arias, Gopal Gupta (arXiv:2407.18498, arXiv - CS - Logic in Computer Science, 2024-07-26).
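The parse–reason–respond loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual interface: `nl_to_predicate` and `predicate_to_nl` are rule-based stand-ins for the two LLM translation steps, and the `rules` table is a stand-in for the s(CASP) commonsense engine; all names here are hypothetical.

```python
def nl_to_predicate(message):
    """Stub for the LLM parsing step: map a user message to a predicate."""
    if "movie" in message.lower():
        return "likes(user, movies)"
    return "unknown(user)"

def reason(predicate):
    """Stub for the s(CASP) reasoning step: derive a response predicate."""
    rules = {"likes(user, movies)": "recommend(topic, movies)"}
    return rules.get(predicate, "ask_clarification(user)")

def predicate_to_nl(predicate):
    """Stub for the LLM generation step: render a predicate as text."""
    templates = {
        "recommend(topic, movies)": "What kind of movies do you enjoy?",
        "ask_clarification(user)": "Could you tell me more?",
    }
    return templates[predicate]

print(predicate_to_nl(reason(nl_to_predicate("I watched a movie yesterday"))))
# -> What kind of movies do you enjoy?
```

The point of the architecture is that the middle step is symbolic: the reasoner, not the LLM, decides the next conversational move, which is what lets the system guarantee coherence and stay on topic.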
The quality of ontologies and their alignments is crucial for developing high-quality semantics-based applications. Traditional debugging techniques repair ontology networks by removing unwanted axioms and mappings, but may thereby remove consequences that are correct in the domain of the ontology network. In this paper we propose a framework for repairing ontology networks that deals with this issue. It defines basic operations such as debugging, weakening and completing. Further, it defines combination operators that reflect choices in how and when to use the basic operators, as well as choices regarding the autonomy level of the ontologies and alignments in the ontology network. We show the influence of the combination operators on the quality of the repaired network and present an implemented tool. By using our framework together with existing algorithms for debugging, weakening and completing, we essentially provide a blueprint for extending previous work and systems.
"Repairing Networks of $\mathcal{EL}_\perp$ Ontologies using Weakening and Completing -- Extended version" by Ying Li, Patrick Lambrix (arXiv:2407.18848, arXiv - CS - Logic in Computer Science, 2024-07-26).
Christoph Berkholz, Moritz Lichter, Harry Vinall-Smeeth
We study the refutation complexity of graph isomorphism in the tree-like resolution calculus. Torán and Wörz (TOCL 2023) showed that there is a resolution refutation of narrow width $k$ for two graphs if and only if they can be distinguished in ($k+1$)-variable first-order logic (FO$^{k+1}$) and hence by a count-free variant of the $k$-dimensional Weisfeiler-Leman algorithm. While DAG-like narrow width $k$ resolution refutations have size at most $n^k$, tree-like refutations may be much larger. We show that there are graphs of order $n$ whose isomorphism can be refuted in narrow width $k$, but only in tree-like size $2^{\Omega(n^{k/2})}$. This is a supercritical trade-off, where bounding one parameter (the narrow width) causes the other parameter (the size) to grow above its worst case. The size lower bound is super-exponential in the formula size and improves a related supercritical width versus tree-like size trade-off by Razborov (JACM 2016). To prove our result, we develop a new variant of the $k$-pebble EF-game for FO$^k$ to reason about tree-like refutation size, in a similar way as the Prover-Delayer games in proof complexity. We analyze this game on a modified variant of the compressed CFI graphs introduced by Grohe, Lichter, Neuen, and Schweitzer (FOCS 2023). Using a recent improved robust compressed CFI construction of Janett, Nordström, and Pang (unpublished manuscript), we obtain a similar bound for width $k$ (instead of the stronger but less common narrow width) and make the result more robust.
"Supercritical Size-Width Tree-Like Resolution Trade-Offs for Graph Isomorphism" by Christoph Berkholz, Moritz Lichter, Harry Vinall-Smeeth (arXiv:2407.17947, arXiv - CS - Logic in Computer Science, 2024-07-25).
Abdolmahdi Bagheri, Matin Alinejad, Kevin Bello, Alireza Akhondi-Asl
Causal reasoning is the primary bottleneck that Large Language Models (LLMs) must overcome to attain human-level intelligence. To address this, we introduce the Causal Chain of Prompting (C2P) as the first reasoning framework that equips current LLMs with causal reasoning capabilities. C2P operates autonomously, avoiding reliance on external tools or modules during both the causal learning and reasoning phases, and can be seamlessly implemented during the training or fine-tuning of LLMs. Experimental results across various benchmark datasets demonstrate a significant improvement in causal learning and subsequent reasoning accuracy of LLMs. We illustrate how C2P enhances LLMs' ability to causally reason in real-world scenarios, addressing complex problems in fields such as healthcare, medicine, economics, education, social sciences, environmental science, and marketing. With few-shot learning, GPT-4 Turbo using C2P with as few as six examples achieves significant performance improvements, boasting over a 33% increase in reasoning accuracy over the most state-of-the-art LLMs, which perform nearly randomly in similar circumstances. This demonstrates the transformative potential of integrating C2P into LLM training or fine-tuning processes, thereby empowering these models with advanced causal reasoning capabilities.
"C2P: Featuring Large Language Models with Causal Reasoning" by Abdolmahdi Bagheri, Matin Alinejad, Kevin Bello, Alireza Akhondi-Asl (arXiv:2407.18069, arXiv - CS - Logic in Computer Science, 2024-07-25).
This paper introduces a generic framework that provides sufficient conditions for guaranteeing polynomial-time decidability of fixed-negation fragments of first-order theories that adhere to certain fixed-parameter tractability requirements. It enables deciding sentences of such theories with arbitrary existential quantification, conjunction and a fixed number of negation symbols in polynomial time. It was recently shown by Nguyen and Pak [SIAM J. Comput. 51(2): 1--31 (2022)] that an even more restricted such fragment of Presburger arithmetic (the first-order theory of the integers with addition and order) is NP-hard. In contrast, by application of our framework, we show that the fixed negation fragment of weak Presburger arithmetic, which drops the order relation from Presburger arithmetic in favour of equality, is decidable in polynomial time.
"On Polynomial-Time Decidability of k-Negations Fragments of First-Order Theories" by Christoph Haase, Alessio Mansutti, Amaury Pouly (arXiv:2407.18420, arXiv - CS - Logic in Computer Science, 2024-07-25).
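To illustrate the kind of question such a decision procedure answers: a single existential atom of weak Presburger arithmetic, $\exists x_1 \ldots \exists x_n.\ a_1 x_1 + \cdots + a_n x_n = b$, is solvable over the integers iff $\gcd(a_1, \ldots, a_n)$ divides $b$ (Bézout's identity). The sketch below decides only this one-equation case; it is a classical illustration, not the paper's framework, which handles full fixed-negation sentences.

```python
from math import gcd
from functools import reduce

def has_integer_solution(coeffs, target):
    """Decide whether a1*x1 + ... + an*xn = target has an integer
    solution: true iff gcd(a1, ..., an) divides target.
    Illustrates a single weak-Presburger atom only."""
    g = reduce(gcd, coeffs, 0)
    if g == 0:                      # all coefficients zero (or no variables)
        return target == 0
    return target % g == 0

print(has_integer_solution([3, 6], 12))  # True, e.g. x1 = 4, x2 = 0
print(has_integer_solution([4, 6], 7))   # False: gcd(4, 6) = 2 does not divide 7
```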
Boolean circuits in d-DNNF form enable tractable probabilistic inference. However, as a key insight of this work, we show that commonly used d-DNNF compilation approaches introduce irrelevant subcircuits. We call these subcircuits Tseitin artifacts, as they are introduced due to the Tseitin transformation step -- a well-established procedure to transform any circuit into the CNF format required by several d-DNNF knowledge compilers. We discuss how to detect and remove both Tseitin variables and Tseitin artifacts, leading to more succinct circuits. We empirically observe an average size reduction of 77.5% when removing both Tseitin variables and artifacts. The additional pruning of Tseitin artifacts reduces the size by 22.2% on average. This significantly improves downstream tasks that benefit from a more succinct circuit, e.g., probabilistic inference tasks.
"Pruning Boolean d-DNNF Circuits Through Tseitin-Awareness" by Vincent Derkinderen (arXiv:2407.17951, arXiv - CS - Logic in Computer Science, 2024-07-25).
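For context, the Tseitin transformation introduces one fresh variable per gate so that any circuit becomes an equisatisfiable CNF; these fresh variables are the "Tseitin variables" whose traces the paper prunes from the compiled d-DNNF. A minimal sketch of the standard encoding for a single AND gate (textbook construction, not the paper's pruning algorithm), with a brute-force satisfiability check for illustration:

```python
from itertools import product

def tseitin_and(x, y, t):
    """CNF clauses asserting t <-> (x AND y), where t is the fresh
    Tseitin variable for the gate. Literals are ints, negative means
    negated (DIMACS convention)."""
    return [[-t, x], [-t, y], [t, -x, -y]]

# Encode the circuit (x1 AND x2): gate clauses plus [[3]] asserting the output.
clauses = tseitin_and(1, 2, 3) + [[3]]

def satisfiable(clauses, n_vars):
    """Brute-force SAT check over all assignments (fine for tiny n_vars)."""
    for bits in product([False, True], repeat=n_vars):
        def val(lit):
            return bits[abs(lit) - 1] if lit > 0 else not bits[abs(lit) - 1]
        if all(any(val(l) for l in c) for c in clauses):
            return True
    return False

print(satisfiable(clauses, 3))  # True: x1 = x2 = t = True
```

Variable 3 here carries no information of its own once the formula is compiled, which is why detecting and removing such variables (and the subcircuits that only serve them) can shrink the d-DNNF without changing the models over the original variables.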
We survey the complexity class $\exists\mathbb{R}$, which captures the complexity of deciding the existential theory of the reals. The class $\exists\mathbb{R}$ has roots in two different traditions, one based on the Blum-Shub-Smale model of real computation, and the other following work by Mněv and Shor on the universality of realization spaces of oriented matroids. Over the years, the number of problems for which $\exists\mathbb{R}$, rather than NP, has turned out to be the proper way of measuring their complexity has grown, particularly in the fields of computational geometry, graph drawing, game theory, and some areas in logic and algebra. $\exists\mathbb{R}$ has also started appearing in the context of machine learning, Markov decision processes, and probabilistic reasoning. We have aimed at collecting a comprehensive compendium of problems complete and hard for $\exists\mathbb{R}$, as well as a long list of open problems. The compendium is presented in the third part of our survey; a tour through the compendium and the areas it touches on makes up the second part. The first part introduces the reader to the existential theory of the reals as a complexity class, discussing its history, motivation and prospects as well as some technical aspects.
"The Existential Theory of the Reals as a Complexity Class: A Compendium" by Marcus Schaefer, Jean Cardinal, Tillmann Miltzow (arXiv:2407.18006, arXiv - CS - Logic in Computer Science, 2024-07-25).
Martin Avanzini, Gilles Barthe, Davide Davoli, Benjamin Grégoire
We introduce eRHL, a program logic for reasoning about relational expectation properties of pairs of probabilistic programs. eRHL is quantitative, i.e., its pre- and post-conditions take values in the extended non-negative reals. Thanks to its quantitative assertions, eRHL overcomes randomness alignment restrictions from prior logics, including PRHL, a popular relational program logic used to reason about security of cryptographic constructions, and apRHL, a variant of PRHL for differential privacy. As a result, eRHL is the first relational probabilistic program logic to be supported by non-trivial soundness and completeness results for all almost surely terminating programs. We show that eRHL is sound and complete with respect to program equivalence, statistical distance, and differential privacy. We also show that every PRHL judgment is valid iff it is provable in eRHL. We showcase the practical benefits of eRHL with examples that are beyond reach of PRHL and apRHL.
"A quantitative probabilistic relational Hoare logic" by Martin Avanzini, Gilles Barthe, Davide Davoli, Benjamin Grégoire (arXiv:2407.17127, arXiv - CS - Logic in Computer Science, 2024-07-24).
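One of the relational quantities eRHL judgments can bound is statistical distance between the output distributions of two programs. As a standalone illustration of that quantity (standard total variation distance, not the logic itself), here it is computed exactly for two tiny coin-flipping programs represented by their output distributions:

```python
from collections import Counter
from fractions import Fraction

def dist(program_outputs):
    """Exact output distribution of a program, given as (outcome, probability) pairs."""
    d = Counter()
    for outcome, p in program_outputs:
        d[outcome] += p
    return d

def tv_distance(d1, d2):
    """Total variation (statistical) distance between two distributions."""
    support = set(d1) | set(d2)
    return sum(abs(d1[x] - d2[x]) for x in support) / 2

# Program 1: a fair coin.  Program 2: a coin biased 3/4 towards heads.
p1 = dist([("heads", Fraction(1, 2)), ("tails", Fraction(1, 2))])
p2 = dist([("heads", Fraction(3, 4)), ("tails", Fraction(1, 4))])
print(tv_distance(p1, p2))  # 1/4
```

A quantitative relational logic lets one derive such a bound compositionally from the program texts, rather than by enumerating output distributions as done here.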
We present our work on the collaborative use of dynamic and static analysis tools for the verification of software written in the OCaml language. We build upon Gospel, a specification language for OCaml that can be used in both dynamic and static analyses. We employ Ortac for runtime assertion checking, and Cameleer and CFML for the deductive verification of OCaml code. We report on the use of these tools to build a case study of collaborative analysis of a non-trivial OCaml program. This shows how these tools nicely complement each other, while at the same time highlighting the differences that arise when writing specifications targeting dynamic or static analysis methods.
"Static and Dynamic Verification of OCaml Programs: The Gospel Ecosystem (Extended Version)" by Tiago Lopes Soares, Ion Chririca, Mário Pereira (arXiv:2407.17289, arXiv - CS - Logic in Computer Science, 2024-07-24).
This paper combines the classical model of labeled transition systems with the epistemic model for reasoning about knowledge. The result is a unifying framework for modeling and analyzing multi-agent, knowledge-based, dynamic systems. On the modeling side, we propose a process algebraic, agent-oriented specification language that makes such a framework easy to use for practical purposes. On the verification side, we define a modal logic encompassing temporal and epistemic operators.
"A process algebraic framework for multi-agent dynamic epistemic systems" by Alessandro Aldini (arXiv:2407.17537, arXiv - CS - Logic in Computer Science, 2024-07-24).
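As background for the epistemic side of such a logic, the standard Kripke semantics evaluates "agent a knows φ" at a world by checking that φ holds in every world a cannot distinguish from it. A minimal sketch of this textbook semantics (not the paper's calculus; world and agent names are invented for the example):

```python
def knows(agent, phi, world, indist, valuation):
    """K_a(phi) holds at `world` iff phi holds in every world the agent
    considers possible there. `indist[agent]` maps each world to that
    set of possible worlds; `phi` is a predicate on (world, valuation)."""
    return all(phi(w, valuation) for w in indist[agent][world])

# Two worlds: in w1 the coin shows heads, in w2 it shows tails.
valuation = {"w1": {"heads"}, "w2": set()}
indist = {
    "alice": {"w1": {"w1"}, "w2": {"w2"}},           # Alice saw the coin
    "bob":   {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}},  # Bob did not
}
heads = lambda w, v: "heads" in v[w]

print(knows("alice", heads, "w1", indist, valuation))  # True
print(knows("bob", heads, "w1", indist, valuation))    # False
```

In the framework of the paper, the transition-system side would additionally evolve these indistinguishability relations as agents act and observe, which is what the combined temporal-epistemic operators reason about.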