This article provides the theoretical framework of Probabilistic Shoenfield Machines (PSMs), an extension of the classical Shoenfield Machine that models randomness in the computation process. PSMs are useful in contexts where deterministic computation is insufficient, such as randomized algorithms. By allowing transitions to multiple possible states with given probabilities, PSMs can solve problems and make decisions based on probabilistic outcomes, expanding the variety of possible computations. We provide an overview of PSMs, detailing their formal definitions, their computation mechanism, and their equivalence with Non-deterministic Shoenfield Machines (NSMs).
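To make the probabilistic transition mechanism concrete, here is a minimal sketch of a machine whose steps are sampled from a transition distribution. The state names, probabilities, and the `step`/`run` helpers are illustrative assumptions, not definitions from the paper.

```python
import random

# Hypothetical transition table: from each state the machine moves to one of
# several successor states with the given probabilities (assumed for the demo).
TRANSITIONS = {
    "q0": [("q1", 0.5), ("q2", 0.5)],
    "q1": [("accept", 0.9), ("q0", 0.1)],
    "q2": [("reject", 0.8), ("q0", 0.2)],
}

def step(state, rng):
    """Sample the next state according to the transition distribution."""
    successors, weights = zip(*TRANSITIONS[state])
    return rng.choices(successors, weights=weights)[0]

def run(start="q0", max_steps=1000, seed=0):
    """Run the machine until it halts (accept/reject) or the step bound hits."""
    rng = random.Random(seed)
    state = start
    for _ in range(max_steps):
        if state in ("accept", "reject"):
            return state
        state = step(state, rng)
    return "diverge"
```

Because the generator is seeded, each run is reproducible, while different seeds exercise different probabilistic branches of the computation.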
Probabilistic Shoenfield Machines. Maksymilian Bujok and Adam Mata. arXiv - CS - Symbolic Computation, 2024-07-08. https://doi.org/arxiv-2407.05777
Automated Theorem Proving (ATP) faces challenges due to its complexity and computational demands. Recent work has explored using Large Language Models (LLMs) for ATP action selection, but these methods can be resource-intensive. This study introduces FEAS, an agent that enhances the COPRA in-context learning framework within Lean. FEAS refines prompt generation and response parsing, and incorporates domain-specific heuristics for functional equations. It also introduces FunEq, a curated dataset of functional equation problems of varying difficulty. FEAS outperforms baselines on FunEq, particularly when the domain-specific heuristics are integrated. The results demonstrate FEAS's effectiveness in generating high-level proof strategies and formalizing them into Lean proofs, showcasing the potential of tailored approaches for specific ATP challenges.
Towards Automated Functional Equation Proving: A Benchmark Dataset and A Domain-Specific In-Context Agent. Mahdi Buali and Robert Hoehndorf. arXiv - CS - Symbolic Computation, 2024-07-05. https://doi.org/arxiv-2407.14521
Sometimes only some digits of a numerical product, or some terms of a polynomial or series product, are required. Frequently these constitute the most significant or least significant part of the value, for example when computing initial values or refinement steps in iterative approximation schemes. Other situations require the middle portion. In this paper we provide algorithms for the general problem of computing a given span of coefficients within a product, that is, the terms within a range of degrees for univariate polynomials or a range of digits for an integer. This generalizes the "middle product" concept of Hanrot, Quercia and Zimmermann. We are primarily interested in problems of modest size where constant speed-up factors can improve overall system performance, and therefore focus the discussion on classical and Karatsuba multiplication and how the methods may be combined.
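As a baseline statement of the problem, the following sketch computes just the requested span of product coefficients with a classical double loop, skipping all work outside the span. The function name and interface are assumptions; the paper's Karatsuba-based variants are not reproduced here.

```python
def clipped_product(a, b, lo, hi):
    """Coefficients of degrees lo..hi of the product of polynomials a and b.

    a and b are coefficient lists (a[i] is the degree-i coefficient). Only the
    requested span is accumulated, a classical O((len(a)) * (hi - lo + 1))
    loop illustrating the problem, not the paper's faster methods.
    """
    out = [0] * (hi - lo + 1)
    for i, ai in enumerate(a):
        if ai == 0:
            continue
        # j must satisfy lo <= i + j <= hi and 0 <= j < len(b)
        for j in range(max(0, lo - i), min(len(b), hi - i + 1)):
            out[i + j - lo] += ai * b[j]
    return out
```

For a = 1 + 2x + 3x^2 and b = 4 + 5x + 6x^2, the full product is 4 + 13x + 28x^2 + 27x^3 + 18x^4, so the degree 1..3 span is [13, 28, 27].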
Computing Clipped Products. Arthur C. Norman and Stephen M. Watt. arXiv - CS - Symbolic Computation, 2024-07-04. https://doi.org/arxiv-2407.04133
We study certain linear algebra algorithms for recursive block matrices. This representation has useful practical and theoretical properties. We summarize some previous results for block matrix inversion and present some results on triangular decomposition of block matrices. The case of inverting matrices over a ring that is neither formally real nor formally complex was inspired by Gonzalez-Vega et al.
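A minimal sketch of 2x2 block inversion via the Schur complement, using exact rational arithmetic so the recursion stays exact. It assumes (unlike the paper's general treatment) that the leading block and its Schur complement are invertible at every level, and that the dimension is a power of two; all names are illustrative.

```python
from fractions import Fraction

def mat(rows):
    """Build an exact-rational matrix from nested lists."""
    return [[Fraction(x) for x in r] for r in rows]

def madd(A, B): return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
def msub(A, B): return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
def mneg(A): return [[-a for a in r] for r in A]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def block_inverse(M):
    """Invert a (2^k x 2^k) matrix by recursive 2x2 block elimination.

    Sketch only: no pivoting, so the leading block A and its Schur
    complement S must be invertible at every recursion level.
    """
    n = len(M)
    if n == 1:
        return [[Fraction(1) / M[0][0]]]
    h = n // 2
    A = [r[:h] for r in M[:h]]; B = [r[h:] for r in M[:h]]
    C = [r[:h] for r in M[h:]]; D = [r[h:] for r in M[h:]]
    Ai = block_inverse(A)
    S = msub(D, mmul(C, mmul(Ai, B)))           # Schur complement of A
    Si = block_inverse(S)
    TL = madd(Ai, mmul(Ai, mmul(B, mmul(Si, mmul(C, Ai)))))
    TR = mneg(mmul(Ai, mmul(B, Si)))
    BL = mneg(mmul(Si, mmul(C, Ai)))
    return [TL[i] + TR[i] for i in range(h)] + [BL[i] + Si[i] for i in range(h)]
```

The recursion mirrors the block identity M⁻¹ = [[A⁻¹ + A⁻¹B S⁻¹ C A⁻¹, -A⁻¹B S⁻¹], [-S⁻¹C A⁻¹, S⁻¹]] with S = D - C A⁻¹ B.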
Algorithms for Recursive Block Matrices. Stephen M. Watt. arXiv - CS - Symbolic Computation, 2024-07-04. https://doi.org/arxiv-2407.03976
Jonathan Thomm, Michael Hersche, Giacomo Camposampiero, Aleksandar Terzić, Bernhard Schölkopf, Abbas Rahimi
We advance the recently proposed neuro-symbolic Differentiable Tree Machine, which learns tree operations using a combination of transformers and Tensor Product Representations. We investigate the architecture and propose two key components. First, we remove the series of different transformer layers used at every step by introducing a mixture of experts. This yields a Differentiable Tree Experts model whose parameter count is constant for any number of computation steps, compared to the linear growth of the previous Differentiable Tree Machine. Given this flexibility in the number of steps, we additionally propose a new termination algorithm that gives the model the power to choose automatically how many steps to take. The resulting Terminating Differentiable Tree Experts model sluggishly learns to predict the number of steps without an oracle, while maintaining the learning capabilities of the model and converging to the optimal number of steps.
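The termination idea can be illustrated with a generic adaptive-halting loop: the model keeps stepping until its accumulated halting probability crosses a threshold, so the step count is chosen by the computation itself. The accumulation rule and sigmoid halting unit below are assumptions in the spirit of adaptive computation time, not the paper's exact algorithm.

```python
import math

def run_with_termination(step_fn, halt_prob_fn, x, max_steps=32, threshold=0.99):
    """Apply step_fn until the accumulated halting probability crosses the
    threshold, so the number of steps is not fixed in advance (illustrative)."""
    cum_halt = 0.0
    for t in range(1, max_steps + 1):
        x = step_fn(x)
        # remaining probability mass times this step's halting probability
        cum_halt += (1.0 - cum_halt) * halt_prob_fn(x)
        if cum_halt >= threshold:
            return x, t
    return x, max_steps

# Toy usage: the "state" is a counter and the halting unit is a sigmoid
# that grows more confident as the state passes 5 (assumed for the demo).
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
state, steps = run_with_termination(lambda v: v + 1, lambda v: sigmoid(v - 5), 0)
```

With these toy choices the loop halts after seven steps, once the cumulative halting probability first exceeds 0.99.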
Terminating Differentiable Tree Experts. Jonathan Thomm, Michael Hersche, Giacomo Camposampiero, Aleksandar Terzić, Bernhard Schölkopf, Abbas Rahimi. arXiv - CS - Symbolic Computation, 2024-07-02. https://doi.org/arxiv-2407.02060
Visual mathematical reasoning, as a fundamental visual reasoning ability, has received widespread attention from the Large Multimodal Models (LMMs) community. Existing benchmarks, such as MathVista and MathVerse, focus on result-oriented performance but neglect the underlying principles of knowledge acquisition and generalization. Inspired by human-like mathematical reasoning, we introduce WE-MATH, the first benchmark specifically designed to explore the problem-solving principles beyond end-to-end performance. We meticulously collect and categorize 6.5K visual math problems, spanning 67 hierarchical knowledge concepts and five layers of knowledge granularity. We decompose composite problems into sub-problems according to the required knowledge concepts and introduce a novel four-dimensional metric, namely Insufficient Knowledge (IK), Inadequate Generalization (IG), Complete Mastery (CM), and Rote Memorization (RM), to hierarchically assess inherent issues in LMMs' reasoning process. With WE-MATH, we conduct a thorough evaluation of existing LMMs in visual mathematical reasoning and reveal a negative correlation between solving steps and problem-specific performance. We confirm that the IK issue of LMMs can be effectively mitigated via knowledge augmentation strategies. More notably, the primary challenge of GPT-4o has clearly transitioned from IK to IG, establishing it as the first LMM advancing towards the knowledge generalization stage. In contrast, other LMMs exhibit a marked inclination towards Rote Memorization: they correctly solve composite problems involving multiple knowledge concepts yet fail to answer sub-problems. We anticipate that WE-MATH will open new pathways for advancements in visual mathematical reasoning for LMMs. The WE-MATH data and evaluation code are available at https://github.com/We-Math/We-Math.
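One hedged reading of the four-dimensional metric, for a single composite problem and its sub-problems, can be sketched as a classification rule over correctness outcomes. The exact criteria and thresholds in WE-MATH may differ; this is illustrative only.

```python
def classify(composite_correct, sub_correct):
    """Classify one composite problem given its result (True = solved) and the
    list of sub-problem results, following a simplified reading of WE-MATH:
      CM: composite and all sub-problems solved       (Complete Mastery)
      RM: composite solved, some sub-problem missed   (Rote Memorization)
      IG: sub-problems solved, composite missed       (Inadequate Generalization)
      IK: sub-problems missed                         (Insufficient Knowledge)
    """
    all_subs = all(sub_correct)
    if composite_correct and all_subs:
        return "CM"
    if composite_correct:
        return "RM"
    if all_subs:
        return "IG"
    return "IK"
```

Under this rule, a model that answers a multi-concept composite problem correctly while failing one of its sub-problems is flagged RM, which matches the Rote Memorization pattern the abstract describes.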
We-Math: Does Your Large Multimodal Model Achieve Human-like Mathematical Reasoning? Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, Runfeng Qiao, Yifan Zhang, Xiao Zong, Yida Xu, Muxi Diao, Zhimin Bao, Chen Li, Honggang Zhang. arXiv - CS - Symbolic Computation, 2024-07-01. https://doi.org/arxiv-2407.01284
Cryptographic hash functions play a crucial role in ensuring data security, generating fixed-length hashes from variable-length inputs. The hash function SHA-256 is trusted for data security due to its resilience after over twenty years of intense scrutiny. One of its critical properties is collision resistance, meaning that it is infeasible to find two different inputs with the same hash. Currently, the best SHA-256 collision attacks use differential cryptanalysis to find collisions in simplified versions of SHA-256 that are reduced to have fewer steps, making it feasible to find collisions. In this paper, we use a satisfiability (SAT) solver as a tool to search for step-reduced SHA-256 collisions, and we dynamically guide the solver with the aid of a computer algebra system (CAS) that detects inconsistencies and deduces information the solver would otherwise not discover on its own. Our hybrid SAT + CAS solver significantly outperformed a pure SAT approach, enabling us to find collisions in step-reduced SHA-256 with significantly more steps. Using SAT + CAS, we find a 38-step collision of SHA-256 with a modified initialization vector, something previously found only by the highly sophisticated search tool of Mendel, Nad, and Schläffer, whereas a pure SAT approach could find collisions for no more than 28 steps. Notably, our work uses only the SAT solver CaDiCaL and its programmatic interface IPASIR-UP.
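The cooperation pattern here, a SAT search that consults an external reasoner to prune partial assignments, can be illustrated with a toy DPLL loop. This is a sketch of the idea only, not the CaDiCaL/IPASIR-UP machinery or any SHA-256 encoding; the external check below stands in for the CAS.

```python
def lit_value(lit, assignment):
    """Value of a literal under a partial assignment (None if unassigned)."""
    val = assignment.get(abs(lit))
    if val is None:
        return None
    return val if lit > 0 else not val

def dpll(clauses, n_vars, external_check=None, assignment=None):
    """Minimal DPLL search in which an external reasoner (standing in for the
    CAS) can veto partial assignments it proves unextendable."""
    if assignment is None:
        assignment = {}
    if external_check is not None and not external_check(assignment):
        return None  # external reasoner prunes this entire branch
    for clause in clauses:
        if all(lit_value(l, assignment) is False for l in clause):
            return None  # clause falsified: backtrack
    if len(assignment) == n_vars:
        return dict(assignment)  # fully assigned, no clause falsified
    var = next(v for v in range(1, n_vars + 1) if v not in assignment)
    for value in (True, False):
        assignment[var] = value
        model = dpll(clauses, n_vars, external_check, assignment)
        if model is not None:
            return model
        del assignment[var]
    return None
```

For example, passing `external_check=lambda a: not (a.get(1) and a.get(2))` forbids variables 1 and 2 from both being true, and the solver routes around those branches exactly as it would around a CAS-detected inconsistency.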
SHA-256 Collision Attack with Programmatic SAT. Nahiyan Alamgir, Saeed Nejati, Curtis Bright. arXiv - CS - Symbolic Computation, 2024-06-28. https://doi.org/arxiv-2406.20072
The difficulty of factoring large integers into primes is the basis for cryptosystems such as RSA. Due to the widespread popularity of RSA, many attacks on the factorization problem have been proposed, such as side-channel attacks in which some bits of the prime factors become available. When enough bits of the prime factors are known, two methods that are effective at solving the factorization problem are satisfiability (SAT) solvers and Coppersmith's method. The SAT approach reduces the factorization problem to a Boolean satisfiability problem, while Coppersmith's approach uses lattice basis reduction. Both methods have their advantages, but they also have their limitations: Coppersmith's method does not apply when the known bit positions are randomized, while SAT-based methods can take advantage of known bits in arbitrary locations but have no knowledge of the algebraic structure exploited by Coppersmith's method. In this paper we describe a new hybrid SAT and computer algebra approach that efficiently solves random leaked-bit factorization problems. Specifically, Coppersmith's method is invoked by a SAT solver to determine whether a partial bit assignment can be extended to a complete assignment. Our hybrid implementation solves random leaked-bit factorization problems significantly faster than either a pure SAT or pure computer algebra approach.
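To see why leaked bits make factoring tractable, here is a small branch-and-prune reconstruction that fixes p and q bit by bit and keeps only candidates consistent with n modulo increasing powers of two. This classic technique stands in for, and is not, the paper's SAT + Coppersmith loop; the toy numbers and interface are assumptions.

```python
def factor_with_leaked_bits(n, bitlen, known_p, known_q):
    """Recover p, q with n = p*q given leaked bits (dicts bit_index -> 0/1).

    Branch and prune from the least significant bit: a candidate pair (p, q)
    survives only if p*q matches n modulo 2^(i+1) and respects the leaked
    bits. Toy illustration of why leaked bits prune the search.
    """
    # odd primes: bit 0 of both factors must be 1
    if known_p.get(0, 1) != 1 or known_q.get(0, 1) != 1:
        return None
    candidates = [(1, 1)]
    for i in range(1, bitlen):
        mod = 1 << (i + 1)
        nxt = []
        for p, q in candidates:
            for pb in ([known_p[i]] if i in known_p else [0, 1]):
                for qb in ([known_q[i]] if i in known_q else [0, 1]):
                    pc, qc = p | (pb << i), q | (qb << i)
                    if (pc * qc) % mod == n % mod:  # consistent so far
                        nxt.append((pc, qc))
        candidates = nxt
    for p, q in candidates:
        if p * q == n and p > 1 and q > 1:
            return p, q
    return None
```

With n = 143 = 11 * 13 and two leaked bits (bit 3 of p is 1, bit 1 of q is 0), the pruning leaves the single factorization (11, 13).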
SAT and Lattice Reduction for Integer Factorization. Yameen Ajani, Curtis Bright. arXiv - CS - Symbolic Computation, 2024-06-28. https://doi.org/arxiv-2406.20071
Satisfiability modulo nonlinear real arithmetic theory (SMT(NRA)) solving is essential to multiple applications, including program verification, program synthesis, and software testing. In this context, model constructing satisfiability calculus (MCSAT) was recently invented to search for models directly in the theory space. Although subsequent papers have discussed practical directions and updates to MCSAT, less attention has been paid to the details of implementation. In this paper, we present an efficient implementation of dynamic variable ordering for MCSAT, called dnlsat. We describe carefully designed data structures and promising mechanisms such as branching heuristics, restarts, and lemma management. In addition, we give a theoretical study of the potential influence of the dynamic variable ordering. The experimental evaluation shows that dnlsat accelerates solving and solves more satisfiable instances than other state-of-the-art SMT solvers. Demonstration video: https://youtu.be/T2Z0gZQjnPw Code: https://github.com/yogurt-shadow/dnlsat/tree/master/code Benchmark: https://zenodo.org/records/10607722/files/QF_NRA.tar.zst?download=1
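A common realization of dynamic variable ordering is VSIDS-style activity tracking: variables involved in recent conflicts are bumped, all activities decay, and the solver branches on the most active unassigned variable. The class and method names below are illustrative, and dnlsat's actual scheme may differ.

```python
class DynamicOrder:
    """VSIDS-style activity-based dynamic variable ordering (illustrative)."""

    def __init__(self, variables, decay=0.95, bump=1.0):
        self.activity = {v: 0.0 for v in variables}
        self.decay, self.bump = decay, bump

    def on_conflict(self, conflict_vars):
        for v in self.activity:
            self.activity[v] *= self.decay   # decay all activities
        for v in conflict_vars:
            self.activity[v] += self.bump    # bump the conflict participants

    def pick(self, unassigned):
        """Branch on the most active unassigned variable."""
        return max(unassigned, key=lambda v: self.activity[v])
```

Because the ordering reacts to conflicts as they happen, the branching focus shifts toward the variables currently causing trouble instead of following a fixed static order.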
DNLSAT: A Dynamic Variable Ordering MCSAT Framework for Nonlinear Real Arithmetic. Zhonghan Wang. arXiv - CS - Symbolic Computation, 2024-06-27. https://doi.org/arxiv-2406.18964
Florence Dupin de Saint-Cyr (IRIT-ADRIA, UT3), Andreas Herzig (IRIT-LILaC, CNRS), Jérôme Lang (LAMSADE, PSL, IRIT-ADRIA), Pierre Marquis (CRIL)
The purpose of this book is to provide an overview of AI research, ranging from basic work to interfaces and applications, with as much emphasis on results as on current issues. It is aimed at an audience of master's and Ph.D. students, and may also be of interest to researchers and engineers who want to know more about AI. The book is split into three volumes.
Reasoning About Action and Change. Florence Dupin de Saint-Cyr, Andreas Herzig, Jérôme Lang, Pierre Marquis. arXiv - CS - Symbolic Computation, 2024-06-27. https://doi.org/arxiv-2406.18930