{"title":"Graph sequence learning for premise selection","authors":"Edvard K. Holden, Konstantin Korovin","doi":"10.1016/j.jsc.2024.102376","DOIUrl":null,"url":null,"abstract":"<div><p>Premise selection is crucial for large theory reasoning with automated theorem provers as the sheer size of the problems quickly leads to resource exhaustion. This paper proposes a premise selection method inspired by the machine learning domain of image captioning, where language models automatically generate a suitable caption for a given image. Likewise, we attempt to generate the sequence of axioms required to construct the proof of a given conjecture. In our <em>axiom captioning</em> approach, a pre-trained graph neural network is combined with a language model via transfer learning to encapsulate both the inter-axiom and conjecture-axiom relationships. We evaluate different configurations of our method and experience a 14% improvement in the number of solved problems over a baseline.</p></div>","PeriodicalId":50031,"journal":{"name":"Journal of Symbolic Computation","volume":null,"pages":null},"PeriodicalIF":0.6000,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0747717124000804/pdfft?md5=f758e854b5cedd39b04e5e1431d3d6d8&pid=1-s2.0-S0747717124000804-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Symbolic Computation","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0747717124000804","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0
Abstract
Premise selection is crucial for large theory reasoning with automated theorem provers, as the sheer size of the problems quickly leads to resource exhaustion. This paper proposes a premise selection method inspired by the machine learning domain of image captioning, where language models automatically generate a suitable caption for a given image. Likewise, we attempt to generate the sequence of axioms required to construct the proof of a given conjecture. In our axiom captioning approach, a pre-trained graph neural network is combined with a language model via transfer learning to encapsulate both the inter-axiom and conjecture-axiom relationships. We evaluate different configurations of our method and observe a 14% improvement in the number of solved problems over a baseline.
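To make the "axiom captioning" analogy concrete, the sketch below shows the general shape of such an encoder-decoder pipeline: a graph encoder summarises the conjecture's formula graph into an embedding, and a recurrent decoder conditioned on that embedding emits a sequence of axiom identifiers, just as an image-captioning model emits words. This is a minimal illustration assuming PyTorch, a hand-rolled one-step message-passing encoder, and a GRU decoder; all class names, dimensions, and architectural choices here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a graph-to-sequence "axiom captioning" model.
# Assumptions: PyTorch, a toy message-passing encoder, a GRU decoder;
# not the paper's actual architecture.
import torch
import torch.nn as nn


class GraphEncoder(nn.Module):
    """One round of mean-aggregation message passing, then a graph-level mean pool."""

    def __init__(self, node_feat_dim: int, embed_dim: int):
        super().__init__()
        self.node_proj = nn.Linear(node_feat_dim, embed_dim)
        self.msg_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, node_feat_dim), adj: (num_nodes, num_nodes)
        h = torch.relu(self.node_proj(node_feats))
        # Aggregate neighbour messages with mean normalisation and a residual.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.msg_proj((adj @ h) / deg)) + h
        return h.mean(dim=0)  # graph-level embedding of shape (embed_dim,)


class AxiomDecoder(nn.Module):
    """GRU decoder generating axiom ids conditioned on the graph embedding."""

    def __init__(self, axiom_vocab_size: int, embed_dim: int):
        super().__init__()
        self.axiom_embed = nn.Embedding(axiom_vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.out = nn.Linear(embed_dim, axiom_vocab_size)

    def forward(self, graph_emb: torch.Tensor, axiom_ids: torch.Tensor) -> torch.Tensor:
        # graph_emb: (embed_dim,); axiom_ids: (batch, seq_len) of previous axioms.
        h0 = graph_emb.expand(1, axiom_ids.size(0), -1).contiguous()
        emb = self.axiom_embed(axiom_ids)
        out, _ = self.gru(emb, h0)
        return self.out(out)  # logits over the axiom vocabulary at each step


if __name__ == "__main__":
    # Toy example: a 5-node conjecture graph and a vocabulary of 100 candidate axioms.
    enc, dec = GraphEncoder(16, 32), AxiomDecoder(100, 32)
    feats, adj = torch.randn(5, 16), torch.eye(5)
    prev_axioms = torch.tensor([[1, 7, 42]])  # hypothetical partial axiom sequence
    logits = dec(enc(feats, adj), prev_axioms)
    print(logits.shape)  # (1, 3, 100): next-axiom scores at each decoding step
```

In a captioning-style setup like this, the decoder would typically be trained with teacher forcing on proofs of known problems and then decoded greedily or with beam search to rank axioms for new conjectures; the transfer-learning aspect described in the abstract would correspond to pre-training the graph encoder on a separate task before attaching the decoder.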
About the journal:
The Journal of Symbolic Computation, an international journal founded by Bruno Buchberger in 1985, is aimed at mathematicians and computer scientists with a particular interest in symbolic computation. The journal provides a forum for research in the algorithmic treatment of all types of symbolic objects: objects in formal languages (terms, formulas, programs); algebraic objects (elements in basic number domains, polynomials, residue classes, etc.); and geometrical objects.
An explicit goal of the journal is to promote the integration of symbolic computation by establishing a common avenue of communication for researchers working in its different subareas. It is also important that the algorithmic achievements of these areas be made available to human problem-solvers through integrated software systems for symbolic computation. To support this integration, the journal publishes invited tutorial surveys as well as Applications Letters and System Descriptions.