A simple proof-theoretic characterization of stable models: Reduction to difference logic and experiments
Martin Gebser, Enrico Giunchiglia, Marco Maratea, Marco Mochi
Artificial Intelligence, published online 2024-12-24. DOI: 10.1016/j.artint.2024.104276
Abstract
Stable models of logic programs have been studied and characterized in relation to other formalisms by many researchers. As argued in previous papers, such characterizations are interesting for diverse reasons, including theoretical investigations and the possibility of leading to new algorithms for computing stable models of logic programs. At the theoretical level, complexity and expressiveness comparisons have brought about fundamental insights. Beyond that, practical implementations of the developed reductions enable the use of existing solvers for other logical formalisms to compute stable models. In this paper, we first provide a simple characterization of stable models that can be viewed as a proof-theoretic counterpart of the standard model-theoretic definition. We further show how it can be naturally encoded in difference logic. Compared to existing reductions to classical logics, this encoding does not require Boolean variables. We then implement our novel translation into a Satisfiability Modulo Theories (SMT) formula. Finally, we compare our approach, employing the SMT solver yices, to the translation-based ASP solver lp2diff and to clingo on domains from the “Basic Decision” track of the 2017 Answer Set Programming competition. The results show that our approach is competitive with, and often better than, lp2diff, and that it can also be faster than clingo on non-tight domains.
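To make the notion of difference logic mentioned in the abstract concrete, below is a minimal sketch of checking a difference-logic formula with an off-the-shelf SMT solver. It uses the z3 Python bindings in place of yices (which the paper actually employs), and the toy formula is purely illustrative; it is not the paper's stable-model encoding.

```python
# Minimal sketch: satisfiability of a difference-logic formula with an SMT solver.
# Assumptions: z3 (pip install z3-solver) stands in for yices, and the formula is
# a toy example, not the stable-model translation described in the paper.
from z3 import Ints, Solver, And, Or, sat

x, y, z = Ints("x y z")

# Difference logic admits Boolean combinations of atoms of the form  u - v <= c
# over integer variables, without requiring any Boolean variables.
phi = And(
    x - y <= 1,                      # atom: x - y <= 1
    y - z <= -2,                     # atom: y - z <= -2
    Or(z - x <= 0, x - z <= 3),      # disjunction of two atoms
)

solver = Solver()
solver.add(phi)
if solver.check() == sat:
    print("satisfiable, e.g.:", solver.model())
else:
    print("unsatisfiable")
```

Conjunctions of such difference atoms can be decided by negative-cycle detection on a constraint graph, which is part of what makes this fragment an attractive target for translations from logic programs.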
About the journal:
The Journal of Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advancements in AI and propose innovative approaches to AI problems. Additionally, the journal accepts papers describing AI applications, focusing on how new methods enhance performance rather than reiterating conventional approaches. In addition to regular papers, AIJ also accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.