Introduction to the Special Track on Artificial Intelligence and COVID-19
Martin Michalowski, Robert Moskovitch, N. Chawla
https://doi.org/10.1613/jair.1.14552

The human race is facing one of the most significant public health emergencies of the modern era, caused by the COVID-19 pandemic. This pandemic introduced various challenges, from lockdowns with significant economic costs to fundamentally altering the way of life for many people around the world. The battle to understand and control the virus is still in its early stages, yet meaningful insights have already been gained. The uncertainty over why some patients are infected and experience severe symptoms, while others are infected but asymptomatic, and still others are not infected at all, makes managing this pandemic very challenging. Furthermore, the development of treatments and vaccines relies on knowledge generated from an ever-evolving and expanding information space. Given the availability of digital data in the modern era, artificial intelligence (AI) is a powerful tool for addressing the challenges introduced by this unexpected pandemic, including outbreak prediction, risk modeling of infection and symptom development, testing strategy optimization, drug development, treatment repurposing, and vaccine development, among others.
{"title":"Introduction to the Special Track on Artificial Intelligence and COVID-19","authors":"Martin Michalowski, Robert Moskovitch, N. Chawla","doi":"10.1613/jair.1.14552","DOIUrl":"https://doi.org/10.1613/jair.1.14552","url":null,"abstract":"The human race is facing one of the most meaningful public health emergencies in the modern era caused by the COVID-19 pandemic. This pandemic introduced various challenges, from lock-downs with significant economic costs to fundamentally altering the way of life for many people around the world. The battle to understand and control the virus is still at its early stages yet meaningful insights have already been made. The uncertainty of why some patients are infected and experience severe symptoms, while others are infected but asymptomatic, and others are not infected at all, makes managing this pandemic very challenging. Furthermore, the development of treatments and vaccines relies on knowledge generated from an ever evolving and expanding information space. Given the availability of digital data in the modern era, artificial intelligence (AI) is a meaningful tool for addressing the various challenges introduced by this unexpected pandemic. Some of the challenges include: outbreak prediction, risk modeling including infection and symptom development, testing strategy optimization, drug development, treatment repurposing, vaccine development, and others.","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"10 1","pages":"523-525"},"PeriodicalIF":5.0,"publicationDate":"2023-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78586504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved Peel-and-Bound: Methods for Generating Dual Bounds with Multivalued Decision Diagrams
Isaac Rudich, Quentin Cappart, Louis-Martin Rousseau
https://doi.org/10.1613/jair.1.14607
Decision diagrams are an increasingly important tool in cutting-edge solvers for discrete optimization. However, the field of decision diagrams is relatively new and is still incorporating the library of techniques that conventional solvers have had decades to build. We drew inspiration from the warm-start technique used in conventional solvers to address one of the major challenges faced by decision-diagram-based methods. Decision diagrams become more useful the wider they are allowed to be, but they also become more costly to generate, especially with large numbers of variables. In the original version of this paper, we presented peel-and-bound, a method of peeling off a subgraph of a previously constructed diagram and using it as the initial diagram for subsequent iterations. We tested the method on the sequence ordering problem, and our results indicate that our peel-and-bound scheme generates stronger bounds than a branch-and-bound scheme using the same propagators, at significantly lower computational cost. In this extended version of the paper, we also propose new methods for using relaxed decision diagrams to improve the solutions found using restricted decision diagrams, discuss the heuristic decisions involved in parallelizing peel-and-bound, and discuss how peel-and-bound can be hyper-optimized for sequencing problems. Furthermore, we test the new methods on the sequence ordering problem and the traveling salesman problem with time windows (TSPTW), and include an updated and generalized implementation of the algorithm capable of handling any discrete optimization problem. The new results show that peel-and-bound outperforms ddo (a decision-diagram-based branch-and-bound solver) on the TSPTW. We also close 15 open benchmark instances of the TSPTW.
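To make the width trade-off concrete, here is a minimal runnable Python sketch (our own illustration, not the paper's solver, and on a toy 0/1 knapsack rather than the paper's sequencing problems) of the two diagram types peel-and-bound builds on: a width-limited restricted diagram drops states and yields a feasible primal bound, while a relaxed diagram merges states and yields a dual bound; both tighten as the width grows.

```python
def compile_dd(values, weights, capacity, width, relaxed):
    layer = {capacity: 0}  # DD layer: remaining capacity -> best profit so far
    for v, w in zip(values, weights):
        nxt = {}
        for cap, profit in layer.items():
            for take in (0, 1):
                if take and w > cap:
                    continue  # item does not fit in the remaining capacity
                s, p = cap - take * w, profit + take * v
                if p > nxt.get(s, -1):
                    nxt[s] = p
        if len(nxt) > width:  # layer too wide: restrict or relax it
            ranked = sorted(nxt.items(), key=lambda kv: -kv[1])
            if relaxed:
                # Merge the overflow states into one that dominates them all
                # (most capacity, best profit), so no solution is lost and
                # the final value is a valid upper (dual) bound.
                keep, rest = ranked[:width - 1], ranked[width - 1:]
                nxt = dict(keep)
                s = max(cap for cap, _ in rest)
                p = max(profit for _, profit in rest)
                nxt[s] = max(nxt.get(s, -1), p)
            else:
                # Drop the weakest states: only feasible paths remain,
                # so the final value is a valid lower (primal) bound.
                nxt = dict(ranked[:width])
        layer = nxt
    return max(layer.values())

vals, wts, cap = [6, 5, 8, 9, 6, 7, 3], [2, 3, 6, 7, 5, 9, 4], 15
lo = compile_dd(vals, wts, cap, width=3, relaxed=False)
hi = compile_dd(vals, wts, cap, width=3, relaxed=True)
print(lo, "<= optimum <=", hi)
```

Widening the diagrams brings the two bounds together; peel-and-bound's contribution is reusing a peeled subgraph of the relaxed diagram instead of recompiling from scratch at every node of the search.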
{"title":"Improved Peel-and-Bound: Methods for Generating Dual Bounds with Multivalued Decision Diagrams","authors":"Isaac Rudich, Quentin Cappart, Louis-Martin Rousseau","doi":"10.1613/jair.1.14607","DOIUrl":"https://doi.org/10.1613/jair.1.14607","url":null,"abstract":"Decision diagrams are an increasingly important tool in cutting-edge solvers for discrete optimization. However, the field of decision diagrams is relatively new, and is still incorporating the library of techniques that conventional solvers have had decades to build. We drew inspiration from the warm-start technique used in conventional solvers to address one of the major challenges faced by decision diagram based methods. Decision diagrams become more useful the wider they are allowed to be, but also become more costly to generate, especially with large numbers of variables. In the original version of this paper, we presented a method of peeling off a sub-graph of previously constructed diagrams and using it as the initial diagram for subsequent iterations that we call peel-and-bound. We tested the method on the sequence ordering problem, and our results indicate that our peel-and-bound scheme generates stronger bounds than a branch-and-bound scheme using the same propagators, and at significantly less computational cost. In this extended version of the paper, we also propose new methods for using relaxed decision diagrams to improve the solutions found using restricted decision diagrams, discuss the heuristic decisions involved with the parallelization of peel-and-bound, and discuss how peel-and-bound can be hyper-optimized for sequencing problems. Furthermore, we test the new methods on the sequence ordering problem and the traveling salesman problem with time-windows (TSPTW), and include an updated and generalized implementation of the algorithm capable of handling any discrete optimization problem. The new results show that peel-and-bound outperforms ddo (a decision diagram based branch-and-bound solver) on the TSPTW. We also close 15 open benchmark instances of the TSPTW.","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"72 1","pages":"1489-1538"},"PeriodicalIF":5.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81719042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating Random SAT Instances: Multiple Solutions could be Predefined and Deeply Hidden
Dongdong Zhao, Lei Liao, Wenjian Luo, Jianwen Xiang, Hao Jiang, Xiaoyi Hu
https://doi.org/10.1613/jair.1.13909
The generation of SAT instances is an important problem in computer science, as it lets researchers verify the effectiveness of SAT solvers, and addressing it could inspire new search strategies. SAT problems arise in various real-world applications, some of which have more than one solution. However, although several algorithms for generating random SAT instances have been proposed, few can generate hard instances that have multiple predefined solutions. In this paper, we propose the KHidden-M algorithm for generating SAT instances with multiple predefined solutions that can be hard for local search to solve when the number of predefined solutions is small enough and the Hamming distance between them is at least half the solution length. Specifically, we first generate a SAT instance that is satisfied by all of the predefined solutions. Then, if the generated instance does not satisfy the hardness condition, clauses are adjusted over multiple iterations to increase the hardness of the whole instance. For the first step, we propose three strategies for generating the instance. The first, the random strategy, randomly generates clauses that are satisfied by all of the predefined solutions. The other two, the estimating strategy and the greedy strategy, attempt to generate an instance that directly satisfies, or is closer to, the hardness condition for local search. In our experiments, we employ two SAT solvers (WalkSAT and Kissat) to investigate the hardness of the instances generated by our algorithm. The experimental results show the effectiveness of the random, estimating, and greedy strategies. Compared to M-hidden, the state-of-the-art algorithm for generating SAT instances with predefined solutions, our algorithm can be more effective at generating hard instances.
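The random strategy is simple enough to sketch directly. The following Python fragment is our own minimal rendering, not the authors' KHidden-M code: it samples k-literal clauses uniformly and keeps a clause only if every predefined solution satisfies it, which guarantees the hidden solutions remain models of the generated instance.

```python
import random

def satisfies(clause, assignment):
    # clause: list of nonzero ints (DIMACS-style literals);
    # assignment: dict mapping variable -> bool.
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

def random_strategy(n_vars, k, n_clauses, solutions, rng=random.Random(0)):
    clauses = []
    while len(clauses) < n_clauses:
        variables = rng.sample(range(1, n_vars + 1), k)
        clause = [v if rng.random() < 0.5 else -v for v in variables]
        # Keep the clause only if all predefined solutions satisfy it.
        if all(satisfies(clause, s) for s in solutions):
            clauses.append(clause)
    return clauses

# Two hidden solutions at Hamming distance n/2, matching the hardness condition.
n = 20
s1 = {v: True for v in range(1, n + 1)}
s2 = {v: (v > n // 2) for v in range(1, n + 1)}
instance = random_strategy(n, k=3, n_clauses=91, solutions=[s1, s2])
print(len(instance), instance[:3])
```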
{"title":"Generating Random SAT Instances: Multiple Solutions could be Predefined and Deeply Hidden","authors":"Dongdong Zhao, Lei Liao, Wenjian Luo, Jianwen Xiang, Hao Jiang, Xiaoyi Hu","doi":"10.1613/jair.1.13909","DOIUrl":"https://doi.org/10.1613/jair.1.13909","url":null,"abstract":"The generation of SAT instances is an important issue in computer science, and it is useful for researchers to verify the effectiveness of SAT solvers. Addressing this issue could inspire researchers to propose new search strategies. SAT problems exist in various real-world applications, some of which have more than one solution. However, although several algorithms for generating random SAT instances have been proposed, few can be used to generate hard instances that have multiple predefined solutions. In this paper, we propose the KHidden-M algorithm to generate SAT instances with multiple predefined solutions that could be hard to solve by the local search strategy when the number of predefined solutions is small enough and the Hamming distance between them is not less than half of the solution length. Specifically, first, we generate an SAT instance that is satisfied by all of the predefined solutions. Next, if the generated SAT instance does not satisfy the hardness condition, then a strategy will be conducted to adjust clauses through multiple iterations to improve the hardness of the whole instance. We propose three strategies to generate the SAT instance in the first part. The first strategy is called the random strategy, which randomly generates clauses that are satisfied by all of the predefined solutions. The other two strategies are called the estimating strategy and greedy strategy, and using them, we attempt to generate an instance that directly satisfies or is closer to the hardness condition for the local search strategy. We employ two SAT solvers (i.e., WalkSAT and Kissat) to investigate the hardness of the SAT instances generated by our algorithm in the experiments. The experimental results show the effectiveness of the random, estimating and greedy strategies. Compared to the state-of-the-art algorithm for generating SAT instances with predefined solutions, namely, M-hidden, our algorithm could be more effective in generating hard SAT instances.","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"55 1","pages":"435-470"},"PeriodicalIF":5.0,"publicationDate":"2023-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83823116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finite Materialisability of Datalog Programs with Metric Temporal Operators
P. Walega, Michał Zawidzki, B. C. Grau
https://doi.org/10.1613/jair.1.14040

DatalogMTL is an extension of Datalog with metric temporal operators that has recently found applications in stream reasoning and temporal ontology-based data access. In contrast to plain Datalog, where materialisation (a.k.a. forward chaining) naturally terminates in finitely many steps, reaching a fixpoint in DatalogMTL may require infinitely many rounds of rule applications. As a result, existing reasoning systems resort to other approaches, such as constructing large Büchi automata, whose implementations turn out to be highly inefficient in practice. In this paper, we propose and study finitely materialisable DatalogMTL programs, for which forward chaining reasoning is guaranteed to terminate. We consider a data-dependent notion of finite materialisability of a program, where termination is guaranteed for a given dataset, as well as a data-independent notion, where termination is guaranteed regardless of the dataset. We show that, for bounded programs (a natural DatalogMTL fragment for which reasoning is as hard as in the full language), checking data-dependent finite materialisability is ExpSpace-complete in combined complexity and PSpace-complete in data complexity; furthermore, we propose a practical materialisation-based decision procedure that works in doubly exponential time. We show that checking data-independent finite materialisability for bounded programs is computationally easier, namely ExpTime-complete; moreover, we propose sufficient conditions for data-independent finite materialisability that can be efficiently checked. We also provide the complexity landscape of fact entailment for different classes of finitely materialisable programs; surprisingly, we identify a large class of finitely materialisable programs, called MTL-acyclic programs, for which fact entailment has exactly the same data and combined complexity as in plain Datalog, which makes this fragment especially well suited for large-scale applications.
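To see why materialisation may need infinitely many rounds, consider the following toy Python sketch (our illustration, not the paper's procedure): facts are atoms stamped with integer time points, and each rule derives its head at time t + d whenever its body holds at t, a discrete stand-in for a metric rule with a punctual interval. A recursive rule keeps generating facts at ever-later time points, so no fixpoint is ever reached.

```python
def materialise(facts, rules, max_rounds=10):
    # rules: list of (body_atom, delta, head_atom), read as
    # "head holds at t + delta whenever body holds at t".
    facts = set(facts)
    for rnd in range(max_rounds):
        new = {(h, t + d) for (b, d, h) in rules for (a, t) in facts if a == b}
        if new <= facts:
            return facts, rnd  # fixpoint reached: finitely materialisable here
        facts |= new
    return facts, None  # no fixpoint within the round budget

# Recursive rule propagating Alarm one step into the future: never terminates.
print(materialise({("Alarm", 0)}, [("Alarm", 1, "Alarm")]))
# Non-recursive rule: fixpoint after one round of new facts.
print(materialise({("Alarm", 0)}, [("Alarm", 1, "Warn")]))
```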
{"title":"Finite Materialisability of Datalog Programs with Metric Temporal Operators","authors":"P. Walega, Michał Zawidzki, B. C. Grau","doi":"10.1613/jair.1.14040","DOIUrl":"https://doi.org/10.1613/jair.1.14040","url":null,"abstract":"\u0000\u0000\u0000DatalogMTL is an extension of Datalog with metric temporal operators that has recently found applications in stream reasoning and temporal ontology-based data access. In contrast to plain Datalog, where materialisation (a.k.a. forward chaining) naturally terminates in finitely many steps, reaching a fixpoint in DatalogMTL may require infinitely many rounds of rule applications. As a result, existing reasoning systems resort to other approaches, such as constructing large Büchi automata, whose implementations turn out to be highly inefficient in practice.\u0000In this paper, we propose and study finitely materialisable DatalogMTL programs, for which forward chaining reasoning is guaranteed to terminate. We consider a data-dependent notion of finite materialisability of a program, where termination is guaranteed for a given dataset, as well as a data-independent notion, where termination is guaranteed regardless of the dataset. We show that, for bounded programs (a natural DatalogMTL fragment for which reasoning is as hard as in the full language), checking data-dependent finite materialisability is ExpSpace-complete in combined complexity and PSpace-complete in data complexity; furthermore, we propose a practical materialisation-based decision procedure that works in doubly exponential time. We show that checking data-independent finite materialisability for bounded progams is computationally easier, namely ExpTime-complete; moreover, we propose sufficient conditions for data-indenpendent finite materialisability that can be efficiently checked. We provide also the complexity landscape of fact entailment for different classes of finitely materialisable programs; surprisingly, we could identify a large class of finitely materialisable programs, called MTL-acyclic programs, for which fact entailment has exactly the same data and combined complexity as in plain Datalog, which makes this fragment especially well suited for big-scale applications.\u0000\u0000\u0000","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"98 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2023-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83204182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Characterizing Tseitin-Formulas with Short Regular Resolution Refutations
Alexis de Colnet, Stefan Mengel
https://doi.org/10.1613/jair.1.13521

Tseitin-formulas are systems of parity constraints whose structure is described by a graph. These formulas have been studied extensively in proof complexity as hard instances in many proof systems. In this paper, we prove that a class of unsatisfiable Tseitin-formulas of bounded degree has regular resolution refutations of polynomial length if and only if the treewidth of all underlying graphs G for that class is in O(log |V(G)|). It follows that unsatisfiable Tseitin-formulas with polynomial length of regular resolution refutations are completely determined by the treewidth of the underlying graphs when these graphs have bounded degree. To prove this, we show that any regular resolution refutation of an unsatisfiable Tseitin-formula with graph G of bounded degree has length 2^Ω(tw(G))/|V(G)|, thus essentially matching the known 2^O(tw(G)) · poly(|V(G)|) upper bound. Our proof first connects the length of regular resolution refutations of unsatisfiable Tseitin-formulas to the size of representations of satisfiable Tseitin-formulas in decomposable negation normal form (DNNF). Then we prove that for every graph G of bounded degree, every DNNF-representation of every satisfiable Tseitin-formula with graph G must have size 2^Ω(tw(G)), which yields our lower bound for regular resolution.
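For readers unfamiliar with the construction, the sketch below (our own, using the standard encoding) builds the CNF of a Tseitin-formula from a graph: each edge becomes a propositional variable, and each vertex contributes clauses forbidding every assignment of its incident edges whose parity differs from the vertex's charge. The formula is unsatisfiable exactly when the charges of some connected component sum to an odd number.

```python
from itertools import product

def tseitin_cnf(edges, charges):
    # edges: list of vertex pairs; edge i is propositional variable i + 1.
    # charges: dict vertex -> 0/1; the parity each vertex's constraint must hit.
    clauses = []
    for v, charge in charges.items():
        inc = [i + 1 for i, e in enumerate(edges) if v in e]  # incident edge vars
        for bits in product((0, 1), repeat=len(inc)):
            if sum(bits) % 2 != charge:
                # One clause ruling out this wrong-parity assignment:
                # the clause is false exactly on the forbidden assignment.
                clauses.append([x if b == 0 else -x for x, b in zip(inc, bits)])
    return clauses

# Triangle with total charge 1 (odd) -> an unsatisfiable Tseitin-formula.
edges = [(0, 1), (1, 2), (0, 2)]
print(tseitin_cnf(edges, {0: 1, 1: 0, 2: 0}))
```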
{"title":"Characterizing Tseitin-Formulas with Short Regular Resolution Refutations","authors":"Alexis de Colnet, Stefan Mengel","doi":"10.1613/jair.1.13521","DOIUrl":"https://doi.org/10.1613/jair.1.13521","url":null,"abstract":"Tseitin-formulas are systems of parity constraints whose structure is described by a graph. These formulas have been studied extensively in proof complexity as hard instances in many proof systems. In this paper, we prove that a class of unsatisfiable Tseitin-formulas of bounded degree has regular resolution refutations of polynomial length if and only if the treewidth of all underlying graphs G for that class is in O(log |V (G)|). It follows that unsatisfiable Tseitin-formulas with polynomial length of regular resolution refutations are completely determined by the treewidth of the underlying graphs when these graphs have bounded degree. To prove this, we show that any regular resolution refutation of an unsatisfiable Tseitin-formula with graph G of bounded degree has length 2Ω(tw(G))/|V (G)|, thus essentially matching the known 2O(tw(G))poly(|V (G)|) upper bound. Our proof first connects the length of regular resolution refutations of unsatisfiable Tseitin-formulas to the size of representations of satisfiable Tseitin-formulas in decomposable negation normal form (DNNF). Then we prove that for every graph G of bounded degree, every DNNF-representation of every satisfiable Tseitin-formula with graph G must have size 2Ω(tw(G)) which yields our lower bound for regular resolution.","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135013230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Viewpoint: Artificial Intelligence Accidents Waiting to Happen?
Federico Bianchi, Amanda Cercas Curry, Dirk Hovy
https://doi.org/10.1613/jair.1.14263

Artificial Intelligence (AI) is at a crucial point in its development: stable enough to be used in production systems, and increasingly pervasive in our lives. What does that mean for its safety? In his book Normal Accidents, the sociologist Charles Perrow proposed a framework to analyze new technologies and the risks they entail. He showed that major accidents are nearly unavoidable in complex systems with tightly coupled components if they are run long enough. In this essay, we apply and extend Perrow's framework to AI to assess its potential risks. Today's AI systems are already highly complex, and their complexity is steadily increasing. As they become more ubiquitous, different algorithms will interact directly, leading to tightly coupled systems whose capacity to cause harm we will be unable to predict. We argue that under the current paradigm, Perrow's normal accidents apply to AI systems and it is only a matter of time before one occurs. This article appears in the AI & Society track.
{"title":"Viewpoint: Artificial Intelligence Accidents Waiting to Happen?","authors":"Federico Bianchi, Amanda Cercas Curry, Dirk Hovy","doi":"10.1613/jair.1.14263","DOIUrl":"https://doi.org/10.1613/jair.1.14263","url":null,"abstract":"Artificial Intelligence (AI) is at a crucial point in its development: stable enough to be used in production systems, and increasingly pervasive in our lives. What does that mean for its safety? In his book Normal Accidents, the sociologist Charles Perrow proposed a framework to analyze new technologies and the risks they entail. He showed that major accidents are nearly unavoidable in complex systems with tightly coupled components if they are run long enough. In this essay, we apply and extend Perrow’s framework to AI to assess its potential risks. Today’s AI systems are already highly complex, and their complexity is steadily increasing. As they become more ubiquitous, different algorithms will interact directly, leading to tightly coupled systems whose capacity to cause harm we will be unable to predict. We argue that under the current paradigm, Perrow’s normal accidents apply to AI systems and it is only a matter of time before one occurs.\u0000This article appears in the AI & Society track.","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"230 1","pages":"193-199"},"PeriodicalIF":5.0,"publicationDate":"2023-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89141588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Practical Approach to Discretised PDDL+ Problems by Translation to Numeric Planning
Francesco Percassi, Enrico Scala, M. Vallati
https://doi.org/10.1613/jair.1.13904

PDDL+ models are advanced models of hybrid systems, and the resulting problems are notoriously difficult for planning engines to cope with. An additional limiting factor for the exploitation of PDDL+ approaches in real-world applications is the restricted number of domain-independent planning engines that can reason over such models. With the aim of deepening the understanding of PDDL+ models, in this work we study a novel mapping between a time discretisation of PDDL+ and numeric planning as defined by PDDL2.1 (level 2). The proposed mapping not only clarifies the relationship between these two formalisms but also enables the use of a wider pool of engines, thus fostering the use of hybrid planning in real-world applications. Our experimental analysis shows the usefulness of the proposed translation and demonstrates the potential of the approach for improving the solvability of complex PDDL+ instances.
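As loose intuition for the discretisation (our toy, not the paper's translation), the Python snippet below simulates a PDDL+ process, a tank filling at a constant rate, by applying its continuous effect in fixed steps of delta time units. This repeated numeric update is the kind of behaviour a discretised PDDL2.1 (level 2) encoding with an explicit time-step can express, and finer steps track the continuous dynamics more closely.

```python
def simulate_fill(level, rate, goal, delta):
    # Stand-in for a PDDL+ process with continuous effect
    # (increase (level) (* #t (rate))), active while level < goal.
    t = 0.0
    while level < goal:
        level += rate * delta  # discretised continuous effect
        t += delta             # advancing the explicit clock
    return t, level

print(simulate_fill(level=0.0, rate=2.0, goal=10.0, delta=0.5))   # coarse step
print(simulate_fill(level=0.0, rate=2.0, goal=10.0, delta=0.05))  # finer step
```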
{"title":"A Practical Approach to Discretised PDDL+ Problems by Translation to Numeric Planning","authors":"Francesco Percassi, Enrico Scala, M. Vallati","doi":"10.1613/jair.1.13904","DOIUrl":"https://doi.org/10.1613/jair.1.13904","url":null,"abstract":"PDDL+ models are advanced models of hybrid systems and the resulting problems are notoriously difficult for planning engines to cope with. An additional limiting factor for the exploitation of PDDL+ approaches in real-world applications is the restricted number of domain-independent planning engines that can reason upon those models.\u0000With the aim of deepening the understanding of PDDL+ models, in this work, we study a novel mapping between a time discretisation of pddl+ and numeric planning as for PDDL2.1 (level 2). The proposed mapping not only clarifies the relationship between these two formalisms but also enables the use of a wider pool of engines, thus fostering the use of hybrid planning in real-world applications. Our experimental analysis shows the usefulness of the proposed translation and demonstrates the potential of the approach for improving the solvability of complex PDDL+ instances.","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"8 1","pages":"115-162"},"PeriodicalIF":5.0,"publicationDate":"2023-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78985982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Survey on Understanding and Representing Privacy Requirements in the Internet-of-Things
G. Ogunniye, Nadin Kökciyan
https://doi.org/10.1613/jair.1.14000

People interact with online systems all the time. In order to use the services provided, they give consent for their data to be collected. This approach requires too much human effort and is impractical in settings like the Internet-of-Things (IoT), where human-device interactions can be numerous. Ideally, privacy assistants can help humans make privacy decisions while working in collaboration with them. In our work, we focus on the identification and representation of privacy requirements in IoT to help privacy assistants better understand their environment. In recent years, the focus has mostly been on the technical aspects of privacy. However, the dynamic nature of privacy also requires a representation of social aspects (e.g., social trust). In this survey paper, we review the privacy requirements represented in existing IoT ontologies. We discuss how to extend these ontologies with new requirements to better capture privacy, and we introduce case studies to demonstrate the applicability of the novel requirements.
{"title":"A Survey on Understanding and Representing Privacy Requirements in the Internet-of-Things","authors":"G. Ogunniye, Nadin Kökciyan","doi":"10.1613/jair.1.14000","DOIUrl":"https://doi.org/10.1613/jair.1.14000","url":null,"abstract":"People are interacting with online systems all the time. In order to use the services being provided, they give consent for their data to be collected. This approach requires too much human effort and is impractical for systems like Internet-of-Things (IoT) where human-device interactions can be large. Ideally, privacy assistants can help humans make privacy decisions while working in collaboration with them. In our work, we focus on the identification and representation of privacy requirements in IoT to help privacy assistants better understand their environment. In recent years, more focus has been on the technical aspects of privacy. However, the dynamic nature of privacy also requires a representation of social aspects (e.g., social trust). In this survey paper, we review the privacy requirements represented in existing IoT ontologies. We discuss how to extend these ontologies with new requirements to better capture privacy, and we introduce case studies to demonstrate the applicability of the novel requirements.","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"70 1","pages":"163-192"},"PeriodicalIF":5.0,"publicationDate":"2023-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73435889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lifted Reasoning for Combinatorial Counting
Pietro Totis, Jesse Davis, L. D. Raedt, Angelika Kimmig
https://doi.org/10.1613/jair.1.14062
Combinatorics math problems are often used as a benchmark to test human cognitive and logical problem-solving skills. These problems are concerned with counting the number of solutions that exist in a specific scenario sketched in natural language. Humans are adept at solving such problems because they can identify commonly occurring structures in the questions for which a closed-form formula exists for computing the answer. These formulas exploit the exchangeability of objects and symmetries to avoid a brute-force enumeration of all possible solutions. Unfortunately, current AI approaches are still unable to solve combinatorial problems in this way. This paper aims to fill this gap by developing novel AI techniques for representing and solving such problems. It makes the following five contributions. First, we identify a class of combinatorics math problems which traditional lifted counting techniques fail to model or solve efficiently. Second, we propose a novel declarative language for this class of problems. Third, we propose novel lifted solving algorithms bridging probabilistic inference techniques and constraint programming. Fourth, we implement them in a lifted solver that efficiently solves the class of problems under investigation. Finally, we evaluate our contributions on a dataset of real-world combinatorics math problems and on synthetic benchmarks.
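The gap between enumeration and closed-form counting is easy to demonstrate. The Python snippet below (our example, not the paper's solver) counts multisets both ways: brute force enumerates every selection, while the lifted count exploits the exchangeability of the objects and reduces the question to a single multiset coefficient.

```python
from itertools import combinations_with_replacement
from math import comb

# "How many ways to pick 6 pastries from 4 kinds?" The kinds are
# exchangeable, so the answer is the multiset coefficient
# C(total + kinds - 1, kinds - 1) -- no enumeration needed.
kinds, total = 4, 6
brute = sum(1 for _ in combinations_with_replacement(range(kinds), total))
lifted = comb(total + kinds - 1, kinds - 1)
print(brute, lifted)  # both 84; only the brute-force count grows exponentially
```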
{"title":"Lifted Reasoning for Combinatorial Counting","authors":"Pietro Totis, Jesse Davis, L. D. Raedt, Angelika Kimmig","doi":"10.1613/jair.1.14062","DOIUrl":"https://doi.org/10.1613/jair.1.14062","url":null,"abstract":"Combinatorics math problems are often used as a benchmark to test human cognitive and logical problem-solving skills. These problems are concerned with counting the number of solutions that exist in a specific scenario that is sketched in natural language. Humans are adept at solving such problems as they can identify commonly occurring structures in the questions for which a closed-form formula exists for computing the answer. These formulas exploit the exchangeability of objects and symmetries to avoid a brute-force enumeration of all possible solutions. Unfortunately, current AI approaches are still unable to solve combinatorial problems in this way. This paper aims to fill this gap by developing novel AI techniques for representing and solving such problems. It makes the following five contributions. First, we identify a class of combinatorics math problems which traditional lifted counting techniques fail to model or solve efficiently. Second, we propose a novel declarative language for this class of problems. Third, we propose novel lifted solving algorithms bridging probabilistic inference techniques and constraint programming. Fourth, we implement them in a lifted solver that solves efficiently the class of problems under investigation. Finally, we evaluate our contributions on a real-world combinatorics math problems dataset and synthetic benchmarks.","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"60 1","pages":"1-58"},"PeriodicalIF":5.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84173357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust Control for Dynamical Systems With Non-Gaussian Noise via Formal Abstractions
Thom S. Badings, Licio Romao, A. Abate, D. Parker, Hasan A. Poonawala, M. Stoelinga, N. Jansen
https://doi.org/10.1613/jair.1.14253
Controllers for dynamical systems that operate in safety-critical settings must account for stochastic disturbances. Such disturbances are often modeled as process noise in a dynamical system, and common assumptions are that the underlying distributions are known and/or Gaussian. In practice, however, these assumptions may be unrealistic and can lead to poor approximations of the true noise distribution. We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions. In particular, we address the problem of computing a controller that provides probabilistic guarantees on safely reaching a target, while also avoiding unsafe regions of the state space. First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states. As a key contribution, we adapt tools from the scenario approach to compute probably approximately correct (PAC) bounds on these transition probabilities, based on a finite number of samples of the noise. We capture these bounds in the transition probability intervals of a so-called interval Markov decision process (iMDP). This iMDP is, with a user-specified confidence probability, robust against uncertainty in the transition probabilities, and the tightness of the probability intervals can be controlled through the number of samples. We use state-of-the-art verification techniques to provide guarantees on the iMDP and compute a controller for which these guarantees carry over to the original control system. In addition, we develop a tailored computational scheme that reduces the complexity of the synthesis of these guarantees on the iMDP. Benchmarks on realistic control systems show the practical applicability of our method, even when the iMDP has hundreds of millions of transitions.
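A minimal sketch of the sampling step, under stated assumptions: the 1-d dynamics, the regions, and the Student-t noise below are invented for illustration, and where the paper derives its PAC intervals from the scenario approach, this sketch swaps in simpler Clopper-Pearson intervals. It estimates, from finitely many noise samples, an interval on the probability that the successor state lands in each discrete region, which is the quantity stored on an iMDP transition.

```python
import numpy as np
from scipy.stats import beta

def transition_intervals(x, u, regions, n_samples=1000, conf=0.99, rng=None):
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_t(df=3, size=n_samples)  # heavy-tailed, non-Gaussian
    succ = 0.9 * x + u + 0.1 * noise              # toy dynamics x' = 0.9x + u + w
    a = 1 - conf
    out = {}
    for name, (lo, hi) in regions.items():
        k = int(np.sum((succ >= lo) & (succ < hi)))  # samples landing in region
        # Clopper-Pearson interval on the true transition probability
        # (a stand-in for the paper's scenario-approach bounds).
        p_lo = beta.ppf(a / 2, k, n_samples - k + 1) if k > 0 else 0.0
        p_hi = beta.ppf(1 - a / 2, k + 1, n_samples - k) if k < n_samples else 1.0
        out[name] = (p_lo, p_hi)
    return out

print(transition_intervals(1.0, 0.0, {"safe": (0.0, 2.0), "unsafe": (2.0, np.inf)}))
```

More samples shrink the intervals, mirroring the paper's observation that the tightness of the iMDP's probability intervals is controlled through the number of noise samples.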
{"title":"Robust Control for Dynamical Systems With Non-Gaussian Noise via Formal Abstractions","authors":"Thom S. Badings, Licio Romao, A. Abate, D. Parker, Hasan A. Poonawala, M. Stoelinga, N. Jansen","doi":"10.1613/jair.1.14253","DOIUrl":"https://doi.org/10.1613/jair.1.14253","url":null,"abstract":"Controllers for dynamical systems that operate in safety-critical settings must account for stochastic disturbances. Such disturbances are often modeled as process noise in a dynamical system, and common assumptions are that the underlying distributions are known and/or Gaussian. In practice, however, these assumptions may be unrealistic and can lead to poor approximations of the true noise distribution. We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions. In particular, we address the problem of computing a controller that provides probabilistic guarantees on safely reaching a target, while also avoiding unsafe regions of the state space. First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states. As a key contribution, we adapt tools from the scenario approach to compute probably approximately correct (PAC) bounds on these transition probabilities, based on a finite number of samples of the noise. We capture these bounds in the transition probability intervals of a so-called interval Markov decision process (iMDP). This iMDP is, with a user-specified confidence probability, robust against uncertainty in the transition probabilities, and the tightness of the probability intervals can be controlled through the number of samples. We use state-of-the-art verification techniques to provide guarantees on the iMDP and compute a controller for which these guarantees carry over to the original control system. In addition, we develop a tailored computational scheme that reduces the complexity of the synthesis of these guarantees on the iMDP. Benchmarks on realistic control systems show the practical applicability of our method, even when the iMDP has hundreds of millions of transitions.","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"36 1","pages":"341-391"},"PeriodicalIF":5.0,"publicationDate":"2023-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85147873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}