Many-objective problems where crossover is provably essential
Andre Opris
Pub Date: 2026-01-01 | Epub Date: 2025-11-07 | DOI: 10.1016/j.artint.2025.104453
Artificial Intelligence, Vol. 350, Article 104453
This article addresses theory in evolutionary many-objective optimization and focuses on the role of crossover operators. The advantages of using crossover are still poorly understood, and rigorous runtime analyses with crossover lag far behind its use in practice, specifically in the case of more than two objectives. We present two many-objective problems, RR_MO and URR_MO, and a theoretical runtime analysis of the GSEMO and the widely used NSGA-III algorithm, to demonstrate that one-point crossover on RR_MO, as well as uniform crossover on URR_MO, can yield an exponential speedup in the runtime. In particular, when the number of objectives is constant, these algorithms can find the Pareto set of both problems in expected polynomial time when using crossover, while without crossover they require exponential time to even find a single Pareto-optimal point. For either problem, we also demonstrate a significant performance gap in certain superconstant parameter regimes for the number of objectives. To the best of our knowledge, this is the first rigorous runtime analysis in many-objective optimization which demonstrates an exponential performance gap when using crossover for more than two objectives. It is also the first runtime analysis involving crossover in many-objective optimization where the number of objectives is not necessarily constant.
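The two variation operators contrasted in the analysis are standard; a minimal sketch in Python (the problem definitions RR_MO and URR_MO, and the algorithms themselves, are omitted here):

```python
import random

def one_point_crossover(x, y):
    """Cut both parents at the same random position and swap the tails."""
    assert len(x) == len(y)
    k = random.randrange(1, len(x))  # cut point in 1..n-1
    return x[:k] + y[k:], y[:k] + x[k:]

def uniform_crossover(x, y):
    """Pick each offspring bit independently from either parent."""
    return [xi if random.random() < 0.5 else yi for xi, yi in zip(x, y)]

# Two parents holding complementary "building blocks": a single crossover
# can combine them, whereas mutation alone must flip many bits at once.
a = [1, 1, 1, 1, 0, 0, 0, 0]
b = [0, 0, 0, 0, 1, 1, 1, 1]
c1, c2 = one_point_crossover(a, b)
```

The speedup results hinge on exactly this ability of crossover to recombine partial solutions that mutation would have to assemble bit by bit.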
A General Theoretical Framework for Learning Smallest Interpretable Models
Sebastian Ordyniak, Giacomo Paesani, Mateusz Rychlicki, Stefan Szeider
Pub Date: 2026-01-01 | Epub Date: 2025-10-26 | DOI: 10.1016/j.artint.2025.104441
Artificial Intelligence, Vol. 350, Article 104441
We develop a general algorithmic framework that allows us to obtain fixed-parameter tractability for computing smallest symbolic models that represent given data. Our framework applies to all ML model types that admit a certain extension property. By establishing this extension property for decision trees, decision sets, decision lists, and binary decision diagrams, we obtain that minimizing these fundamental model types is fixed-parameter tractable. Our framework even applies to ensembles, which combine individual models by majority decision.
Kernel-bounded clustering: Achieving the objective of spectral clustering without eigendecomposition
Hang Zhang, Kai Ming Ting, Ye Zhu
Pub Date: 2026-01-01 | Epub Date: 2025-10-15 | DOI: 10.1016/j.artint.2025.104440
Artificial Intelligence, Vol. 350, Article 104440
Research on spectral clustering (SC) has thus far been pursued on the same track, using the same tool of matrix eigendecomposition, since the idea was first introduced in 1973. Despite its successes, SC has fundamental limitations that prevent it from discovering certain types of clusters, and it has slow runtime. We offer an alternative path that involves no eigendecomposition and, more broadly, no optimization. The proposed Kernel-Bounded Clustering (KBC) is a complete departure from 50 years of research in SC, in that KBC achieves the same objective as SC without eigendecomposition or optimization. We evaluated KBC on the datasets that have been used to demonstrate the fundamental limitations of SC, on genome-wide expression data, on large image datasets, and on many commonly used real-world benchmark datasets. KBC produced better-quality clusters than various variants of SC, and it ran six orders of magnitude faster than traditional SC on a set of 5 million data points.
Online POMDP planning with anytime deterministic optimality guarantees
Moran Barenboim, Vadim Indelman
Pub Date: 2026-01-01 | Epub Date: 2025-10-24 | DOI: 10.1016/j.artint.2025.104442
Artificial Intelligence, Vol. 350, Article 104442
Decision-making under uncertainty is a critical aspect of many practical autonomous systems due to incomplete information. Partially Observable Markov Decision Processes (POMDPs) offer a mathematically principled framework for formulating decision-making problems under such conditions. However, finding an optimal solution for a POMDP is generally intractable. In recent years, there has been significant progress in scaling approximate solvers from small to moderately sized problems using online tree-search methods. Often, such approximate solvers offer only probabilistic or asymptotic guarantees towards the optimal solution. In this paper, we derive a deterministic relationship for discrete POMDPs between an approximate solution and the optimal one. We show that, at any time, we can derive bounds that relate the existing solution to the optimal one. Our derivations provide an avenue for a new set of algorithms and can be attached to existing algorithms with a certain structure, providing them with deterministic guarantees at marginal computational overhead. In return, not only do we certify the solution quality, but we also demonstrate that making decisions based on the deterministic guarantee may result in superior performance compared to the original algorithm without the certification.
Defending a city from multi-drone attacks: A sequential Stackelberg security games approach
Dolev Mutzari, Tonmoay Deb, Cristian Molinaro, Andrea Pugliese, V.S. Subrahmanian, Sarit Kraus
Pub Date: 2025-12-01 | Epub Date: 2025-10-06 | DOI: 10.1016/j.artint.2025.104425
Artificial Intelligence, Vol. 349, Article 104425
To counter an imminent multi-drone attack on a city, defenders have deployed drones across the city. These drones must intercept or eliminate the threat, thus reducing potential damage from the attack. We model this as a Sequential Stackelberg Security Game, where the defender first commits to a mixed sequential defense strategy and the attacker then best responds. We develop an efficient algorithm called S2D2, which outputs a defense strategy. We demonstrate the efficacy of S2D2 in extensive experiments on data from 80 real cities, improving the performance of the defender in comparison to greedy heuristics based on prior works. We prove that, under some reasonable assumptions about the city structure, S2D2 outputs an approximate Strong Stackelberg Equilibrium (SSE) with a convenient structure.
Contra2: A one-step active learning method for imbalanced graphs
Wenjie Yang, Shengzhong Zhang, Jiaxing Guo, Zengfeng Huang
Pub Date: 2025-12-01 | DOI: 10.1016/j.artint.2025.104439
Artificial Intelligence, Vol. 349, Article 104439
Graph active learning (GAL) is an important research direction in graph neural networks (GNNs) that aims to select the most valuable nodes for labeling to train GNNs. Previous works in GAL have primarily focused on the overall performance of GNNs, overlooking the balance among different classes. However, graphs in real-world applications are often imbalanced, which leads GAL methods to select class-imbalanced training sets, resulting in biased GNN models. Furthermore, due to the high cost of multi-turn queries, there is an increasing demand for one-step GAL methods, where the entire training set is queried at once. These realities prompt us to investigate the problem of one-step active learning on imbalanced graphs.
In this paper, we propose a theory-driven method called Contrast & Contract (Contra2) to tackle the above issues. The key idea of Contra2 is that intra-class edges within the majority class dominate the edge set, so contracting these edges reduces the imbalance ratio. Specifically, Contra2 first learns node representations by graph contrastive learning (GCL), then stochastically contracts the edges that connect nodes with similar embeddings. We theoretically show that Contra2 reduces the imbalance ratio with high probability. By leveraging a more evenly distributed graph, we can achieve a balanced selection of labeled nodes without requiring any seed labels. The effectiveness of Contra2 is evaluated against various baselines on 11 datasets with different budgets. Contra2 demonstrates remarkable performance, achieving higher or on-par performance with only half the annotation budget on some datasets.
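The contraction mechanism can be illustrated on a toy graph. This is a hand-picked sketch: the real Contra2 chooses edges stochastically via GCL embeddings, and the example graph and labels below are my own:

```python
from collections import Counter

def imbalance_ratio(labels):
    """Ratio of the largest to the smallest class size."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

def contract_edge(edges, labels, u, v):
    """Merge node v into node u, redirecting v's edges to u."""
    assert labels[u] == labels[v]  # only contract intra-class edges
    new_edges = set()
    for a, b in edges:
        a, b = (u if a == v else a), (u if b == v else b)
        if a != b:  # drop the self-loop created by the contraction
            new_edges.add((min(a, b), max(a, b)))
    return new_edges, {n: c for n, c in labels.items() if n != v}

# Toy graph: four majority nodes (class 0), two minority nodes (class 1).
labels = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1}
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)}
r_before = imbalance_ratio(labels.values())          # 4/2 = 2.0
edges, labels = contract_edge(edges, labels, 0, 1)   # intra-class edge (0, 1)
r_after = imbalance_ratio(labels.values())           # 3/2 = 1.5
```

Contracting one majority-class edge shrinks the majority by a node while leaving the minority intact, which is exactly how the imbalance ratio falls.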
The topology of surprise
Alexandru Baltag, Nick Bezhanishvili, David Fernández-Duque
Pub Date: 2025-12-01 | Epub Date: 2025-09-29 | DOI: 10.1016/j.artint.2025.104423
Artificial Intelligence, Vol. 349, Article 104423
In this paper we present a topological epistemic logic with modalities for knowledge (modelled as the universal modality), knowability (represented by the topological interior operator), and unknowability of the actual world. The last notion has a non-self-referential reading (modelled by the Cantor derivative: the set of limit points of a given set) and a self-referential one (modelled by Cantor's perfect core of a given set: its largest subset without isolated points, where x is isolated iff {x} is open). We completely axiomatize this logic, showing that it is decidable and PSPACE-complete, and we apply it to the analysis of a famous epistemic puzzle: the Surprise Exam Paradox.
Pandora's box problem with time constraints
Georgios Amanatidis, Ben Berger, Tomer Ezra, Michal Feldman, Federico Fusco, Rebecca Reiffenhäuser, Artem Tsikiridis
Pub Date: 2025-12-01 | Epub Date: 2025-10-06 | DOI: 10.1016/j.artint.2025.104426
Artificial Intelligence, Vol. 349, Article 104426
The Pandora's Box problem models the search for the best alternative when evaluation is costly. In the simplest variant, a decision maker is presented with n boxes, each associated with a cost of inspection and a hidden random reward. The decision maker inspects a subset of these boxes one after the other, in a possibly adaptive order, and gains the difference between the largest revealed reward and the sum of the inspection costs. Although this classic version is well understood (Weitzman 1979), there is a flourishing recent literature on variants of the problem. Here we introduce a general framework—the Pandora's Box Over Time problem—that captures a wide range of variants where time plays a role, e.g., by constraining the schedules of exploration and influencing costs and rewards. In our framework, boxes have time-dependent rewards and costs, whereas inspection may require a box-specific processing time. Moreover, once a box is inspected, its reward may deteriorate over time. Our main result is an efficient constant-factor approximation to the optimal strategy for the Pandora's Box Over Time problem, which is generally NP-hard to compute. We further obtain improved results for the natural special cases where boxes have no processing time, boxes are available only in specific time slots, or when costs and reward distributions are time-independent (but rewards may still deteriorate after inspection).
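For the classic version the paper builds on, Weitzman's index policy can be sketched directly. This is illustrative only, assuming discrete reward distributions and the standard time-free model; the `draws` argument feeds in the hidden rewards for determinism:

```python
def reservation_value(dist, cost):
    """Solve E[(X - z)^+] = cost for z by bisection; dist maps value -> prob."""
    lo = sum(v * p for v, p in dist.items()) - cost  # excess here >= cost
    hi = max(dist)                                   # excess here == 0
    for _ in range(60):
        z = (lo + hi) / 2
        excess = sum(p * max(v - z, 0.0) for v, p in dist.items())
        lo, hi = (z, hi) if excess > cost else (lo, z)
    return (lo + hi) / 2

def pandora_policy(boxes, draws):
    """Inspect boxes in decreasing reservation value; stop once the best
    reward seen beats every remaining index. boxes[i] = (dist, cost),
    draws[i] = the reward hidden in box i."""
    zs = [reservation_value(dist, cost) for dist, cost in boxes]
    best, spent = 0.0, 0.0
    for i in sorted(range(len(boxes)), key=lambda i: -zs[i]):
        if best >= zs[i]:
            break
        spent += boxes[i][1]
        best = max(best, draws[i])
    return best - spent

# Box 0: reward 100 w.p. 0.1, cost 5  -> z solves 0.1*(100 - z) = 5, so z = 50.
# Box 1: reward 60 w.p. 0.5, cost 10  -> z solves 0.5*(60 - z) = 10, so z = 40.
boxes = [({0: 0.9, 100: 0.1}, 5.0), ({0: 0.5, 60: 0.5}, 10.0)]
```

The time-dependent variants in the paper break exactly the structure this policy exploits, which is why new techniques are needed there.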
Constraints and lifting-based (conditional) preferences in abstract argumentation
Gianvincenzo Alfano, Sergio Greco, Francesco Parisi, Irina Trubitsyna
Pub Date: 2025-12-01 | Epub Date: 2025-10-08 | DOI: 10.1016/j.artint.2025.104437
Artificial Intelligence, Vol. 349, Article 104437
Dealing with controversial information is an important issue in several application contexts. Formal argumentation enables reasoning on arguments for and against a claim to decide on an outcome. The Abstract Argumentation Framework (AF) has emerged as a central formalism in argument-based reasoning. In recent years there has been increasing interest in extending AF to facilitate the knowledge representation and reasoning process. In this paper, we present an extension of AF that allows for the representation of labelled constraints and labelled preferences. A labelled argument is of the form in(a), out(a), or und(a), where a is an argument and in, out, and und denote the acceptance status (accepted, rejected, or undecided, respectively) of the specified argument. We start by considering an extension of AF with labelled constraints, namely Labelled Constrained AF (LCAF); we then focus on AF with labelled preferences (Labelled Preference-based AF, LPAF for short); finally, we introduce a general framework called Labelled Preference-based Constrained AF (LPCAF) that combines AF, labelled constraints, and labelled preferences. We also investigate an extension of AF with labelled conditional (or extended) preferences, namely Labelled extended Preference-based AF (LePAF), and its further combination with labelled constraints (Labelled extended Preference-based Constrained AF, LePCAF for short). Here, conditional preferences are of the form a > b ← body, where a and b are labelled arguments and body is a propositional formula over labelled arguments. For each framework, we define its syntax and semantics, and investigate the computational complexity of four canonical argumentation problems: existence, verification, and credulous and skeptical acceptance, under the well-known complete, stable, semi-stable, and preferred semantics.
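The three labels follow the usual labelling-based semantics; a minimal checker for the standard completeness conditions (the AF below is a toy example of mine, and the conditions are the classic Caminada-style ones rather than anything specific to this paper):

```python
def is_complete(args, attacks, lab):
    """Check the standard conditions for a complete labelling:
    'in'  iff every attacker is 'out',
    'out' iff some attacker is 'in',
    'und' otherwise (no attacker 'in', not all attackers 'out')."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    for a in args:
        legal_in = all(lab[b] == 'out' for b in attackers[a])
        legal_out = any(lab[b] == 'in' for b in attackers[a])
        if lab[a] == 'in' and not legal_in:
            return False
        if lab[a] == 'out' and not legal_out:
            return False
        if lab[a] == 'und' and (legal_in or legal_out):
            return False
    return True

# a and b attack each other; both attack c.
args = {'a', 'b', 'c'}
attacks = {('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'c')}
```

A labelled constraint such as in(a) then simply filters the complete labellings down to those satisfying it, which is where the complexity questions studied in the paper arise.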
Optimal bailouts and strategic debt forgiveness in financial networks
Panagiotis Kanellopoulos, Maria Kyropoulou, Hao Zhou
Pub Date: 2025-12-01 | Epub Date: 2025-09-30 | DOI: 10.1016/j.artint.2025.104424
Artificial Intelligence, Vol. 349, Article 104424
A financial system is represented by a network whose nodes correspond to banks and whose directed labeled edges correspond to debt contracts between banks. Once a payment schedule has been defined, the liquidity of the system is the sum of total payments made in the network. Maximizing systemic liquidity is a natural objective of any financial authority, so we study the setting where the financial authority offers bailout money to some bank(s) or forgives the debts of others in order to help them avoid costs related to default and, hence, maximize liquidity. We investigate the approximation ratio of the greedy bailout policy compared to the optimal one, and we study the computational hardness of finding the optimal debt-removal and budget-constrained optimal bailout policy, respectively.
We also study financial systems from a game-theoretic standpoint. We observe that the removal of some incoming debt might be in the best interest of a bank, if that helps one of its borrowers remain solvent and avoid costs related to default. Assuming that a bank's well-being (i.e., utility) is aligned with the incoming payments it receives from the network, we define and analyze a game among banks who want to maximize their utility by strategically giving up some incoming payments. In addition, we extend this game by considering bailout payments. After formally defining the above games, we prove results about the existence and quality of pure Nash equilibria, as well as the computational complexity of finding such equilibria.
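The liquidity objective can be made concrete with a toy clearing computation. This is a sketch in the spirit of Eisenberg-Noe clearing, assuming proportional payments and ignoring default costs (both simplifications relative to the paper's model); the chain example is my own:

```python
def clearing_payments(external, debts, rounds=100):
    """Fixpoint of p_i = min(total debt of i, external_i + money received),
    with proportional payments. debts[i][j] = amount bank i owes bank j."""
    n = len(external)
    total = [sum(debts[i].values()) for i in range(n)]
    p = total[:]  # start optimistic: everyone pays in full
    for _ in range(rounds):
        received = [0.0] * n
        for i in range(n):
            if total[i] > 0:
                for j, d in debts[i].items():
                    received[j] += p[i] * d / total[i]
        p = [min(total[i], external[i] + received[i]) for i in range(n)]
    return p

# Chain: bank 0 owes 10 to bank 1, which owes 10 to bank 2.
debts = [{1: 10.0}, {2: 10.0}, {}]
no_bailout = clearing_payments([6.0, 0.0, 0.0], debts)
bailout = clearing_payments([6.0 + 4.0, 0.0, 0.0], debts)  # inject 4 into bank 0
```

A bailout of 4 at the head of the chain raises total liquidity by 8, since the extra money is passed on; amplification effects like this are what make the bailout-allocation problem interesting.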