Pub Date : 2025-11-07 | DOI: 10.1016/j.artint.2025.104453
Many-objective problems where crossover is provably essential
Andre Opris
This article addresses theory in evolutionary many-objective optimization and focuses on the role of crossover operators. The advantages of using crossover are still poorly understood, and rigorous runtime analyses with crossover lag far behind its use in practice, especially in the case of more than two objectives. We present two many-objective problems, RR_MO and URR_MO, together with a theoretical runtime analysis of the GSEMO and the widely used NSGA-III algorithm, to demonstrate that one-point crossover on RR_MO, as well as uniform crossover on URR_MO, can yield an exponential speedup in the runtime. In particular, when the number of objectives is constant, these algorithms can find the Pareto set of both problems in expected polynomial time when using crossover, while without crossover they require exponential time to even find a single Pareto-optimal point. For either problem, we also demonstrate a significant performance gap in certain superconstant parameter regimes for the number of objectives. To the best of our knowledge, this is the first rigorous runtime analysis in many-objective optimization which demonstrates an exponential performance gap when using crossover for more than two objectives. Additionally, it is the first runtime analysis involving crossover in many-objective optimization where the number of objectives is not necessarily constant.
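For readers less familiar with the operators being compared, the following is a minimal sketch of generic one-point and uniform crossover on bit strings, the representation typically used in this line of runtime analysis. It illustrates only the operators themselves; the RR_MO and URR_MO problems and the GSEMO/NSGA-III algorithms are not reproduced here, and the function names are illustrative.

```python
import random

def one_point_crossover(x, y):
    """Cut both parents at a random position and swap the tails."""
    assert len(x) == len(y)
    k = random.randint(1, len(x) - 1)   # cut point between positions k-1 and k
    return x[:k] + y[k:], y[:k] + x[k:]

def uniform_crossover(x, y):
    """Pick each offspring bit independently from either parent with probability 1/2."""
    assert len(x) == len(y)
    return [xi if random.random() < 0.5 else yi for xi, yi in zip(x, y)]

# Example: two parents carrying complementary blocks can be recombined in one
# step, whereas mutation alone would have to flip many specific bits at once.
a = [1] * 8 + [0] * 8
b = [0] * 8 + [1] * 8
print(one_point_crossover(a, b)[0])
print(uniform_crossover(a, b))
```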
{"title":"Many-objective problems where crossover is provably essential","authors":"Andre Opris","doi":"10.1016/j.artint.2025.104453","DOIUrl":"10.1016/j.artint.2025.104453","url":null,"abstract":"<div><div>This article addresses theory in evolutionary many-objective optimization and focuses on the role of crossover operators. The advantages of using crossover are hardly understood and rigorous runtime analyses with crossover are lagging far behind its use in practice, specifically in the case of more than two objectives. We present two many-objective problems <span><math><msub><mtext>RR</mtext><mrow><mi>MO</mi></mrow></msub></math></span> and <span><math><msub><mtext>URR</mtext><mrow><mi>MO</mi></mrow></msub></math></span>, and a theoretical runtime analysis of the GSEMO and the widely used NSGA‑III algorithm, to demonstrate that one point crossover on <span><math><msub><mtext>RR</mtext><mrow><mi>MO</mi></mrow></msub></math></span>, as well as uniform crossover on <span><math><msub><mtext>URR</mtext><mrow><mi>MO</mi></mrow></msub></math></span>, can yield an exponential speedup in the runtime. In particular, when the number of objectives is constant, this algorithms can find the Pareto set of both problems in expected polynomial time when using crossover, while without crossover they require exponential time to even find a single Pareto-optimal point. For either problem, we also demonstrate a significant performance gap in certain superconstant parameter regimes for the number of objectives. To the best of our knowledge, this is the first rigorous runtime analysis in many-objective optimization which demonstrates an exponential performance gap when using crossover for more than two objectives. Additionally, it is the first runtime analysis involving crossover in many-objective optimization where the number of objectives is not necessarily constant.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"350 ","pages":"Article 104453"},"PeriodicalIF":4.6,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145461589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-31 | DOI: 10.1016/j.artint.2025.104443
Bridging sparse domain semantics via an asymmetric siamese framework with virtual anchor guidance for domain-specific multimodal translation
Junjun Guo , Yifan Liu , Zhengtao Yu
Domain-specific Multimodal Neural Machine Translation (DMNMT) aims to translate text in specialized domains by leveraging both linguistic context and associated visual information to resolve domain-specific ambiguities and enhance terminological accuracy. Although accompanying images often provide sparse and fragmented visual cues that could anchor critical domain semantics, the semantic mapping from images to textual domain semantics is typically sparse and multi-focal, which makes alignment challenging. Existing general-domain multimodal neural machine translation (MNMT) models and large language models (LLMs) struggle to aggregate domain-salient information accurately, often producing near-equivalent yet imprecise terminology translations or outright errors. To bridge this sparse domain semantic correspondence gap, we introduce the Asymmetric Siamese Multimodal Fusion (ASMF) framework, which decouples domain representation learning into two complementary branches that both consume text: a domain-specific virtual visual content generation (DVVG) branch and a terminology-aware textual (TAT) branch. The DVVG branch distills sparse, localized visual features into modality-agnostic semantic anchors through mask-constrained multi-focal distillation, while the TAT branch captures terminology-dense textual context. We introduce a novel Domain-Virtualized Pivot-driven Hierarchical Fusion (DVPH) strategy that progressively injects distilled visual anchors across encoder layers. This asymmetric dual-branch design effectively couples spatially fragmented visual details with terminology-rich text, enabling accurate and domain-consistent translations even for low-frequency terms. Extensive experiments were conducted on four benchmark datasets covering three distinct scenarios: two domain-specific datasets (Fashion-MMT and EMMT), one general-domain dataset (Multi30K), and one multi-domain dataset (WIT). Comprehensive evaluations demonstrate that the proposed approach outperforms existing MNMT and DMNMT models as well as LLMs, achieving state-of-the-art (SOTA) results across all datasets. In-depth analyses validate its robustness and generalization capabilities across diverse scenarios, including visually noisy or image-free conditions.
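The abstract describes injecting distilled visual anchors into successive encoder layers. The sketch below shows one generic way such hierarchical injection is often realized, namely cross-attention from token states to a small set of anchor vectors inside every encoder layer. All module and parameter names are hypothetical assumptions for illustration; this is not the ASMF/DVPH implementation.

```python
import torch
import torch.nn as nn

class AnchorInjectionLayer(nn.Module):
    """One encoder layer that lets text tokens attend to a few anchor vectors.
    Purely illustrative: names and structure are assumptions, not the paper's code."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.anchor_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, tokens, anchors):
        # tokens:  (batch, seq_len, d_model) textual states
        # anchors: (batch, n_anchors, d_model) distilled "virtual visual" vectors
        h = self.norm1(tokens + self.self_attn(tokens, tokens, tokens)[0])
        h = self.norm2(h + self.anchor_attn(h, anchors, anchors)[0])  # inject anchors
        return self.norm3(h + self.ffn(h))

# Stacking several such layers injects the anchors progressively at every depth.
layers = nn.ModuleList(AnchorInjectionLayer() for _ in range(6))
x = torch.randn(2, 20, 512)   # 2 sentences, 20 tokens each
a = torch.randn(2, 4, 512)    # 4 anchor vectors per sentence
for layer in layers:
    x = layer(x, a)
print(x.shape)  # torch.Size([2, 20, 512])
```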
{"title":"Bridging sparse domain semantics via an asymmetric siamese framework with virtual anchor guidance for domain-specific multimodal translation","authors":"Junjun Guo , Yifan Liu , Zhengtao Yu","doi":"10.1016/j.artint.2025.104443","DOIUrl":"10.1016/j.artint.2025.104443","url":null,"abstract":"<div><div>Domain-specific Multimodal Neural Machine Translation (DMNMT) aims to translate text in specialized domains by leveraging both linguistic context and associated visual information to resolve domain-specific ambiguities and enhance terminological accuracy. Although accompanying images often provide sparse and fragmented visual cues that could potentially anchor critical domain semantics, the semantic mapping from images to textual domain semantics typically exhibits sparse multi-focal alignment challenges. Existing general-domain multimodal neural machine translation (MNMT) models and large language models (LLMs) struggle to achieve accurate aggregation of domain-salient information, often resulting in near-equivalent yet imprecise terminology translations or outright errors. To bridge this sparse domain semantic correspondence gap, we introduce the Asymmetric Siamese Multimodal Fusion (ASMF) framework, which decouples domain representation learning into two complementary branches that both consume text: a domain-specific virtual visual content generation (DVVG) branch and a terminology-aware textual (TAT) branch. The DVVG branch distills sparse, localized visual features into modality-agnostic semantic anchors through mask-constrained multi-focal distillation, while the TAT branch captures terminology-dense textual context. We introduce a novel Domain-Virtualized Pivot-driven Hierarchical Fusion (DVPH) strategy that progressively injects distilled visual anchors across encoder layers. This asymmetric dual-branch design effectively couples spatially fragmented visual details with terminology-rich text, enabling accurate and domain-consistent translations even for low-frequency terms. Extensive experiments were conducted on four benchmark datasets covering three distinct scenarios: two domain-specific datasets (Fashion-MMT and EMMT), one general-domain dataset (Multi30K), and one multi-domain dataset (WIT). Comprehensive evaluations demonstrate that the proposed approach outperforms existing MNMT, DMNMT and LLMs, achieving state-of-the-art (SOTA) results across all datasets. In-depth analyses validate its robustness and generalization capabilities across diverse scenarios, including visually noisy or image-free conditions.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"350 ","pages":"Article 104443"},"PeriodicalIF":4.6,"publicationDate":"2025-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145404964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-26 | DOI: 10.1016/j.artint.2025.104441
A General Theoretical Framework for Learning Smallest Interpretable Models
Sebastian Ordyniak , Giacomo Paesani , Mateusz Rychlicki , Stefan Szeider
We develop a general algorithmic framework that allows us to obtain fixed-parameter tractability for computing smallest symbolic models that represent given data. Our framework applies to all ML model types that admit a certain extension property. By establishing this extension property for decision trees, decision sets, decision lists, and binary decision diagrams, we obtain that minimizing these fundamental model types is fixed-parameter tractable. Our framework even applies to ensembles, which combine individual models by majority decision.
{"title":"A General Theoretical Framework for Learning Smallest Interpretable Models","authors":"Sebastian Ordyniak , Giacomo Paesani , Mateusz Rychlicki , Stefan Szeider","doi":"10.1016/j.artint.2025.104441","DOIUrl":"10.1016/j.artint.2025.104441","url":null,"abstract":"<div><div>We develop a general algorithmic framework that allows us to obtain fixed-parameter tractability for computing smallest symbolic models that represent given data. Our framework applies to all ML model types that admit a certain extension property. By establishing this extension property for decision trees, decision sets, decision lists, and binary decision diagrams, we obtain that minimizing these fundamental model types is fixed-parameter tractable. Our framework even applies to ensembles, which combine individual models by majority decision.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"350 ","pages":"Article 104441"},"PeriodicalIF":4.6,"publicationDate":"2025-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145382554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-24 | DOI: 10.1016/j.artint.2025.104442
Online POMDP planning with anytime deterministic optimality guarantees
Moran Barenboim , Vadim Indelman
Decision-making under uncertainty is a critical aspect of many practical autonomous systems due to incomplete information. Partially Observable Markov Decision Processes (POMDPs) offer a mathematically principled framework for formulating decision-making problems under such conditions. However, finding an optimal solution for a POMDP is generally intractable. In recent years, there has been significant progress in scaling approximate solvers from small to moderately sized problems using online tree-search solvers. Often, such approximate solvers are limited to probabilistic or asymptotic guarantees towards the optimal solution. In this paper, we derive a deterministic relationship for discrete POMDPs between an approximate solution and the optimal one. We show that, at any time, we can derive bounds that relate the existing solution to the optimal one. We show that our derivations provide an avenue for a new set of algorithms and can be attached to existing algorithms with a certain structure to provide them with deterministic guarantees at marginal computational overhead. In return, not only do we certify the solution quality, but we also demonstrate that making a decision based on the deterministic guarantee may result in superior performance compared to the original algorithm without the deterministic certification.
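As background on the POMDP model that the tree-search solvers operate on, the following is a minimal sketch of the standard Bayesian belief update for a discrete POMDP. It illustrates the model only, not the paper's deterministic bounds; the tensor layout is an assumption made for compactness.

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """Standard discrete POMDP belief update.
    b: prior belief over states, shape (S,)
    T: transition tensor, T[a, s, s'] = P(s' | s, a)
    Z: observation tensor, Z[a, s', o] = P(o | s', a)
    Returns b'(s') proportional to Z[a, s', o] * sum_s T[a, s, s'] * b(s)."""
    predicted = b @ T[a]                  # shape (S,): sum_s b(s) T(s'|s,a)
    unnormalized = Z[a, :, o] * predicted
    norm = unnormalized.sum()
    if norm == 0.0:                       # observation impossible under this belief
        raise ValueError("zero-probability observation")
    return unnormalized / norm

# Tiny 2-state, 1-action, 2-observation example (values are illustrative only).
T = np.array([[[0.9, 0.1], [0.2, 0.8]]])
Z = np.array([[[0.7, 0.3], [0.4, 0.6]]])
b0 = np.array([0.5, 0.5])
print(belief_update(b0, a=0, o=1, T=T, Z=Z))
```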
{"title":"Online POMDP planning with anytime deterministic optimality guarantees","authors":"Moran Barenboim , Vadim Indelman","doi":"10.1016/j.artint.2025.104442","DOIUrl":"10.1016/j.artint.2025.104442","url":null,"abstract":"<div><div>Decision-making under uncertainty is a critical aspect of many practical autonomous systems due to incomplete information. Partially Observable Markov Decision Processes (POMDPs) offer a mathematically principled framework for formulating decision-making problems under such conditions. However, finding an optimal solution for a POMDP is generally intractable. In recent years, there has been a significant progress of scaling approximate solvers from small to moderately sized problems, using online tree search solvers. Often, such approximate solvers are limited to probabilistic or asymptotic guarantees towards the optimal solution. In this paper, we derive a deterministic relationship for discrete POMDPs between an approximated and the optimal solution. We show that at any time, we can derive bounds that relate between the existing solution and the optimal one. We show that our derivations provide an avenue for a new set of algorithms and can be attached to existing algorithms that have a certain structure to provide them with deterministic guarantees with marginal computational overhead. In return, not only do we certify the solution quality, but we demonstrate that making a decision based on the deterministic guarantee may result in superior performance compared to the original algorithm without the deterministic certification.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"350 ","pages":"Article 104442"},"PeriodicalIF":4.6,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145382555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-15 | DOI: 10.1016/j.artint.2025.104440
Kernel-bounded clustering: Achieving the objective of spectral clustering without eigendecomposition
Hang Zhang , Kai Ming Ting , Ye Zhu
Research on spectral clustering (SC) has thus far been pursued on the same track, using the same tool of matrix eigendecomposition, since the idea was first introduced in 1973. Despite its successes, SC has been shown to have fundamental limitations that prevent it from discovering certain types of clusters, and it has slow runtime. We offer an alternative path that involves no eigendecomposition and, more broadly, no optimization. The proposed Kernel-Bounded Clustering (KBC) is a complete departure from 50 years of research in SC, in that it achieves the objective of SC without eigendecomposition or optimization. We evaluated KBC on the datasets that have been used to demonstrate the fundamental limitations of SC, on genome-wide expression data, on large image datasets, and on many commonly used real-world benchmark datasets. KBC produced better-quality clusters than various variants of SC, and it ran six orders of magnitude faster than traditional SC on a set of 5 million data points.
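For contrast, the sketch below is the textbook eigendecomposition-based pipeline that KBC avoids: build an affinity matrix, form the normalized graph Laplacian, embed the points with its leading eigenvectors, and run k-means on the embedding. This is the classical baseline (Ng-Jordan-Weiss style), not the KBC algorithm; the kernel bandwidth and other parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    """Textbook normalized spectral clustering via eigendecomposition."""
    # Gaussian affinity matrix
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    d = W.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt
    # Eigendecomposition: keep the k eigenvectors of the smallest eigenvalues
    vals, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

# Two well-separated blobs (illustrative only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(spectral_clustering(X, k=2)[:5])
```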
{"title":"Kernel-bounded clustering: Achieving the objective of spectral clustering without eigendecomposition","authors":"Hang Zhang , Kai Ming Ting , Ye Zhu","doi":"10.1016/j.artint.2025.104440","DOIUrl":"10.1016/j.artint.2025.104440","url":null,"abstract":"<div><div>The research on spectral clustering (SC) has thus far been pursued on the same track using the same tool of eigendecomposition of a matrix since the idea was first introduced in 1973. Despite its successes, SC has been identified to have fundamental limitations that prevent SC from discovering certain types of clusters, and SC has slow runtime. We offer an alternative path that does not involve the eigendecomposition, and, more broadly, it uses no optimization. The proposed new Kernel-Bounded Clustering (KBC) is a complete metamorphosis in 50 years of research in SC in view of the fact that KBC achieves the same objective of SC without eigendecomposition or optimization. We evaluated KBC on the datasets that have been used to demonstrate the fundamental limitations of SC, genome-wide expression data, large image datasets and many commonly used real-world benchmark datasets. KBC produced better quality clusters than various variants of SC, and it ran six orders of magnitude faster than the traditional SC on a set of 5 million data points.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"350 ","pages":"Article 104440"},"PeriodicalIF":4.6,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145361111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-10 | DOI: 10.1016/j.artint.2025.104439
Contra2: A one-step active learning method for imbalanced graphs
Wenjie Yang , Shengzhong Zhang , Jiaxing Guo , Zengfeng Huang
Graph active learning (GAL) is an important research direction in graph neural networks (GNNs) that aims to select the most valuable nodes for labeling to train GNNs. Previous works in GAL have primarily focused on the overall performance of GNNs, overlooking the balance among different classes. However, graphs in real-world applications are often imbalanced, which leads GAL methods to select class-imbalanced training sets, resulting in biased GNN models. Furthermore, due to the high cost of multi-turn queries, there is an increasing demand for one-step GAL methods, where the entire training set is queried at once. These realities prompt us to investigate the problem of one-step active learning on imbalanced graphs.
In this paper, we propose a theory-driven method called Contrast & Contract (Contra2) to tackle the above issues. The key idea of Contra2 is that intra-class edges within the majority are dominant in the edge set, so contracting these edges will reduce the imbalance ratio. Specifically, Contra2 first learns node representations by graph contrastive learning (GCL), then stochastically contracts the edges that connect nodes with similar embeddings. We theoretically show that Contra2 reduces the imbalance ratio with high probability. By leveraging a more evenly distributed graph, we can achieve a balanced selection of labeled nodes without requiring any seed labels. The effectiveness of Contra2 is evaluated against various baselines on 11 datasets with different budgets. Contra2 demonstrates remarkable performance, achieving either higher or on-par performance with only half of the annotation budget on some datasets.
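A minimal sketch of the contraction step as described above, assuming node embeddings (e.g., from GCL) are already available: stochastically merge the endpoints of edges whose embeddings are highly similar, tracked with a union-find structure. The similarity threshold and contraction probability are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def contract_similar_edges(edges, emb, sim_threshold=0.95, p_contract=0.8, seed=0):
    """Stochastically contract edges whose endpoint embeddings are similar.
    edges: list of (u, v) pairs; emb: (n, d) array of node embeddings.
    Returns a mapping node -> supernode id after contraction."""
    rng = np.random.default_rng(seed)
    emb = emb / np.maximum(np.linalg.norm(emb, axis=1, keepdims=True), 1e-12)
    parent = list(range(len(emb)))

    def find(x):                           # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        if emb[u] @ emb[v] >= sim_threshold and rng.random() < p_contract:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv            # merge the two supernodes
    return {node: find(node) for node in range(len(emb))}

# Toy example: a triangle where nodes 0 and 1 have nearly identical embeddings,
# so their connecting edge is a candidate for contraction.
emb = np.array([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]])
print(contract_similar_edges([(0, 1), (1, 2), (0, 2)], emb))
```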
{"title":"Contra2: A one-step active learning method for imbalanced graphs","authors":"Wenjie Yang , Shengzhong Zhang , Jiaxing Guo , Zengfeng Huang","doi":"10.1016/j.artint.2025.104439","DOIUrl":"10.1016/j.artint.2025.104439","url":null,"abstract":"<div><div>Graph active learning (GAL) is an important research direction in graph neural networks (GNNs) that aims to select the most valuable nodes for labeling to train GNNs. Previous works in GAL have primarily focused on the overall performance of GNNs, overlooking the balance among different classes. However, graphs in real-world applications are often imbalanced, which leads GAL methods to select class-imbalanced training sets, resulting in biased GNN models. Furthermore, due to the high cost of multi-turn queries, there is an increasing demand for one-step GAL methods, where the entire training set is queried at once. These realities prompt us to investigate the problem of one-step active learning on imbalanced graphs.</div><div>In this paper, we propose a theory-driven method called Contrast & Contract (Contra<sup>2</sup>) to tackle the above issues. The key idea of Contra<sup>2</sup> is that intra-class edges within the majority are dominant in the edge set, so contracting these edges will reduce the imbalance ratio. Specifically, Contra<sup>2</sup> first learns node representations by graph <strong>contrast</strong>ive learning (GCL), then stochastically <strong>contract</strong>s the edges that connect nodes with similar embeddings. We theoretically show that Contra<sup>2</sup> reduces the imbalance ratio with high probability. By leveraging a more evenly distributed graph, we can achieve a balanced selection of labeled nodes without requiring any seed labels. The effectiveness of Contra<sup>2</sup> is evaluated against various baselines on 11 datasets with different budgets. Contra<sup>2</sup> demonstrates remarkable performance, achieving either higher or on-par performance with only half of the annotation budget on some datasets.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"349 ","pages":"Article 104439"},"PeriodicalIF":4.6,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145263924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-10 | DOI: 10.1016/j.artint.2025.104438
Arc-consistency with linear programming reduced costs (applied to stable set in chordal graphs)
Guillaume Claus , Hadrien Cambazard , Hugo Apeloig , Pierre Hoppenot
A well-known technique to reduce the search space in integer programming is variable fixing, also called reduced cost strengthening. The reduced costs given by an optimal dual solution of the linear relaxation can be used to strengthen the bounds of the variables, but this filtering is incomplete. We show how reduced costs can be used to achieve Arc-Consistency (AC), i.e. a complete filtering, of a global constraint with a cost variable and an assignment cost for each value. We assume that an ideal Integer Linear Programming (ILP) formulation is available, i.e. the convex hull of the characteristic vectors of the supports is known. A detailed analysis of reduced cost based filtering is proposed. We characterize arc-consistency based on complementary slackness, i.e. completeness of reasoning as opposed to only optimality. We also give a simple sufficient condition allowing a set of dual solutions to ensure arc-consistency through reduced costs. In practice, when the constraint has such an ideal ILP formulation, n dual solutions are always enough to achieve AC (where n is the number of variables of the global constraint). This extends the work presented in [26] for satisfaction problems and in [17] for the specific case of the minimum weighted alldifferent constraint. Our analysis is illustrated on constraints related to the assignment and shortest path problems and is also demonstrated on the weighted stable set problem in chordal graphs. A novel AC algorithm, based on reduced costs, is proposed for this latter case.
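To make the notion of complete filtering concrete on the assignment example, the sketch below achieves it in the naive way: for every variable-value pair it probes the cheapest assignment forced to use that pair and keeps the pair only if that cost stays within the upper bound on the cost variable. The paper's point is to obtain the same pruning from LP reduced costs rather than from O(n^2) probes; this brute-force version only illustrates what arc-consistency means here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ac_filter_assignment(cost, upper_bound):
    """Return the supported (variable i, value j) pairs of a weighted assignment:
    pairs (i, j) for which some complete assignment of total cost <= upper_bound
    assigns value j to variable i.  Naive O(n^2) probing, for illustration only."""
    n = cost.shape[0]
    supported = set()
    for i in range(n):
        for j in range(n):
            # Force x_ij = 1: solve the assignment on the remaining submatrix.
            rows = [r for r in range(n) if r != i]
            cols = [c for c in range(n) if c != j]
            sub = cost[np.ix_(rows, cols)]
            r_ind, c_ind = linear_sum_assignment(sub)
            best_with_ij = cost[i, j] + sub[r_ind, c_ind].sum()
            if best_with_ij <= upper_bound:
                supported.add((i, j))
    return supported

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
print(sorted(ac_filter_assignment(cost, upper_bound=6)))
```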
{"title":"Arc-consistency with linear programming reduced costs (applied to stable set in chordal graphs)","authors":"Guillaume Claus , Hadrien Cambazard , Hugo Apeloig , Pierre Hoppenot","doi":"10.1016/j.artint.2025.104438","DOIUrl":"10.1016/j.artint.2025.104438","url":null,"abstract":"<div><div>A well known technique to reduce the search space in integer programming is known as <em>variable fixing</em> or <em>reduced cost strengthening</em>. The reduced costs given by an optimal dual solution of the linear relaxation can be used to strengthen the bounds of the variables but this filtering is incomplete. We show how reduced costs can be used to achieve Arc-Consistency (AC), <em>i.e.</em> a complete filtering, of a global constraint with a cost variable and an assignment cost for each value. We assume that an ideal Integer Linear Programming (ILP) formulation is available i.e. the convex hull of the characteristic vectors of the supports is known. A detailed analysis of reduced cost based filtering is proposed. We characterize arc-consistency based on complementary slackness <em>i.e.</em> completeness of reasoning as opposed to only optimality. We also give a simple sufficient condition allowing a set of dual solutions to ensure arc-consistency through reduced costs. In practice, when the constraint has a such an ideal ILP, <em>n</em> dual solutions are always enough to achieve AC (where <em>n</em> is the number of variables of the global constraint). It extends the work presented in <span><span>[26]</span></span> for satisfaction problems and in <span><span>[17]</span></span> for the specific case of the minimum weighted alldifferent constraint. Our analysis is illustrated on constraints related to the assignment and shortest path problem and also demonstrated on the weighted stable set problem in chordal graphs. A novel AC algorithm is proposed in this latter case based on reduced costs.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"349 ","pages":"Article 104438"},"PeriodicalIF":4.6,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145359752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-08 | DOI: 10.1016/j.artint.2025.104437
Constraints and lifting-based (conditional) preferences in abstract argumentation
Gianvincenzo Alfano, Sergio Greco, Francesco Parisi, Irina Trubitsyna
Dealing with controversial information is an important issue in several application contexts. Formal argumentation enables reasoning on arguments for and against a claim to decide on an outcome. Abstract Argumentation Framework (AF) has emerged as a central formalism in argument-based reasoning. In recent years there has been an increasing interest in extending AF to facilitate the knowledge representation and reasoning process. In this paper, we present an extension of AF that allows for the representation of labelled constraints and labelled preferences. A labelled argument is of the form in(a), out(a), or und(a), where a is an argument, whereas in, out, and und denote the acceptance status (i.e., accepted, rejected, undecided, respectively) of the specified argument. We start by considering an extension of AF with labelled constraints, namely Labelled Constrained AF (LCAF), then we focus on AF with labelled preferences (Labelled Preference-based AF, LPAF for short) and, finally, we introduce a general framework called Labelled Preference-based Constrained AF (LPCAF) that combines AF, labelled constraints, and labelled preferences. We also investigate an extension of AF with labelled conditional (or extended) preferences, namely Labelled extended Preference-based AF (LePAF), and its further combination with labelled constraints (Labelled extended Preference-based Constrained AF, LePCAF for short). Herein, conditional preferences are of the form a > b ← body, where a and b are labelled arguments, whereas body is a propositional formula over labelled arguments. For each framework, we define its syntax and semantics, and investigate the computational complexity of four canonical argumentation problems: existence, verification, and credulous and skeptical acceptance, under the well-known complete, stable, semi-stable, and preferred semantics.
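To ground the labelled vocabulary, the sketch below enumerates the complete labellings of a tiny AF by brute force, using the standard labelling conditions (an argument is in iff all its attackers are out, and out iff some attacker is in), and then keeps only the labellings satisfying a labelled constraint such as in(a). It is a toy illustration of the semantics, not of the LPCAF machinery or the complexity results.

```python
from itertools import product

def complete_labellings(args, attacks):
    """Enumerate complete labellings of an abstract argumentation framework.
    args: list of arguments; attacks: set of (attacker, target) pairs."""
    attackers = {a: [u for (u, t) in attacks if t == a] for a in args}
    for labels in product(("in", "out", "und"), repeat=len(args)):
        L = dict(zip(args, labels))
        ok = True
        for a in args:
            all_out = all(L[u] == "out" for u in attackers[a])
            some_in = any(L[u] == "in" for u in attackers[a])
            if L[a] == "in" and not all_out:
                ok = False
            if L[a] == "out" and not some_in:
                ok = False
            if L[a] == "und" and (all_out or some_in):
                ok = False
        if ok:
            yield L

# AF with a mutual attack between a and b, and b attacking c.
args, attacks = ["a", "b", "c"], {("a", "b"), ("b", "a"), ("b", "c")}
all_complete = list(complete_labellings(args, attacks))
satisfying = [L for L in all_complete if L["a"] == "in"]   # labelled constraint in(a)
print(len(all_complete), satisfying)
```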
{"title":"Constraints and lifting-based (conditional) preferences in abstract argumentation","authors":"Gianvincenzo Alfano, Sergio Greco, Francesco Parisi, Irina Trubitsyna","doi":"10.1016/j.artint.2025.104437","DOIUrl":"10.1016/j.artint.2025.104437","url":null,"abstract":"<div><div>Dealing with controversial information is an important issue in several application contexts. Formal argumentation enables reasoning on arguments for and against a claim to decide on an outcome. Abstract Argumentation Framework (AF) has emerged as a central formalism in argument-based reasoning. In recent years there has been an increasing interest in extending AF to facilitate the knowledge representation and reasoning process. In this paper, we present an extension of AF that allows for the representation of labelled constraints and labelled preferences. A labelled argument is of the form <span><math><mrow><mi>in</mi></mrow><mo>(</mo><mi>a</mi><mo>)</mo></math></span>, <span><math><mrow><mi>out</mi></mrow><mo>(</mo><mi>a</mi><mo>)</mo></math></span>, or <span><math><mrow><mi>und</mi></mrow><mo>(</mo><mi>a</mi><mo>)</mo></math></span>, where <em>a</em> is an argument, whereas <strong>in</strong>, <strong>out</strong>, and <strong>und</strong> denote the acceptance status (i.e., accepted, rejected, undecided, respectively) of the specified argument. We start by considering an extension of AF with labelled constraints, namely <em>Labelled Constrained AF</em> (LCAF), then we focus on AF with labelled preferences (<em>Labelled Preference-based AF</em>, LPAF for short) and, finally, we introduce a general framework called <em>Labelled Preference-based Constrained AF</em> (LPCAF) that combines AF, labelled constraints, and labelled preferences. We also investigate an extension of AF with labelled conditional (or extended) preferences, namely <em>Labelled extended Preference-based AF</em> (LePAF), and its further combination with labelled constraints (<em>Labelled extended Preference-based Constrained AF</em>, LePCAF for short). Herein, conditional preferences are of the form <span><math><mi>a</mi><mo>></mo><mi>b</mi><mo>←</mo></math></span> <em>body</em>, where <strong>a</strong> and <strong>b</strong> are labelled arguments, whereas <em>body</em> is a propositional formula over labelled arguments. For each framework, we define its syntax and semantics, and investigate the computational complexity of four canonical argumentation problems: <em>existence</em>, <em>verification</em>, and <em>credulous</em> and <em>skeptical acceptance</em>, under the well-known <em>complete</em>, <em>stable</em>, <em>semi-stable</em>, and <em>preferred</em> semantics.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"349 ","pages":"Article 104437"},"PeriodicalIF":4.6,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145322151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-06 | DOI: 10.1016/j.artint.2025.104425
Defending a city from multi-drone attacks: A sequential Stackelberg security games approach
Dolev Mutzari , Tonmoay Deb , Cristian Molinaro , Andrea Pugliese , V.S. Subrahmanian , Sarit Kraus
To counter an imminent multi-drone attack on a city, defenders have deployed drones across the city. These drones must intercept/eliminate the threat, thus reducing potential damage from the attack. We model this as a Sequential Stackelberg Security Game, where the defender first commits to a mixed sequential defense strategy, and the attacker then best responds. We develop an efficient algorithm called S2D2, which outputs a defense strategy. We demonstrate the efficacy of S2D2 in extensive experiments on data from 80 real cities, improving the defender's performance in comparison to greedy heuristics based on prior work. We prove that under some reasonable assumptions about the city structure, S2D2 outputs an approximate Strong Stackelberg Equilibrium (SSE) with a convenient structure.
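As background on the solution concept, the sketch below computes a Strong Stackelberg Equilibrium of a small one-shot (normal-form) game with the classical multiple-LPs method: for each attacker pure response, solve an LP for the defender mixed strategy that maximizes defender utility subject to that response being a best reply, then keep the best feasible LP. This is the generic textbook construction, not the sequential S2D2 algorithm; the payoff matrices are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def strong_stackelberg(U_def, U_att):
    """Multiple-LPs computation of an SSE in a one-shot Stackelberg game.
    U_def[i, j], U_att[i, j]: payoffs when defender plays i and attacker plays j."""
    m, n = U_def.shape
    best = (None, None, -np.inf)
    for j in range(n):                       # assume the attacker best-responds with j
        c = -U_def[:, j]                     # maximize defender payoff = minimize -payoff
        # best-response constraints: x^T U_att[:, k] <= x^T U_att[:, j] for all k != j
        rows = [U_att[:, k] - U_att[:, j] for k in range(n) if k != j]
        A_ub = np.array(rows) if rows else None
        b_ub = np.zeros(len(rows)) if rows else None
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      A_eq=np.ones((1, m)), b_eq=np.array([1.0]),
                      bounds=[(0, 1)] * m, method="highs")
        if res.success and -res.fun > best[2]:
            best = (res.x, j, -res.fun)
    return best  # (defender mixed strategy, attacker response, defender value)

U_def = np.array([[2.0, -1.0], [-1.0, 1.0]])   # illustrative payoffs
U_att = np.array([[-2.0, 1.0], [1.0, -1.0]])
x, j, v = strong_stackelberg(U_def, U_att)
print(np.round(x, 3), j, round(v, 3))
```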
{"title":"Defending a city from multi-drone attacks: A sequential Stackelberg security games approach","authors":"Dolev Mutzari , Tonmoay Deb , Cristian Molinaro , Andrea Pugliese , V.S. Subrahmanian , Sarit Kraus","doi":"10.1016/j.artint.2025.104425","DOIUrl":"10.1016/j.artint.2025.104425","url":null,"abstract":"<div><div>To counter an imminent multi-drone attack on a city, defenders have deployed drones across the city. These drones must intercept/eliminate the threat, thus reducing potential damage from the attack. We model this as a Sequential Stackelberg Security Game, where the defender first commits to a mixed sequential defense strategy, and the attacker then best responds. We develop an efficient algorithm called S2D2, which outputs a defense strategy. We demonstrate the efficacy of S2D2 in extensive experiments on data from 80 real cities, improving the performance of the defender in comparison to greedy heuristics based on prior works. We prove that under some reasonable assumptions about the city structure, S2D2 outputs an approximate Strong Stackelberg Equilibrium (SSE) with a convenient structure.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"349 ","pages":"Article 104425"},"PeriodicalIF":4.6,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145263925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-06 | DOI: 10.1016/j.artint.2025.104426
Pandora's box problem with time constraints
Georgios Amanatidis , Ben Berger , Tomer Ezra , Michal Feldman , Federico Fusco , Rebecca Reiffenhäuser , Artem Tsikiridis
The Pandora's Box problem models the search for the best alternative when evaluation is costly. In the simplest variant, a decision maker is presented with n boxes, each associated with a cost of inspection and a hidden random reward. The decision maker inspects a subset of these boxes one after the other, in a possibly adaptive order, and gains the difference between the largest revealed reward and the sum of the inspection costs. Although this classic version is well understood (Weitzman 1979), there is a flourishing recent literature on variants of the problem. Here we introduce a general framework—the Pandora's Box Over Time problem—that captures a wide range of variants where time plays a role, e.g., by constraining the schedules of exploration and influencing costs and rewards. In our framework, boxes have time-dependent rewards and costs, whereas inspection may require a box-specific processing time. Moreover, once a box is inspected, its reward may deteriorate over time. Our main result is an efficient constant-factor approximation to the optimal strategy for the Pandora's Box Over Time problem, which is generally NP-hard to compute. We further obtain improved results for the natural special cases where boxes have no processing time, boxes are available only in specific time slots, or when costs and reward distributions are time-independent (but rewards may still deteriorate after inspection).
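For reference, the sketch below implements the classic Weitzman (1979) index policy for the time-free problem that this framework generalizes: compute each box's reservation value sigma solving c = E[(X - sigma)^+], inspect boxes in decreasing order of reservation value, and stop once the best reward seen so far exceeds the next box's reservation value. Discrete reward distributions and a zero outside option are assumed for simplicity; this is the baseline policy, not the approximation algorithm for the time-constrained variants.

```python
import random

def reservation_value(values, probs, cost, lo=-1e6, hi=1e6, iters=100):
    """Solve c = E[(X - sigma)^+] for sigma by bisection (discrete X)."""
    def excess(sigma):
        return sum(p * max(v - sigma, 0.0) for v, p in zip(values, probs))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if excess(mid) > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def weitzman(boxes, seed=0):
    """boxes: list of (values, probs, cost). Returns one realized net payoff."""
    rng = random.Random(seed)
    order = sorted(range(len(boxes)),
                   key=lambda i: reservation_value(*boxes[i]), reverse=True)
    best, total_cost = 0.0, 0.0
    for i in order:
        values, probs, cost = boxes[i]
        if best >= reservation_value(values, probs, cost):
            break                      # stopping rule: best reward beats the next index
        total_cost += cost
        best = max(best, rng.choices(values, weights=probs)[0])
    return best - total_cost

boxes = [([0, 10], [0.5, 0.5], 1.0), ([0, 30], [0.9, 0.1], 2.0)]
print(weitzman(boxes))
```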
{"title":"Pandora's box problem with time constraints","authors":"Georgios Amanatidis , Ben Berger , Tomer Ezra , Michal Feldman , Federico Fusco , Rebecca Reiffenhäuser , Artem Tsikiridis","doi":"10.1016/j.artint.2025.104426","DOIUrl":"10.1016/j.artint.2025.104426","url":null,"abstract":"<div><div>The Pandora's Box problem models the search for the best alternative when evaluation is costly. In the simplest variant, a decision maker is presented with <em>n</em> boxes, each associated with a cost of inspection and a hidden random reward. The decision maker inspects a subset of these boxes one after the other, in a possibly adaptive order, and gains the difference between the largest revealed reward and the sum of the inspection costs. Although this classic version is well understood (Weitzman 1979), there is a flourishing recent literature on variants of the problem. Here we introduce a general framework—the Pandora's Box Over Time problem—that captures a wide range of variants where time plays a role, e.g., by constraining the schedules of exploration and influencing costs and rewards. In our framework, boxes have time-dependent rewards and costs, whereas inspection may require a box-specific processing time. Moreover, once a box is inspected, its reward may deteriorate over time. Our main result is an efficient constant-factor approximation to the optimal strategy for the Pandora's Box Over Time problem, which is generally NP-hard to compute. We further obtain improved results for the natural special cases where boxes have no processing time, boxes are available only in specific time slots, or when costs and reward distributions are time-independent (but rewards may still deteriorate after inspection).</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"349 ","pages":"Article 104426"},"PeriodicalIF":4.6,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145263923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}