A structural complexity analysis of synchronous dynamical systems
Pub Date: 2025-12-18 | DOI: 10.1016/j.artint.2025.104472
Eduard Eiben , Robert Ganian , Thekla Hamm , Viktoriia Korchemna
Synchronous dynamical systems are well-established models that have been used to capture a range of phenomena in networks, including opinion diffusion, spread of disease and product adoption. We study the three most notable problems in synchronous dynamical systems: whether the system will transition to a target configuration from a starting configuration, whether the system will reach convergence from a starting configuration, and whether the system is guaranteed to converge from every possible starting configuration. While all three problems were known to be intractable in the classical sense, we initiate the study of their exact boundaries of tractability from the perspective of structural parameters of the network by making use of the more fine-grained parameterized complexity paradigm. As our first result, we consider treewidth, the most prominent and ubiquitous structural parameter, and show that all three problems remain intractable even on instances of constant treewidth. We complement this negative finding with fixed-parameter algorithms for the former two problems parameterized by treedepth, a well-studied restriction of treewidth. While a similar algorithm can be ruled out for the convergence-guarantee problem under treedepth alone, we conclude with a fixed-parameter algorithm for this last problem when parameterized by treedepth and the maximum in-degree.
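For readers unfamiliar with the model, the following is a minimal sketch of the kind of system the abstract refers to: every vertex of a network holds a Boolean state and all vertices update simultaneously via a local rule (here, an arbitrary threshold rule). The function names, the threshold rule and the step bound are illustrative assumptions, not the paper's algorithms; brute-force simulation like this may need exponentially many steps, which is exactly why the paper studies parameterized algorithms.

```python
# Hedged sketch (not from the paper): a synchronous dynamical system over a directed
# graph, where every vertex applies a local threshold rule to its own state and the
# states of its in-neighbours, and all vertices update simultaneously.

def step(states, in_neighbours, thresholds):
    """One synchronous update: vertex v becomes 1 iff the number of 1s among
    v and its in-neighbours reaches its threshold."""
    return {
        v: int(states[v] + sum(states[u] for u in in_neighbours[v]) >= thresholds[v])
        for v in states
    }

def reaches(start, target, in_neighbours, thresholds, max_steps=100):
    """Naive reachability / convergence check by forward simulation.
    Returns ('target', t), ('fixed_point', t), ('cycle', t) or ('undecided', t)."""
    seen = set()
    states = dict(start)
    for t in range(max_steps + 1):
        if states == target:
            return "target", t
        key = tuple(sorted(states.items()))
        if key in seen:                      # revisited configuration: a limit cycle
            return "cycle", t
        seen.add(key)
        nxt = step(states, in_neighbours, thresholds)
        if nxt == states:                    # fixed point reached: convergence
            return "fixed_point", t
        states = nxt
    return "undecided", max_steps

# Tiny example: a path a -> b -> c with threshold 1 everywhere.
in_neighbours = {"a": [], "b": ["a"], "c": ["b"]}
thresholds = {"a": 1, "b": 1, "c": 1}
print(reaches({"a": 1, "b": 0, "c": 0}, {"a": 1, "b": 1, "c": 1},
              in_neighbours, thresholds))   # ('target', 2)
```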
{"title":"A structural complexity analysis of synchronous dynamical systems","authors":"Eduard Eiben , Robert Ganian , Thekla Hamm , Viktoriia Korchemna","doi":"10.1016/j.artint.2025.104472","DOIUrl":"10.1016/j.artint.2025.104472","url":null,"abstract":"<div><div>Synchronous dynamical systems are well-established models that have been used to capture a range of phenomena in networks, including opinion diffusion, spread of disease and product adoption. We study the three most notable problems in synchronous dynamical systems: whether the system will transition to a target configuration from a starting configuration, whether the system will reach convergence from a starting configuration, and whether the system is guaranteed to converge from every possible starting configuration. While all three problems were known to be intractable in the classical sense, we initiate the study of their exact boundaries of tractability from the perspective of structural parameters of the network by making use of the more fine-grained parameterized complexity paradigm. As our first result, we consider treewidth—as the most prominent and ubiquitous structural parameter—and show that all three problems remain intractable even on instances of constant treewidth. We complement this negative finding with fixed-parameter algorithms for the former two problems parameterized by treedepth, a well-studied restriction of treewidth. While it is possible to rule out a similar algorithm for convergence guarantee under treedepth, we conclude with a fixed-parameter algorithm for this last problem when parameterized by treedepth and the maximum in-degree.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"351 ","pages":"Article 104472"},"PeriodicalIF":4.6,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145785075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using execution logs for improving Pseudo-Boolean propagation
Pub Date: 2025-12-18 | DOI: 10.1016/j.artint.2025.104470
Robert Nieuwenhuis , Albert Oliveras , Enric Rodríguez-Carbonell , Rui Zhao
Among all procedures that CDCL-based SAT solvers implement, unit propagation dominates the total running time. Hence, it is not surprising that large research efforts have been invested in improving it. As a result, the two-watched-literal scheme, enhanced with implementation details that boost its performance, emerged as the dominant method.
Unit propagation is similarly important in pseudo-Boolean solvers. However, no dominant method exists: counter-based and watch-based propagation are well-suited to different types of constraints, opening the door to hybrid methods. The higher complexity of implementing pseudo-Boolean solvers has shifted the research focus to higher-level aspects of other procedures, leaving the implementation details of unit propagation as a lower priority.
In this paper, we first present execution logs: a novel methodology that allows us to precisely evaluate the performance of different propagation procedures. Secondly, we show how both the counter-based and watch-based propagation routines in the RoundingSat solver can be substantially improved through a careful analysis of various implementation issues. Thirdly, a detailed analysis shows that hybrid methods outperform those based on a single technique. Finally, our experiments reveal that improvements in propagation lead to clearly better overall solver performance.
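As general background for the counter-based routine discussed above, here is a hedged sketch (not RoundingSat's actual code) of the idea: track a constraint's slack and react whenever a literal is falsified; watch-based propagation instead monitors, roughly, only a subset of literals whose coefficients suffice to cover the degree. The class name, the assignment encoding and the example constraint are assumptions made for illustration.

```python
# Hedged sketch (not RoundingSat's implementation): counter-based propagation for one
# pseudo-Boolean constraint  sum_i a_i * l_i >= d  with positive coefficients a_i.
# The slack is the amount by which the left-hand side can still exceed d if every
# currently non-falsified literal were set to true.

class PBConstraint:
    def __init__(self, terms, degree):
        # terms: list of (coefficient, literal); a literal is a non-zero int, -x negates x
        self.terms = terms
        self.degree = degree
        self.slack = sum(a for a, _ in terms) - degree   # all literals still unassigned

    def on_falsified(self, lit, value_of):
        """Call when `lit` has just become falsified.
        Returns ('conflict', None), ('propagate', forced_literals) or ('ok', None)."""
        for a, l in self.terms:
            if l == lit:
                self.slack -= a
        if self.slack < 0:
            return "conflict", None
        # Any still-unassigned literal whose coefficient exceeds the slack must be true.
        forced = [l for a, l in self.terms
                  if value_of(l) is None and a > self.slack]
        return ("propagate", forced) if forced else ("ok", None)

# Example: 3*x1 + 2*x2 + 1*x3 >= 3. Falsifying x1 drops the slack from 3 to 0,
# which forces both x2 and x3 to true.
assignment = {}
value_of = lambda lit: assignment.get(abs(lit))
constraint = PBConstraint([(3, 1), (2, 2), (1, 3)], 3)
assignment[1] = False
print(constraint.on_falsified(1, value_of))   # ('propagate', [2, 3])
```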
{"title":"Using execution logs for improving Pseudo-Boolean propagation","authors":"Robert Nieuwenhuis , Albert Oliveras , Enric Rodríguez-Carbonell , Rui Zhao","doi":"10.1016/j.artint.2025.104470","DOIUrl":"10.1016/j.artint.2025.104470","url":null,"abstract":"<div><div>Among all procedures that CDCL-based SAT solvers implement, unit propagation dominates the total running time. Hence, it is not a surprise that large research efforts have been invested on improving it. As a result, the two-watched-literal scheme, enhanced with implementation details boosting its performance, emerged as the dominant method.</div><div>The importance of unit propagation in pseudo-Boolean solvers is similar. However, no dominant method exists: counter and watch-based propagation are well-suited for different types of constraints, opening the door to hybrid methods. The higher complexity of implementing pseudo-Boolean solvers has shifted the research focus to higher-level aspects of other procedures, considering implementation details of unit propagation not a priority.</div><div>In this paper, we first present <em>execution logs:</em> a novel methodology that allows us to precisely evaluate the performance of different propagation procedures. Secondly, we show how both counter and watch-based propagation routines in the <span>RoundingSat</span> solver can be largely improved thanks to a careful analysis of various implementation issues. Thirdly, a detailed analysis shows that hybrid methods outperform the ones based on a single technique. Finally, our experiments reveal that improvements in propagation lead to a clearly better overall performance of the solver.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"351 ","pages":"Article 104470"},"PeriodicalIF":4.6,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145785074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Probabilistically robust counterfactual explanations under model changes
Pub Date: 2025-12-08 | DOI: 10.1016/j.artint.2025.104459
Luca Marzari , Francesco Leofante , Ferdinando Cicalese , Alessandro Farinelli
We study the problem of generating robust counterfactual explanations for deep learning models subject to model changes. We focus on plausible model changes altering model parameters and propose a novel framework to reason about the robustness property in this setting. To motivate our solution, we begin by showing for the first time that computing the robustness of counterfactuals with respect to model changes is NP-hard. As this (practically) rules out the existence of scalable algorithms for exactly computing robustness, we propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees while preserving scalability. Remarkably, and differently from existing solutions targeting plausible model changes, our approach does not impose requirements on the network to be analysed, thus enabling robustness analysis on a wider range of architectures, including state-of-the-art tabular transformers. A thorough experimental analysis on four binary classification datasets reveals that our method improves the state of the art in generating robust explanations, outperforming existing methods.
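To make the probabilistic notion concrete, here is a hedged sketch of the generic Monte Carlo view of robustness under parameter perturbations. It is not the paper's estimator (which provides tighter guarantees); the toy linear model, the noise scale `delta` and the Hoeffding-style bound are assumptions made for illustration only.

```python
# Hedged sketch (not the paper's algorithm): estimate how often a counterfactual x_cf
# keeps its target class when the model's parameters are randomly perturbed.
import math
import random

def predict(weights, bias, x):
    """Binary prediction of a toy linear classifier."""
    return int(sum(w * xi for w, xi in zip(weights, x)) + bias >= 0.0)

def robustness_estimate(weights, bias, x_cf, target, delta=0.05, n_samples=2000):
    """Fraction of sampled 'plausible model changes' (Gaussian parameter noise of
    scale delta) under which x_cf is still classified as `target`, plus a
    two-sided Hoeffding error bound at confidence 0.99."""
    hits = 0
    for _ in range(n_samples):
        w_p = [w + random.gauss(0.0, delta) for w in weights]
        b_p = bias + random.gauss(0.0, delta)
        hits += predict(w_p, b_p, x_cf) == target
    estimate = hits / n_samples
    eps = math.sqrt(math.log(2 / 0.01) / (2 * n_samples))
    return estimate, eps

weights, bias = [1.0, -2.0], 0.5
x_cf = [1.5, 0.2]          # a candidate counterfactual
print(robustness_estimate(weights, bias, x_cf, target=1))
```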
{"title":"Probabilistically robust counterfactual explanations under model changes","authors":"Luca Marzari , Francesco Leofante , Ferdinando Cicalese , Alessandro Farinelli","doi":"10.1016/j.artint.2025.104459","DOIUrl":"10.1016/j.artint.2025.104459","url":null,"abstract":"<div><div>We study the problem of generating robust counterfactual explanations for deep learning models subject to model changes. We focus on <em>plausible model changes</em> altering model parameters and propose a novel framework to reason about the robustness property in this setting. To motivate our solution, we begin by showing for the first time that computing the robustness of counterfactuals with respect to model changes is NP-hard. As this (practically) rules out the existence of scalable algorithms for exactly computing robustness, we propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees while preserving scalability. Remarkably, and differently from existing solutions targeting plausible model changes, our approach does not impose requirements on the network to be analysed, thus enabling robustness analysis on a wider range of architectures, including state-of-the-art tabular transformers. A thorough experimental analysis on four binary classification datasets reveals that our method improves the state of the art in generating robust explanations, outperforming existing methods.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"351 ","pages":"Article 104459"},"PeriodicalIF":4.6,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145731229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is DIBBS a DXBB algorithm?
Pub Date: 2025-12-08 | DOI: 10.1016/j.artint.2025.104468
Nathan R. Sturtevant , Shahaf Shperberg , Ariel Felner
The recently introduced Dynamically Improved Bounds Bidirectional Search (DIBBS) algorithm attributes its success to the fact that it is not a deterministic expansion-based black box algorithm (DXBB). After communication with the authors, there is agreement that this characterization is incorrect. The goal of this research note is to provide a correction in the literature regarding the claims around DIBBS, to make it clearer why DIBBS is a DXBB algorithm, and to explain why its performance is bounded by bidirectional search theory.
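For context, the theory the note appeals to is usually stated via the must-expand-pair criterion. The formulation below is the standard one from the bidirectional-search literature, not reproduced from the note itself; f_F, g_F (resp. f_B, g_B) denote forward (resp. backward) f- and g-values and C* the optimal solution cost, and later refinements add the minimum edge cost to the third term.

```latex
% Standard must-expand-pair criterion from bidirectional search theory (background
% only): any admissible DXBB algorithm must expand at least one node of every pair
% (u, v) whose lower bound falls below the optimal solution cost C*.
\[
  lb(u,v) \;=\; \max\bigl(f_F(u),\; f_B(v),\; g_F(u) + g_B(v)\bigr),
  \qquad
  lb(u,v) < C^{*} \;\Rightarrow\; \text{$u$ or $v$ must be expanded.}
\]
```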
{"title":"Is DIBBS a DXBB algorithm?","authors":"Nathan R. Sturtevant , Shahaf Shperberg , Ariel Felner","doi":"10.1016/j.artint.2025.104468","DOIUrl":"10.1016/j.artint.2025.104468","url":null,"abstract":"<div><div>The recently-introduced Dynamically Improved Bounds Bidirectional Search (DIBBS) algorithm attributes its success to the fact that it is not a deterministic expansion-based black box algorithm (DXBB). After communication with the authors, there is agreement that this characterization is incorrect. The goal of this research note is to provide correction in the literature regarding the claims around DIBBS, to make it clearer why DIBBS is a DXBB algorithm, and to explain why its performance is bounded by bidirectional search theory.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"351 ","pages":"Article 104468"},"PeriodicalIF":4.6,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145731222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-objective reinforcement learning for provably incentivising alignment with value systems
Pub Date: 2025-12-07 | DOI: 10.1016/j.artint.2025.104460
Manel Rodriguez-Soto , Roxana Rădulescu , Filippo Bistaffa , Oriol Ricart , Arnau Mayoral-Macau , Maite Lopez-Sanchez , Juan A. Rodriguez-Aguilar , Ann Nowé
This paper addresses the problem of ensuring that autonomous learning agents align with multiple moral values. Specifically, we present the theoretical principles and algorithmic tools necessary for creating an environment in which the agent is guaranteed to learn a behaviour aligned with multiple moral values while striving to achieve its individual objective. To address this value alignment problem, we adopt the Multi-Objective Reinforcement Learning framework and propose a novel algorithm that combines techniques from Multi-Objective Reinforcement Learning and Linear Programming. In addition, we illustrate our value alignment process with an example involving an autonomous vehicle. Here, we demonstrate that the agent learns to behave in alignment with the ethical values of safety, achievement, and comfort, with achievement representing the agent’s individual objective. The learned ethical behaviour differs depending on the ordering imposed over the values. We also use a synthetic multi-objective environment to evaluate the computational costs of guaranteeing ethical learning as the number of values increases.
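As a hedged illustration of the ingredients involved (not the paper's algorithm), the snippet below runs tabular Q-learning on a linearly scalarised vector reward. The environment interface (`reset`, `actions`, `step`), the hand-picked weights and the hyperparameters are assumptions; the paper instead derives, via Linear Programming, weights that provably incentivise the desired ordering over values.

```python
# Hedged sketch: tabular Q-learning on a scalarised multi-objective reward.
import random
from collections import defaultdict

def scalarise(reward_vec, weights):
    """Collapse a vector reward (e.g. [achievement, safety, comfort]) into a scalar."""
    return sum(w * r for w, r in zip(weights, reward_vec))

def q_learning(env, weights, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            actions = env.actions(s)
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2, reward_vec, done = env.step(a)   # reward is a vector, one entry per value
            r = scalarise(reward_vec, weights)
            target = r if done else r + gamma * max(Q[(s2, a2)] for a2 in env.actions(s2))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```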
{"title":"Multi-objective reinforcement learning for provably incentivising alignment with value systems","authors":"Manel Rodriguez-Soto , Roxana Rădulescu , Filippo Bistaffa , Oriol Ricart , Arnau Mayoral-Macau , Maite Lopez-Sanchez , Juan A. Rodriguez-Aguilar , Ann Nowé","doi":"10.1016/j.artint.2025.104460","DOIUrl":"10.1016/j.artint.2025.104460","url":null,"abstract":"<div><div>This paper addresses the problem of ensuring that autonomous learning agents align with multiple moral values. Specifically, we present the theoretical principles and algorithmic tools necessary for creating an environment where we ensure that the agent learns a behaviour aligned with multiple moral values while striving to achieve its individual objective. To address this value alignment problem, we adopt the Multi-Objective Reinforcement Learning framework and propose a novel algorithm that combines techniques from Multi-Objective Reinforcement Learning and Linear Programming. In addition, we illustrate our value alignment process with an example involving an autonomous vehicle. Here, we demonstrate that the agent learns to behave in alignment with the ethical values of safety, achievement, and comfort, with achievement representing the agent’s individual objective. Such ethical behaviour differs depending on the ordering between values. We also use a synthetic multi-objective environment to evaluate the computational costs of guaranteeing ethical learning as the number of values increases.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"351 ","pages":"Article 104460"},"PeriodicalIF":4.6,"publicationDate":"2025-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145689753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Defining Defense and Defeat in Abstract Argumentation From Scratch – A Generalizing Approach
Pub Date: 2025-12-05 | DOI: 10.1016/j.artint.2025.104456
Lydia Blümel, Markus Ulbricht
{"title":"Defining Defense and Defeat in Abstract Argumentation From Scratch – A Generalizing Approach","authors":"Lydia Blümel, Markus Ulbricht","doi":"10.1016/j.artint.2025.104456","DOIUrl":"https://doi.org/10.1016/j.artint.2025.104456","url":null,"abstract":"","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"26 1","pages":""},"PeriodicalIF":14.4,"publicationDate":"2025-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145689754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OBDDs, SDDs, and circuits of bounded width: Completeness matters
Pub Date: 2025-11-30 | DOI: 10.1016/j.artint.2025.104458
Alexis De Colnet , Sebastian Ordyniak , Stefan Szeider
Ordered Binary Decision Diagrams (OBDDs) are dynamic data structures with many application areas. The literature has suggested that OBDDs of bounded width are equivalent to Boolean circuits of bounded pathwidth. In this paper, we show that this relationship holds only for complete OBDDs. Additionally, we demonstrate that similar limitations affect the claimed equivalence between Sentential Decision Diagrams (SDDs) of bounded width and Boolean circuits of bounded treewidth.
{"title":"OBDDs, SDDs, and circuits of bounded width: Completeness matters","authors":"Alexis De Colnet , Sebastian Ordyniak , Stefan Szeider","doi":"10.1016/j.artint.2025.104458","DOIUrl":"10.1016/j.artint.2025.104458","url":null,"abstract":"<div><div>Ordered Binary Decision Diagrams (OBDDs) are dynamic data structures with many application areas. The literature suggested that OBDDs of bounded width equate to Boolean circuits of bounded pathwidth. In this paper, we show that this relationship holds only for complete OBDDs. Additionally, we demonstrate that similar limitations affect the claimed equivalence between Sentential Decision Diagrams (SDDs) of bounded width and Boolean circuits of bounded treewidth.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"351 ","pages":"Article 104458"},"PeriodicalIF":4.6,"publicationDate":"2025-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145619717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Federated neural nonparametric point processes
Pub Date: 2025-11-22 | DOI: 10.1016/j.artint.2025.104454
Hui Chen , Xuhui Fan , Hengyu Liu , Yaqiong Li , Zhilin Zhao , Feng Zhou , Christopher John Quinn , Longbing Cao
Temporal point processes (TPPs) are effective for modeling event occurrences over time but struggle with sparse and uncertain events in federated systems, where privacy is a major concern. To address this, we propose FedPP, a federated neural nonparametric point process model. FedPP integrates neural embeddings into sigmoidal Gaussian Cox processes (SGCPs) on the client side. SGCPs are a flexible and expressive class of TPPs, allowing FedPP to generate highly flexible intensity functions that capture client-specific event dynamics and uncertainties while efficiently summarizing historical records. For global aggregation, FedPP introduces a divergence-based mechanism to communicate the distributions of kernel hyperparameters in SGCPs between the server and clients, while keeping client-specific parameters local to ensure privacy and personalization. FedPP effectively captures event uncertainty and sparsity. Extensive experiments demonstrate its superior performance in federated settings, with global aggregation performed using both the KL divergence and the Wasserstein distance.
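For background, the sigmoidal Gaussian Cox process construction referenced above is usually written as follows (this is the standard SGCP formulation, not the paper's federated extension), where λ* is an upper bound on the intensity and σ is the logistic sigmoid:

```latex
% Standard SGCP construction (background only): a GP-distributed function is squashed
% through a sigmoid and scaled by an intensity upper bound.
\[
  g \sim \mathcal{GP}\bigl(\mu(\cdot),\, k(\cdot,\cdot)\bigr), \qquad
  \lambda(t) \;=\; \lambda^{*}\,\sigma\bigl(g(t)\bigr)
  \;=\; \frac{\lambda^{*}}{1 + e^{-g(t)}}, \qquad 0 \le \lambda(t) \le \lambda^{*}.
\]
```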
{"title":"Federated neural nonparametric point processes","authors":"Hui Chen , Xuhui Fan , Hengyu Liu , Yaqiong Li , Zhilin Zhao , Feng Zhou , Christopher John Quinn , Longbing Cao","doi":"10.1016/j.artint.2025.104454","DOIUrl":"10.1016/j.artint.2025.104454","url":null,"abstract":"<div><div>Temporal point processes (TPPs) are effective for modeling event occurrences over time but struggle with sparse and uncertain events in federated systems, where privacy is a major concern. To address this, we propose <em>FedPP</em>, a federated neural nonparametric point process model. FedPP integrates neural embeddings into sigmoidal Gaussian Cox processes (SGCPs) on the client side. SGCPs is a flexible and expressive class of TPPs, allowing FedPP to generate highly flexible intensity functions that capture client-specific event dynamics and uncertainties while efficiently summarizing historical records. For global aggregation, FedPP introduces a divergence-based mechanism to communicate the distributions of kernel hyperparameters in SGCPs between the server and clients, while keeping client-specific parameters local to ensure privacy and personalization. FedPP effectively captures event uncertainty and sparsity. Extensive experiments demonstrate its superior performance in federated settings, showing global aggregation with the KL divergence and the Wasserstein distance.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"351 ","pages":"Article 104454"},"PeriodicalIF":4.6,"publicationDate":"2025-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145575241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Disentangling data distribution for optimal and communication-efficient federated learning
Pub Date: 2025-11-21 | DOI: 10.1016/j.artint.2025.104455
Xinyuan Zhao , Hanlin Gu , Lixin Fan , Yuxing Han , Qiang Yang
Federated Learning (FL) facilitates collaborative training of a global model whose performance is boosted by private data owned by distributed clients, without compromising data privacy. Yet the wide applicability of FL is hindered by the entanglement of data distributions across different clients. This paper demonstrates for the first time that by disentangling data distributions, FL can in principle achieve efficiencies comparable to those of distributed systems, requiring only one round of communication. To this end, we propose a novel FedDistr algorithm, which employs diffusion models to decouple and recover data distributions. Empirical results on the CIFAR100, DomainNet, OfficeHome, and ISIC2020 datasets show that FedDistr significantly enhances model utility and efficiency in both disentangled and near-disentangled scenarios while ensuring privacy, outperforming traditional federated learning methods.
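As a hedged illustration of the one-round pattern the abstract alludes to (emphatically not the FedDistr algorithm), each client could ship a compact summary of its data distribution once, and the server could train a global model on samples drawn from those summaries. Here the summary is a per-class diagonal Gaussian and the global model a nearest-class-mean classifier, whereas FedDistr relies on diffusion models; a real deployment would additionally protect whatever summary is shared.

```python
# Hedged sketch of one-round, distribution-summary federated learning (not FedDistr).
import numpy as np

def client_summary(X, y):
    """Fit a diagonal Gaussian per class on the client's private data."""
    return {c: (X[y == c].mean(axis=0), X[y == c].std(axis=0) + 1e-6)
            for c in np.unique(y)}

def server_train(summaries, n_per_class=200, seed=0):
    """Single aggregation round: sample synthetic data from every client summary,
    then fit a global nearest-class-mean classifier."""
    rng = np.random.default_rng(seed)
    Xs, ys = [], []
    for summary in summaries:                  # one message per client, one round total
        for c, (mu, sigma) in summary.items():
            Xs.append(rng.normal(mu, sigma, size=(n_per_class, mu.shape[0])))
            ys.append(np.full(n_per_class, c))
    X, y = np.vstack(Xs), np.concatenate(ys)
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    return lambda x: min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```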
{"title":"Disentangling data distribution for optimal and communication-efficient federated learning","authors":"Xinyuan Zhao , Hanlin Gu , Lixin Fan , Yuxing Han , Qiang Yang","doi":"10.1016/j.artint.2025.104455","DOIUrl":"10.1016/j.artint.2025.104455","url":null,"abstract":"<div><div>Federated Learning (FL) facilitates collaborative training of a global model whose performance is boosted by private data owned by distributed clients, without compromising data privacy. Yet the wide applicability of FL is hindered by the entanglement of data distributions across different clients. This paper demonstrates for the first time that by disentangling data distributions, FL can in principle achieve efficiencies comparable to those of distributed systems, requiring only one round of communication. To this end, we propose a novel FedDistr algorithm, which employs diffusion models to decouple and recover data distributions. Empirical results on the CIFAR100, DomainNet, OfficeHome, and ISIC2020 datasets show that FedDistr significantly enhances model utility and efficiency in both disentangled and near-disentangled scenarios while ensuring privacy, outperforming traditional federated learning methods.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"351 ","pages":"Article 104455"},"PeriodicalIF":4.6,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145567483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human compliance with computational argumentation principles
Pub Date: 2025-11-19 | DOI: 10.1016/j.artint.2025.104457
Predrag Teovanović , Srdjan Vesic , Bruno Yun
This paper presents a comprehensive examination of human compliance with normative principles of argumentation across two experimental studies. The first study investigated whether fundamental argumentation principles such as anonymity, independence, void precedence, and maximality align with human reasoning. Additionally, it explored whether graph-based representations of arguments facilitate better understanding of, and adherence to, these principles compared to textual representations alone, and it examined the role of individual cognitive differences in compliance with these principles. Our experiments revealed that graph-based representations significantly improved compliance with argumentation principles, particularly among individuals with higher cognitive reflection. The second study replicated and extended the first study’s findings, introducing new principles such as skeptical precedence and simple reinstatement, and explored the effects of presenting arguments solely in graphical form, as well as the impact of a short tutorial on argumentation theory. The study also assessed participants’ ability to perform graphical tasks and how this influenced their compliance with normative principles. Results partially replicated the first study’s findings, confirming that graphical representations enhance compliance, but also revealed that the effect does not generalize to the new principles. We found evidence that, in the absence of a graphical representation, performing graphical tasks, especially drawing the argumentation graph, can improve compliance with the principles. Moreover, a brief tutorial significantly improved performance on several principles, indicating that even minimal instruction can enhance understanding and compliance. However, the difficulties observed with the simple reinstatement principle suggest that participants’ intuition about the notion of defense diverges significantly from that of the researchers, and that more careful thought must be put into crafting such principles. These studies collectively suggest that while argumentation principles can be intuitive to some extent, their comprehension and application are significantly influenced by the instruction given, by graphical representations, and by the processes used to obtain those representations. These findings have important implications for the design of future argumentation-based tools and for our understanding of how to bridge human reasoning and formal argumentation.
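As a hedged illustration of the "simple reinstatement" principle mentioned above (not material from the study), the snippet below computes the grounded extension of a three-argument framework in which a attacks b and c attacks a, so that b is defended and therefore reinstated.

```python
# Hedged sketch: grounded semantics of an abstract argumentation framework,
# illustrating reinstatement. Argument b is attacked by a, but c attacks a,
# so b is defended and ends up accepted alongside c.

def grounded_extension(arguments, attacks):
    """Iterate the characteristic function: an argument is added once all of its
    attackers are themselves attacked by already-accepted arguments."""
    accepted = set()
    while True:
        defended = {
            x for x in arguments
            if all(any((d, y) in attacks for d in accepted)
                   for y in arguments if (y, x) in attacks)
        }
        if defended == accepted:
            return accepted
        accepted = defended

arguments = {"a", "b", "c"}
attacks = {("a", "b"), ("c", "a")}
print(sorted(grounded_extension(arguments, attacks)))   # ['b', 'c']
```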
{"title":"Human compliance with computational argumentation principles","authors":"Predrag Teovanović , Srdjan Vesic , Bruno Yun","doi":"10.1016/j.artint.2025.104457","DOIUrl":"10.1016/j.artint.2025.104457","url":null,"abstract":"<div><div>This paper presents a comprehensive examination of human compliance with normative principles of argumentation across two experimental studies. The first study investigated whether fundamental argumentation principles such as anonymity, independence, void precedence, and maximality align with human reasoning. Additionally, it explored whether graph-based representations of arguments facilitate better understanding and adherence to these principles compared to textual representations of arguments alone and examined the role of individual cognitive differences in compliance with these principles. Our experiments revealed that graph-based representations significantly improved compliance with argumentation principles, particularly among individuals with higher cognitive reflection. The second study replicated and extended the first study’s findings, introducing new principles such as skeptical precedence and simple reinstatement, and explored the effects of presenting arguments solely in graphical form, as well as the impact of a short tutorial on argumentation theory. The study also assessed participants’ ability to perform graphical tasks and how this influenced their compliance with normative principles. Results partially replicated the first study’s findings, confirming that graphical representations enhance compliance, but also revealed that the effect does not generalize to the new principles. We found evidence that in the absence of a graphical representation, performing graphical tasks can improve compliance with principles; especially drawing the argumentation graph. Moreover, a brief tutorial significantly improved performance on several principles, indicating that even minimal instruction can enhance understanding and compliance. However, the difficulties observed with the simple reinstatement principle hint that the participants’ intuition about the notion of defense diverges significantly from that of the researchers and that more careful thoughts must be put in crafting them. These studies collectively suggest that while argumentation principles can be intuitive to some extent, their comprehension and application are significantly influenced by the instruction given as well as by graphical representations and processes used to obtain them. These findings have important implications for the design of future argumentation-based tools and our understanding of how to bridge human reasoning and formal argumentation.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"351 ","pages":"Article 104457"},"PeriodicalIF":4.6,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145560017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}