Evidential time-to-event prediction with calibrated uncertainty quantification
Pub Date : 2025-03-04 DOI: 10.1016/j.ijar.2025.109403
Ling Huang, Yucheng Xing, Swapnil Mishra, Thierry Denœux, Mengling Feng
Time-to-event analysis provides insights into clinical prognosis and treatment recommendations. However, this task is more challenging than standard regression problems due to the presence of censored observations. Additionally, the lack of confidence assessment, model robustness, and prediction calibration raises concerns about the reliability of predictions. To address these challenges, we propose an evidential regression model specifically designed for time-to-event prediction. Our approach computes a degree of belief for the event time occurring within a time interval, without any strict distribution assumption. Meanwhile, the proposed model quantifies both epistemic and aleatory uncertainties using Gaussian Random Fuzzy Numbers and belief functions, providing clinicians with uncertainty-aware survival time predictions. Experimental evaluations using simulated and real-world survival datasets highlight the potential of our approach for enhancing clinical decision-making in survival analysis.
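To make "degree of belief for the event time occurring within a time interval" concrete, here is a minimal Monte Carlo sketch (not the authors' implementation) of belief and plausibility of an interval under a Gaussian Random Fuzzy Number, following the epistemic random fuzzy set view in which Bel(A) = E[N_M(A)] and Pl(A) = E[Π_M(A)] for a Gaussian random mode M with possibility contour exp(-h(x-M)²/2); the parameters mu, sigma, h below are hypothetical.

```python
import numpy as np

def grfn_bel_pl(a, b, mu, sigma, h, n_samples=100_000, rng=None):
    """Monte Carlo estimate of Bel([a,b]) and Pl([a,b]) for a Gaussian
    Random Fuzzy Number: the mode M is N(mu, sigma^2) and, given M,
    the possibility contour is pi(x) = exp(-h (x - M)^2 / 2)."""
    rng = np.random.default_rng(rng)
    m = rng.normal(mu, sigma, n_samples)          # random modes

    # Plausibility of [a,b] given M: sup of pi over the interval.
    inside = (m >= a) & (m <= b)
    dist = np.where(m < a, a - m, np.where(m > b, m - b, 0.0))
    pl_given_m = np.where(inside, 1.0, np.exp(-h * dist**2 / 2))

    # Necessity of [a,b] given M: 1 - sup of pi over the complement,
    # attained at the nearer endpoint when M lies inside [a,b].
    edge = np.minimum(np.abs(m - a), np.abs(b - m))
    nec_given_m = np.where(inside, 1.0 - np.exp(-h * edge**2 / 2), 0.0)

    return nec_given_m.mean(), pl_given_m.mean()

bel, pl = grfn_bel_pl(2.0, 5.0, mu=3.0, sigma=1.0, h=4.0)
print(f"Bel ≈ {bel:.3f}, Pl ≈ {pl:.3f}")
```

The gap between Bel and Pl is what separates this output from a single calibrated probability: it widens with epistemic uncertainty (small h, large sigma) and collapses as the evidence sharpens.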
{"title":"Evidential time-to-event prediction with calibrated uncertainty quantification","authors":"Ling Huang , Yucheng Xing , Swapnil Mishra , Thierry Denœux , Mengling Feng","doi":"10.1016/j.ijar.2025.109403","DOIUrl":"10.1016/j.ijar.2025.109403","url":null,"abstract":"<div><div>Time-to-event analysis provides insights into clinical prognosis and treatment recommendations. However, this task is more challenging than standard regression problems due to the presence of censored observations. Additionally, the lack of confidence assessment, model robustness, and prediction calibration raises concerns about the reliability of predictions. To address these challenges, we propose an evidential regression model specifically designed for time-to-event prediction. Our approach computes a degree of belief for the event time occurring within a time interval, without any strict distribution assumption. Meanwhile, the proposed model quantifies both epistemic and aleatory uncertainties using Gaussian Random Fuzzy Numbers and belief functions, providing clinicians with uncertainty-aware survival time predictions. Experimental evaluations using simulated and real-world survival datasets highlight the potential of our approach for enhancing clinical decision-making in survival analysis.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"181 ","pages":"Article 109403"},"PeriodicalIF":3.2,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143561978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-view outlier detection based on multi-granularity fusion of fuzzy rough granules
Pub Date : 2025-03-02 DOI: 10.1016/j.ijar.2025.109402
Siyi Qiu, Yuefei Wang, Zixu Wang, Jinyan Cao, Xi Yu
In recent years, multi-view data has seen widespread application across various fields, presenting both opportunities and challenges due to its complex distribution across different views. Detecting outliers in such heterogeneous data has become a significant research problem. Existing multi-view outlier detection methods often rely on clustering assumptions, pairwise constraints between views, and a focus on learning consensus information, which overlook the inherent differences across views. To address the aforementioned issues, this paper proposes an outlier detection method based on the fusion of multi-granularity fuzzy rough information (MGFMOD). The method calculates a multi-granularity similarity matrix using fuzzy similarity relationships, combines similarity matrices from different granularities to form an upper approximation matrix, and constructs fused upper approximation granules to detect attribute anomalies. Neighbor domain probabilistic mapping is then employed to unify neighborhood relationships across views, allowing the analysis of both consistency and distribution differences to capture class outliers. Additionally, this paper employs a novel coarse-to-fine approximation method to construct the upper approximation matrix, further improving the accuracy of attribute outlier detection. Experimental results on multiple public datasets demonstrate that the proposed method generally outperforms existing multi-view outlier detection methods in terms of detection accuracy and robustness.
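The fusion step can be illustrated with a toy one-attribute sketch (not the MGFMOD algorithm itself): compute fuzzy similarity matrices at several granularities, fuse them element-wise into an upper-approximation-style matrix, and score points by their lack of similarity to the rest. The linear similarity kernel and the max-fusion rule here are illustrative assumptions.

```python
import numpy as np

def fuzzy_similarity(x, delta):
    """Fuzzy similarity relation on a 1-D attribute: 1 at distance 0,
    decaying linearly to 0 at distance delta (the granularity)."""
    d = np.abs(x[:, None] - x[None, :])
    return np.clip(1.0 - d / delta, 0.0, 1.0)

def fused_upper_approximation(x, deltas):
    """Fuse similarity matrices across granularities; element-wise max
    plays the role of a (loose) upper approximation in this sketch."""
    mats = np.stack([fuzzy_similarity(x, d) for d in deltas])
    return mats.max(axis=0)

x = np.array([0.1, 0.15, 0.12, 0.9])       # last point is an outlier
upper = fused_upper_approximation(x, deltas=[0.05, 0.1, 0.2])
# A point whose average similarity to the others is low is flagged.
scores = 1.0 - (upper.sum(axis=1) - 1.0) / (len(x) - 1)
print(scores.round(3))                       # outlier gets score near 1
```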
{"title":"Multi-view outlier detection based on multi-granularity fusion of fuzzy rough granules","authors":"Siyi Qiu , Yuefei Wang , Zixu Wang , Jinyan Cao , Xi Yu","doi":"10.1016/j.ijar.2025.109402","DOIUrl":"10.1016/j.ijar.2025.109402","url":null,"abstract":"<div><div>In recent years, multi-view data has seen widespread application across various fields, presenting both opportunities and challenges due to its complex distribution across different views. Detecting outliers in such heterogeneous data has become a significant research problem. Existing multi-view outlier detection methods often rely on clustering assumptions, pairwise constraints between views, and a focus on learning consensus information, which overlook the inherent differences across views. To address the aforementioned issues, this paper proposes an outlier detection method based on the fusion of multi-granularity fuzzy rough information (MGFMOD). The method calculates a multi-granularity similarity matrix using fuzzy similarity relationships, combines similarity matrices from different granularities to form an upper approximation matrix, and constructs fused upper approximation granules to detect attribute anomalies. Neighbor domain probabilistic mapping is then employed to unify neighborhood relationships across views, allowing the analysis of both consistency and distribution differences to capture class outliers. Additionally, this paper employs a novel coarse-to-fine approximation method to construct the upper approximation matrix, further improving the accuracy of attribute outlier detection. Experimental results on multiple public datasets demonstrate that the proposed method generally outperforms existing multi-view outlier detection methods in terms of detection accuracy and robustness.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"181 ","pages":"Article 109402"},"PeriodicalIF":3.2,"publicationDate":"2025-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143636801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiindistinguishability operators
Pub Date : 2025-02-28 DOI: 10.1016/j.ijar.2025.109401
D. Boixader, J. Recasens
In this paper, (binary) equivalence relations and their fuzzifications, indistinguishability operators, are generalized to n-equivalence relations and n-multiindistinguishability operators, respectively. Some properties of these two constructions are established, as well as their relations with the binary ones.
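For context on the binary case being generalized: a T-indistinguishability operator E on a finite set is a fuzzy relation that is reflexive, symmetric, and T-transitive, i.e. T(E(x,y), E(y,z)) ≤ E(x,z) for all x, y, z. The sketch below checks these axioms for the minimum t-norm, an illustrative choice.

```python
import numpy as np

def is_min_indistinguishability(E, tol=1e-9):
    """Check reflexivity, symmetry, and min-transitivity:
    min(E[x,y], E[y,z]) <= E[x,z] for all x, y, z."""
    if not np.allclose(np.diag(E), 1.0, atol=tol):
        return False                      # reflexivity: E(x,x) = 1
    if not np.allclose(E, E.T, atol=tol):
        return False                      # symmetry
    # min-transitivity via the sup-min composition: (E o E) <= E
    comp = np.minimum(E[:, :, None], E[None, :, :]).max(axis=1)
    return bool(np.all(comp <= E + tol))

E = np.array([[1.0, 0.8, 0.8],
              [0.8, 1.0, 0.9],
              [0.8, 0.9, 1.0]])
print(is_min_indistinguishability(E))    # True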
{"title":"Multiindistinguishability operators","authors":"D. Boixader, J. Recasens","doi":"10.1016/j.ijar.2025.109401","DOIUrl":"10.1016/j.ijar.2025.109401","url":null,"abstract":"<div><div>In this paper (binary) equivalence relations and their fuzzification, indistinguishability operators, are generalized to <em>n</em>-equivalence relations and <em>n</em>-multiindistinguishability operators respectively. Some of the properties of these two last objects are stated as well as their relation with binary ones.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"181 ","pages":"Article 109401"},"PeriodicalIF":3.2,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143529582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DEEM: A novel approach to semi-supervised and unsupervised image clustering under uncertainty using belief functions and convolutional neural networks
Pub Date : 2025-02-27 DOI: 10.1016/j.ijar.2025.109400
Loïc Guiziou, Emmanuel Ramasso, Sébastien Thibaud, Sébastien Denneulin
DEEM (Deep Evidential Encoding of iMages) is a clustering algorithm that combines belief functions with convolutional neural networks in a Siamese-like framework for unsupervised and semi-supervised image clustering. In DEEM, images are mapped to Dempster–Shafer mass functions to quantify uncertainty in cluster membership. Various forms of prior information, including must-link and cannot-link constraints, supervised dissimilarities, and Distance Metric Learning, are incorporated to guide training and improve generalisation. By processing image pairs through shared network weights, DEEM aligns pairwise dissimilarities with the conflict between mass functions, thereby mitigating errors in noisy or incomplete distance matrices. Experiments on MNIST demonstrate that DEEM generalises effectively to unseen data while managing different types of prior knowledge, making it a promising approach for clustering and semi-supervised learning from image data under uncertainty.
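The pairwise training signal rests on the degree of conflict between two mass functions. For mass functions whose focal sets are the K singleton clusters plus the whole frame Ω (a common evidential-clustering representation, assumed here; this is a sketch, not the authors' code), the conflict is the mass jointly assigned to pairs of distinct singletons and has a simple closed form:

```python
import numpy as np

def conflict(m1, m2):
    """Degree of conflict between two mass functions encoded as
    vectors [m({1}), ..., m({K}), m(Omega)]: the total mass assigned
    to pairs of disjoint focal sets, i.e. distinct singletons.
    Omega intersects everything, so its mass never conflicts."""
    s1, s2 = m1[:-1], m2[:-1]              # singleton masses
    # sum_{j != k} m1({j}) m2({k}) = total product - matching products
    return float(s1.sum() * s2.sum() - s1 @ s2)

m_a = np.array([0.7, 0.1, 0.2])            # confident in cluster 1
m_b = np.array([0.1, 0.7, 0.2])            # confident in cluster 2
m_c = np.array([0.6, 0.2, 0.2])
print(conflict(m_a, m_b))                   # high conflict
print(conflict(m_a, m_c))                   # low conflict
```

Matching this conflict against the given pairwise dissimilarity is what lets the shared-weight network learn cluster memberships without ever seeing labels directly.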
{"title":"DEEM: A novel approach to semi-supervised and unsupervised image clustering under uncertainty using belief functions and convolutional neural networks","authors":"Loïc Guiziou , Emmanuel Ramasso , Sébastien Thibaud , Sébastien Denneulin","doi":"10.1016/j.ijar.2025.109400","DOIUrl":"10.1016/j.ijar.2025.109400","url":null,"abstract":"<div><div>DEEM (Deep Evidential Encoding of iMages) is a clustering algorithm that combines belief functions with convolutional neural networks in a Siamese-like framework for unsupervised and semi-supervised image clustering. In DEEM, images are mapped to Dempster–Shafer mass functions to quantify uncertainty in cluster membership. Various forms of prior information, including must-link and cannot-link constraints, supervised dissimilarities, and Distance Metric Learning, are incorporated to guide training and improve generalisation. By processing image pairs through shared network weights, DEEM aligns pairwise dissimilarities with the conflict between mass functions, thereby mitigating errors in noisy or incomplete distance matrices. Experiments on MNIST demonstrate that DEEM generalises effectively to unseen data while managing different types of prior knowledge, making it a promising approach for clustering and semi-supervised learning from image data under uncertainty.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"181 ","pages":"Article 109400"},"PeriodicalIF":3.2,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143534895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soft computing for the posterior of a matrix t graphical network
Pub Date : 2025-02-24 DOI: 10.1016/j.ijar.2025.109397
Jason Pillay, Andriette Bekker, Johannes Ferreira, Mohammad Arashi
Modeling noisy data in a network context remains an unavoidable obstacle; fortunately, random matrix theory may comprehensively describe network environments. Noisy data necessitates the probabilistic characterization of these networks using matrix variate models. Denoising network data using a Bayesian approach is not common in the surveyed literature. Therefore, this paper adopts the Bayesian viewpoint and introduces a new version of the matrix variate t graphical network. This model's prior beliefs rely on the matrix variate gamma distribution to handle the noise process flexibly; from a statistical learning viewpoint, such a theoretical consideration aids the comprehension of the structures and processes that cause network-based noise in data, and offers a real-world interpretation. A Gibbs algorithm is proposed for computing and approximating the resulting posterior probability distribution of interest, in order to assess the considered model's network centrality measures. Experiments with synthetic and real-world stock price data validate the proposed algorithm's capabilities and show that this model is more flexible than the model proposed by [13].
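The end product of such a Gibbs sampler is a set of posterior draws of the network matrix, from which centrality measures can be summarised with credible intervals. The sketch below assumes the draws are already available and uses eigenvector centrality as an illustrative measure; it is a generic post-processing step, not the paper's sampler.

```python
import numpy as np

def posterior_centrality(samples):
    """Given posterior draws of a symmetric network weight matrix,
    return the posterior mean and 95% credible interval of each
    node's eigenvector centrality (illustrative choice of measure)."""
    cents = []
    for W in samples:
        A = np.abs(np.array(W, dtype=float))
        np.fill_diagonal(A, 0.0)            # ignore self-loops
        vals, vecs = np.linalg.eigh(A)
        v = np.abs(vecs[:, -1])             # leading eigenvector
        cents.append(v / v.sum())
    cents = np.array(cents)
    return cents.mean(0), np.percentile(cents, [2.5, 97.5], axis=0)

rng = np.random.default_rng(0)
# Hypothetical stand-in for posterior draws of a 4-node network:
draws = [np.cov(rng.normal(size=(50, 4)), rowvar=False) for _ in range(200)]
mean, ci = posterior_centrality(draws)
print(mean.round(3))
```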
{"title":"Soft computing for the posterior of a matrix t graphical network","authors":"Jason Pillay , Andriette Bekker , Johannes Ferreira , Mohammad Arashi","doi":"10.1016/j.ijar.2025.109397","DOIUrl":"10.1016/j.ijar.2025.109397","url":null,"abstract":"<div><div>Modeling noisy data in a network context remains an unavoidable obstacle; fortunately, random matrix theory may comprehensively describe network environments. Noisy data necessitates the probabilistic characterization of these networks using matrix variate models. Denoising network data using a Bayesian approach is not common in surveyed literature. Therefore, this paper adopts the Bayesian viewpoint and introduces a new version of the matrix variate t graphical network. This model's prior beliefs rely on the matrix variate gamma distribution to handle the noise process flexibly; from a statistical learning viewpoint, such a theoretical consideration benefits the comprehension of structures and processes that cause network-based noise in data as part of machine learning and offers real-world interpretation. A proposed Gibbs algorithm is provided for computing and approximating the resulting posterior probability distribution of interest to assess the considered model's network centrality measures. Experiments with synthetic and real-world stock price data are performed to validate the proposed algorithm's capabilities and show that this model has wider flexibility than the model proposed by <span><span>[13]</span></span>.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"180 ","pages":"Article 109397"},"PeriodicalIF":3.2,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143508698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fuzzy time series analysis: Expanding the scope with fuzzy numbers
Pub Date : 2025-02-21 DOI: 10.1016/j.ijar.2025.109387
Hugo J. Bello, Manuel Ojeda-Hernández, Domingo López-Rodríguez, Carlos Bejines
This article delves into the process of fuzzifying time series, which entails converting a conventional time series into a time-indexed sequence of fuzzy numbers. The focus lies on the well-established practice of fuzzifying time series when a predefined degree of uncertainty is known, employing fuzzy numbers to quantify volatility or vagueness. To address practical challenges associated with volatility or vagueness quantification, we introduce the concept of informed time series. An algorithm is proposed to derive fuzzy time series, and findings include the examination of structural breaks within the realm of fuzzy time series. Additionally, this article underscores the significance of employing topological tools in the analysis of fuzzy time series, accentuating the role of these tools in extracting insights and unraveling intricate relationships within the data.
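When the degree of uncertainty w_t at each time point is known in advance (the informed setting the paper starts from), a minimal fuzzification assigns each observation a fuzzy number centred at its value; the triangular shape below is an illustrative assumption, not the paper's prescribed choice.

```python
import numpy as np

def fuzzify(series, widths):
    """Turn a crisp series x_t with known uncertainty w_t into a
    time-indexed sequence of triangular fuzzy numbers, each given as
    (left support, core, right support)."""
    x = np.asarray(series, dtype=float)
    w = np.asarray(widths, dtype=float)
    return np.column_stack([x - w, x, x + w])

prices = [101.2, 103.5, 102.8, 107.1]
volatility = [0.5, 0.9, 0.4, 1.6]            # predefined uncertainty
for lo, core, hi in fuzzify(prices, volatility):
    print(f"({lo:.1f}, {core:.1f}, {hi:.1f})")
```

Downstream tools, such as structural-break tests, then operate on the sequence of intervals or membership functions rather than on the crisp values alone.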
{"title":"Fuzzy time series analysis: Expanding the scope with fuzzy numbers","authors":"Hugo J. Bello , Manuel Ojeda-Hernández , Domingo López-Rodríguez , Carlos Bejines","doi":"10.1016/j.ijar.2025.109387","DOIUrl":"10.1016/j.ijar.2025.109387","url":null,"abstract":"<div><div>This article delves into the process of fuzzifying time series, which entails converting a conventional time series into a time-indexed sequence of fuzzy numbers. The focus lies on the well-established practice of fuzzifying time series when a predefined degree of uncertainty is known, employing fuzzy numbers to quantify volatility or vagueness. To address practical challenges associated with volatility or vagueness quantification, we introduce the concept of informed time series. An algorithm is proposed to derive fuzzy time series, and findings include the examination of structural breaks within the realm of fuzzy time series. Additionally, this article underscores the significance of employing topological tools in the analysis of fuzzy time series, accentuating the role of these tools in extracting insights and unraveling intricate relationships within the data.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"180 ","pages":"Article 109387"},"PeriodicalIF":3.2,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143508699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asymptotic efficiency of inferential models and a possibilistic Bernstein–von Mises theorem
Pub Date : 2025-02-20 DOI: 10.1016/j.ijar.2025.109389
Ryan Martin, Jonathan P. Williams
The inferential model (IM) framework offers an alternative to the classical probabilistic (e.g., Bayesian and fiducial) uncertainty quantification in statistical inference. A key distinction is that classical uncertainty quantification takes the form of precise probabilities and offers only limited large-sample validity guarantees, whereas the IM's uncertainty quantification is imprecise in such a way that exact, finite-sample valid inference is possible. But are the IM's imprecision and finite-sample validity compatible with statistical efficiency? That is, can IMs be both finite-sample valid and asymptotically efficient? This paper gives an affirmative answer to this question via a new possibilistic Bernstein–von Mises theorem that parallels a fundamental Bayesian result. Among other things, our result shows that the IM solution is efficient in the sense that, asymptotically, its credal set is the smallest that contains the Gaussian distribution with variance equal to the Cramér–Rao lower bound. Moreover, a corresponding version of this new Bernstein–von Mises theorem is presented for problems that involve the elimination of nuisance parameters, which settles an open question concerning the relative efficiency of profiling-based versus extension-based marginalization strategies.
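In the scalar case, the flavour of the result can be conveyed with the standard probability-to-possibility transform: the limiting possibility contour is the one induced by the Gaussian distribution attaining the Cramér–Rao bound. The display below is our paraphrase of that statement, with θ̂_n the maximum likelihood estimator, I(θ) the Fisher information, and Φ the standard normal CDF; consult the paper for the precise conditions.

```latex
% Possibility contour of N(\hat\theta_n, (n I(\hat\theta_n))^{-1}),
% the optimal probability-to-possibility transform of a Gaussian:
\pi_n(\theta) \;\approx\;
  2\left\{ 1 - \Phi\!\left( \sqrt{n\, I(\hat\theta_n)}\,
  \bigl|\theta - \hat\theta_n\bigr| \right) \right\}.
```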
{"title":"Asymptotic efficiency of inferential models and a possibilistic Bernstein–von Mises theorem","authors":"Ryan Martin, Jonathan P. Williams","doi":"10.1016/j.ijar.2025.109389","DOIUrl":"10.1016/j.ijar.2025.109389","url":null,"abstract":"<div><div>The inferential model (IM) framework offers an alternative to the classical probabilistic (e.g., Bayesian and fiducial) uncertainty quantification in statistical inference. A key distinction is that classical uncertainty quantification takes the form of precise probabilities and offers only limited large-sample validity guarantees, whereas the IM's uncertainty quantification is imprecise in such a way that exact, finite-sample valid inference is possible. But are the IM's imprecision and finite-sample validity compatible with statistical efficiency? That is, can IMs be both finite-sample valid and asymptotically efficient? This paper gives an affirmative answer to this question via a new possibilistic Bernstein–von Mises theorem that parallels a fundamental Bayesian result. Among other things, our result shows that the IM solution is efficient in the sense that, asymptotically, its credal set is the smallest that contains the Gaussian distribution with variance equal to the Cramér–Rao lower bound. Moreover, a corresponding version of this new Bernstein–von Mises theorem is presented for problems that involve the elimination of nuisance parameters, which settles an open question concerning the relative efficiency of profiling-based versus extension-based marginalization strategies.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"180 ","pages":"Article 109389"},"PeriodicalIF":3.2,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maximal hypercliques search based on concept-cognitive learning
Pub Date : 2025-02-20 DOI: 10.1016/j.ijar.2025.109386
Jiawei Wang, Fei Hao, Jie Gao, Li Zou, Zheng Pei
Maximal hyperclique search, focused on finding the largest hypernode subsets in a hypergraph such that every combination of r nodes in these subsets forms a hyperedge, is a fundamental problem in hypergraph mining. However, compared to traditional graphs, the combinatorial explosion of hyperedges significantly increases the complexity of enumeration, especially as the r-value and the number of hypernodes grow, rapidly expanding the search space. Moreover, overlapping hyperedges in dense hypergraphs lead to substantial redundant checks, further exacerbating search inefficiency, making traditional methods inadequate for large-scale hypergraphs. To tackle these challenges, this paper proposes a novel approach MHSC that handles the maximal hyperclique search task in r-uniform hypergraph based on concept-cognitive learning. Concept-cognitive learning refers to the process of understanding and structuring knowledge through the formation of concepts and their interrelationships. Technically, the hypernode-neighbor structure of the hypergraph is first expressed as a formal context, and the required concepts are generated using the concept lattice algorithm. Based on the shared relationships between hypernodes represented by the hyperedges, a series of theorems are proposed to prune hypernodes that cannot form maximal hypercliques within the sets of 1-intent and 2-intent concepts, thereby narrowing the search space and reducing redundant computations. Furthermore, an optimization method termed MHSC+ is introduced. Extensive experiments conducted on both test datasets and real-world datasets demonstrate the effectiveness, efficiency, and applicability of the proposed algorithm.
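To fix the definition: S is a hyperclique of an r-uniform hypergraph if every r-subset of S is a hyperedge, and it is maximal if no strict superset is also a hyperclique. The brute-force sketch below (deliberately without the paper's concept-lattice pruning) makes the combinatorial explosion tangible: it enumerates all vertex subsets.

```python
from itertools import combinations

def is_hyperclique(nodes, hyperedges, r):
    """Every r-subset of `nodes` must be a hyperedge."""
    return all(frozenset(c) in hyperedges for c in combinations(nodes, r))

def maximal_hypercliques(vertices, hyperedges, r):
    """Brute-force enumeration, exponential in |V| -- exactly why
    pruning (e.g. via 1-/2-intent concepts) matters at scale."""
    cliques = [set(s)
               for k in range(r, len(vertices) + 1)
               for s in combinations(sorted(vertices), k)
               if is_hyperclique(s, hyperedges, r)]
    return [c for c in cliques
            if not any(c < d for d in cliques)]   # keep maximal only

E = {frozenset(e) for e in [(1,2,3), (1,2,4), (1,3,4), (2,3,4), (4,5,6)]}
print(maximal_hypercliques({1, 2, 3, 4, 5, 6}, E, r=3))
# [{4, 5, 6}, {1, 2, 3, 4}]
```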
{"title":"Maximal hypercliques search based on concept-cognitive learning","authors":"Jiawei Wang , Fei Hao , Jie Gao , Li Zou , Zheng Pei","doi":"10.1016/j.ijar.2025.109386","DOIUrl":"10.1016/j.ijar.2025.109386","url":null,"abstract":"<div><div>Maximal hyperclique search, focused on finding the largest hypernode subsets in a hypergraph such that every combination of <em>r</em> nodes in these subsets forms a hyperedge, is a fundamental problem in hypergraph mining. However, compared to traditional graphs, the combinatorial explosion of hyperedges significantly increases the complexity of enumeration, especially as the <em>r</em>-value and the number of hypernodes grow, rapidly expanding the search space. Moreover, overlapping hyperedges in dense hypergraphs lead to substantial redundant checks, further exacerbating search inefficiency, making traditional methods inadequate for large-scale hypergraphs. To tackle these challenges, this paper proposes a novel approach MHSC that handles the maximal hyperclique search task in <em>r</em>-uniform hypergraph based on concept-cognitive learning. Concept-cognitive learning refers to the process of understanding and structuring knowledge through the formation of concepts and their interrelationships. Technically, the hypernode-neighbor structure of the hypergraph is first expressed as a formal context, and the required concepts are generated using the concept lattice algorithm. Based on the shared relationships between hypernodes represented by the hyperedges, a series of theorems are proposed to prune hypernodes that cannot form maximal hypercliques within the sets of 1-intent and 2-intent concepts, thereby narrowing the search space and reducing redundant computations. Furthermore, an optimization method termed MHSC+ is introduced. Extensive experiments conducted on both test datasets and real-world datasets demonstrate the effectiveness, efficiency, and applicability of the proposed algorithm.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"180 ","pages":"Article 109386"},"PeriodicalIF":3.2,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143474846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing inference to the best explanation posteriors for the estimation of economic agent-based models
Pub Date : 2025-02-20 DOI: 10.1016/j.ijar.2025.109388
Francesco De Pretis, Aldo Glielmo, Jürgen Landes
Explanatory relationships between data and hypotheses have been suggested to play a role in the formation of posterior probabilities. This suggestion was tested in a toy environment and supported by simulations by David H. Glass. We here put forward a variety of inference to the best explanation approaches for determining posterior probabilities by intertwining Bayesian and inference to the best explanation approaches. We then simulate their performances for the estimation of parameters in the Brock and Hommes agent-based model for asset pricing in finance. We find that performances depend on circumstances and also on the evaluation metric. However, most of the time our suggested approaches outperform the Bayesian approach.
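One schematic way to intertwine the two updates, in the spirit of Glass's work rather than a reproduction of it: compute the Bayesian posterior over a finite hypothesis set, then reallocate a bonus ν to the hypothesis with the greatest explanatory power (likelihood is used as an illustrative proxy for explanatory power here; both the proxy and ν are assumptions of this sketch).

```python
import numpy as np

def ibe_posterior(prior, likelihood, nu=0.1):
    """Bayesian posterior with an IBE bonus: the best-explaining
    hypothesis receives extra mass nu before renormalising."""
    post = prior * likelihood
    post = post / post.sum()                 # standard Bayes update
    post[np.argmax(likelihood)] += nu        # explanatory bonus
    return post / post.sum()

prior = np.array([0.5, 0.3, 0.2])            # P(h_i)
likelihood = np.array([0.2, 0.6, 0.1])       # P(data | h_i)
print(ibe_posterior(prior, likelihood).round(3))
```

Setting nu = 0 recovers the plain Bayesian posterior, which is the natural baseline the simulation study compares against.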
{"title":"Assessing inference to the best explanation posteriors for the estimation of economic agent-based models","authors":"Francesco De Pretis , Aldo Glielmo , Jürgen Landes","doi":"10.1016/j.ijar.2025.109388","DOIUrl":"10.1016/j.ijar.2025.109388","url":null,"abstract":"<div><div>Explanatory relationships between data and hypotheses have been suggested to play a role in the formation of posterior probabilities. This suggestion was tested in a toy environment and supported by simulations by David H. Glass. We here put forward a variety of inference to the best explanation approaches for determining posterior probabilities by intertwining Bayesian and inference to the best explanation approaches. We then simulate their performances for the estimation of parameters in the Brock and Hommes agent-based model for asset pricing in finance. We find that performances depend on circumstances and also on the evaluation metric. However, most of the time our suggested approaches outperform the Bayesian approach.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"180 ","pages":"Article 109388"},"PeriodicalIF":3.2,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A dialectical formalisation of preferred subtheories reasoning under resource bounds
Pub Date : 2025-02-17 DOI: 10.1016/j.ijar.2025.109385
Kees van Berkel, Marcello D'Agostino, Sanjay Modgil
Dialectical Classical Argumentation (Dialectical Cl-Arg) has been shown to satisfy rationality postulates under resource bounds. In particular, the consistency and non-contamination postulates are satisfied despite dropping the assumption of logical omniscience and the consistency and subset minimality checks on arguments' premises that are deployed by standard approaches to Cl-Arg. This paper studies Dialectical Cl-Arg's formalisation of Preferred Subtheories (PS) non-monotonic reasoning under resource bounds. The contribution of this paper is twofold. First, we establish soundness and completeness for Dialectical Cl-Arg's credulous consequence relation under the preferred semantics and credulous PS consequences. This result paves the way for the use of argument game proof theories and dialogues that establish membership of arguments in admissible (and so preferred) extensions, and hence the credulous PS consequences of a belief base. Second, we refine the non-standard characteristic function for Dialectical Cl-Arg, and use this refined function to show soundness for Dialectical Cl-Arg consequences under the grounded semantics and resource-bounded sceptical PS consequence. We provide a counterexample that shows that completeness does not hold. However, we also show that the grounded consequences defined by Dialectical Cl-Arg strictly subsume the grounded consequences defined by standard Cl-Arg formalisations of PS, so that we recover sceptical PS consequences that one would intuitively expect to hold.
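For readers new to argumentation semantics: the grounded extension of an abstract framework (A, att) is the least fixed point of the characteristic function F(S) = {a : every attacker of a is attacked by some member of S}. The sketch below computes it by iterating F from the empty set; this is standard Dung semantics, not the paper's dialectical machinery or its refined characteristic function.

```python
def grounded_extension(arguments, attacks):
    """attacks: set of pairs (x, y) meaning `x attacks y`."""
    def defended(s):
        # F(S): arguments whose every attacker is attacked from S.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s)
                       for (b, t) in attacks if t == a)}
    s = set()
    while True:
        nxt = defended(s)          # iterate F until a fixed point
        if nxt == s:
            return s
        s = nxt

args = {"a", "b", "c"}
att = {("a", "b"), ("b", "c")}     # a attacks b, b attacks c
print(grounded_extension(args, att))   # {'a', 'c'}
```

Argument game proof theories of the kind the paper invokes establish membership in such extensions dialogue by dialogue, rather than computing the whole fixed point at once.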
{"title":"A dialectical formalisation of preferred subtheories reasoning under resource bounds","authors":"Kees van Berkel , Marcello D'Agostino , Sanjay Modgil","doi":"10.1016/j.ijar.2025.109385","DOIUrl":"10.1016/j.ijar.2025.109385","url":null,"abstract":"<div><div><em>Dialectical Classical Argumentation</em> (Dialectical <em>Cl-Arg</em>) has been shown to satisfy rationality postulates under resource bounds. In particular, the consistency and non-contamination postulates are satisfied despite dropping the assumption of logical omniscience and the consistency and subset minimality checks on arguments' premises that are deployed by standard approaches to <em>Cl-Arg</em>. This paper studies Dialectical <em>Cl-Arg</em>'s formalisation of Preferred Subtheories (<em>PS</em>) non-monotonic reasoning under resource bounds. The contribution of this paper is twofold. First, we establish soundness and completeness for Dialectical <em>Cl-Arg</em>'s credulous consequence relation under the <em>preferred</em> semantics and credulous <em>PS</em> consequences. This result paves the way for the use of argument game proof theories and dialogues that establish membership of arguments in admissible (and so preferred) extensions, and hence the credulous <em>PS</em> consequences of a belief base. Second, we refine the non-standard characteristic function for Dialectical <em>Cl-Arg</em>, and use this refined function to show soundness for Dialectical <em>Cl-Arg</em> consequences under the grounded semantics and resource-bounded sceptical <em>PS</em> consequence. We provide a counterexample that shows that completeness does not hold. However, we also show that the grounded consequences defined by Dialectical <em>Cl-Arg</em> strictly subsume the grounded consequences defined by standard <em>Cl-Arg</em> formalisations of <em>PS</em>, so that we recover sceptical <em>PS</em> consequences that one would intuitively expect to hold.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"180 ","pages":"Article 109385"},"PeriodicalIF":3.2,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143429220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}