Based on the intuitive idea that a set of objects or entities can be categorized in very different ways, and that some ways to categorize objects are better than others depending on the purpose of the categorization, this paper introduces a formal framework for parametrically generating a space of possible categorizations of a set of objects, based on the features that individual agents, or groups thereof, regard as relevant (formally encoded in the notion of interrogative agenda). The framework accounts both for two-valued (crisp) and for many-valued (fuzzy) judgments about the relevance of given features, and introduces ways to aggregate individual agendas into group agendas. As an application of this framework, we discuss a machine-learning meta-algorithm for outlier detection and classification which provides local and global explanations of its results.
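To make the idea concrete, here is a minimal sketch of how an interrogative agenda (represented simply as a set of relevant features) can parametrize the formal concepts, and hence the categorization, of a small object-feature context. The objects, features, and agendas are hypothetical, not from the paper:

```python
from itertools import combinations

# Toy object-feature context; names and memberships are illustrative only.
objects = ["o1", "o2", "o3"]
features = {"red": {"o1", "o2"}, "round": {"o2", "o3"}, "heavy": {"o1"}}

def concepts(agenda):
    """Enumerate the formal concepts (extent, intent) of the context
    restricted to the features that the agenda marks as relevant."""
    relevant = [f for f in features if f in agenda]
    found = set()
    for r in range(len(relevant) + 1):
        for intent in combinations(relevant, r):
            # Extent: objects having every feature in the candidate intent.
            extent = set(objects)
            for f in intent:
                extent &= features[f]
            # Close the intent: all relevant features shared by the extent.
            closed = frozenset(f for f in relevant if extent <= features[f])
            found.add((frozenset(extent), closed))
    return found

# Different agendas induce different categorizations of the same objects.
print(len(concepts({"red", "round"})), len(concepts({"red"})))  # → 4 2
```

Shrinking the agenda coarsens the concept lattice: with only "red" deemed relevant, the three objects fall into just two categories.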
Title: "Flexible categorization using formal concept analysis and Dempster-Shafer theory", by Marcel Boersma, Krishna Manoorkar, Alessandra Palmigiano, Mattia Panettiere, Apostolos Tzimoulis, Nachoem Wijnberg. International Journal of Approximate Reasoning, vol. 187, Article 109548. Pub Date: 2025-08-12. DOI: 10.1016/j.ijar.2025.109548
Pub Date: 2025-08-11. DOI: 10.1016/j.ijar.2025.109546
Andrea Cinfrignini, Silvia Lorenzini, Davide Petturiti
We deal with a single-period, two-player newsvendor game where both newsvendors are assumed to be rational and risk-neutral, and to operate under ambiguity. Each newsvendor needs to choose his/her order quantity of the same perishable product, whose global market demand is modeled by a discrete random variable endowed with a reference probability measure. Furthermore, the global market demand is distributed to the newsvendors according to a proportional allocation rule. We model the uncertainty faced by each newsvendor with an individual ϵ-contamination of the reference probability measure, computed with respect to a suitable class of probability measures. The resulting ϵ-contamination model preserves the expected demand under the reference probability and is used to compute the individual lower expected profit as a Choquet expectation. The optimization problem of each player therefore reduces to selecting the order quantity that maximizes his/her lower expected profit given the opponent's choice, which is a maximin problem. In the resulting game, we prove that a Nash equilibrium always exists, though it may not be unique. Finally, we provide a characterization of Nash equilibria in terms of best response functions.
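A minimal sketch of the game's structure under strong simplifications: discrete demand, proportional allocation, and the plain ϵ-contamination lower expectation over all distributions (the paper instead contaminates with respect to a constrained class that preserves expected demand). All parameters (PRICE, COST, EPS, DEMANDS, the order grid) are hypothetical:

```python
import itertools

PRICE, COST, EPS = 10.0, 6.0, 0.2
DEMANDS = [(4, 0.3), (8, 0.5), (12, 0.2)]   # (demand value, reference probability)
GRID = range(0, 13)                          # feasible order quantities

def lower_profit(qi, qj):
    """Lower expected profit of player i under a simple eps-contamination:
    (1 - eps) * reference expectation + eps * worst case."""
    def profit(d):
        # Proportional allocation of global demand d between the players.
        share = d * qi / (qi + qj) if qi + qj > 0 else 0.0
        return PRICE * min(share, qi) - COST * qi
    exp_profit = sum(p * profit(d) for d, p in DEMANDS)
    worst = min(profit(d) for d, _ in DEMANDS)
    return (1 - EPS) * exp_profit + EPS * worst

def best_response(qj):
    """Maximin choice: the order quantity maximizing the lower profit."""
    return max(GRID, key=lambda qi: lower_profit(qi, qj))

# Nash equilibria on the grid = fixed points of the joint best-response map.
equilibria = [(qi, qj) for qi, qj in itertools.product(GRID, GRID)
              if best_response(qj) == qi and best_response(qi) == qj]
print(equilibria)
```

The paper's characterization via best response functions corresponds to the fixed-point search in the last step.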
Title: "A two-player newsvendor game with competition on demand under ambiguity". International Journal of Approximate Reasoning, vol. 187, Article 109546.
To bridge the performance gap between deep learning models and tree ensemble methods in tabular data tasks, we propose GTransformer, a novel deep architecture that innovatively integrates granular computing and self-attention mechanisms. Our approach introduces a scalable granulation function set, from which diverse functions are randomly sampled to construct multi-view feature granules. These granules are aggregated into granule vectors, forming a multi-view functional granulation layer that provides comprehensive representations of tabular features from multiple perspectives. Subsequently, a Transformer encoder driven by granule sequences is employed to model deep interactions among features, with predictions generated via a hierarchical multilayer perceptron (MLP) classification head. Experiments on 12 datasets show that GTransformer achieves an average AUC of 92.9%, which is comparable to the 92.3% performance of LightGBM. Compared with the current mainstream deep model TabNet, the average AUC gain is 2.74%, with a 14.5% improvement on the Sonar dataset. GTransformer demonstrates strong robustness in scenarios with noise and missing data, especially on the Credit and HTRU2 datasets, where the accuracy decline is 24.73% and 17.03% less than that of MLP-Head respectively, further verifying its applicability in complex real-world application scenarios.
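A hedged sketch of the multi-view functional granulation idea only, not the authors' architecture: granulation functions are randomly sampled from a pool, and each view scores every feature against a set of centers to form a granule vector. The function pool and centers are illustrative assumptions:

```python
import random
import math

random.seed(0)

# A small pool of granulation functions; the paper's scalable set is
# richer -- these particular choices are illustrative assumptions.
GRANULATION_FNS = [
    lambda x, c: math.exp(-(x - c) ** 2),           # Gaussian similarity
    lambda x, c: max(0.0, 1.0 - abs(x - c)),        # triangular membership
    lambda x, c: 1.0 if abs(x - c) < 0.5 else 0.0,  # crisp neighborhood
]

def granulate(row, centers, n_views=2):
    """Map one tabular row to multi-view granule vectors: for each view,
    a randomly sampled granulation function scores every feature value
    against every center."""
    views = []
    for _ in range(n_views):
        g = random.choice(GRANULATION_FNS)
        views.append([g(x, c) for x in row for c in centers])
    return views

views = granulate([0.2, 1.5], centers=[0.0, 1.0])
print(len(views), len(views[0]))  # → 2 4  (2 views, 2 features x 2 centers)
```

In the full model, the resulting granule sequences would feed a Transformer encoder and an MLP classification head.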
Title: "GTransformer: Multi-view functional granulation and self-attention for tabular data modeling", by Liang Liao, Yumin Chen, Yingyue Chen, Yiting Lin. International Journal of Approximate Reasoning, vol. 187, Article 109547. Pub Date: 2025-08-08. DOI: 10.1016/j.ijar.2025.109547
Pub Date: 2025-08-07. DOI: 10.1016/j.ijar.2025.109544
Hajime Okawa, Yasuo Kudo, Tetsuya Murai
In this paper, we introduce the concept of relative pre-reducts to derive the relative reducts of a large dataset. The relative reduct is a consistency-based attribute reduction method commonly used to extract concise subsets of condition attributes. However, computing all relative reducts requires substantial time and memory to build a discernibility matrix. In this research, we demonstrate that all relative pre-reducts can be computed from a simplified matrix, referred to as the partial discernibility matrix, and then readily converted into relative reducts. We also propose a data partitioning approach to generating the discernibility matrix, which mitigates the growth in the number of intermediate results for each partition; the outputs of this technique are exactly the relative pre-reducts proposed in this study. Since our enhancements to the computation of relative reducts are independent of other advancements, they can be implemented in conjunction with existing methods. Experimental findings indicate that using relative pre-reducts to compute relative reducts is efficient for large datasets.
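For orientation, here is a minimal sketch of the classical discernibility-matrix route to relative reducts that the paper accelerates (the pre-reduct and partitioning machinery itself is not reproduced); the decision table is hypothetical:

```python
from itertools import combinations

# Tiny decision table: rows are objects, last column is the decision.
table = [
    ("a1", "b1", "yes"),
    ("a1", "b2", "no"),
    ("a2", "b1", "no"),
]
conds = [0, 1]  # indices of the condition attributes

# Discernibility matrix: for each pair of objects with different
# decisions, the set of condition attributes that tell them apart.
matrix = []
for (i, x), (j, y) in combinations(enumerate(table), 2):
    if x[-1] != y[-1]:
        matrix.append({a for a in conds if x[a] != y[a]})

def is_super_reduct(attrs):
    """attrs hits every entry of the discernibility matrix."""
    return all(attrs & entry for entry in matrix)

# Relative reducts: minimal attribute sets hitting every matrix entry.
reducts = [set(s) for r in range(1, len(conds) + 1)
           for s in combinations(conds, r)
           if is_super_reduct(set(s))
           and not any(is_super_reduct(set(s) - {a}) for a in s)]
print(reducts)  # → [{0, 1}]
```

The quadratic pairwise construction of `matrix` is precisely the time/memory bottleneck that motivates the partial discernibility matrix and partitioning.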
Title: "Relative pre-reducts for computing the relative reducts of large data sets". International Journal of Approximate Reasoning, vol. 187, Article 109544.
Pub Date: 2025-08-07. DOI: 10.1016/j.ijar.2025.109543
Shizhe Zhang, Liwen Ma
Classical rough set theory fundamentally requires upper and lower approximations to be definite sets for precise knowledge representation. However, a significant problem arises as many widely used approximation operators inherently produce rough approximations (with non-empty boundaries), contradicting this core theoretical intent and undermining practical applicability. To resolve this core discrepancy, we introduce stable approximation operators and stable sets, and develop an optimization method that transforms unstable operators into stable ones, ensuring definite approximations. This method includes detailing the optimization process with algorithmic implementation, analyzing the topological structure of resulting approximation spaces and connections between optimized operators, and enhancing computational efficiency via matrix-based computation. This work may strengthen rough set theory's foundation by bridging the gap between theory and practice while enhancing its scope for practical applications.
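A minimal sketch of one common pair of covering-based approximation operators, illustrating the non-empty boundary the paper aims to eliminate; the universe, covering, and target set are hypothetical:

```python
# Universe, a covering of it, and a target set (illustrative only).
U = {1, 2, 3, 4}
cover = [{1, 2}, {2, 3}, {3, 4}]
X = {1, 2, 3}

lower, upper = set(), set()
for B in cover:
    if B <= X:       # block entirely inside X
        lower |= B
    if B & X:        # block meeting X
        upper |= B

# The boundary upper \ lower is non-empty: the approximation is rough.
print(sorted(lower), sorted(upper), sorted(upper - lower))  # → [1, 2, 3] [1, 2, 3, 4] [4]
```

An operator in the paper's sense would be stable on X only if this boundary vanished; the proposed optimization transforms operators so that it does.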
Title: "Optimizations of approximation operators in covering rough set theory". International Journal of Approximate Reasoning, vol. 187, Article 109543.
Pub Date: 2025-08-06. DOI: 10.1016/j.ijar.2025.109538
Hongpeng Tian, Zuowei Zhang, Zhunga Liu, Jingwei Zuo, Caixing Yang
Over-sampling methods concentrate on creating balanced samples and have proven successful in classifying imbalanced data. However, current over-sampling methods fail to consider the uncertainty of produced samples, potentially altering the data distribution and impacting the classification process. To address this issue, we propose a distribution assessment-based multiple over-sampling (DAMO) method for classifying imbalanced data. We first introduce a multiple over-sampling method based on distribution assessment to create different forms of synthetic samples. The core is quantifying the inconsistency of data distribution before and after sampling as a constraint to guide multiple over-sampling, thereby minimizing the data shift and characterizing the uncertainty of produced samples. Then, we quantify the local reliability of the classification results and select several imprecise samples with low local reliability that are indistinguishable between classes. Neighbors serve as additional complementary information to calibrate the results of imprecise samples, thereby reducing the likelihood of misclassification. The calibrated results are combined by the discounting Dempster-Shafer fusion rule to make a final decision. DAMO's efficiency has been demonstrated through comparisons with related methods on various real imbalanced datasets.
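A minimal sketch of the discounting Dempster-Shafer fusion step mentioned above, on a two-class frame; the mass values and discount factor are hypothetical, not DAMO's computed reliabilities:

```python
def discount(m, alpha, frame):
    """Shafer discounting: scale masses by alpha and move the
    remaining 1 - alpha onto the whole frame (ignorance)."""
    out = {A: alpha * v for A, v in m.items()}
    out[frame] = out.get(frame, 0.0) + (1 - alpha)
    return out

def combine(m1, m2):
    """Dempster's rule of combination on mass functions over frozensets."""
    raw, conflict = {}, 0.0
    for A, v in m1.items():
        for B, w in m2.items():
            C = A & B
            if C:
                raw[C] = raw.get(C, 0.0) + v * w
            else:
                conflict += v * w     # mass assigned to the empty set
    return {A: v / (1 - conflict) for A, v in raw.items()}

frame = frozenset({"minority", "majority"})
m1 = {frozenset({"minority"}): 0.7, frame: 0.3}   # one neighbor's evidence
m2 = {frozenset({"majority"}): 0.6, frame: 0.4}   # a conflicting neighbor
fused = combine(discount(m1, 0.9, frame), m2)     # discount by reliability 0.9
print(round(fused[frozenset({"minority"})], 3))   # → 0.405
```

Discounting a source before combination is how low local reliability tempers its influence on the final decision.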
Title: "Distribution assessment-based multiple over-sampling with evidence fusion for imbalanced data classification". International Journal of Approximate Reasoning, vol. 187, Article 109538.
Pub Date: 2025-08-05. DOI: 10.1016/j.ijar.2025.109536
Zhihui Zhang, Dun Liu, Rongping Shen
In the era of social media and diverse communication platforms, understanding human emotion across various modalities has become a crucial challenge. While significant progress has been made in feature extraction and interaction techniques, several unresolved issues persist, particularly concerning the balance between these two aspects. A central question is whether all extracted features are of equal importance, or if some may contain redundant or noisy information that undermines effective modality interaction. To address these challenges, we propose a novel Three-Way Decision-Based Self-Adaptive Filtering Model (TWSAFM). Inspired by the three-way decision (TWD) theory, we introduce a self-adaptive filtering module that categorizes extracted modal features into three distinct domains: acceptable, rejectable, and reconsidering. This classification allows for separate processing of features, enabling the model to prioritize essential information while minimizing the impact of redundant and noisy data. Experimental validation on three benchmark datasets demonstrates that TWSAFM outperforms state-of-the-art methods in sentiment analysis tasks. Furthermore, training studies and parameter sensitivity analysis underscore the effectiveness of TWSAFM in efficiently filtering out irrelevant and noisy features, highlighting its robust contribution to enhancing feature interaction.
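A minimal sketch of the TWD-style trisection underlying the filtering module: feature scores are split into acceptable, rejectable, and reconsidering regions by two thresholds. The thresholds and scores here are hypothetical, not the model's learned ones:

```python
# Accept above ALPHA, reject below BETA, reconsider in between.
ALPHA, BETA = 0.7, 0.3

def three_way(scores):
    """Trisect named feature scores into the three TWD regions."""
    regions = {"accept": [], "reject": [], "reconsider": []}
    for name, s in scores.items():
        if s >= ALPHA:
            regions["accept"].append(name)
        elif s <= BETA:
            regions["reject"].append(name)
        else:
            regions["reconsider"].append(name)
    return regions

print(three_way({"f1": 0.9, "f2": 0.5, "f3": 0.1}))
# → {'accept': ['f1'], 'reject': ['f3'], 'reconsider': ['f2']}
```

In TWSAFM the three regions are then processed separately, so that noisy or redundant features cannot dominate the modality interaction.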
Title: "A novel three-way based self-adaptive filtering model for sentiment analysis". International Journal of Approximate Reasoning, vol. 187, Article 109536.
Pub Date: 2025-08-05. DOI: 10.1016/j.ijar.2025.109534
Sofiane Daimellah, Sylvie Le Hégarat-Mascle, Clotilde Boust
Identifying pigments in Cultural Heritage artifacts is key to uncovering their origin and guiding conservation strategies. Although recent advances in non-invasive imaging have enabled the collection of rich multimodal data, existing methods often fall short in dealing with uncertain, ambiguous, or noisy information. This paper introduces a versatile fusion framework grounded in Belief Function Theory, combining domain-informed evidence modeling with neural optimization. Specifically, we propose a general strategy for assigning mass functions by leveraging expert knowledge encoded in parametric Evidence Mapping Functions, which are further refined through task-specific training using constrained neural networks. When applied to pigment classification, our method demonstrates robustness against source variability and class ambiguity. Experiments conducted on both synthetic and mock-up datasets validate its effectiveness and suggest promising potential for broader applications.
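A toy sketch of what a parametric Evidence Mapping Function might look like for a single hypothesis: a reading close to a reference value supports the hypothesis, a distant one refutes it, and a fixed share of mass is reserved for ignorance. The functional form and all parameters are assumptions standing in for the paper's trained EMFs:

```python
import math

def emf(x, center, width):
    """Hypothetical parametric Evidence Mapping Function: map a sensor
    reading x to a mass function over {A, not-A, frame}. `center` and
    `width` play the role of the parameters a constrained network
    would refine; 0.9 caps the committed mass, leaving ignorance."""
    support = math.exp(-((x - center) / width) ** 2)
    m = {"A": 0.9 * support,             # evidence for the pigment
         "notA": 0.9 * (1 - support)}    # evidence against it
    m["frame"] = 1.0 - m["A"] - m["notA"]  # residual ignorance
    return m

m = emf(1.0, center=1.0, width=0.5)      # reading exactly at the center
print(round(m["A"], 2), round(m["frame"], 2))  # → 0.9 0.1
```

Per-modality mass functions of this kind would then be fused across imaging sources before classification.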
Title: "Domain-informed and neural-optimized belief assignments: A framework applied to cultural heritage". International Journal of Approximate Reasoning, vol. 187, Article 109534.
Pub Date: 2025-07-30. DOI: 10.1016/j.ijar.2025.109531
Sourabh Balgi, Marc Braun, Jose M. Peña, Adel Daoud
We propose a novel method for sensitivity analysis to unobserved confounding in causal inference. The method builds on a copula-based causal graphical normalizing flow that we term ρ-GNF, where ρ ∈ [−1, +1] is the sensitivity parameter. The parameter represents the non-causal association between exposure and outcome due to unobserved confounding, which is modeled as a Gaussian copula. In other words, the ρ-GNF enables scholars to estimate the average causal effect (ACE) as a function of ρ, accounting for various confounding strengths. The output of the ρ-GNF is what we term the ρ_curve, which provides the bounds for the ACE given an interval of assumed ρ values. The ρ_curve also enables scholars to identify the confounding strength required to nullify the ACE. We also propose a Bayesian version of our sensitivity analysis method. Assuming a prior over the sensitivity parameter ρ enables us to derive the posterior distribution over the ACE, which in turn yields credible intervals. Finally, through experiments on simulated and real-world data, we show the benefits of our sensitivity analysis method.
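A closed-form toy version of the ρ_curve idea under strong assumptions (binary exposure obtained by thresholding a standard Gaussian at 0, additive outcome model, Gaussian correlation ρ between the two latent noises), not the paper's normalizing-flow estimator:

```python
import math

# Under the toy model, the observed mean difference overstates the ACE by
# rho * (E[U | U > 0] - E[U | U < 0]) = rho * 2 * sqrt(2 / pi).
BIAS_PER_RHO = 2 * math.sqrt(2 / math.pi)

def ace_given_rho(naive_diff, rho):
    """Deconfounded ACE implied by the naive mean difference and an
    assumed confounding strength rho."""
    return naive_diff - BIAS_PER_RHO * rho

def rho_curve(naive_diff, rho_lo, rho_hi):
    """Bounds on the ACE over an assumed interval of rho values."""
    ends = (ace_given_rho(naive_diff, rho_lo), ace_given_rho(naive_diff, rho_hi))
    return min(ends), max(ends)

# At rho = 0 the naive estimate is taken at face value; widening the
# assumed rho interval widens the ACE bounds symmetrically.
print(ace_given_rho(1.0, 0.0), rho_curve(1.0, -0.25, 0.25))
```

Setting `ace_given_rho(naive_diff, rho) = 0` and solving for ρ likewise mimics finding the confounding strength that nullifies the ACE.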
Title: "Sensitivity analysis to unobserved confounding with copula-based normalizing flows". International Journal of Approximate Reasoning, vol. 187, Article 109531.
Triadic Concept Analysis (TCA) is an extension of Formal Concept Analysis (FCA) for handling data represented as a set of objects described by attributes and conditions via a ternary relation. However, the intuition for moving from FCA to TCA is not always straightforward. In this paper we discuss how some FCA notions carry over from the dyadic to the triadic setting. Although some ideas admit a straightforward adaptation, most do not. In particular, we address the representation problem and the notions of redundant attributes and subcontexts in the triadic setting.
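A minimal sketch of a triadic derivation operator on a toy ternary context; the triples are hypothetical:

```python
# A tiny triadic context: (object, attribute, condition) triples.
Y = {("o1", "red", "daylight"), ("o1", "red", "uv"),
     ("o2", "red", "daylight"), ("o2", "round", "daylight")}

def derive_objects(attrs, conds):
    """Triadic derivation: the objects related to every
    (attribute, condition) pair in attrs x conds."""
    return {g for g in {t[0] for t in Y}
            if all((g, m, b) in Y for m in attrs for b in conds)}

# Adding a condition can shrink the derived set of objects -- the kind
# of dyadic-to-triadic shift that requires care.
print(sorted(derive_objects({"red"}, {"daylight"})),
      sorted(derive_objects({"red"}, {"daylight", "uv"})))
# → ['o1', 'o2'] ['o1']
```

In the dyadic case a single binary relation suffices; here every derivation must fix two of the three dimensions, which is one source of the non-obvious adaptations the paper discusses.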
Title: "Triadic data: Representation and reduction", by Léa Aubin Kouankam Djouohou, Blaise Blériot Koguep Njionou, Leonard Kwuida. International Journal of Approximate Reasoning, vol. 187, Article 109532. Pub Date: 2025-07-29. DOI: 10.1016/j.ijar.2025.109532