
Latest publications in the International Journal of Approximate Reasoning

GTransformer: Multi-view functional granulation and self-attention for tabular data modeling
IF 3.0 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-08 · DOI: 10.1016/j.ijar.2025.109547
Liang Liao , Yumin Chen , Yingyue Chen , Yiting Lin
To bridge the performance gap between deep learning models and tree ensemble methods on tabular data tasks, we propose GTransformer, a novel deep architecture that integrates granular computing with self-attention mechanisms. Our approach introduces a scalable granulation function set, from which diverse functions are randomly sampled to construct multi-view feature granules. These granules are aggregated into granule vectors, forming a multi-view functional granulation layer that represents tabular features comprehensively from multiple perspectives. Subsequently, a Transformer encoder driven by the granule sequences models deep interactions among features, and predictions are generated via a hierarchical multilayer perceptron (MLP) classification head. Experiments on 12 datasets show that GTransformer achieves an average AUC of 92.9%, comparable to LightGBM's 92.3%. Compared with TabNet, a mainstream deep tabular model, the average AUC gain is 2.74%, with a 14.5% improvement on the Sonar dataset. GTransformer also demonstrates strong robustness under noise and missing data, especially on the Credit and HTRU2 datasets, where its accuracy decline is 24.73% and 17.03% smaller, respectively, than that of the MLP-Head baseline, further confirming its applicability in complex real-world scenarios.
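The multi-view granulation step described in the abstract can be sketched minimally as follows. The concrete granulation functions and the sampling scheme here are illustrative assumptions (the abstract does not specify the paper's function set); the point is only the mechanism of randomly sampling functions to build per-feature granule vectors.

```python
import math
import random

# Hypothetical granulation function set (illustrative; the paper's concrete
# set is not given in the abstract): each maps one feature value to a granule.
GRANULATION_FUNCTIONS = [
    math.tanh,                                        # bounded squashing view
    lambda v: math.copysign(math.log1p(abs(v)), v),   # signed log-magnitude view
    lambda v: 1.0 if v > 0 else 0.0,                  # threshold view
]

def multi_view_granulation(row, n_views, rng):
    """Randomly sample granulation functions to turn one tabular row into
    multi-view granule vectors: result[i][j] is view j of feature i, giving a
    sequence of per-feature granule vectors a Transformer encoder can attend over."""
    funcs = [GRANULATION_FUNCTIONS[rng.randrange(len(GRANULATION_FUNCTIONS))]
             for _ in range(n_views)]
    return [[f(v) for f in funcs] for v in row]

rng = random.Random(0)
granules = multi_view_granulation([0.5, -2.0, 3.1], n_views=4, rng=rng)
print(len(granules), len(granules[0]))  # 3 features, 4 views each
```

In the paper's pipeline these granule vectors would then be fed as a token sequence to the Transformer encoder and MLP head.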
Citations: 0
Relative pre-reducts for computing the relative reducts of large data sets
IF 3.0 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-07 · DOI: 10.1016/j.ijar.2025.109544
Hajime Okawa , Yasuo Kudo , Tetsuya Murai
In this paper, we introduce the concept of relative pre-reducts to derive the relative reducts of a large dataset. The relative reduct is a consistency-based attribute reduction commonly used to extract concise subsets of condition attributes. Nonetheless, calculating all relative reducts requires substantial time and memory to build a discernibility matrix. In this research, we demonstrate that all relative pre-reducts can be computed from a simplified matrix, referred to as the partial discernibility matrix, and that the pre-reducts can be readily converted into relative reducts. We also suggest a data partitioning approach for generating the discernibility matrix; this alleviates the growth in the number of results per partition, and its outcomes are the relative pre-reducts proposed in this study. Since our enhancements to the computation of relative reducts are independent of other advancements, they can be combined with existing methods. Experimental findings indicate that using relative pre-reducts to compute relative reducts is efficient for large datasets.
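For readers unfamiliar with the discernibility-matrix machinery the abstract builds on, a minimal sketch of the standard construction is shown below (the paper's partial discernibility matrix and pre-reduct refinements are not reproduced; the decision table is a made-up toy).

```python
from itertools import combinations

def discernibility_matrix(objects, decisions):
    """Standard discernibility matrix of a decision table: for each pair of
    objects with different decision values, record the set of condition
    attributes on which they differ."""
    n_attrs = len(objects[0])
    matrix = []
    for (i, x), (j, y) in combinations(enumerate(objects), 2):
        if decisions[i] != decisions[j]:
            matrix.append({a for a in range(n_attrs) if x[a] != y[a]})
    return matrix

def preserves_discernibility(attrs, matrix):
    """An attribute set preserves discernibility (i.e. contains a relative
    reduct) iff it intersects every non-empty matrix entry."""
    return all(entry & attrs for entry in matrix if entry)

objects = [(1, 0, 1), (1, 1, 0), (0, 1, 1)]
decisions = [0, 1, 1]
M = discernibility_matrix(objects, decisions)
print(preserves_discernibility({1, 2}, M))  # True: attributes 1, 2 discern all pairs
```

Building `M` is exactly the pairwise step whose time and memory cost motivates the paper's partitioning and pre-reduct approach.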
Citations: 0
Optimizations of approximation operators in covering rough set theory
IF 3.0 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-07 · DOI: 10.1016/j.ijar.2025.109543
Shizhe Zhang , Liwen Ma
Classical rough set theory fundamentally requires upper and lower approximations to be definite sets for precise knowledge representation. However, a significant problem arises as many widely used approximation operators inherently produce rough approximations (with non-empty boundaries), contradicting this core theoretical intent and undermining practical applicability. To resolve this core discrepancy, we introduce stable approximation operators and stable sets, and develop an optimization method that transforms unstable operators into stable ones, ensuring definite approximations. This method includes detailing the optimization process with algorithmic implementation, analyzing the topological structure of resulting approximation spaces and connections between optimized operators, and enhancing computational efficiency via matrix-based computation. This work may strengthen rough set theory's foundation by bridging the gap between theory and practice while enhancing its scope for practical applications.
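The "rough approximations with non-empty boundaries" that motivate the paper can be illustrated with one common pair of covering-based operators (the paper studies several operators and their stabilized variants; the covering below is a toy example).

```python
def lower_upper(cover, X):
    """A common pair of covering-based approximation operators:
    lower approximation = union of covering blocks contained in X,
    upper approximation = union of covering blocks that meet X."""
    lower, upper = set(), set()
    for B in cover:
        if B <= X:
            lower |= B
        if B & X:
            upper |= B
    return lower, upper

cover = [{1, 2}, {2, 3}, {4}]   # a covering of the universe {1, 2, 3, 4}
X = {1, 2}
lo, up = lower_upper(cover, X)
print(sorted(lo), sorted(up))   # [1, 2] [1, 2, 3]
```

Here the boundary `up - lo = {3}` is non-empty, so `X` is not approximated by a definite set under this operator; the paper's stable operators are designed to eliminate exactly this discrepancy.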
Citations: 0
Distribution assessment-based multiple over-sampling with evidence fusion for imbalanced data classification
IF 3.0 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-06 · DOI: 10.1016/j.ijar.2025.109538
Hongpeng Tian , Zuowei Zhang , Zhunga Liu , Jingwei Zuo , Caixing Yang
Over-sampling methods concentrate on creating balanced samples and have proven successful in classifying imbalanced data. However, current over-sampling methods fail to consider the uncertainty of produced samples, potentially altering the data distribution and impacting the classification process. To address this issue, we propose a distribution assessment-based multiple over-sampling (DAMO) method for classifying imbalanced data. We first introduce a multiple over-sampling method based on distribution assessment to create different forms of synthetic samples. The core is quantifying the inconsistency of data distribution before and after sampling as a constraint to guide multiple over-sampling, thereby minimizing the data shift and characterizing the uncertainty of produced samples. Then, we quantify the local reliability of the classification results and select several imprecise samples with low local reliability that are indistinguishable between classes. Neighbors serve as additional complementary information to calibrate the results of imprecise samples, thereby reducing the likelihood of misclassification. The calibrated results are combined by the discounting Dempster-Shafer fusion rule to make a final decision. DAMO's efficiency has been demonstrated through comparisons with related methods on various real imbalanced datasets.
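The final fusion step, "combined by the discounting Dempster-Shafer fusion rule", rests on two standard belief-function operations that can be sketched on a two-class frame. The mass values and reliabilities below are made up; the paper's reliability quantification is more involved.

```python
from itertools import product

def discount(m, alpha):
    """Shafer discounting: keep mass fraction alpha (source reliability) and
    transfer the remainder to total ignorance 'theta' (the whole frame)."""
    d = {A: alpha * v for A, v in m.items()}
    d["theta"] = d.get("theta", 0.0) + (1.0 - alpha)
    return d

def dempster(m1, m2):
    """Dempster's rule of combination on the two-class frame {a, b},
    with 'theta' standing for the whole frame {a, b}."""
    def intersect(A, B):
        sa = {"a", "b"} if A == "theta" else {A}
        sb = {"a", "b"} if B == "theta" else {B}
        s = sa & sb
        if not s:
            return None                       # conflicting focal elements
        return "theta" if s == {"a", "b"} else next(iter(s))
    combined, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        C = intersect(A, B)
        if C is None:
            conflict += v1 * v2
        else:
            combined[C] = combined.get(C, 0.0) + v1 * v2
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

# Two sources' outputs discounted by their (assumed) local reliabilities:
m1 = discount({"a": 0.7, "b": 0.3}, alpha=0.9)
m2 = discount({"a": 0.6, "b": 0.4}, alpha=0.8)
fused = dempster(m1, m2)
print(max(fused, key=fused.get))  # "a" carries the most fused mass
```

Lower reliability pushes more mass onto `theta`, so an unreliable source pulls the fused decision toward ignorance rather than toward its own (possibly wrong) class.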
Citations: 0
A novel three-way based self-adaptive filtering model for sentiment analysis
IF 3.0 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-05 · DOI: 10.1016/j.ijar.2025.109536
Zhihui Zhang, Dun Liu, Rongping Shen
In the era of social media and diverse communication platforms, understanding human emotion across various modalities has become a crucial challenge. While significant progress has been made in feature extraction and interaction techniques, several unresolved issues persist, particularly concerning the balance between these two aspects. A central question is whether all extracted features are of equal importance, or if some may contain redundant or noisy information that undermines effective modality interaction. To address these challenges, we propose a novel Three-Way Decision-Based Self-Adaptive Filtering Model (TWSAFM). Inspired by the three-way decision (TWD) theory, we introduce a self-adaptive filtering module that categorizes extracted modal features into three distinct domains: acceptable, rejectable, and reconsidering. This classification allows for separate processing of features, enabling the model to prioritize essential information while minimizing the impact of redundant and noisy data. Experimental validation on three benchmark datasets demonstrates that TWSAFM outperforms state-of-the-art methods in sentiment analysis tasks. Furthermore, training studies and parameter sensitivity analysis underscore the effectiveness of TWSAFM in efficiently filtering out irrelevant and noisy features, highlighting its robust contribution to enhancing feature interaction.
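The core three-way split into acceptable, rejectable, and reconsidering features can be sketched with fixed thresholds. Note the thresholds and scores below are illustrative assumptions; in TWSAFM the partitioning is learned self-adaptively rather than hard-coded.

```python
def three_way_partition(scores, alpha=0.7, beta=0.3):
    """Three-way decision split of feature relevance scores:
    acceptable (score >= alpha), rejectable (score <= beta),
    reconsidering (strictly between beta and alpha).
    alpha/beta are illustrative; the paper adapts them during training."""
    regions = {"accept": [], "reject": [], "reconsider": []}
    for name, s in scores.items():
        if s >= alpha:
            regions["accept"].append(name)
        elif s <= beta:
            regions["reject"].append(name)
        else:
            regions["reconsider"].append(name)
    return regions

scores = {"f1": 0.9, "f2": 0.5, "f3": 0.1}
regions = three_way_partition(scores)
print(regions)  # {'accept': ['f1'], 'reject': ['f3'], 'reconsider': ['f2']}
```

The three regions can then be processed separately, e.g. passing accepted features through unchanged, dropping rejected ones, and re-weighting the reconsidered ones.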
Citations: 0
Domain-informed and neural-optimized belief assignments: A framework applied to cultural heritage
IF 3.0 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-05 · DOI: 10.1016/j.ijar.2025.109534
Sofiane Daimellah , Sylvie Le Hégarat-Mascle , Clotilde Boust
Identifying pigments in Cultural Heritage artifacts is key to uncovering their origin and guiding conservation strategies. Although recent advances in non-invasive imaging have enabled the collection of rich multimodal data, existing methods often fall short in dealing with uncertain, ambiguous, or noisy information. This paper introduces a versatile fusion framework grounded in Belief Function Theory, combining domain-informed evidence modeling with neural optimization. Specifically, we propose a general strategy for assigning mass functions by leveraging expert knowledge encoded in parametric Evidence Mapping Functions, which are further refined through task-specific training using constrained neural networks. When applied to pigment classification, our method demonstrates robustness against source variability and class ambiguity. Experiments conducted on both synthetic and mock-up datasets validate its effectiveness and suggest promising potential for broader applications.
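A parametric Evidence Mapping Function of the kind the abstract describes might look like the following hypothetical sketch: a measurement is mapped to a mass function over a hypothesis, its complement, and an explicit ignorance share. The functional form and the parameters (mu, sigma, gamma) are assumptions for illustration; in the paper such parameters are refined by constrained neural networks.

```python
import math

def evidence_mapping(x, mu, sigma, gamma):
    """Hypothetical parametric Evidence Mapping Function: Gaussian-shaped
    support for hypothesis 'h', the complement mass on 'not_h', and a fixed
    ignorance share gamma on the whole frame 'theta'."""
    support = math.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return {
        "h": (1.0 - gamma) * support,
        "not_h": (1.0 - gamma) * (1.0 - support),
        "theta": gamma,
    }

# A measurement exactly at the mode gives full (discounted) support to 'h':
m = evidence_mapping(x=1.0, mu=1.0, sigma=0.5, gamma=0.2)
print(m)
```

Keeping `gamma` as explicit ignorance is what lets such a framework degrade gracefully on ambiguous or noisy measurements instead of forcing a confident class assignment.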
Citations: 0
Sensitivity analysis to unobserved confounding with copula-based normalizing flows
IF 3.0 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-30 · DOI: 10.1016/j.ijar.2025.109531
Sourabh Balgi , Marc Braun , Jose M. Peña , Adel Daoud
We propose a novel method for sensitivity analysis to unobserved confounding in causal inference. The method builds on a copula-based causal graphical normalizing flow that we term ρ-GNF, where ρ ∈ [−1, +1] is the sensitivity parameter. The parameter represents the non-causal association between exposure and outcome due to unobserved confounding, which is modeled as a Gaussian copula. In other words, the ρ-GNF enables scholars to estimate the average causal effect (ACE) as a function of ρ, accounting for various confounding strengths. The output of the ρ-GNF is what we term the ρ-curve, which provides bounds for the ACE given an interval of assumed ρ values. The ρ-curve also enables scholars to identify the confounding strength required to nullify the ACE. We also propose a Bayesian version of our sensitivity analysis method: assuming a prior over the sensitivity parameter ρ enables us to derive the posterior distribution over the ACE, and hence credible intervals. Finally, through experiments on simulated and real-world data, we show the benefits of our sensitivity analysis method.
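The idea of a ρ-curve can be conveyed with a toy analytic model rather than the paper's normalizing flow. Assume (this whole setup is an illustration, not the ρ-GNF) T = 1[e_t > 0] and Y = ACE·T + e_y, with (e_t, e_y) standard bivariate normal with correlation ρ. Then E[e_t | e_t > 0] = √(2/π), so the naive difference in means is E[Y|T=1] − E[Y|T=0] = ACE + 2ρ√(2/π), and each assumed ρ maps the observed difference to an implied ACE:

```python
import math

def rho_curve(observed_diff, rhos):
    """Toy analytic rho-curve under the bivariate-normal model above:
    observed_diff = ACE + 2*rho*sqrt(2/pi), so the implied ACE for an
    assumed rho is observed_diff - 2*rho*sqrt(2/pi)."""
    bias_per_rho = 2.0 * math.sqrt(2.0 / math.pi)
    return {rho: observed_diff - rho * bias_per_rho for rho in rhos}

curve = rho_curve(observed_diff=1.0, rhos=[-0.5, 0.0, 0.5])
print(curve[0.0])  # rho = 0 (no confounding): the naive difference 1.0 is the ACE

# Confounding strength that would drive the implied ACE to zero:
nullifying_rho = 1.0 / (2.0 * math.sqrt(2.0 / math.pi))
print(round(nullifying_rho, 3))
```

Scanning ρ over an assumed interval and reading off the implied ACE values is exactly the bounding use of the ρ-curve described in the abstract.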
Citations: 0
Triadic data: Representation and reduction
IF 3.0 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-29 · DOI: 10.1016/j.ijar.2025.109532
Léa Aubin Kouankam Djouohou , Blaise Blériot Koguep Njionou , Leonard Kwuida
Triadic Concept Analysis (TCA) is an extension of Formal Concept Analysis (FCA) for handling data represented as a set of objects described by attributes and conditions via a ternary relation. However, the intuition for moving from FCA to TCA is not always straightforward. In this paper we discuss how several FCA notions extend from the dyadic to the triadic setting. Although some admit a straightforward adaptation, most do not. In particular, we address the representation problem and the notions of redundant attributes and subcontexts in the triadic setting.
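To make the dyadic-to-triadic jump concrete, here is a minimal sketch of one triadic derivation operator (the context and element names are made up; TCA defines several such operators, of which this shows only one):

```python
def triadic_prime(objs, conds, attrs, Y):
    """One triadic derivation: the attributes m shared by every chosen object
    g under every chosen condition b, i.e. (g, m, b) in Y for all pairs.
    This is how the dyadic prime operator of FCA lifts to a ternary relation."""
    return {m for m in attrs
            if all((g, m, b) in Y for g in objs for b in conds)}

attrs = {"m1", "m2"}
# A toy ternary incidence relation (object, attribute, condition):
Y = {("g1", "m1", "b1"), ("g1", "m1", "b2"),
     ("g2", "m1", "b1"), ("g2", "m1", "b2"),
     ("g1", "m2", "b1")}
print(triadic_prime({"g1", "g2"}, {"b1", "b2"}, attrs, Y))  # {'m1'}
```

Fixing a single condition recovers a dyadic context, which is why the dyadic case looks like a special slice of the triadic one, yet notions such as redundancy do not carry over mechanically.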
Citations: 0
Optimizing connectivity in fuzzy graphs for resilient disaster response networks
IF 3.0 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-29 · DOI: 10.1016/j.ijar.2025.109535
P Sujithra , Sunil Mathew , J.N. Mordeson
Despite significant technological advances in recent years, communication challenges still persist. These issues are especially evident during crises, where system failures, network overloads, and incompatibilities among the communication technologies used by different organizations create major obstacles. Catastrophe scenarios are marked by high information uncertainty and limited control, which raises challenges for crisis communication. However, these aspects remain underexplored from a network-theoretic perspective. This study investigates the (x,y)-connectivity parameter between two nodes in a fuzzy graph, offering insights into network structure, robustness, and performance. We introduce a novel classification of nodes and edges into three categories: enhancing, eroded, and persisting, based on their impact on node-to-node connectivity. The behavior of these classifications is analyzed across different classes of fuzzy graphs. Furthermore, we establish upper and lower bounds for the (x,y)-connectivity under two graph operations. An efficient algorithm is proposed to identify and categorize nodes and edges accordingly. The practical relevance of our classification is illustrated through its application to disaster response communication networks, where maintaining resilient and adaptive communication is critical.
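The connectivity notion underlying this line of work can be sketched via the classical strength of connectedness in a fuzzy graph: the maximum over x-y paths of the minimum edge membership along the path. This is a standard construction computed here with a max-min variant of Floyd-Warshall; the paper's (x,y)-connectivity parameter and its node/edge classification build on richer machinery, and the graph below is a toy example.

```python
def connectivity(edges, x, y, nodes):
    """Strength of connectedness CONN(x, y) in an undirected fuzzy graph:
    max over x-y paths of the min edge membership on the path,
    computed by max-min dynamic programming (Floyd-Warshall style)."""
    conn = {(u, v): 0.0 for u in nodes for v in nodes}
    for (u, v), mu in edges.items():
        conn[(u, v)] = conn[(v, u)] = mu
    for k in nodes:
        for u in nodes:
            for v in nodes:
                conn[(u, v)] = max(conn[(u, v)],
                                   min(conn[(u, k)], conn[(k, v)]))
    return conn[(x, y)]

nodes = ["a", "b", "c"]
edges = {("a", "b"): 0.5, ("b", "c"): 0.8, ("a", "c"): 0.3}
print(connectivity(edges, "a", "c", nodes))  # 0.5: path a-b-c beats the direct edge 0.3
```

Under such a measure, an edge whose removal lowers some CONN(x, y) is the kind of object the paper's "eroded / persisting / enhancing" classification distinguishes.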
Citations: 0
Generalized conjunction and disjunction of two conditional events in the setting of conditional random quantities
IF 3 · Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-28 · DOI: 10.1016/j.ijar.2025.109533
Lydia Castronovo , Giuseppe Sanfilippo
In recent papers, notions of conjunction and disjunction of two conditional events as suitable conditional random quantities, which satisfy basic probabilistic properties, have been deepened in the setting of coherence. In this framework, the conjunction and the disjunction of two conditional events are defined as five-valued objects, among which are the values of the (subjectively) assigned probabilities of the two conditional events. In the present paper we propose a generalization of these structures, where these new objects, instead of depending on the probabilities of the two conditional events, depend on two arbitrary values a,b in the unit interval. We show that they are connected by a generalized version of the De Morgan's law and, by means of a geometrical approach, we compute the lower and upper bounds on these new objects both in the precise and the imprecise case. Moreover, some particular cases, obtained for specific values of a and b or in case of some logical relations, are analyzed. The results of this paper lead to the conclusion that the only objects satisfying all the logical and the probabilistic properties already valid for the operations between events are the ones depending on the probabilities of the two conditional events.
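The five-valued object referred to here can be sketched concretely. The following follows the Gilio-Sanfilippo style conjunction with x = P(A|H) and y = P(B|K); the paper's generalization replaces x, y by arbitrary a, b in [0,1]. Variable names and the numeric driver are illustrative assumptions:

```python
# Sketch of the five-valued conjunction (A|H) and (B|K) in the coherence
# setting: it equals 1 on AHBK, 0 on (not-A)H or (not-B)K, x = P(A|H)
# when H is false but BK is true, y = P(B|K) when AH is true but K is
# false, and the assessed prevision z when both H and K are false.
def conjunction(A, H, B, K, x, y, z):
    if H and K:
        return 1.0 if (A and B) else 0.0
    if H:                      # K false: (B|K) is void
        return y if A else 0.0
    if K:                      # H false: (A|H) is void
        return x if B else 0.0
    return z                   # both conditioning events false

# Coherence constrains the prevision z to the Frechet-Hoeffding interval
# max(x + y - 1, 0) <= z <= min(x, y):
x, y = 0.6, 0.7
lower, upper = max(x + y - 1.0, 0.0), min(x, y)
print(round(lower, 2), upper)  # 0.3 0.6
```

The geometrical bounds the abstract derives for the generalized objects play the analogous role for arbitrary a, b in place of x, y.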
Citations: 0