
International Journal of Approximate Reasoning: Latest Publications

Flexible categorization using formal concept analysis and Dempster-Shafer theory
IF 3.0 · CAS Region 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-12 · DOI: 10.1016/j.ijar.2025.109548
Marcel Boersma , Krishna Manoorkar , Alessandra Palmigiano , Mattia Panettiere , Apostolos Tzimoulis , Nachoem Wijnberg
Based on the intuitive idea that sets of objects or entities can be categorized in very different ways, and that some ways of categorizing objects are better than others depending on the purpose of the categorization, this paper introduces a formal framework for parametrically generating a space of possible categorizations of a set of objects, based on the features that individual agents, or groups thereof, regard as relevant (formally encoded in the notion of interrogative agenda). This formal framework accounts both for two-valued (crisp) and for many-valued (fuzzy) judgments about the relevance of given features, and introduces ways to aggregate individual agendas into group agendas. As an application of this framework, we discuss a machine-learning meta-algorithm for outlier detection and classification which provides local and global explanations of its results.
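The crisp case of this categorization machinery can be illustrated with a small formal concept analysis sketch: given an object-feature incidence relation and an agent's interrogative agenda (the set of features deemed relevant), the formal concepts of the context restricted to that agenda give the agent's categorization. The function names and the brute-force enumeration below are illustrative assumptions, suitable only for very small contexts.

```python
from itertools import combinations

def concepts_for_agenda(objects, incidence, agenda):
    """Enumerate formal concepts of a context restricted to an agenda.

    objects   : list of object names
    incidence : dict mapping object -> set of features it has
    agenda    : set of features the agent regards as relevant
    Returns a set of (extent, intent) pairs as frozensets.
    """
    def intent(ext):
        # features (within the agenda) shared by every object in ext
        feats = agenda.copy()
        for o in ext:
            feats &= incidence[o]
        return frozenset(feats)

    def extent(feats):
        # objects possessing all features in feats (within the agenda)
        return frozenset(o for o in objects if feats <= (incidence[o] & agenda))

    concepts = set()
    # closing every subset of objects yields all concepts (exponential; toy-sized only)
    for r in range(len(objects) + 1):
        for combo in combinations(objects, r):
            i = intent(set(combo))
            concepts.add((extent(i), i))
    return concepts
```

Different agendas restrict the context to different feature sets, so the same objects yield different concept lattices, i.e. different categorizations.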
Citations: 0
A two-player newsvendor game with competition on demand under ambiguity
IF 3.0 · CAS Region 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-11 · DOI: 10.1016/j.ijar.2025.109546
Andrea Cinfrignini , Silvia Lorenzini , Davide Petturiti
We deal with a single-period two-player newsvendor game where both newsvendors are assumed to be rational and risk-neutral, and to operate under ambiguity. Each newsvendor needs to choose his/her order quantity of the same perishable product, whose global market demand is modeled by a discrete random variable endowed with a reference probability measure. Furthermore, the global market demand is distributed to the newsvendors according to a proportional allocation rule. We model the uncertainty faced by each newsvendor with an individual ϵ-contamination of the reference probability measure, computed with respect to a suitable class of probability measures. The resulting ϵ-contamination model preserves the expected demand under the reference probability and is used to compute the individual lower expected profit as a Choquet expectation. Therefore, the optimization problem of each player reduces to settling on the order quantity that maximizes his/her lower expected profit given the opponent's choice, which is a maximin problem. In the resulting game, we prove that a Nash equilibrium always exists, though it may not be unique. Finally, we provide a characterization of Nash equilibria in terms of best response functions.
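The maximin structure can be sketched numerically. As a simplification of the paper's constrained contamination class, this toy version contaminates the reference measure with the full simplex, for which the Choquet lower expectation reduces to (1 − ϵ)·E_P[f] + ϵ·min f; the proportional allocation rule, the profit function, and all parameter names below are illustrative assumptions, not the paper's exact model.

```python
def lower_expected_profit(q_i, q_j, demand_vals, probs, price, cost, eps):
    """Lower expected profit of player i under classical eps-contamination
    (contamination by the full simplex, a simplification of the paper's
    constrained class). Demand is split proportionally to order quantities."""
    share = q_i / (q_i + q_j) if q_i + q_j > 0 else 0.0
    # newsvendor profit for each possible global demand value
    profits = [price * min(q_i, share * d) - cost * q_i for d in demand_vals]
    exp_ref = sum(p * f for p, f in zip(probs, profits))
    return (1 - eps) * exp_ref + eps * min(profits)

def best_response(q_j, grid, demand_vals, probs, price, cost, eps):
    """Maximin best response: the grid point maximizing the lower expected profit."""
    return max(grid, key=lambda q: lower_expected_profit(
        q, q_j, demand_vals, probs, price, cost, eps))
```

Iterating `best_response` for both players until a fixed point is reached gives a grid approximation of a Nash equilibrium; raising `eps` makes the worst-case demand weigh more, pushing orders down.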
Citations: 0
GTransformer: Multi-view functional granulation and self-attention for tabular data modeling
IF 3.0 · CAS Region 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-08 · DOI: 10.1016/j.ijar.2025.109547
Liang Liao , Yumin Chen , Yingyue Chen , Yiting Lin
To bridge the performance gap between deep learning models and tree ensemble methods in tabular data tasks, we propose GTransformer, a novel deep architecture that innovatively integrates granular computing and self-attention mechanisms. Our approach introduces a scalable granulation function set, from which diverse functions are randomly sampled to construct multi-view feature granules. These granules are aggregated into granule vectors, forming a multi-view functional granulation layer that provides comprehensive representations of tabular features from multiple perspectives. Subsequently, a Transformer encoder driven by granule sequences is employed to model deep interactions among features, with predictions generated via a hierarchical multilayer perceptron (MLP) classification head. Experiments on 12 datasets show that GTransformer achieves an average AUC of 92.9%, which is comparable to the 92.3% performance of LightGBM. Compared with the current mainstream deep model TabNet, the average AUC gain is 2.74%, with a 14.5% improvement on the Sonar dataset. GTransformer demonstrates strong robustness in scenarios with noise and missing data, especially on the Credit and HTRU2 datasets, where the accuracy decline is 24.73% and 17.03% less than that of MLP-Head respectively, further verifying its applicability in complex real-world application scenarios.
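The multi-view granulation step can be sketched as follows. The specific granulation functions, their names, and the sampling scheme below are illustrative assumptions; the paper's function set and aggregation are more elaborate, and the downstream Transformer encoder is omitted.

```python
import random, math

# Toy granulation function set: each function maps a raw feature value to a
# granule. The particular functions here are illustrative, not the paper's set.
GRANULATION_FUNCS = {
    "identity": lambda x: x,
    "sign": lambda x: 1.0 if x >= 0 else -1.0,
    "log1p_abs": lambda x: math.log1p(abs(x)),
    "square": lambda x: x * x,
}

def granule_vector(row, n_views, rng):
    """Randomly sample n_views granulation functions and concatenate the
    resulting per-view granules into one granule vector for the row."""
    views = rng.sample(sorted(GRANULATION_FUNCS), k=n_views)
    return [GRANULATION_FUNCS[g](x) for g in views for x in row]
```

Each sampled view re-describes the same tabular row, so the concatenated granule vector carries multiple perspectives on the features, which a sequence encoder can then attend over.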
Citations: 0
Relative pre-reducts for computing the relative reducts of large data sets
IF 3.0 · CAS Region 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-07 · DOI: 10.1016/j.ijar.2025.109544
Hajime Okawa , Yasuo Kudo , Tetsuya Murai
In this paper, we introduce the concept of relative pre-reducts to derive the relative reducts from a large dataset. The relative reduct is considered a consistency-based attribute reduction method that is commonly utilized to extract concise subsets of condition attributes. Nonetheless, calculating all relative reducts necessitates substantial time and memory to build a discernibility matrix. In this research, we demonstrate that all relative pre-reducts can be computed using a simplified matrix referred to as the partial discernibility matrix, which can be readily converted into relative reducts. We also suggest employing a data partitioning approach to generate the discernibility matrix. This method alleviates the issue of an increased number of results for each partition. The outcomes from this technique yield the relative pre-reducts proposed in this study. Since our enhancements to the computation of relative reducts are independent of other advancements, they can be implemented in conjunction with existing methods. Experimental findings indicate that utilizing relative pre-reducts for computing relative reducts is efficient for large datasets.
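The full discernibility matrix whose cost the authors aim to avoid can be sketched in its classical (Skowron-style) form: for each pair of objects with different decisions, record the condition attributes on which they differ; relative reducts are then the minimal attribute sets hitting every non-empty entry. This is a generic sketch of that baseline, not the paper's partial-matrix construction.

```python
def discernibility_matrix(table, condition_attrs, decision_attr):
    """Discernibility entries of a decision table: for every pair of rows
    with different decision values, the set of condition attributes on
    which they differ. Quadratic in the number of rows, which is exactly
    why partial constructions are needed for large data sets."""
    entries = []
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            if table[i][decision_attr] != table[j][decision_attr]:
                diff = {a for a in condition_attrs if table[i][a] != table[j][a]}
                entries.append(diff)
    return entries
```

Partitioning the table and building such entries per partition reduces peak memory, at the cost of reconciling per-partition results, which is the gap the relative pre-reducts of the paper address.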
Citations: 0
Optimizations of approximation operators in covering rough set theory
IF 3.0 · CAS Region 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-07 · DOI: 10.1016/j.ijar.2025.109543
Shizhe Zhang , Liwen Ma
Classical rough set theory fundamentally requires upper and lower approximations to be definite sets for precise knowledge representation. However, a significant problem arises as many widely used approximation operators inherently produce rough approximations (with non-empty boundaries), contradicting this core theoretical intent and undermining practical applicability. To resolve this core discrepancy, we introduce stable approximation operators and stable sets, and develop an optimization method that transforms unstable operators into stable ones, ensuring definite approximations. This method includes detailing the optimization process with algorithmic implementation, analyzing the topological structure of resulting approximation spaces and connections between optimized operators, and enhancing computational efficiency via matrix-based computation. This work may strengthen rough set theory's foundation by bridging the gap between theory and practice while enhancing its scope for practical applications.
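One common family of covering-based operators, which can exhibit exactly the non-empty boundaries the paper addresses, uses minimal-description neighborhoods. The sketch below shows one such neighborhood operator from the literature; it is not necessarily among the operators the paper optimizes.

```python
def neighborhood(x, cover):
    """Minimal-description neighborhood of x: the intersection of all
    covering blocks that contain x (one standard covering-based choice)."""
    nbr = None
    for block in cover:
        if x in block:
            nbr = block if nbr is None else nbr & block
    return nbr if nbr is not None else frozenset()

def lower_upper(target, universe, cover):
    """Covering-based lower/upper approximations of a target set."""
    lower = {x for x in universe if neighborhood(x, cover) <= target}
    upper = {x for x in universe if neighborhood(x, cover) & target}
    return lower, upper
```

When `lower` and `upper` differ, the boundary is non-empty and the approximation is rough; the paper's stable operators are designed so that this discrepancy disappears.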
Citations: 0
Distribution assessment-based multiple over-sampling with evidence fusion for imbalanced data classification
IF 3.0 · CAS Region 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-06 · DOI: 10.1016/j.ijar.2025.109538
Hongpeng Tian , Zuowei Zhang , Zhunga Liu , Jingwei Zuo , Caixing Yang
Over-sampling methods concentrate on creating balanced samples and have proven successful in classifying imbalanced data. However, current over-sampling methods fail to consider the uncertainty of produced samples, potentially altering the data distribution and impacting the classification process. To address this issue, we propose a distribution assessment-based multiple over-sampling (DAMO) method for classifying imbalanced data. We first introduce a multiple over-sampling method based on distribution assessment to create different forms of synthetic samples. The core is quantifying the inconsistency of data distribution before and after sampling as a constraint to guide multiple over-sampling, thereby minimizing the data shift and characterizing the uncertainty of produced samples. Then, we quantify the local reliability of the classification results and select several imprecise samples with low local reliability that are indistinguishable between classes. Neighbors serve as additional complementary information to calibrate the results of imprecise samples, thereby reducing the likelihood of misclassification. The calibrated results are combined by the discounting Dempster-Shafer fusion rule to make a final decision. DAMO's efficiency has been demonstrated through comparisons with related methods on various real imbalanced datasets.
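The final fusion step relies on the discounting Dempster-Shafer rule. A minimal sketch of classical Shafer discounting followed by Dempster's rule is given below, with mass functions represented as dicts from `frozenset` focal elements to masses; how DAMO derives the reliability weights is specific to the paper and not reproduced here.

```python
def discount(mass, alpha):
    """Shafer discounting: scale each mass by reliability alpha and move
    the remaining 1 - alpha onto the whole frame (full ignorance)."""
    frame = frozenset().union(*mass)
    out = {A: alpha * m for A, m in mass.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination: intersect focal elements, accumulate
    products, and renormalize by 1 minus the conflict mass."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + a * b
            else:
                conflict += a * b
    k = 1.0 - conflict
    return {A: v / k for A, v in combined.items()}
```

Discounting each source before combining lets less reliable evidence (e.g. results on imprecise samples) contribute mostly ignorance rather than conflicting certainty.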
Citations: 0
A novel three-way based self-adaptive filtering model for sentiment analysis
IF 3.0 · CAS Region 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-05 · DOI: 10.1016/j.ijar.2025.109536
Zhihui Zhang, Dun Liu, Rongping Shen
In the era of social media and diverse communication platforms, understanding human emotion across various modalities has become a crucial challenge. While significant progress has been made in feature extraction and interaction techniques, several unresolved issues persist, particularly concerning the balance between these two aspects. A central question is whether all extracted features are of equal importance, or if some may contain redundant or noisy information that undermines effective modality interaction. To address these challenges, we propose a novel Three-Way Decision-Based Self-Adaptive Filtering Model (TWSAFM). Inspired by the three-way decision (TWD) theory, we introduce a self-adaptive filtering module that categorizes extracted modal features into three distinct domains: acceptable, rejectable, and reconsidering. This classification allows for separate processing of features, enabling the model to prioritize essential information while minimizing the impact of redundant and noisy data. Experimental validation on three benchmark datasets demonstrates that TWSAFM outperforms state-of-the-art methods in sentiment analysis tasks. Furthermore, training studies and parameter sensitivity analysis underscore the effectiveness of TWSAFM in efficiently filtering out irrelevant and noisy features, highlighting its robust contribution to enhancing feature interaction.
Citations: 0
Domain-informed and neural-optimized belief assignments: A framework applied to cultural heritage
IF 3.0 · CAS Region 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-05 · DOI: 10.1016/j.ijar.2025.109534
Sofiane Daimellah , Sylvie Le Hégarat-Mascle , Clotilde Boust
Identifying pigments in Cultural Heritage artifacts is key to uncovering their origin and guiding conservation strategies. Although recent advances in non-invasive imaging have enabled the collection of rich multimodal data, existing methods often fall short in dealing with uncertain, ambiguous, or noisy information. This paper introduces a versatile fusion framework grounded in Belief Function Theory, combining domain-informed evidence modeling with neural optimization. Specifically, we propose a general strategy for assigning mass functions by leveraging expert knowledge encoded in parametric Evidence Mapping Functions, which are further refined through task-specific training using constrained neural networks. When applied to pigment classification, our method demonstrates robustness against source variability and class ambiguity. Experiments conducted on both synthetic and mock-up datasets validate its effectiveness and suggest promising potential for broader applications.
Citations: 0
Sensitivity analysis to unobserved confounding with copula-based normalizing flows
IF 3.0 · CAS Region 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-30 · DOI: 10.1016/j.ijar.2025.109531
Sourabh Balgi , Marc Braun , Jose M. Peña , Adel Daoud
We propose a novel method for sensitivity analysis to unobserved confounding in causal inference. The method builds on a copula-based causal graphical normalizing flow that we term ρ-GNF, where ρ ∈ [−1, +1] is the sensitivity parameter. The parameter represents the non-causal association between exposure and outcome due to unobserved confounding, which is modeled as a Gaussian copula. In other words, the ρ-GNF enables scholars to estimate the average causal effect (ACE) as a function of ρ, accounting for various confounding strengths. The output of the ρ-GNF is what we term the ρ-curve, which provides the bounds for the ACE given an interval of assumed ρ values. The ρ-curve also enables scholars to identify the confounding strength required to nullify the ACE. We also propose a Bayesian version of our sensitivity analysis method. Assuming a prior over the sensitivity parameter ρ enables us to derive the posterior distribution over the ACE, which enables us to derive credible intervals. Finally, leveraging experiments on simulated and real-world data, we show the benefits of our sensitivity analysis method.
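The ρ-curve idea can be illustrated on a toy linear-Gaussian analogue (not the paper's ρ-GNF): with binary exposure A = 1{U_A > 0}, outcome Y = τA + U_Y, and corr(U_A, U_Y) = ρ for standard normal U_A, U_Y, the observational mean difference equals τ + 2ρ√(2/π), since E[U_A | U_A > 0] = √(2/π). Each assumed ρ then yields an adjusted ACE, and solving for zero gives the nullifying confounding strength.

```python
import math

def adjusted_ace(naive_diff, rho):
    """rho-adjusted ACE in the toy model: subtract the confounding bias
    2*rho*sqrt(2/pi) implied by corr(U_A, U_Y) = rho."""
    return naive_diff - 2.0 * rho * math.sqrt(2.0 / math.pi)

def rho_curve(naive_diff, rhos):
    """The (rho, adjusted ACE) pairs over an interval of assumed rho values."""
    return [(r, adjusted_ace(naive_diff, r)) for r in rhos]

def nullifying_rho(naive_diff):
    """Confounding strength at which the adjusted ACE hits zero."""
    return naive_diff / (2.0 * math.sqrt(2.0 / math.pi))
```

Sweeping `rho_curve` over an assumed interval of ρ values reproduces, in this toy setting, the bounds-on-the-ACE reading of the ρ-curve described in the abstract.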
Citations: 0
Triadic data: Representation and reduction
IF 3.0 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence) · International Journal of Approximate Reasoning, Vol. 187 · Pub Date: 2025-07-29 · DOI: 10.1016/j.ijar.2025.109532
Léa Aubin Kouankam Djouohou, Blaise Blériot Koguep Njionou, Leonard Kwuida
Triadic Concept Analysis (TCA) is an extension of Formal Concept Analysis (FCA) for handling data in which a set of objects is described by attributes and conditions via a ternary relation. However, the intuition for passing from FCA to TCA is not always straightforward. In this paper we discuss how some FCA notions carry over from the dyadic to the triadic setting. Although some ideas admit a straightforward adaptation, most do not. In particular, we address the representation problem and the notions of redundant attributes and subcontexts in the triadic setting.
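A triadic context can be sketched directly as a ternary relation. Below is a minimal invented example showing one direction of derivation (the attributes shared by given objects under given conditions) and the dyadic slice obtained by fixing a condition — the object–attribute–condition interplay the paper studies; the context itself and the helper names are illustrative only.

```python
# A small triadic context: (object, attribute, condition) triples.
K = {
    ("o1", "red",  "daylight"),
    ("o1", "red",  "uv"),
    ("o2", "red",  "daylight"),
    ("o2", "blue", "uv"),
    ("o3", "blue", "daylight"),
    ("o3", "blue", "uv"),
}

def common_attributes(objects, conditions, triples):
    """Attributes possessed by every given object under every given
    condition (one direction of triadic derivation)."""
    attrs = {m for (_, m, _) in triples}
    return {m for m in attrs
            if all((g, m, b) in triples for g in objects for b in conditions)}

def slice_context(condition, triples):
    """Dyadic projection: fixing one condition yields an ordinary
    (object, attribute) formal context."""
    return {(g, m) for (g, m, b) in triples if b == condition}
```

For instance, objects o1 and o2 share only "red" under daylight, while o3 is "blue" under both conditions; each fixed condition yields a dyadic context to which ordinary FCA applies.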
Citations: 0