
International Journal of Approximate Reasoning: Latest Publications

Predicting Lockean from gradational accuracy
IF 3.0 · CAS Zone 3 (Computer Science) · Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2026-01-30 · DOI: 10.1016/j.ijar.2026.109636
Igor Douven, Nikolaus Kriegeskorte
The debate about which scoring rule best measures the accuracy of our credences has largely been conducted on an a priori basis. We pursue an empirical approach, asking which rule best predicts a practical, decision-relevant criterion: Lockean accuracy, the ability to make correct categorical judgments based on a threshold of belief. Analyzing a large dataset of probability judgments, we compare the most widely used scoring rules (Brier, logarithmic, spherical, absolute error, and power rules) and find that, among them, there is no single best one. Instead, the optimal choice is context dependent: the Spherical score is the best predictor for lower belief thresholds, while the Power3 rule is best at higher thresholds. In particular, the widely used Brier and log scores are rarely optimal for this task. A mediation analysis reveals that while much of a rule’s success is explained by its ability to reward calibration and sharpness, the Spherical and Brier rules retain significant predictive power independently of these standard virtues.
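As an illustration of the quantities compared, the following minimal sketch (not the authors' code) computes the five scoring-rule families on toy binary credences, together with Lockean accuracy at a belief threshold; the parameterizations, the threshold, and the treatment of suspended judgments are assumptions.

```python
import numpy as np

def scores(c, o, lam=3):
    """Gradational (in)accuracy of credences c in binary events with outcomes o.
    Standard textbook parameterizations, assumed here, not taken from the paper."""
    p = np.where(o == 1, c, 1 - c)               # probability assigned to the truth
    return {
        "brier":       (c - o) ** 2,             # quadratic penalty
        "log":         -np.log(p),               # logarithmic penalty
        "spherical":   p / np.sqrt(c**2 + (1 - c)**2),  # spherical reward
        "abs":         np.abs(c - o),            # absolute-error penalty
        f"power{lam}": lam * p**(lam - 1) - (lam - 1) * (c**lam + (1 - c)**lam),
    }

def lockean_accuracy(c, o, t=0.7):
    """Fraction of correct categorical judgments under the Lockean threshold t:
    believe the event if c >= t, believe its negation if 1 - c >= t, else suspend."""
    believe, disbelieve = c >= t, (1 - c) >= t
    correct = (believe & (o == 1)) | (disbelieve & (o == 0))
    judged = believe | disbelieve
    return correct.sum() / max(judged.sum(), 1)  # accuracy among judged cases

rng = np.random.default_rng(0)
c = rng.uniform(0.01, 0.99, 1000)                # toy credences
o = (rng.uniform(size=1000) < c).astype(int)     # toy outcomes
print({k: round(float(v.mean()), 3) for k, v in scores(c, o).items()})
print(lockean_accuracy(c, o, t=0.7))
```

Regressing Lockean accuracy on each mean score across forecasters would then identify the best predictor at a given threshold.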
Citations: 0
Interaction-based complexity measures in a progressive partition model of granular computing
IF 3.0 · CAS Zone 3 (Computer Science) · Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2026-01-28 · DOI: 10.1016/j.ijar.2026.109637
Qiaoyi Li, Yiyu Yao
One of the claims regarding the power of granular computing concerns its computational efficiency and practical applicability in solving complex problems. Most discussions supporting this claim rest on intuitive arguments, or on examples that equate the complexity of granules and granular structures with their granularity. In 2019, Matthew Yao (Knowledge-Based Systems 163 (2019) 885–897) argued that granularity and complexity are two related but distinct concepts and, to quantify complexity, proposed a class of interaction-based measures. Unfortunately, this direction of research has not received its due attention. To further promote theoretical study of the concept of complexity in granular computing, in this paper we conduct an in-depth and systematic analysis of complexity measures in a progressive partition model of granular computing. Taking interactions among components as the underlying notion for explaining the complexity of problem-solving with granular computing, we investigate a class of interaction-based complexity measures. Moving beyond intuitive arguments, we aim at a sound theoretical foundation, as a step towards systematic research into and a deeper understanding of the fundamental notion of complexity in granular computing.
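The abstract fixes no formula, so the following is only one hedged reading of an interaction-based measure: complexity as the number of pairwise interactions still confined within blocks, which a progressive partition (successive refinement) drives down. This counting rule is our assumption, not Yao's definition.

```python
def within_block_interactions(partition):
    """Count pairwise interactions confined to blocks of a partition.
    Illustrative assumption: every pair inside a block interacts."""
    return sum(len(b) * (len(b) - 1) // 2 for b in partition)

# A progressive partition sequence: each step refines the previous one.
universe = list(range(8))
steps = [
    [universe],                                                   # coarsest
    [universe[:4], universe[4:]],                                 # one split
    [universe[:2], universe[2:4], universe[4:6], universe[6:]],   # finer
]
for p in steps:
    print(p, "->", within_block_interactions(p))
# Refinement removes cross-block interactions: the counts are 28, 12, 4.
```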
Citations: 0
Fuzzy neighborhood components analysis: Supervised dimensionality reduction under uncertain labels
IF 3.0 · CAS Zone 3 (Computer Science) · Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2026-01-24 · DOI: 10.1016/j.ijar.2026.109635
Mohd Aquib, Mohd Suhail Naim
Real-world supervision is often soft or uncertain due to annotator disagreement, class ambiguity, and distribution shift, yet most dimensionality-reduction methods, and classical Neighborhood Components Analysis (NCA) in particular, assume hard labels. We propose Fuzzy Neighborhood Components Analysis (Fuzzy-NCA), a linear metric-learning method that directly optimizes a stochastic k-Nearest Neighbors (kNN) objective under fuzzy supervision. Each sample carries a row-stochastic membership vector over classes; pairwise supervision is defined by a principled fuzzy overlap between membership vectors, optionally sharpened by a power parameter to emphasize confident assignments. Ambiguous anchors can be attenuated via an entropy-based reliability weight, yielding an objective that maximizes a reliability-weighted expected fuzzy hit rate. The formulation reduces exactly to classical NCA when memberships are one-hot, and it admits a closed-form gradient, enabling efficient optimization with standard first-order methods. To scale training, we restrict stochastic neighbors to an input-space k-nearest-neighbor graph, which preserves local geometry while reducing the per-iteration complexity from quadratic to near-linear in the dataset size. The framework is compatible with multiple ways of constructing fuzzy supervision, including label smoothing, fuzzy kNN, calibrated posteriors, and type-2 fuzzy reductions, making it broadly applicable. Empirically, across a diverse suite of benchmarks, Fuzzy-NCA yields stable, discriminative embeddings under noisy or ambiguous labels, improving both linear separability and neighborhood quality and exhibiting consistent robustness across settings.
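A minimal sketch of the ingredients named above, with assumed functional forms: fuzzy overlap between row-stochastic memberships sharpened by a power gamma, entropy-based anchor reliability, and NCA-style stochastic neighbor probabilities under a linear map A.

```python
import numpy as np

def fuzzy_overlap(U, gamma=2.0):
    """Pairwise supervision S from row-stochastic memberships U (n x C):
    s_ij = sum_c u_ic * u_jc after power-sharpening by gamma (assumed form)."""
    Ug = U ** gamma
    Ug /= Ug.sum(axis=1, keepdims=True)       # re-normalize after sharpening
    return Ug @ Ug.T

def reliability(U):
    """Entropy-based anchor weights: low-entropy (confident) rows weigh more."""
    H = -(U * np.log(U + 1e-12)).sum(axis=1)
    return 1.0 - H / np.log(U.shape[1])

def fuzzy_nca_objective(A, X, U, gamma=2.0):
    """Reliability-weighted expected fuzzy hit rate under projection A (sketch)."""
    Z = X @ A.T
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)               # a point is never its own neighbor
    P = np.exp(-D)
    P /= P.sum(axis=1, keepdims=True)         # stochastic neighbor probabilities
    return float((reliability(U) * (P * fuzzy_overlap(U, gamma)).sum(axis=1)).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))                  # toy inputs
U = rng.dirichlet(np.ones(3), size=50)        # toy soft labels over 3 classes
A = rng.normal(size=(2, 5))                   # toy linear map to 2 dimensions
print(fuzzy_nca_objective(A, X, U))
# With one-hot rows of U, the overlap is 0/1, the reliability weights become 1,
# and the sum reduces to the classical NCA objective.
```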
Citations: 0
On admissibility in post-hoc hypothesis testing
IF 3.0 · CAS Zone 3 (Computer Science) · Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2026-01-21 · DOI: 10.1016/j.ijar.2026.109634
Ben Chugg, Tyron Lardy, Aaditya Ramdas, Peter Grünwald
The validity of classical hypothesis testing requires the significance level α be fixed before any statistical analysis takes place. This is a stringent requirement. For instance, it prohibits updating α during (or after) an experiment due to changing concern about the cost of false positives, or to reflect unexpectedly strong evidence against the null. Perhaps most disturbingly, witnessing a p-value p ≪ α vs p = α − ϵ for tiny ϵ > 0 has no (statistical) relevance for any downstream decision-making. Following recent work of Grünwald [1], we develop a theory of post-hoc hypothesis testing, enabling α to be chosen after seeing and analyzing the data. To study "good" post-hoc tests we introduce Γ-admissibility, where Γ is a set of adversaries which map the data to a significance level. We classify the set of Γ-admissible rules for various sets Γ, showing they must be based on e-values, and recover the Neyman-Pearson lemma when Γ is the constant map.
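A sketch of the e-value mechanism behind post-hoc testing, in a toy Gaussian setting of our own choosing: a likelihood ratio is an e-variable, and Markov's inequality makes the rejection guarantee hold simultaneously for all α, so the level can be picked after the data are seen.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.8, size=20)      # toy data; the null says mean 0, sd 1

# The likelihood ratio of a fixed alternative (mean 1) to the null (mean 0)
# is an e-variable: its expected value under the null equals 1.
e_value = float(np.exp(np.sum(x - 0.5)))   # closed form for unit-variance Gaussians

# Post-hoc guarantee via Markov's inequality: P0(E >= 1/alpha) <= alpha holds
# simultaneously for every alpha, so alpha may be chosen after seeing E.
for alpha in (0.05, 0.01, 1 / e_value):
    print(f"alpha = {alpha:.4g}: reject = {e_value >= 1 / alpha}")
```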
Citations: 0
On the classes of uninorms Umin and Umax on bounded trellises
IF 3.0 · CAS Zone 3 (Computer Science) · Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2026-01-09 · DOI: 10.1016/j.ijar.2026.109627
Ya-Ming Wang, Yexing Dan, Bernard De Baets
Recently, the concept of a uninorm on a bounded lattice has been generalized to bounded trellises. A fundamental distinction between lattices and trellises lies in the fact that the underlying pseudo-order relation of a trellis does not need to be transitive. In this paper, we undertake an in-depth dissection of certain types of uninorms on bounded trellises, both from a construction and a characterization point of view. We begin by introducing the uninorms in the classes Umin and Umax on a bounded trellis and provide necessary and sufficient conditions for their characterization. Subsequently, we present two approaches for constructing uninorms on a bounded trellis by utilizing a uninorm defined on a closed subinterval of that bounded trellis. It is shown that these approaches yield uninorms on a bounded trellis that not only differ from the ones obtainable through existing methods, but also generalize those on a bounded lattice constructed via a uninorm defined on a closed subinterval of that bounded lattice.
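For orientation, a sketch of the classical Umin structure on the unit interval (a bounded lattice, not a trellis, which is the setting the paper generalizes): below the neutral element e a scaled t-norm, above it a scaled t-conorm, and the minimum on the mixed region. The particular t-norm and t-conorm are illustrative choices.

```python
def uninorm_umin(x, y, e=0.5):
    """A uninorm in the class Umin on [0,1] with neutral element e:
    product t-norm below e, probabilistic-sum t-conorm above e,
    and min on the mixed region (illustrative choices)."""
    if x <= e and y <= e:
        return e * (x / e) * (y / e)                 # scaled t-norm
    if x >= e and y >= e:
        u, v = (x - e) / (1 - e), (y - e) / (1 - e)
        return e + (1 - e) * (u + v - u * v)         # scaled t-conorm
    return min(x, y)                                 # the defining Umin behavior

assert abs(uninorm_umin(0.3, 0.5) - 0.3) < 1e-12    # e = 0.5 acts as neutral element
assert uninorm_umin(0.2, 0.8) == 0.2                # mixed region returns the min
```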
Citations: 0
Inconsistency reduction in pairwise comparison matrices using genetic algorithms
IF 3.0 · CAS Zone 3 (Computer Science) · Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2026-01-06 · DOI: 10.1016/j.ijar.2026.109626
Atiyeh Sayadi, Ryszard Janicki
This paper presents a detailed study of inconsistency reduction in qualitative and quantitative multiplicative pairwise comparison matrices using efficient genetic algorithms. Three new algorithms are presented and discussed: one for classical quantitative multiplicative pairwise comparisons, and two for a formal version of qualitative pairwise comparisons. For the quantitative case, a distance-based inconsistency index (Koczkodaj's index) is used, and the effects of different factors on the algorithm's efficiency and the quality of its results are analyzed. For the qualitative case, no numbers are used, so the evaluation functions are tailored to qualitative relations. In both the quantitative and qualitative cases the genetic algorithms perform reliably, and in the qualitative case they show strong performance compared to the existing method we evaluated.
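A sketch of the distance-based inconsistency index used in the quantitative case, Koczkodaj's index, which scans all triads of a multiplicative pairwise comparison matrix; a genetic algorithm would then perturb entries to push this value down. The example matrix is made up.

```python
from itertools import combinations
import numpy as np

def koczkodaj_index(A):
    """Koczkodaj's distance-based inconsistency of a multiplicative pairwise
    comparison matrix: worst triad deviation from a_ik == a_ij * a_jk."""
    n = len(A)
    ki = 0.0
    for i, j, k in combinations(range(n), 3):
        x, y, z = A[i][j], A[i][k], A[j][k]   # consistency requires y == x * z
        ki = max(ki, min(abs(1 - y / (x * z)), abs(1 - x * z / y)))
    return ki

A = np.array([[1,   2,   4],
              [1/2, 1,   3],
              [1/4, 1/3, 1]])
print(koczkodaj_index(A))   # ~0.333 here; the index is 0 iff A is consistent
# A genetic algorithm then evolves candidate matrices (mutating entries while
# preserving reciprocity) with this index as the fitness to be minimized.
```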
Citations: 0
Parallel attribute reduction algorithm based on simplified neighborhood matrix with Apache Spark
IF 3.0 · CAS Zone 3 (Computer Science) · Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2026-01-03 · DOI: 10.1016/j.ijar.2026.109625
Linzi Yin, Anqi Liao, Zhanqi Li, Zhaohui Jiang
As an important branch of rough set theory, neighborhood rough set theory effectively addresses the information loss that originates from the discretization process. Nevertheless, the computational efficiency of existing parallel neighborhood algorithms remains limited. In this paper, a parallel attribute reduction algorithm based on a simplified neighborhood matrix is proposed and implemented with Apache Spark. First, we define a novel neighborhood matrix to describe the neighborhood relationships among objects. Next, the neighborhood matrix is decomposed into a simplified neighborhood matrix and a set of neighborhood information granules, referred to in this paper as neighborhood knowledge. On this basis, a parallel attribute reduction algorithm based on the simplified neighborhood matrix is proposed. The new reduction algorithm utilizes Spark's sorting technique to generate the simplified neighborhood matrix swiftly and employs Python's interrupt capabilities to enhance computational efficiency. Theoretical analysis and experimental results show that the proposed algorithm keeps the consistency of neighborhood knowledge and exhibits excellent parallel performance, improving computational efficiency by 93.2%, 69.1%, and 80.4% over the benchmark algorithms.
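A serial sketch of the neighborhood machinery that the algorithm parallelizes: a δ-neighborhood relation between objects, the positive region it induces, and the resulting attribute dependency. The Spark partitioning itself is not reproduced; data, δ, and names are illustrative.

```python
import numpy as np

def neighborhood_matrix(X, delta=0.15):
    """Boolean neighborhood relation: N[i, j] iff dist(x_i, x_j) <= delta."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return D <= delta

def dependency(X, y, delta=0.15):
    """Share of objects whose entire delta-neighborhood carries their own label,
    i.e. the size of the neighborhood positive region."""
    N = neighborhood_matrix(X, delta)
    in_pos = [np.all(y[N[i]] == y[i]) for i in range(len(y))]
    return sum(in_pos) / len(y)

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 4))            # toy conditional attributes
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # toy decision attribute
# An attribute reduct is a minimal attribute subset whose dependency matches
# the full set's; a greedy reducer compares values such as these:
print(dependency(X, y), dependency(X[:, :2], y))
```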
Citations: 0
Exploring the upper n-Sugeno integral: Theory and applications to scientometric index design
IF 3.0 · CAS Zone 3 (Computer Science) · Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2025-12-26 · DOI: 10.1016/j.ijar.2025.109624
Jana Borzová, Miriam Kleinová, Lukáš Medvec
In order to overcome some limitations of the classical Hirsch index, Boczek et al. (2021) introduced the upper and lower n-Sugeno integrals, extending in particular the approach of Mesiar and Gagolewski (2016). In this paper, we concentrate on the upper n-Sugeno integral, which plays a central role in the definition of the Hirsch-Sugeno operator, a construction with significant potential in scientometrics. We investigate its theoretical properties and show, building on the results of Chitescu (2022), that although the upper n-Sugeno integral constitutes a genuine generalization of the classical Sugeno integral, in some cases the extended construction collapses back to its original form. Moreover, we demonstrate that the computation of the upper n-Sugeno integral can be reformulated as the problem of finding a midpoint of a level measure. This interpretation also connects it to the solution of certain nonlinear equations, including those arising in informetrics.
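For reference, a sketch of the classical discrete Sugeno integral that the upper n-Sugeno integral extends, which also exhibits the "midpoint of a level measure" reading: the integral is the crossing point of t and μ({f ≥ t}). The counting measure and values are illustrative; the n-extension itself is not reproduced.

```python
def sugeno_integral(values, mu):
    """Classical discrete Sugeno integral: max over thresholds t (taken at the
    observed values) of min(t, mu({f >= t}))."""
    return max(min(t, mu([i for i, v in enumerate(values) if v >= t]))
               for t in values)

# Normalized counting measure on 5 elements; with sorted values this integral
# is exactly the construction behind Hirsch-type indices.
mu = lambda s: len(s) / 5
f = [0.9, 0.7, 0.6, 0.3, 0.1]
print(sugeno_integral(f, mu))   # 0.6: the level where t and mu({f >= t}) cross
```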
Citations: 0
An axiomatic development of Pereira-Stern e-value as a measure of support for statistical hypotheses
IF 3.0 · CAS Zone 3 (Computer Science) · Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2025-12-25 · DOI: 10.1016/j.ijar.2025.109622
Yasmin F. Cavaliere, Luís G. Esteves, Victor Fossaluza
Testing hypotheses is fundamental to any scientific investigation or data-driven decision-making process. Since Neyman and Pearson systematized hypothesis testing, this statistical procedure has significantly contributed to the development of competing theories of statistical inference. Common approaches to hypothesis testing include significance tests, most powerful tests, likelihood ratio tests, and Bayesian tests. However, practitioners often use evidence measures, such as p-values, the Pereira-Stern e-value, and likelihood ratio statistics, in lieu of the reject-or-fail-to-reject approach proposed by Neyman and Pearson, as they provide a more nuanced understanding of statistical hypotheses from data. This study proposes an axiomatic development of belief relations representing the extent to which sample data support a statistical hypothesis, consistent with a few logical requirements that capture, in a sense, the Onus Probandi principle in law. It also examines whether the above-mentioned evidence measures are reasonable mathematical representations of such belief relations, that is, of how much a sample supports a hypothesis. It shows that for discrete parameter and sample spaces, the Pereira-Stern measure of evidence is a fair representation of such a belief relation, especially for Bayesian decision-makers: it formally accounts for the uncertainty about the unknown parameter while inducing a relation that coincides with a belief relation satisfying the axioms. This result renders the Pereira-Stern e-value a genuine measure of support for statistical hypotheses in the discrete case, in addition to its recognized importance in the continuous case.
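A sketch of the Pereira-Stern e-value for a discrete parameter space, as in the paper's main setting: ev(H) is one minus the posterior mass of the tangential set, the points whose posterior exceeds the best posterior attainable on H. The prior, likelihood, and hypothesis are toy assumptions.

```python
import numpy as np

def pereira_stern_ev(posterior, H):
    """FBST e-value for a discrete posterior (array over parameter points):
    ev(H) = 1 - posterior mass of {theta : p(theta|x) > sup_H p(theta|x)}."""
    sup_H = posterior[H].max()
    tangential = posterior > sup_H
    return 1.0 - posterior[tangential].sum()

# Toy problem: theta is a binomial success probability on a grid; H: theta = 0.5.
grid = np.linspace(0.01, 0.99, 99)
likelihood = grid**7 * (1 - grid)**3        # 7 successes in 10 trials
posterior = likelihood / likelihood.sum()   # uniform prior on the grid
H = np.isclose(grid, 0.5)
print(pereira_stern_ev(posterior, H))       # support the data lend to theta = 0.5
```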
Citations: 0
New construction of decision evaluation functions on three-way decision spaces based on automorphisms of hesitant fuzzy truth values
IF 3.0 · CAS Zone 3 (Computer Science) · Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2025-12-22 · DOI: 10.1016/j.ijar.2025.109620
Yuedong Zheng, Bao Qing Hu, Guitian He
As a pivotal theoretical branch of three-way decision (3WD), the concept of a Three-way Decision Space (3WDS) effectively unifies 3WD models over fuzzy lattices and other partially ordered sets, providing researchers with a comprehensive information system for 3WD. While various types of 3WDSs have been extensively studied, decision-makers often benefit from a wider array of options to achieve better outcomes. To address this need and enrich the decision-making toolkit, this paper introduces novel constructions of decision evaluation functions (DEFs). First, starting from the core axiomatic definition of a DEF in 3WDSs, this paper introduces automorphisms on bounded posets to construct such functions, deriving numerous novel DEFs. Second, this paper proposes new methods for transforming semi-decision evaluation functions (S-DEFs) and quasi-decision evaluation functions (Q-DEFs) into DEFs, extending the methodological toolkit. This paper then investigates the interplay between automorphisms and involutive negations on bounded posets, with a focused analysis of their properties in the context of the truth value set 2^{[0,1]} − {∅}.
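A sketch of how a DEF drives three-way decisions on the simplest poset [0,1], and of the automorphism idea: composing a DEF with an order automorphism φ of [0,1] (here φ(t) = t², an assumption) yields another evaluation with the same boundary behavior but shifted acceptance and rejection regions.

```python
def three_way_regions(objects, E, alpha=0.7, beta=0.3):
    """Split a universe by a decision evaluation function E: U -> [0,1]
    into acceptance, rejection, and uncertain regions."""
    accept = [x for x in objects if E(x) >= alpha]
    reject = [x for x in objects if E(x) <= beta]
    uncertain = [x for x in objects if beta < E(x) < alpha]
    return accept, reject, uncertain

# An order automorphism of ([0,1], <=) composed with a DEF gives another DEF;
# phi(0) = 0 and phi(1) = 1, so the boundary conditions are preserved.
phi = lambda t: t ** 2            # illustrative automorphism
E = lambda x: x / 10              # toy evaluation on the objects 0..10
objects = range(11)
print(three_way_regions(objects, E))
print(three_way_regions(objects, lambda x: phi(E(x))))  # stricter acceptance
```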
Citations: 0