Pub Date: 2026-01-30, DOI: 10.1016/j.ijar.2026.109636
Igor Douven, Nikolaus Kriegeskorte
The debate about which scoring rule best measures the accuracy of our credences has largely been conducted on an a priori basis. We pursue an empirical approach, asking which rule best predicts a practical, decision-relevant criterion: Lockean accuracy, the ability to make correct categorical judgments based on a threshold of belief. Analyzing a large dataset of probability judgments, we compare the most widely used scoring rules (Brier, logarithmic, spherical, absolute error, and power rules) and find that, among them, there is no single best one. Instead, the optimal choice is context dependent: the Spherical score is the best predictor for lower belief thresholds, while the Power³ rule is best at higher thresholds. In particular, the widely used Brier and log scores are rarely optimal for this task. A mediation analysis reveals that while much of a rule’s success is explained by its ability to reward calibration and sharpness, the Spherical and Brier rules retain significant predictive power independently of these standard virtues.
{"title":"Predicting Lockean from gradational accuracy","authors":"Igor Douven , Nikolaus Kriegeskorte","doi":"10.1016/j.ijar.2026.109636","DOIUrl":"10.1016/j.ijar.2026.109636","url":null,"abstract":"<div><div>The debate about which scoring rule best measures the accuracy of our credences has largely been conducted on an <em>a priori</em> basis. We pursue an empirical approach, asking which rule best predicts a practical, decision-relevant criterion: Lockean accuracy, the ability to make correct categorical judgments based on a threshold of belief. Analyzing a large dataset of probability judgments, we compare the most widely used scoring rules (Brier, logarithmic, spherical, absolute error, and power rules) and find that, among them, there is no single best one. Instead, the optimal choice is context dependent: the Spherical score is the best predictor for lower belief thresholds, while the Power<sup>3</sup> rule is best at higher thresholds. In particular, the widely used Brier and log scores are rarely optimal for this task. A mediation analysis reveals that while much of a rule’s success is explained by its ability to reward calibration and sharpness, the Spherical and Brier rules retain significant predictive power independently of these standard virtues.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"192 ","pages":"Article 109636"},"PeriodicalIF":3.0,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146116537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-28, DOI: 10.1016/j.ijar.2026.109637
Qiaoyi Li, Yiyu Yao
One of the claims regarding the power of granular computing is its computational efficiency and practical applicability in solving complex problems. Most discussions supporting this claim rest on intuitive arguments or on examples that equate the complexity of granules and granular structures with their granularity. In 2019, Matthew Yao (Knowledge-Based Systems 163 (2019) 885–897) argued that granularity and complexity are two related but different concepts. To quantify complexity, he proposed a class of interaction-based measures. Unfortunately, this direction of research has not received its due attention. To further promote theoretical studies of the concept of complexity in granular computing, we conduct in this paper an in-depth and systematic analysis of complexity measures in a progressive partitioning model of granular computing. Taking interactions among components as the underlying notion for explaining the complexity of problem-solving with granular computing, we investigate a class of interaction-based complexity measures. As a step towards systematic research in pursuit of a deeper understanding of the fundamental notion of complexity in granular computing, we move beyond intuitive arguments and aim at a sound theoretical foundation.
{"title":"Interaction-based complexity measures in a progressive partition model of granular computing","authors":"Qiaoyi Li, Yiyu Yao","doi":"10.1016/j.ijar.2026.109637","DOIUrl":"10.1016/j.ijar.2026.109637","url":null,"abstract":"<div><div>One of the claims regarding the power of granular computing is the computational efficiency and practical applicability in solving complex problems. The majority of discussions supporting this claim are typically made based on intuitive arguments or through examples by equating the complexity and the granularity of granules and granular structures. In 2019, Matthew Yao (Knowledge-Based Systems 163 (2019) 885–897) argued that the granularity and complexity are two related but different concepts. For quantifying the complexity, he proposed a class of interaction-based measures. Unfortunately, this direction of research has not received its due attention. To further promote theoretical studies on the concept of the complexity in granular computing, in this paper, we conduct an in-depth and systematic analysis of complexity measures in a progressive partitioning model of granular computing. By taking interactions among components as the underlying notion for explaining the complexity of problem-solving using granular computing, we investigate a class of interaction-based complexity measures. As a step towards systematic research in pursuit of a deeper understanding of the fundamental notion of complexity in granular computing, we move beyond intuitive arguments and aim at a sound theoretical foundation.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"192 ","pages":"Article 109637"},"PeriodicalIF":3.0,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146116570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-24, DOI: 10.1016/j.ijar.2026.109635
Mohd Aquib, Mohd Suhail Naim
Real-world supervision is often soft or uncertain due to annotator disagreement, class ambiguity, and distribution shift, yet most dimensionality-reduction methods, and classical Neighborhood Components Analysis (NCA) in particular, assume hard labels. We propose Fuzzy Neighborhood Components Analysis (Fuzzy-NCA), a linear metric-learning method that directly optimizes a stochastic k-Nearest Neighbors (kNN) objective under fuzzy supervision. Each sample carries a row-stochastic membership vector over classes; pairwise supervision is defined by a principled fuzzy overlap between membership vectors, optionally sharpened by a power parameter to emphasize confident assignments. Ambiguous anchors can be attenuated via an entropy-based reliability weight, yielding an objective that maximizes a reliability-weighted expected fuzzy hit rate. The formulation reduces exactly to classical NCA when memberships are one-hot and admits a closed-form gradient, enabling efficient optimization with standard first-order methods. To scale training, we restrict stochastic neighbors to an input-space k-nearest-neighbor graph, which preserves local geometry while reducing the per-iteration complexity from quadratic to near-linear in the dataset size. The framework is compatible with multiple ways of constructing fuzzy supervision, including label smoothing, fuzzy kNN, calibrated posteriors, and type-2 fuzzy reductions, making it broadly applicable. Empirically, across a diverse suite of benchmarks, Fuzzy-NCA yields stable, discriminative embeddings under noisy or ambiguous labels, improving both linear separability and neighborhood quality and exhibiting consistent robustness across settings.
{"title":"Fuzzy neighborhood components analysis: Supervised dimensionality reduction under uncertain labels","authors":"Mohd Aquib , Mohd Suhail Naim","doi":"10.1016/j.ijar.2026.109635","DOIUrl":"10.1016/j.ijar.2026.109635","url":null,"abstract":"<div><div>Real-world supervision is often soft or uncertain due to annotator disagreement, class ambiguity, and distribution shift, yet most dimensionality-reduction methods and classical Neighborhood Components Analysis (NCA) in particular, assume hard labels. We propose Fuzzy Neighborhood Components Analysis (Fuzzy-NCA), a linear metric-learning method that directly optimizes a stochastic <em>k</em>-Nearest Neighbors (kNN) objective under fuzzy supervision. Each sample carries a row-stochastic membership vector over classes; pairwise supervision is defined by a principled fuzzy overlap between membership vectors, optionally sharpened by a power parameter to emphasize confident assignments. Ambiguous anchors can be attenuated via an entropy-based reliability weight, yielding an objective that maximizes a reliability-weighted expected fuzzy hit rate. The formulation reduces exactly to classical NCA when memberships are one-hot and admits a closed-form gradient, enabling efficient optimization with standard first-order methods. To scale training, we restrict stochastic neighbors to an input-space <em>k</em>-nearest-neighbor graph, which preserves local geometry while reducing the per-iteration complexity from quadratic to near-linear in the dataset size. The framework is compatible with multiple ways of constructing fuzzy supervision including label smoothing, fuzzy <em>k</em>NN, calibrated posteriors, and type-2 fuzzy reductions making it broadly applicable. Empirically, across a diverse suite of benchmarks, Fuzzy-NCA yields stable, discriminative embeddings under noisy or ambiguous labels, improving both linear separability and neighborhood quality and exhibiting consistent robustness across settings.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"191 ","pages":"Article 109635"},"PeriodicalIF":3.0,"publicationDate":"2026-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146074533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-21, DOI: 10.1016/j.ijar.2026.109634
Ben Chugg, Tyron Lardy, Aaditya Ramdas, Peter Grünwald
The validity of classical hypothesis testing requires the significance level α be fixed before any statistical analysis takes place. This is a stringent requirement. For instance, it prohibits updating α during (or after) an experiment due to changing concern about the cost of false positives, or to reflect unexpectedly strong evidence against the null. Perhaps most disturbingly, witnessing a p-value p ≪ α vs p = α − ϵ for tiny ϵ > 0 has no (statistical) relevance for any downstream decision-making. Following recent work of Grünwald [1], we develop a theory of post-hoc hypothesis testing, enabling α to be chosen after seeing and analyzing the data. To study “good” post-hoc tests we introduce Γ-admissibility, where Γ is a set of adversaries which map the data to a significance level. We classify the set of Γ-admissible rules for various sets Γ, showing they must be based on e-values, and recover the Neyman-Pearson lemma when Γ is the constant map.
{"title":"On admissibility in post-hoc hypothesis testing","authors":"Ben Chugg , Tyron Lardy , Aaditya Ramdas , Peter Grünwald","doi":"10.1016/j.ijar.2026.109634","DOIUrl":"10.1016/j.ijar.2026.109634","url":null,"abstract":"<div><div>The validity of classical hypothesis testing requires the significance level <em>α</em> be fixed before any statistical analysis takes place. This is a stringent requirement. For instance, it prohibits updating <em>α</em> during (or after) an experiment due to changing concern about the cost of false positives, or to reflect unexpectedly strong evidence against the null. Perhaps most disturbingly, witnessing a p-value <em>p</em> ≪ <em>α</em> vs <span><math><mrow><mi>p</mi><mo>=</mo><mi>α</mi><mo>−</mo><mi>ϵ</mi></mrow></math></span> for tiny ϵ > 0 has no (statistical) relevance for any downstream decision-making. Following recent work of Grünwald [1], we develop a theory of <em>post-hoc</em> hypothesis testing, enabling <em>α</em> to be chosen after seeing and analyzing the data. To study “good” post-hoc tests we introduce Γ-admissibility, where Γ is a set of adversaries which map the data to a significance level. We classify the set of Γ-admissible rules for various sets Γ, showing they must be based on e-values, and recover the Neyman-Pearson lemma when Γ is the constant map.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"191 ","pages":"Article 109634"},"PeriodicalIF":3.0,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146074609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-09, DOI: 10.1016/j.ijar.2026.109627
Ya-Ming Wang, Yexing Dan, Bernard De Baets
Recently, the concept of a uninorm on a bounded lattice has been generalized to bounded trellises. A fundamental distinction between lattices and trellises lies in the fact that the underlying pseudo-order relation of a trellis need not be transitive. In this paper, we undertake an in-depth dissection of certain types of uninorms on bounded trellises, both from a construction and a characterization point of view. We begin by introducing the uninorms in the classes Umin and Umax on a bounded trellis and provide necessary and sufficient conditions for their characterization. Subsequently, we present two approaches for constructing uninorms on a bounded trellis by utilizing a uninorm defined on a closed subinterval of that bounded trellis. It is shown that these approaches yield uninorms on a bounded trellis that not only differ from the ones obtainable through existing methods, but also generalize those on a bounded lattice constructed via a uninorm defined on a closed subinterval of that bounded lattice.
{"title":"On the classes of uninorms Umin and Umax on bounded trellises","authors":"Ya-Ming Wang , Yexing Dan , Bernard De Baets","doi":"10.1016/j.ijar.2026.109627","DOIUrl":"10.1016/j.ijar.2026.109627","url":null,"abstract":"<div><div>Recently, the concept of a uninorm on a bounded lattice has been generalized to bounded trellises. A fundamental distinction between lattices and trellises lies in the fact that the underlying pseudo-order relation of a trellis does not need to be transitive. In this paper, we undertake an in-depth dissection of certain types of uninorms on bounded trellises, both from a construction and a characterization point of view. We begin by introducing the uninorms in the classes <span><math><msub><mi>U</mi><mi>min</mi></msub></math></span> and <span><math><msub><mi>U</mi><mi>max</mi></msub></math></span> on a bounded trellis and provide necessary and sufficient conditions for their characterization. Subsequently, we present two approaches for constructing uninorms on a bounded trellis by utilizing a uninorm defined on a closed subinterval of that bounded trellis. It is shown that these approaches yield uninorms on a bounded trellis that not only differ from the ones obtainable through existing methods, but also generalize those on a bounded lattice constructed via a uninorm defined on a closed subinterval of that bounded lattice.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"191 ","pages":"Article 109627"},"PeriodicalIF":3.0,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146074618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-06, DOI: 10.1016/j.ijar.2026.109626
Atiyeh Sayadi, Ryszard Janicki
This paper discusses in detail the reduction of inconsistency in qualitative and quantitative multiplicative pairwise comparison matrices by means of efficient genetic algorithms. Three new algorithms are presented and discussed: one for classical quantitative multiplicative pairwise comparisons, and two for a formal version of qualitative pairwise comparisons. For the quantitative case, a distance-based inconsistency index (Koczkodaj’s index) is used, and the effects of different factors on the algorithm’s efficiency and the quality of its results are analyzed. For the qualitative case, no numbers are used, so the evaluation functions are tailored to qualitative relations. In both the quantitative and qualitative cases, the genetic algorithms perform reliably, and in the qualitative case they show strong performance compared to the existing method we evaluated.
{"title":"Inconsistency reduction in pairwise comparison matrices using genetic algorithms","authors":"Atiyeh Sayadi, Ryszard Janicki","doi":"10.1016/j.ijar.2026.109626","DOIUrl":"10.1016/j.ijar.2026.109626","url":null,"abstract":"<div><div>This paper discusses inconsistency reduction in qualitative and quantitative multiplicative pairwise comparison matrices by applying efficient genetic algorithms in detail. Three new algorithms are presented and discussed. One for the classical quantitative multiplicative pairwise comparisons, and two for a formal version of qualitative pairwise comparisons. For the quantitative case, a distance-based inconsistency index (Koczkodaj’s index) is used. Moreover the effects of different factors on its efficiency and the quality of results are analyzed. For the qualitative case, no numbers are used, so evaluation functions are tailored to use qualitative relations. For both quantitative and qualitative cases, the genetic algorithms perform reliably, and in the qualitative case they show strong performance compared to the existing method we evaluated.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"191 ","pages":"Article 109626"},"PeriodicalIF":3.0,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145923703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-03, DOI: 10.1016/j.ijar.2026.109625
Linzi Yin, Anqi Liao, Zhanqi Li, Zhaohui Jiang
As an important branch of rough set theory, neighborhood rough set theory effectively addresses the problem of information loss originating from the discretization process. Nevertheless, the computational efficiency of existing parallel neighborhood algorithms remains limited. In this paper, a parallel attribute reduction algorithm based on a simplified neighborhood matrix is proposed and implemented with Apache Spark. First, we define a novel neighborhood matrix to describe the neighborhood relationships among objects. Next, the neighborhood matrix is divided into a simplified neighborhood matrix and a set of neighborhood information granules, which together are referred to as neighborhood knowledge in this paper. On this basis, a parallel attribute reduction algorithm based on the simplified neighborhood matrix is proposed. The new reduction algorithm utilizes Spark’s sorting technique to generate the simplified neighborhood matrix swiftly and employs Python’s interrupt capabilities to enhance computational efficiency. Theoretical analysis and experimental results show that the proposed algorithm preserves the consistency of neighborhood knowledge and exhibits excellent parallel performance, improving computational efficiency by 93.2%, 69.1%, and 80.4% compared to the benchmark algorithms.
{"title":"Parallel attribute reduction algorithm based on simplified neighborhood matrix with Apache Spark","authors":"Linzi Yin , Anqi Liao , Zhanqi Li , Zhaohui Jiang","doi":"10.1016/j.ijar.2026.109625","DOIUrl":"10.1016/j.ijar.2026.109625","url":null,"abstract":"<div><div>As an important branch of rough set theory, neighborhood rough set theory effectively addresses the problem of information loss originated from discretization process. Nevertheless, the computational efficiency of existing parallel neighborhood algorithms remains limited. In this paper, a parallel attribute reduction algorithm based on simplified neighborhood matrix is proposed and implemented with Apache Spark. Firstly, we define a novel neighborhood matrix to describe the neighborhood relationships among objects; Next, the neighborhood matrix is divided into a simplified neighborhood matrix and a set of neighborhood information granules, which is referred to as neighborhood knowledge in this paper. On the basis, a parallel attribute reduction algorithm is proposed based on simplified neighborhood matrix. The new reduction algorithm utilizes Spark’s sorting technique to generate the simplified neighborhood matrix swiftly and employs Python’s interrupt capabilities to enhance computational efficiency. Theoretical analysis and experimental results show that the proposed algorithm keeps the consistency of neighborhood knowledge and exhibits excellent parallel performance. It improves computational efficiency by 93.2%, 69.1%, and 80.4% compared to the benchmark algorithms.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"191 ","pages":"Article 109625"},"PeriodicalIF":3.0,"publicationDate":"2026-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145923701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-26, DOI: 10.1016/j.ijar.2025.109624
Jana Borzová, Miriam Kleinová, Lukáš Medvec
In order to overcome some limitations of the classical Hirsch index, Boczek et al. (2021) introduced the upper and lower n-Sugeno integrals, extending in particular the approach of Mesiar and Gagolewski (2016). In this paper, we concentrate on the upper n-Sugeno integral, which plays a central role in the definition of the Hirsch-Sugeno operator, a construction with significant potential in scientometrics. We investigate its theoretical properties and show, building on the results of Chitescu (2022), that although the upper n-Sugeno integral constitutes a genuine generalization of the classical Sugeno integral, in some cases the extended construction collapses back to its original form. Moreover, we demonstrate that the computation of the upper n-Sugeno integral can be reformulated as the problem of finding a midpoint of a level measure. This interpretation also connects it to the solution of certain nonlinear equations, including those arising in informetrics.
{"title":"Exploring the upper n-Sugeno integral: Theory and applications to scientometric index design","authors":"Jana Borzová, Miriam Kleinová, Lukáš Medvec","doi":"10.1016/j.ijar.2025.109624","DOIUrl":"10.1016/j.ijar.2025.109624","url":null,"abstract":"<div><div>In order to overcome some limitations of the classical Hirsch index, Boczek et al. (2021) introduced the upper and lower <span><math><mstyle><mi>n</mi></mstyle></math></span>-Sugeno integrals, extending in particular the approach of Mesiar and Gagolewski (2016). In this paper, we concentrate on the upper <span><math><mstyle><mi>n</mi></mstyle></math></span>-Sugeno integral, which plays a central role in the definition of the Hirsch-Sugeno operator, a construction with significant potential in scientometrics. We investigate its theoretical properties and show, building on the results of Chitescu (2022), that although the upper <span><math><mstyle><mi>n</mi></mstyle></math></span>-Sugeno integral constitutes a genuine generalization of the classical Sugeno integral, in some cases the extended construction collapses back to its original form. Moreover, we demonstrate that the computation of the upper <span><math><mstyle><mi>n</mi></mstyle></math></span>-Sugeno integral can be reformulated as the problem of finding a midpoint of a level measure. This interpretation also connects it to the solution of certain nonlinear equations, including those arising in informetrics.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"191 ","pages":"Article 109624"},"PeriodicalIF":3.0,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145904230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-25, DOI: 10.1016/j.ijar.2025.109622
Yasmin F. Cavaliere, Luís G. Esteves, Victor Fossaluza
Testing hypotheses is fundamental to any scientific investigation or data-driven decision-making process. Since Neyman and Pearson systematized hypothesis testing, this statistical procedure has significantly contributed to the development of competing theories of statistical inference. Common approaches to hypothesis testing include significance tests, most powerful tests, likelihood ratio tests, and Bayesian tests. However, practitioners often use evidence measures, such as p-values, the Pereira-Stern e-value, and likelihood ratio statistics, in lieu of the reject-or-fail-to-reject approach proposed by Neyman and Pearson, as they provide a more nuanced understanding of statistical hypotheses from data. This study proposes an axiomatic development of belief relations representing the extent to which sample data support a statistical hypothesis, consistent with a few logical requirements that capture, in a sense, the Onus Probandi principle in law. It also examines whether the above-mentioned evidence measures are reasonable mathematical representations of such belief relations, that is, of how much a sample supports a hypothesis. It shows that for discrete parameter and sample spaces, the measure of evidence by Pereira and Stern is a fair representation of such a belief relation, especially for Bayesian decision-makers, as it formally accounts for the uncertainty about the unknown parameter while inducing a relation that coincides with the belief relation meeting the axioms. This result renders the Pereira-Stern e-value a genuine measure of support for statistical hypotheses in the discrete case, in addition to its recognized importance in the continuous case.
{"title":"An axiomatic development of Pereira-Stern e-value as a measure of support for statistical hypotheses","authors":"Yasmin F. Cavaliere, Luís G. Esteves, Victor Fossaluza","doi":"10.1016/j.ijar.2025.109622","DOIUrl":"10.1016/j.ijar.2025.109622","url":null,"abstract":"<div><div>Testing hypotheses is fundamental to any scientific investigation or data-driven decision-making process. Since Neyman and Pearson systematized hypothesis testing, this statistical procedure has significantly contributed to the development of competing theories of statistical inference. Common approaches to hypothesis testing include significance tests, most powerful tests, likelihood ratio tests, and Bayesian tests. However, practitioners often use evidence measures, such as p-values, the Pereira-Stern e-value and likelihood ratio statistics, in lieu of the reject-or-fail-to-reject approach proposed by Neyman and Pearson, as they provide a more nuanced understanding of statistical hypotheses from data. This study proposes an axiomatic development of belief relations representing the extent to which sample data support a statistical hypothesis, which is consistent with a few logical requirements that capture, in a sense, the Onus Probandi principle in law. It also examines whether the above-mentioned evidence measures are reasonable mathematical representations of such belief relations, that is, of how much a sample supports a hypothesis. It shows that for discrete parameter and sample spaces, the measure of evidence by Pereira and Stern is a fair representation of such a belief relation, especially for Bayesian decision-makers as it formally considers the uncertainty one has about the unknown parameter at the same time it induces a relation that coincides with the belief relation meeting the axioms. This result renders the Pereira-Stern e-value a genuine measure of support for statistical hypotheses in the discrete case, in addition to its recognized importance in the continuous case.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"190 ","pages":"Article 109622"},"PeriodicalIF":3.0,"publicationDate":"2025-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145880983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-22, DOI: 10.1016/j.ijar.2025.109620
Yuedong Zheng, Bao Qing Hu, Guitian He
As a pivotal theoretical branch of three-way decision (3WD), the concept of a Three-way Decision Space (3WDS) effectively unifies 3WD models over fuzzy lattices and other partially ordered sets, providing researchers with a comprehensive information system for 3WD. While various types of 3WDSs have been extensively studied, decision-makers often benefit from a wider array of options to achieve better outcomes. To address this need and enrich the decision-making toolkit, this paper introduces novel constructions of decision evaluation functions (DEFs). First, starting from the core axiomatic notion of a 3WDS, the DEF, this paper introduces automorphisms on bounded posets to construct such functions, deriving numerous novel DEFs. Second, this paper proposes new methods for transforming semi-decision evaluation functions (S-DEFs) and quasi-decision evaluation functions (Q-DEFs) into DEFs, extending the methodological toolkit. Finally, this paper investigates the interplay between automorphisms and involutive negations on bounded posets, with a focused analysis of their properties in the context of the truth value set 2^[0,1] − {∅}.
{"title":"New construction of decision evaluation functions on three-way decision spaces based on automorphisms of hesitant fuzzy truth values","authors":"Yuedong Zheng , Bao Qing Hu , Guitian He","doi":"10.1016/j.ijar.2025.109620","DOIUrl":"10.1016/j.ijar.2025.109620","url":null,"abstract":"<div><div>As a pivotal theoretical branch of three-way decision (3WD), the concept of Three-way Decision Space (3WDS) effectively unifies 3WD models within fuzzy lattices and other partially ordered sets, providing researchers with a comprehensive information system for 3WD. While various types of 3WDSs have been extensively studied, decision-makers often benefit from a wider array of options to achieve better outcomes. To address this need and enrich the decision-making toolkit, this paper introduces novel construction of decision evaluation functions (DEFs). At first, based on the core axiomatic definitions in 3WDSs- DEF -this paper introduces automorphisms on bounded posets to construct such functions, deriving numerous novel DEFs. The second, this paper proposes new transformation methods from semi-decision evaluation functions (S-DEFs) (resp., quasi-decision evaluation functions (Q-DEFs)) to DEFs, extending the methodological toolkit. And then, this paper investigates the interplay between automorphisms and involution negations on bounded posets, with a focused analysis of their properties in the context of truth value set <span><math><mrow><msup><mn>2</mn><mrow><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>]</mo></mrow></msup><mo>−</mo><mrow><mo>{</mo><mi>⌀</mi><mo>}</mo></mrow></mrow></math></span>.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"190 ","pages":"Article 109620"},"PeriodicalIF":3.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145880982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}