Recently, Michał Baczyński et al. presented the fuzzy Sheffer stroke within the framework of fuzzy logic, which generalizes the classical operation when truth values are restricted to the set {0, 1}². Inspired by this work, we investigate the fuzzy Peirce arrow as the dual of the fuzzy Sheffer stroke, and further introduce the UEL functions as a unification of fuzzy Sheffer strokes and fuzzy Peirce arrows. First, some basic properties of the fuzzy Peirce arrow are presented, and construction methods for the most important fuzzy logical connectives from the fuzzy Peirce arrow, obtained in a dual manner, are proposed. Second, the concept of UEL functions, which unifies the fuzzy Sheffer stroke and the fuzzy Peirce arrow, is introduced, and several properties of such functions are derived. Specifically, the relationship between UEL functions and nullnorms is analyzed, and the N-dual and De Morgan's laws for UEL functions are studied. Third, several construction methods for UEL functions are explored. Finally, a practical application example of the UEL functions in the design of automatic door and window opening-closing systems is presented.
Title: Unification of fuzzy Sheffer stroke and fuzzy Peirce arrow. Authors: Meiping Zhang, Feng-Xia Zhang, Xiaoheng Zhang, Maoen Qin. DOI: 10.1016/j.ijar.2026.109640. International Journal of Approximate Reasoning, vol. 192, Article 109640, May 2026.
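As a concrete illustration of the duality discussed above, the classical Sheffer stroke (NAND) and Peirce arrow (NOR) can be extended pointwise to [0, 1] through a t-norm T, a t-conorm S, and a fuzzy negation N. The sketch below uses T = min, S = max, and the standard negation N(x) = 1 − x; these are illustrative choices, not necessarily the definitions used in the paper:

```python
# Toy fuzzy Sheffer stroke and Peirce arrow (illustrative construction).
T = min                      # t-norm: minimum
S = max                      # t-conorm: maximum
N = lambda x: 1.0 - x        # standard fuzzy negation

def sheffer(x, y):
    """Fuzzy NAND candidate: N(T(x, y))."""
    return N(T(x, y))

def peirce(x, y):
    """Fuzzy NOR candidate: N(S(x, y))."""
    return N(S(x, y))

# Restricted to {0, 1} both collapse to the classical connectives.
for x in (0.0, 1.0):
    for y in (0.0, 1.0):
        assert sheffer(x, y) == 1.0 - min(x, y)   # classical NAND
        assert peirce(x, y) == 1.0 - max(x, y)    # classical NOR

# N-duality: x ↓ y equals N( N(x) | N(y) ).
x, y = 0.3, 0.8
assert abs(peirce(x, y) - N(sheffer(N(x), N(y)))) < 1e-12
```

With these choices the Peirce arrow is exactly the N-dual of the Sheffer stroke, mirroring the classical relationship between NOR and NAND.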
Pub Date: 2026-05-01. Epub Date: 2026-01-28. DOI: 10.1016/j.ijar.2026.109637
Qiaoyi Li, Yiyu Yao
One of the claims regarding the power of granular computing concerns its computational efficiency and practical applicability in solving complex problems. Most discussions supporting this claim rest on intuitive arguments, or on examples that equate the complexity of granules and granular structures with their granularity. In 2019, Matthew Yao (Knowledge-Based Systems 163 (2019) 885–897) argued that granularity and complexity are two related but distinct concepts. To quantify complexity, he proposed a class of interaction-based measures. Unfortunately, this direction of research has not received its due attention. To further promote theoretical studies of the concept of complexity in granular computing, in this paper we conduct an in-depth and systematic analysis of complexity measures in a progressive partitioning model of granular computing. Taking interactions among components as the underlying notion for explaining the complexity of problem-solving with granular computing, we investigate a class of interaction-based complexity measures. As a step towards systematic research in pursuit of a deeper understanding of the fundamental notion of complexity in granular computing, we move beyond intuitive arguments and aim at a sound theoretical foundation.
Title: Interaction-based complexity measures in a progressive partition model of granular computing. Authors: Qiaoyi Li, Yiyu Yao. DOI: 10.1016/j.ijar.2026.109637. International Journal of Approximate Reasoning, vol. 192, Article 109637, May 2026.
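One simple way to make "interactions among components" concrete on a partition model is to count element pairs that fall in different blocks, since those are the pairs a block-local solver cannot resolve internally. This toy proxy is a hypothetical stand-in, not one of the measures studied in the paper:

```python
from itertools import combinations

def cross_block_interactions(partition):
    """Hypothetical interaction count: number of element pairs lying in
    different blocks of the partition. Coarser partitions hide fewer
    interactions inside blocks; singletons expose them all."""
    blocks = [set(b) for b in partition]
    return sum(len(a) * len(b) for a, b in combinations(blocks, 2))

coarse = [[1, 2, 3, 4]]          # one granule: no cross-granule pairs
fine = [[1], [2], [3], [4]]      # singletons: every pair interacts
assert cross_block_interactions(coarse) == 0
assert cross_block_interactions(fine) == 6   # C(4, 2) pairs
```

Progressively refining a partition monotonically increases this count, which is the kind of granularity-versus-complexity trade-off the abstract alludes to.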
Pub Date: 2026-05-01. Epub Date: 2026-01-30. DOI: 10.1016/j.ijar.2026.109636
Igor Douven , Nikolaus Kriegeskorte
The debate about which scoring rule best measures the accuracy of our credences has largely been conducted on an a priori basis. We pursue an empirical approach, asking which rule best predicts a practical, decision-relevant criterion: Lockean accuracy, the ability to make correct categorical judgments based on a threshold of belief. Analyzing a large dataset of probability judgments, we compare the most widely used scoring rules (Brier, logarithmic, spherical, absolute error, and power rules) and find that, among them, there is no single best one. Instead, the optimal choice is context dependent: the Spherical score is the best predictor for lower belief thresholds, while the Power³ rule is best at higher thresholds. In particular, the widely used Brier and log scores are rarely optimal for this task. A mediation analysis reveals that while much of a rule’s success is explained by its ability to reward calibration and sharpness, the Spherical and Brier rules retain significant predictive power independently of these standard virtues.
Title: Predicting Lockean from gradational accuracy. Authors: Igor Douven, Nikolaus Kriegeskorte. DOI: 10.1016/j.ijar.2026.109636. International Journal of Approximate Reasoning, vol. 192, Article 109636, May 2026.
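For a binary forecast, the scoring rules compared above can be computed directly. The sketch below implements the Brier (quadratic), logarithmic, and spherical scores under a larger-is-better convention; sign and normalization conventions vary across the literature, so treat these formulas as one common choice rather than the paper's exact setup:

```python
import math

def scores(p, y):
    """Accuracy scores for a binary forecast p = P(event), outcome y in {0, 1}.
    Larger is better under the conventions used here."""
    py = p if y == 1 else 1.0 - p            # probability assigned to the outcome
    norm = math.sqrt(p ** 2 + (1.0 - p) ** 2)
    return {
        "brier":     1.0 - (y - p) ** 2,     # quadratic score
        "log":       math.log(py),           # logarithmic score
        "spherical": py / norm,              # spherical score
    }

s = scores(0.8, 1)   # confident and correct
t = scores(0.8, 0)   # confident and wrong
assert s["brier"] > t["brier"]
assert s["log"] > t["log"]
assert s["spherical"] > t["spherical"]
```

All three rules reward the correct confident forecast over the incorrect one, but they penalize overconfidence at different rates, which is why their rankings of forecasters can diverge.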
Pub Date: 2026-05-01. Epub Date: 2026-02-02. DOI: 10.1016/j.ijar.2026.109639
Tianli Su, Yanling Bao
Many existing interval-valued three-way decision methods primarily rely on crisp similarity assessments. Moreover, they usually focus on single-objective modeling based on utility maximization or loss minimization, which may lead to conservative and biased outcomes. To address these issues, this paper develops a similarity degree-based interval-valued three-way decision method that integrates utilities and losses within a unified framework. Specifically, we define similarity degrees between interval values via a distance function, and further derive similarity-driven conditional probabilities and relative utility and loss functions. On this basis, we compute expected utilities and expected losses, and combine them via a risk-preference coefficient to obtain the expected values of alternatives, which can be used to reasonably categorize and rank the alternatives. The proposed method is applied to mine potential evaluation, and the evaluation results exhibit a high level of agreement with practical exploitation outcomes. Comparative analyses and sensitivity experiments further confirm that the method achieves more accurate and robust decision performance than several representative methods.
Title: A similarity degree-based three-way decision method with application to mine potential evaluation. Authors: Tianli Su, Yanling Bao. DOI: 10.1016/j.ijar.2026.109639. International Journal of Approximate Reasoning, vol. 192, Article 109639, May 2026.
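A toy version of the first step of the pipeline: a distance between interval values and the similarity degree derived from it, followed by blending an expected utility and expected loss through a risk-preference coefficient. The specific distance and blend below are illustrative choices, not necessarily the paper's definitions:

```python
def interval_distance(a, b):
    """A simple distance between intervals a = [a1, a2] and b = [b1, b2]
    with endpoints in [0, 1] (illustrative choice)."""
    return 0.5 * (abs(a[0] - b[0]) + abs(a[1] - b[1]))

def similarity(a, b):
    """Map the distance in [0, 1] to a similarity degree in [0, 1]."""
    return 1.0 - interval_distance(a, b)

def expected_value(eu, el, theta=0.5):
    """Blend expected utility and expected loss with a risk-preference
    coefficient theta in [0, 1] (hypothetical combination rule)."""
    return theta * eu - (1.0 - theta) * el

assert similarity([0.2, 0.4], [0.2, 0.4]) == 1.0   # identical intervals
assert similarity([0.0, 0.0], [1.0, 1.0]) == 0.0   # maximally far apart
assert 0.0 < similarity([0.2, 0.4], [0.3, 0.6]) < 1.0
assert expected_value(0.8, 0.2, theta=0.7) > expected_value(0.8, 0.2, theta=0.3)
```

Ranking alternatives by such blended expected values is what allows the method to avoid the single-objective (utility-only or loss-only) bias the abstract criticizes.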
Exploring the application of the three-way decision model in multi-criteria ranking problems from the perspective of criterion-oriented fuzzy concepts has emerged as a promising research direction in three-way multi-criteria decision-making. This direction aims to fully meet the decision-maker's fundamental needs and to achieve a comprehensive ranking of all objects. However, since the difference between the criterion-oriented fuzzy concept and the criterion evaluation value of an object may be negative, the existing criterion-oriented three-way decision model, which cannot effectively handle negative values, risks deviations in the overall ranking results. To address this issue, this paper introduces regret theory to transform this difference into a regret-rejoice value for the decision-maker, thereby constructing a novel criterion-oriented three-way decision model and further proposing a three-stage ranking mechanism. First, all objects are preliminarily classified qualitatively based on the relative magnitude of the criterion-oriented fuzzy concept and the criterion evaluation value of each object, yielding four object subsets. Second, the constructed criterion-oriented three-way decision model, combined with a weight aggregation method, is employed to determine the internal ranking relationships within these four subsets. Notably, within this three-way decision model, a calculation method for relative loss functions with precision control and a calculation method for conditional probability based on fuzzy clustering results are proposed. Finally, the ranking among subsets is determined by considering the semantic relationships between them, and the overall ranking of all objects is obtained by integrating the internal ranking results of the subsets. Numerical analysis, comparative analysis, and experimental verification fully demonstrate the effectiveness and superiority of the proposed three-stage ranking mechanism.
Title: Three-stage ranking mechanism: A decision-making fusion of regret psychology and criterion-oriented three-way decision. Authors: Zhenni Ding, Kai Zhang, Haibo Jiang, Ligang Zhou. DOI: 10.1016/j.ijar.2026.109641. International Journal of Approximate Reasoning, vol. 192, Article 109641, May 2026.
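The transformation of a signed difference into a regret-rejoice value is often modeled in regret theory with an exponential function R(Δ) = 1 − exp(−δΔ), where δ > 0 is a regret-aversion coefficient. A minimal sketch under that common assumption (the paper's exact function may differ):

```python
import math

def regret_rejoice(delta, aversion=0.3):
    """Regret-theory transform R(Δ) = 1 - exp(-δ·Δ): negative differences
    yield regret (R < 0), positive ones yield rejoice (R > 0).
    Illustrative textbook form, not necessarily the paper's definition."""
    return 1.0 - math.exp(-aversion * delta)

assert regret_rejoice(0.0) == 0.0
assert regret_rejoice(-0.5) < 0.0 < regret_rejoice(0.5)
# Regret looms larger than rejoicing of the same magnitude:
assert abs(regret_rejoice(-0.5)) > regret_rejoice(0.5)
```

The asymmetry in the last assertion is precisely what lets a model of this shape handle negative differences non-trivially, instead of treating them symmetrically with positive ones.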
Pub Date: 2026-04-01. Epub Date: 2026-01-09. DOI: 10.1016/j.ijar.2026.109627
Ya-Ming Wang , Yexing Dan , Bernard De Baets
Recently, the concept of a uninorm on a bounded lattice has been generalized to bounded trellises. A fundamental distinction between lattices and trellises lies in the fact that the underlying pseudo-order relation of a trellis need not be transitive. In this paper, we undertake an in-depth dissection of certain types of uninorms on bounded trellises, from both a construction and a characterization point of view. We begin by introducing the uninorms in the classes Umin and Umax on a bounded trellis and provide necessary and sufficient conditions for their characterization. Subsequently, we present two approaches for constructing uninorms on a bounded trellis by utilizing a uninorm defined on a closed subinterval of that bounded trellis. It is shown that these approaches yield uninorms on a bounded trellis that not only differ from those obtainable through existing methods, but also generalize those on a bounded lattice constructed via a uninorm defined on a closed subinterval of that bounded lattice.
Title: On the classes of uninorms Umin and Umax on bounded trellises. Authors: Ya-Ming Wang, Yexing Dan, Bernard De Baets. DOI: 10.1016/j.ijar.2026.109627. International Journal of Approximate Reasoning, vol. 191, Article 109627, April 2026.
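On the unit interval, the flavor of the Umin class can be sketched with the well-known idempotent uninorm that behaves like a t-norm below the neutral element e and like a t-conorm above it, taking min on the mixed region. This is a standard lattice construction; the trellis-specific (non-transitive) subtleties the paper studies are not modeled here:

```python
def uninorm_min(x, y, e=0.5):
    """Umin-style idempotent uninorm on [0, 1] with neutral element e:
    min on [0, e]^2 (t-norm part), max on [e, 1]^2 (t-conorm part),
    and min on the mixed region."""
    if x <= e and y <= e:
        return min(x, y)
    if x >= e and y >= e:
        return max(x, y)
    return min(x, y)

e = 0.5
for v in (0.0, 0.2, 0.5, 0.9, 1.0):
    assert uninorm_min(v, e) == v          # e acts as the neutral element
assert uninorm_min(0.2, 0.3) == 0.2        # conjunctive below e
assert uninorm_min(0.7, 0.9) == 0.9        # disjunctive above e
```

The defining feature of a uninorm is visible in the first loop: unlike a t-norm (neutral element 1) or a t-conorm (neutral element 0), the neutral element e sits strictly inside the interval.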
Pub Date: 2026-04-01. Epub Date: 2026-01-06. DOI: 10.1016/j.ijar.2026.109626
Atiyeh Sayadi, Ryszard Janicki
This paper discusses in detail inconsistency reduction in qualitative and quantitative multiplicative pairwise comparison matrices by applying efficient genetic algorithms. Three new algorithms are presented and discussed: one for classical quantitative multiplicative pairwise comparisons, and two for a formal version of qualitative pairwise comparisons. For the quantitative case, a distance-based inconsistency index (Koczkodaj's index) is used, and the effects of different factors on the algorithm's efficiency and the quality of its results are analyzed. The qualitative case involves no numbers, so the evaluation functions are tailored to qualitative relations. In both the quantitative and qualitative cases, the genetic algorithms perform reliably, and in the qualitative case they show strong performance compared to the existing method we evaluated.
Title: Inconsistency reduction in pairwise comparison matrices using genetic algorithms. Authors: Atiyeh Sayadi, Ryszard Janicki. DOI: 10.1016/j.ijar.2026.109626. International Journal of Approximate Reasoning, vol. 191, Article 109626, April 2026.
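Koczkodaj's distance-based inconsistency index, mentioned above for the quantitative case, measures the worst deviation of any triad (i, k, j) from the multiplicative consistency condition a_ij = a_ik · a_kj. A minimal sketch of the index itself (the genetic-algorithm reduction machinery is not modeled):

```python
from itertools import combinations

def koczkodaj_index(A):
    """Koczkodaj's inconsistency index of a multiplicative pairwise
    comparison matrix A (reciprocal: A[j][i] == 1 / A[i][j]).
    Returns 0 for a fully consistent matrix; values near 1 indicate
    a badly broken triad."""
    n = len(A)
    worst = 0.0
    for i, k, j in combinations(range(n), 3):
        direct, chained = A[i][j], A[i][k] * A[k][j]
        dev = min(abs(1 - direct / chained), abs(1 - chained / direct))
        worst = max(worst, dev)
    return worst

consistent = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]   # a_ij = a_ik * a_kj
assert koczkodaj_index(consistent) == 0.0
inconsistent = [[1, 2, 1], [1/2, 1, 2], [1, 1/2, 1]]
assert koczkodaj_index(inconsistent) > 0.0
```

An inconsistency-reduction algorithm then searches for a nearby reciprocal matrix whose index falls below a chosen threshold, which is the optimization task the genetic algorithms in the paper address.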
Pub Date: 2026-04-01. Epub Date: 2026-01-03. DOI: 10.1016/j.ijar.2026.109625
Linzi Yin , Anqi Liao , Zhanqi Li , Zhaohui Jiang
As an important branch of rough set theory, neighborhood rough set theory effectively addresses the problem of information loss originating from the discretization process. Nevertheless, the computational efficiency of existing parallel neighborhood algorithms remains limited. In this paper, a parallel attribute reduction algorithm based on a simplified neighborhood matrix is proposed and implemented with Apache Spark. First, we define a novel neighborhood matrix to describe the neighborhood relationships among objects. Next, the neighborhood matrix is divided into a simplified neighborhood matrix and a set of neighborhood information granules, which together are referred to as neighborhood knowledge in this paper. On this basis, a parallel attribute reduction algorithm based on the simplified neighborhood matrix is proposed. The new reduction algorithm utilizes Spark's sorting technique to generate the simplified neighborhood matrix swiftly and employs Python's interrupt capabilities to enhance computational efficiency. Theoretical analysis and experimental results show that the proposed algorithm preserves the consistency of neighborhood knowledge and exhibits excellent parallel performance, improving computational efficiency by 93.2%, 69.1%, and 80.4% compared to the benchmark algorithms.
Title: Parallel attribute reduction algorithm based on simplified neighborhood matrix with Apache Spark. Authors: Linzi Yin, Anqi Liao, Zhanqi Li, Zhaohui Jiang. DOI: 10.1016/j.ijar.2026.109625. International Journal of Approximate Reasoning, vol. 191, Article 109625, April 2026.
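The neighborhood idea underlying such algorithms can be sketched in a few lines: each object's neighborhood is the set of objects within distance δ, and a decision class is then approximated from below and above by those neighborhoods. A 1-D toy example; the paper's matrix-based formulation and Spark parallelization are not modeled:

```python
def neighborhood(i, data, delta):
    """Objects within distance delta of object i (1-D data for brevity)."""
    return {j for j, v in enumerate(data) if abs(v - data[i]) <= delta}

def approximations(data, delta, target):
    """Neighborhood lower/upper approximations of a target set of objects."""
    lower = {i for i in range(len(data))
             if neighborhood(i, data, delta) <= target}      # subset test
    upper = {i for i in range(len(data))
             if neighborhood(i, data, delta) & target}        # overlap test
    return lower, upper

data = [0.1, 0.15, 0.5, 0.9]
target = {0, 1}                       # a decision class
lower, upper = approximations(data, 0.1, target)
assert lower == {0, 1} and upper == {0, 1}   # class is exactly definable here
```

Because no discretization is needed, the continuous values enter the approximation directly through δ, which is the information-loss advantage the abstract refers to; attribute reduction then searches for attribute subsets that keep these approximations unchanged.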
Pub Date: 2026-04-01. Epub Date: 2025-12-26. DOI: 10.1016/j.ijar.2025.109624
Jana Borzová, Miriam Kleinová, Lukáš Medvec
In order to overcome some limitations of the classical Hirsch index, Boczek et al. (2021) introduced the upper and lower n-Sugeno integrals, extending in particular the approach of Mesiar and Gagolewski (2016). In this paper, we concentrate on the upper n-Sugeno integral, which plays a central role in the definition of the Hirsch-Sugeno operator, a construction with significant potential in scientometrics. We investigate its theoretical properties and show, building on the results of Chitescu (2022), that although the upper n-Sugeno integral constitutes a genuine generalization of the classical Sugeno integral, in some cases the extended construction collapses back to its original form. Moreover, we demonstrate that the computation of the upper n-Sugeno integral can be reformulated as the problem of finding a midpoint of a level measure. This interpretation also connects it to the solution of certain nonlinear equations, including those arising in informetrics.
Title: Exploring the upper n-Sugeno integral: Theory and applications to scientometric index design. Authors: Jana Borzová, Miriam Kleinová, Lukáš Medvec. DOI: 10.1016/j.ijar.2025.109624. International Journal of Approximate Reasoning, vol. 191, Article 109624, April 2026.
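For reference, the classical Sugeno integral that the upper n-Sugeno integral generalizes can be computed as Su(f) = max_i min(f(x_(i)), μ(A_(i))), where the values are sorted increasingly and A_(i) is the upper tail {x_(i), ..., x_(n)}. The sketch below covers only this classical baseline, not the paper's extended construction:

```python
def sugeno_integral(values, mu):
    """Classical Sugeno integral of a function (dict: point -> value in [0,1])
    w.r.t. a capacity mu (dict: frozenset -> value in [0,1]):
        Su(f) = max_i min( f(x_(i)), mu({x_(i), ..., x_(n)}) )
    with points sorted by increasing value."""
    pts = sorted(values, key=values.get)
    best = 0.0
    for i, p in enumerate(pts):
        tail = frozenset(pts[i:])
        best = max(best, min(values[p], mu[tail]))
    return best

# Counting-style capacity on {a, b, c}: mu(A) = |A| / 3.
mu = {frozenset(s): len(s) / 3 for s in
      ([], ['a'], ['b'], ['c'], ['a', 'b'], ['a', 'c'], ['b', 'c'],
       ['a', 'b', 'c'])}
f = {'a': 0.2, 'b': 0.5, 'c': 0.9}
assert abs(sugeno_integral(f, mu) - 0.5) < 1e-12
```

With a counting capacity, as in this example, the Sugeno integral behaves much like a Hirsch-style index: it returns the largest level h such that at least "measure h" of the points reach value h, which is why Sugeno-type integrals appear in scientometric index design.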
Pub Date: 2026-04-01. Epub Date: 2026-01-21. DOI: 10.1016/j.ijar.2026.109634
Ben Chugg , Tyron Lardy , Aaditya Ramdas , Peter Grünwald
The validity of classical hypothesis testing requires that the significance level α be fixed before any statistical analysis takes place. This is a stringent requirement. For instance, it prohibits updating α during (or after) an experiment due to changing concern about the cost of false positives, or to reflect unexpectedly strong evidence against the null. Perhaps most disturbingly, witnessing a p-value p ≪ α vs. p = α − ϵ for tiny ϵ > 0 has no (statistical) relevance for any downstream decision-making. Following recent work of Grünwald [1], we develop a theory of post-hoc hypothesis testing, enabling α to be chosen after seeing and analyzing the data. To study "good" post-hoc tests we introduce Γ-admissibility, where Γ is a set of adversaries which map the data to a significance level. We classify the set of Γ-admissible rules for various sets Γ, showing they must be based on e-values, and recover the Neyman-Pearson lemma when Γ is the constant map.
Title: On admissibility in post-hoc hypothesis testing. Authors: Ben Chugg, Tyron Lardy, Aaditya Ramdas, Peter Grünwald. DOI: 10.1016/j.ijar.2026.109634. International Journal of Approximate Reasoning, vol. 191, Article 109634, April 2026.
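The connection to e-values can be illustrated with the basic post-hoc guarantee: an e-value is a nonnegative statistic with expectation at most 1 under the null, and rejecting whenever e ≥ 1/α is valid even when α is chosen after seeing the data, by Markov's inequality. A toy simulation under the null; this is illustrative only, not the paper's construction:

```python
import math
import random

def posthoc_reject(e_value, alpha):
    """Reject whenever e >= 1/alpha. For any (even data-dependent) alpha,
    Markov's inequality gives P(e >= 1/alpha) <= alpha under the null."""
    return e_value >= 1.0 / alpha

random.seed(1)
mu = 1.0
# Under the null, the likelihood ratio e = exp(mu*Z - mu^2/2) with Z ~ N(0,1)
# has expectation 1, so it is a valid e-value.
trials = 20000
alpha = 0.05
rejections = sum(
    posthoc_reject(math.exp(mu * random.gauss(0, 1) - mu ** 2 / 2), alpha)
    for _ in range(trials))
assert rejections / trials <= alpha   # type-I error stays below alpha
```

Note the contrast with p-values: a p-value only licenses rejection at the pre-specified α, whereas the e-value threshold 1/α can be applied at any level chosen post hoc, which is exactly the freedom the admissibility analysis above formalizes.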