PETS: Predicting efficiently using temporal symmetries in temporal probabilistic graphical models
Pub Date: 2025-01-23 | DOI: 10.1016/j.ijar.2025.109370
Florian Andreas Marwitz, Ralf Möller, Marcel Gehrke
In Dynamic Bayesian Networks, time is considered discrete; in medical applications, a time step can correspond to, for example, one day. Existing temporal inference algorithms process each time step sequentially, making long-term predictions computationally expensive. We present an exact, GPU-optimizable approach exploiting symmetries over time for prediction queries, which constructs a matrix for the underlying temporal process in a preprocessing step. Additionally, we construct a vector for each query capturing the probability distribution at the current time step. Then, we time-warp into the future by matrix exponentiation. In our empirical evaluation, we show an order of magnitude speedup over the interface algorithm. The work-heavy preprocessing step can be done offline, and the runtime of prediction queries is significantly reduced. Therefore, we can handle application problems that could not be handled efficiently before.
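The underlying idea is easy to prototype outside the paper's setting: for a plain discrete-time Markov chain (a minimal sketch with made-up numbers, not the PETS construction itself), a k-step prediction is the current distribution multiplied by the k-th power of the transition matrix, which fast exponentiation computes without rolling out the k intermediate time steps.

```python
import numpy as np

# Illustrative transition matrix of a 3-state daily process; T[i, j] = P(state j tomorrow | state i today).
T = np.array([
    [0.90, 0.08, 0.02],   # healthy -> healthy / sick / hospitalized (made-up numbers)
    [0.30, 0.60, 0.10],
    [0.05, 0.45, 0.50],
])
p0 = np.array([1.0, 0.0, 0.0])   # probability distribution at the current time step

def predict(p, T, k):
    """Distribution k steps ahead: p @ T^k, with T^k computed by fast matrix exponentiation."""
    return p @ np.linalg.matrix_power(T, k)

print(predict(p0, T, 365))       # one-year prediction in one shot, no step-by-step rollout
```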
{"title":"PETS: Predicting efficiently using temporal symmetries in temporal probabilistic graphical models","authors":"Florian Andreas Marwitz, Ralf Möller, Marcel Gehrke","doi":"10.1016/j.ijar.2025.109370","DOIUrl":"10.1016/j.ijar.2025.109370","url":null,"abstract":"<div><div>In Dynamic Bayesian Networks, time is considered discrete: In medical applications, a time step can correspond to, for example, one day. Existing temporal inference algorithms process each time step sequentially, making long-term predictions computationally expensive. We present an exact, GPU-optimizable approach exploiting symmetries over time for prediction queries, which constructs a matrix for the underlying temporal process in a preprocessing step. Additionally, we construct a vector for each query capturing the probability distribution at the current time step. Then, we time-warp into the future by matrix exponentiation. In our empirical evaluation, we show an order of magnitude speedup over the interface algorithm. The work-heavy preprocessing step can be done offline, and the runtime of prediction queries is significantly reduced. Therefore, we can handle application problems that could not be handled efficiently before.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109370"},"PeriodicalIF":3.2,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143093632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fuzzy rough set attribute reduction based on decision ball model
Pub Date: 2025-01-23 | DOI: 10.1016/j.ijar.2025.109364
Xia Ji, Wanyu Duan, Jianhua Peng, Sheng Yao
Attribute reduction is a crucial step in data preprocessing in the field of data mining. Accurate measurement of the classification ability of attribute sets stands as a central issue in attribute reduction research. Existing fuzzy rough set attribute reduction algorithms measure the classification ability of attribute sets by evaluating the proximity between fuzzy similarity classes and decision classes. However, the granularity of the decision class is too large to reflect the data distribution within the decision class, which may lead to misclassification of samples, thus affecting the effectiveness of attribute reduction. To address this problem, we refine the decision class to propose the concept of the decision ball, and study a new extended fuzzy rough set model based on decision balls. In this model, decision balls serve as the evaluation granularity, facilitating the fitting of data distributions and measuring the classification ability of attributes. Expanding on this foundation, we have designed a fuzzy rough set attribute reduction algorithm based on the decision ball model (DBFRS). We conducted extensive comparative experiments involving 9 state-of-the-art attribute reduction algorithms on 18 public datasets. Experimental results demonstrate that DBFRS attains high classification accuracy. Moreover, DBFRS exhibits better reduction performance on large and high-dimensional datasets. Compared to current fuzzy rough set methods, DBFRS demonstrates better applicability.
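For a rough sketch of the kind of machinery involved, the snippet below computes the classical fuzzy rough dependency degree (Kleene-Dienes implicator, min t-norm) and runs a greedy forward search; it is a generic baseline, not the paper's decision-ball construction, and the data and thresholds are purely illustrative.

```python
import numpy as np

def fuzzy_similarity(X, attrs):
    """R[i, j] in [0, 1]: fuzzy similarity of samples i and j on the chosen attributes."""
    R = np.ones((len(X), len(X)))
    for a in attrs:
        col = X[:, a]
        rng = col.max() - col.min()
        rng = rng if rng > 0 else 1.0
        R = np.minimum(R, 1.0 - np.abs(col[:, None] - col[None, :]) / rng)
    return R

def dependency(X, y, attrs):
    """Mean membership of samples in the fuzzy-rough positive region of their own decision class."""
    if not attrs:
        return 0.0
    R = fuzzy_similarity(X, attrs)
    same_class = (y[:, None] == y[None, :]).astype(float)
    lower = np.min(np.maximum(1.0 - R, same_class), axis=1)   # Kleene-Dienes lower approximation
    return float(lower.mean())

def greedy_reduct(X, y, eps=1e-3):
    remaining, reduct, best = set(range(X.shape[1])), [], 0.0
    while remaining:
        cand = max(remaining, key=lambda a: dependency(X, y, reduct + [a]))
        score = dependency(X, y, reduct + [cand])
        if score - best <= eps:
            break
        reduct.append(cand); remaining.remove(cand); best = score
    return reduct

X = np.array([[0.1, 5.0], [0.2, 4.8], [0.9, 1.0], [0.8, 1.2]])
y = np.array([0, 0, 1, 1])
print(greedy_reduct(X, y))   # a single attribute already separates the two classes here
```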
{"title":"Fuzzy rough set attribute reduction based on decision ball model","authors":"Xia Ji , Wanyu Duan , Jianhua Peng , Sheng Yao","doi":"10.1016/j.ijar.2025.109364","DOIUrl":"10.1016/j.ijar.2025.109364","url":null,"abstract":"<div><div>Attribute reduction is a crucial step in data preprocessing in the field of data mining. Accurate measurement of the classification ability of attribute sets stands a central issue in attribute reduction research. The existing fuzzy rough set attribute reduction algorithms measure the classification ability of attribute sets by evaluating the proximity between fuzzy similarity classes and decision classes. However, the granularity of the decision class is too large to reflect the data distribution within the decision class, which may lead to misclassification of samples, thus affecting the effectiveness of attribute reduction. To address this problem, we refine the decision class to propose the concept of decision ball, and study a new extended fuzzy rough set model based on decision ball. In this model, decision balls serve as the evaluation granularity, facilitating the fitting of data distributions and measuring the classification ability of attributes. Expanding on this foundation, we have designed a fuzzy rough set attribute reduction algorithm based on decision ball model (DBFRS). We conducted extensive comparative experiments involving 9 state-of-the-art attribute reduction algorithms on 18 public datasets. Experimental results demonstrate that DBFRS attains high classification accuracy. Moreover, DBFRS exhibits better reduction performance on large and high-dimensional datasets. Compared to current fuzzy rough set methods, DBFRS demonstrates better applicability.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109364"},"PeriodicalIF":3.2,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143093630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Controlling false positives in multiple instance learning: The “c-rule” approach
Pub Date: 2025-01-22 | DOI: 10.1016/j.ijar.2025.109367
Rosario Delgado
This paper introduces a novel strategy for labeling bags in binary Multiple Instance Learning (MIL) under the standard MI assumption. The proposed approach addresses errors in instance labeling by classifying a bag as positive if it contains at least c positively labeled instances. This strategy seeks to balance the trade-off between controlling the false positive rate (mislabeling a negative bag as positive) and the false negative rate (mislabeling a positive bag as negative) while reducing labeling efforts.
The study provides theoretical justifications for this approach and introduces algorithms for its implementation, including determining the minimum value of c required to keep error rates below predefined thresholds. Additionally, it proposes a methodology to estimate the number of genuinely positive and negative instances within bags. Simulations demonstrate the superior performance of the “c-rule” compared to the standard rule (corresponding to ) in scenarios with sparse positive bags and moderately low to high probability of misclassifying a negative instance. This trend is further validated through comparisons using two real-world datasets. Overall, this research advances the understanding of error management in MIL and provides practical tools for real-world applications.
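Under simple independence assumptions (each instance of a negative bag mislabeled positive with probability q, each truly positive instance detected with probability p), the trade-off behind the c-rule can be sketched with binomial tail probabilities; this is only a hedged illustration of the idea, not the paper's exact derivation, and all numbers below are hypothetical.

```python
from scipy.stats import binom

def min_c_for_fpr(n, q, alpha):
    """Smallest c such that a negative bag of n instances, each mislabeled positive with
    probability q, is wrongly called positive (>= c positive labels) with probability <= alpha."""
    for c in range(1, n + 1):
        if binom.sf(c - 1, n, q) <= alpha:   # sf(c - 1) = P(X >= c)
            return c
    return n + 1                             # no c achieves the target rate

def fnr_of_c(n_pos, p, c):
    """Chance a positive bag with n_pos truly positive instances yields fewer than c positive labels.
    Simplification: mislabeled negatives inside the positive bag (which would lower this) are ignored."""
    return binom.cdf(c - 1, n_pos, p)

c = min_c_for_fpr(n=50, q=0.05, alpha=0.05)
print(c, fnr_of_c(n_pos=20, p=0.8, c=c))     # minimum c and the resulting risk of missing a positive bag
```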
{"title":"Controlling false positives in multiple instance learning: The “c-rule” approach","authors":"Rosario Delgado","doi":"10.1016/j.ijar.2025.109367","DOIUrl":"10.1016/j.ijar.2025.109367","url":null,"abstract":"<div><div>This paper introduces a novel strategy for labeling bags in binary Multiple Instance Learning (MIL) under the <em>standard MI assumption</em>. The proposed approach addresses errors in instance labeling by classifying a bag as positive if it contains at least <em>c</em> positively labeled instances. This strategy seeks to balance the trade-off between controlling the <em>false positive rate</em> (mislabeling a negative bag as positive) and the <em>false negative rate</em> (mislabeling a positive bag as negative) while reducing labeling efforts.</div><div>The study provides theoretical justifications for this approach and introduces algorithms for its implementation, including determining the minimum value of <em>c</em> required to keep error rates below predefined thresholds. Additionally, it proposes a methodology to estimate the number of genuinely positive and negative instances within bags. Simulations demonstrate the superior performance of the “<em>c</em>-rule” compared to the <em>standard</em> rule (corresponding to <span><math><mi>c</mi><mo>=</mo><mn>1</mn></math></span>) in scenarios with sparse positive bags and moderately low to high probability of misclassifying a negative instance. This trend is further validated through comparisons using two real-world datasets. Overall, this research advances the understanding of error management in MIL and provides practical tools for real-world applications.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109367"},"PeriodicalIF":3.2,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143093633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NeST: The neuro-symbolic transpiler
Pub Date: 2025-01-22 | DOI: 10.1016/j.ijar.2025.109369
Viktor Pfanschilling, Hikaru Shindo, Devendra Singh Dhami, Kristian Kersting
Tractable Probabilistic Models such as Sum-Product Networks are a powerful category of models that offer a rich choice of fast probabilistic queries. However, they are limited in the distributions they can represent, e.g., they cannot define distributions using loops or recursion. To move towards more complex distributions, we introduce a novel neurosymbolic programming language, Sum Product Loop Language (SPLL), along with the Neuro-Symbolic Transpiler (NeST). SPLL aims to build inference code most closely resembling Tractable Probabilistic Models. NeST is the first neuro-symbolic transpiler—a compiler from one high-level language to another. It generates inference code from SPLL but natively supports other computing platforms, too. This way, SPLL can seamlessly interface with e.g. pretrained (neural) models in PyTorch or Julia. The result is a language that can run probabilistic inference on more generalized distributions, reason on neural network outputs, and provide gradients for training.
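For readers unfamiliar with the target model class, the sketch below evaluates a tiny hand-built sum-product network over two binary variables. It only illustrates the kind of tractable query such models support (joint and marginal probabilities by one bottom-up pass); it is unrelated to SPLL's actual syntax or to the code NeST emits.

```python
from math import prod

def leaf(var, value):
    # Indicator leaf; evidence maps var -> 0/1. A missing variable is marginalized out
    # (all its indicators evaluate to 1).
    return lambda ev: 1.0 if ev.get(var, value) == value else 0.0

def product(*children):
    return lambda ev: prod(c(ev) for c in children)

def weighted_sum(weights, children):
    return lambda ev: sum(w * c(ev) for w, c in zip(weights, children))

# Mixture of two product distributions over (A, B).
spn = weighted_sum(
    [0.3, 0.7],
    [product(leaf("A", 1), leaf("B", 1)),
     product(leaf("A", 0), weighted_sum([0.4, 0.6], [leaf("B", 0), leaf("B", 1)]))],
)

print(spn({"A": 1, "B": 1}))   # joint probability P(A=1, B=1) = 0.3
print(spn({"A": 0}))           # marginal P(A=0) = 0.7, with B summed out at the leaves
```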
{"title":"NeST: The neuro-symbolic transpiler","authors":"Viktor Pfanschilling , Hikaru Shindo , Devendra Singh Dhami , Kristian Kersting","doi":"10.1016/j.ijar.2025.109369","DOIUrl":"10.1016/j.ijar.2025.109369","url":null,"abstract":"<div><div>Tractable Probabilistic Models such as Sum-Product Networks are a powerful category of models that offer a rich choice of fast probabilistic queries. However, they are limited in the distributions they can represent, e.g., they cannot define distributions using loops or recursion. To move towards more complex distributions, we introduce a novel neurosymbolic programming language, Sum Product Loop Language (SPLL), along with the Neuro-Symbolic Transpiler (NeST). SPLL aims to build inference code most closely resembling Tractable Probabilistic Models. NeST is the first neuro-symbolic transpiler—a compiler from one high-level language to another. It generates inference code from SPLL but natively supports other computing platforms, too. This way, SPLL can seamlessly interface with e.g. pretrained (neural) models in PyTorch or Julia. The result is a language that can run probabilistic inference on more generalized distributions, reason on neural network outputs, and provide gradients for training.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109369"},"PeriodicalIF":3.2,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143093628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On bivariate lower semilinear copulas and the star product
Pub Date: 2025-01-21 | DOI: 10.1016/j.ijar.2025.109366
Lea Maislinger, Wolfgang Trutschnig
We revisit the family C^LSL of all bivariate lower semilinear (LSL) copulas first introduced by Durante et al. in 2008 and, using the characterization of LSL copulas in terms of diagonals with specific properties, derive several novel and partially unexpected results. In particular, we prove that the star product (also known as Markov product) S_{δ1} ⁎ S_{δ2} of two LSL copulas S_{δ1}, S_{δ2} is again an LSL copula, i.e., that the family C^LSL is closed with respect to the star product. Moreover, we show that translating the star product to the class of corresponding diagonals D^LSL allows us to determine the limit of the sequence S_δ, S_δ ⁎ S_δ, S_δ ⁎ S_δ ⁎ S_δ, … for every diagonal δ ∈ D^LSL. In fact, for every LSL copula S_δ the sequence (S_δ^{⁎n})_{n∈ℕ} converges to some LSL copula S_δ̄, the limit S_δ̄ is idempotent, and the class of all idempotent LSL copulas allows for a simple characterization.
Complementing these results we then focus on concordance of LSL copulas. After recalling simple formulas for Kendall's τ and Spearman's ρ we study the exact region Ω^LSL […]
{"title":"On bivariate lower semilinear copulas and the star product","authors":"Lea Maislinger, Wolfgang Trutschnig","doi":"10.1016/j.ijar.2025.109366","DOIUrl":"10.1016/j.ijar.2025.109366","url":null,"abstract":"<div><div>We revisit the family <span><math><msup><mrow><mi>C</mi></mrow><mrow><mi>L</mi><mi>S</mi><mi>L</mi></mrow></msup></math></span> of all bivariate lower semilinear (LSL) copulas first introduced by Durante et al. in 2008 and, using the characterization of LSL copulas in terms of diagonals with specific properties, derive several novel and partially unexpected results. In particular we prove that the star product (also known as Markov product) <span><math><msub><mrow><mi>S</mi></mrow><mrow><msub><mrow><mi>δ</mi></mrow><mrow><mn>1</mn></mrow></msub></mrow></msub><mo>⁎</mo><msub><mrow><mi>S</mi></mrow><mrow><msub><mrow><mi>δ</mi></mrow><mrow><mn>2</mn></mrow></msub></mrow></msub></math></span> of two LSL copulas <span><math><msub><mrow><mi>S</mi></mrow><mrow><msub><mrow><mi>δ</mi></mrow><mrow><mn>1</mn></mrow></msub></mrow></msub><mo>,</mo><msub><mrow><mi>S</mi></mrow><mrow><msub><mrow><mi>δ</mi></mrow><mrow><mn>2</mn></mrow></msub></mrow></msub></math></span> is again an LSL copula, i.e., that the family <span><math><msup><mrow><mi>C</mi></mrow><mrow><mi>L</mi><mi>S</mi><mi>L</mi></mrow></msup></math></span> is closed with respect to the star product. Moreover, we show that translating the star product to the class of corresponding diagonals <span><math><msup><mrow><mi>D</mi></mrow><mrow><mi>L</mi><mi>S</mi><mi>L</mi></mrow></msup></math></span> allows to determine the limit of the sequence <span><math><msub><mrow><mi>S</mi></mrow><mrow><mi>δ</mi></mrow></msub><mo>,</mo><msub><mrow><mi>S</mi></mrow><mrow><mi>δ</mi></mrow></msub><mo>⁎</mo><msub><mrow><mi>S</mi></mrow><mrow><mi>δ</mi></mrow></msub><mo>,</mo><msub><mrow><mi>S</mi></mrow><mrow><mi>δ</mi></mrow></msub><mo>⁎</mo><msub><mrow><mi>S</mi></mrow><mrow><mi>δ</mi></mrow></msub><mo>⁎</mo><msub><mrow><mi>S</mi></mrow><mrow><mi>δ</mi></mrow></msub><mo>,</mo><mo>…</mo></math></span> for every diagonal <span><math><mi>δ</mi><mo>∈</mo><msup><mrow><mi>D</mi></mrow><mrow><mi>L</mi><mi>S</mi><mi>L</mi></mrow></msup></math></span>. In fact, for every LSL copula <span><math><msub><mrow><mi>S</mi></mrow><mrow><mi>δ</mi></mrow></msub></math></span> the sequence <span><math><msub><mrow><mo>(</mo><msubsup><mrow><mi>S</mi></mrow><mrow><mi>δ</mi></mrow><mrow><mo>⁎</mo><mi>n</mi></mrow></msubsup><mo>)</mo></mrow><mrow><mi>n</mi><mo>∈</mo><mi>N</mi></mrow></msub></math></span> converges to some LSL copula <span><math><msub><mrow><mi>S</mi></mrow><mrow><mover><mrow><mi>δ</mi></mrow><mo>‾</mo></mover></mrow></msub></math></span>, the limit <span><math><msub><mrow><mi>S</mi></mrow><mrow><mover><mrow><mi>δ</mi></mrow><mo>‾</mo></mover></mrow></msub></math></span> is idempotent, and the class of all idempotent LSL copulas allows for a simple characterization.</div><div>Complementing these results we then focus on concordance of LSL copulas. 
After recalling simple formulas for Kendall's <em>τ</em> and Spearman's <em>ρ</em> we study the exact region <span><math><msup><mrow><mi>Ω</mi></mrow><mrow><mi>L</mi><mi>S<","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109366"},"PeriodicalIF":3.2,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
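The star product can be explored numerically with checkerboard approximations: under the standard identification of an n×n checkerboard copula with its cell-mass matrix M (nonnegative entries, every row and column summing to 1/n), the star product of two such copulas corresponds to n·M_A·M_B. The sketch below is generic (not specific to LSL copulas) and simply shows star powers of one checkerboard matrix approaching an idempotent limit, here the independence copula.

```python
import numpy as np

def star(M_A, M_B):
    """Cell-mass matrix of the star (Markov) product of two n x n checkerboard copulas."""
    n = M_A.shape[0]
    return n * (M_A @ M_B)

M = np.array([[0.4, 0.1],
              [0.1, 0.4]])          # valid mass matrix for n = 2: rows and columns sum to 1/2

P = M.copy()
for k in range(1, 8):
    print(k, P.round(4).tolist())   # k-th star power of M
    P = star(P, M)

# The iterates approach [[0.25, 0.25], [0.25, 0.25]], i.e. the independence copula,
# which is idempotent: star(P, P) equals P.
```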
An exploration of weak Heyting algebras: Characterization and properties
Pub Date: 2025-01-20 | DOI: 10.1016/j.ijar.2025.109365
Francisco Pérez-Gámez, Carlos Bejines
This paper explores weak Heyting algebras, an extension of complete Heyting algebras, focusing on characterizing this concept and identifying essential properties in terms of implication operators. The main emphasis is on unraveling the defining features and significance of the novel weak Heyting algebras. We further classify these structures within the context of a complete lattice and extend our findings to the Cartesian product. We facilitate comprehensive comparisons among these structures, thereby contributing to the broader understanding of weak Heyting algebras in mathematical research.
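As background, the standard (non-weak) Heyting implication on a finite lattice is the residual of the meet; the snippet below brute-force verifies the residuation law on the four-element Boolean lattice. The weakened axioms studied in the paper are its own definitions and are not reproduced here.

```python
from itertools import product

# Four-element Boolean lattice {0, a, b, 1} with a and b incomparable.
elems = ["0", "a", "b", "1"]
leq = {(x, y) for x, y in product(elems, repeat=2) if x == "0" or y == "1" or x == y}

def biggest(candidates):
    """Largest element of a set that is guaranteed to contain its own maximum."""
    return max(candidates, key=lambda z: sum((w, z) in leq for w in elems))

def meet(x, y):
    return biggest([z for z in elems if (z, x) in leq and (z, y) in leq])

def implies(x, y):
    """Heyting implication: the largest z with meet(x, z) <= y."""
    return biggest([z for z in elems if (meet(x, z), y) in leq])

# Residuation law: meet(x, z) <= y  iff  z <= implies(x, y), for all x, y, z.
assert all(((meet(x, z), y) in leq) == ((z, implies(x, y)) in leq)
           for x, y, z in product(elems, repeat=3))
print(implies("a", "b"))   # in this lattice, a -> b = b
```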
{"title":"An exploration of weak Heyting algebras: Characterization and properties","authors":"Francisco Pérez-Gámez, Carlos Bejines","doi":"10.1016/j.ijar.2025.109365","DOIUrl":"10.1016/j.ijar.2025.109365","url":null,"abstract":"<div><div>This paper explores weak Heyting algebras, an extension of complete Heyting algebras, focusing on characterizing this concept and identifying essential properties in terms of implication operators. The main emphasis is on unraveling the defining features and significance of the novel weak Heyting algebras. We further classify these structures within the context of a complete lattice and extend our findings to the Cartesian product. We facilitate comprehensive comparisons among these structures, by contributing to the broader understanding of weak Heyting algebras in mathematical research.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109365"},"PeriodicalIF":3.2,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143093634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constructing polytomous knowledge structures from L-fuzzy S-approximation operators
Pub Date: 2025-01-17 | DOI: 10.1016/j.ijar.2025.109363
Gongxun Wang, Jinjin Li, Bochi Xu
Rough set theory primarily focuses on the characteristics of upper and lower approximations of specific sets, rather than their overall structure. Knowledge space theory can provide a new perspective on rough sets. In recent years, this theory has introduced polytomous knowledge structures, which have emerged as a significant and innovative concept in the field. This paper embeds L-fuzzy sets in S-approximation spaces and establishes a connection between polytomous knowledge structures and L-fuzzy S-approximation operators. We generate polytomous knowledge structures using these operators, present their corresponding properties, and show that a polytomous knowledge space and a polytomous closure space can be fully characterized by an upper and lower L-fuzzy S-approximation, respectively. In particular, we discuss four special L-fuzzy S-approximation operators and relate them to existing fuzzy skill maps. Subsequently, we further investigate the construction of two specific dichotomous knowledge structures, called backward-graded and forward-graded, using one of these four L-fuzzy S-approximation operators. We want to offer a new viewpoint for analyzing the structures of L-fuzzy S-approximation spaces through the lens of knowledge space theory.
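On the knowledge-space side, the dichotomous, crisp special case is easy to illustrate: a skill map delineates a knowledge structure under the disjunctive model (an item is mastered as soon as one of its assigned skills is available). The sketch below uses a hypothetical skill map; the paper's L-fuzzy S-approximation operators and polytomous structures generalize this construction.

```python
from itertools import chain, combinations

skills = {"s1", "s2", "s3"}
skill_map = {            # hypothetical skill map: item -> skills relevant to solving it
    "q1": {"s1"},
    "q2": {"s1", "s2"},
    "q3": {"s3"},
    "q4": {"s2", "s3"},
}

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def delineated_structure(skill_map, skills):
    """All knowledge states delineated by the skill map under the disjunctive model."""
    states = set()
    for T in powerset(skills):
        T = set(T)
        states.add(frozenset(q for q, sk in skill_map.items() if sk & T))
    return states

for state in sorted(delineated_structure(skill_map, skills), key=len):
    print(sorted(state))
```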
{"title":"Constructing polytomous knowledge structures from L-fuzzy S-approximation operators","authors":"Gongxun Wang , Jinjin Li , Bochi Xu","doi":"10.1016/j.ijar.2025.109363","DOIUrl":"10.1016/j.ijar.2025.109363","url":null,"abstract":"<div><div>Rough set theory primarily focuses on the characteristics of upper and lower approximations of specific sets, rather than their overall structure. Knowledge space theory can provide a new perspective on rough sets. In recent years, this theory has introduced polytomous knowledge structures, which have emerged as a significant and innovative concept in the field. This paper embeds <em>L</em>-fuzzy sets in <em>S</em>-approximation spaces and establishes a connection between polytomous knowledge structures and <em>L</em>-fuzzy <em>S</em>-approximation operators. We generate polytomous knowledge structures using these operators, present their corresponding properties, and show that a polytomous knowledge space and a polytomous closure space can be fully characterized by an upper and lower <em>L</em>-fuzzy <em>S</em>-approximation, respectively. In particular, we discuss four special <em>L</em>-fuzzy <em>S</em>-approximation operators and relate them to existing fuzzy skill maps. Subsequently, we further investigate the construction of two specific dichotomous knowledge structures, called backward-graded and forward-graded, using one of these four <em>L</em>-fuzzy <em>S</em>-approximation operators. We want to offer a new viewpoint for analyzing the structures of <em>L</em>-fuzzy <em>S</em>-approximation spaces through the lens of knowledge space theory.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109363"},"PeriodicalIF":3.2,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143093623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximate Hennessy-Milner type theorems for fuzzy multimodal logics over Heyting algebras
Pub Date: 2025-01-15 | DOI: 10.1016/j.ijar.2025.109362
Marko Stanković, Stefan Stanimirović, Miroslav Ćirić
In the present paper, we introduce λ-approximate weak simulations and bisimulations on a given set of modal formulae between two fuzzy Kripke models of fuzzy multimodal logics. The parameter λ, which is an element from the linearly ordered Heyting algebra, is used to quantify the approximation degree of modal equivalence between the two worlds from the different models, with respect to the given set of formulae, within the framework of linearly ordered Heyting algebras. In a recent paper, we introduced λ-approximate simulations and bisimulations between fuzzy Kripke models. This paper investigates the relationships between λ-approximate bisimulations and λ-approximate weak bisimulations, yielding three Approximate Hennessy-Milner Type Theorems. We also provide an algorithm that divides the real unit interval into subintervals with the same degree of modal equivalence for two given fuzzy Kripke models. Moreover, we extend the Approximate Hennessy-Milner Type Theorems to the class of witnessed and modally saturated fuzzy Kripke models.
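For orientation, the snippet below computes truth degrees of box and diamond formulas in a tiny fuzzy Kripke model over the Gödel structure on [0, 1], a linearly ordered Heyting algebra; the accessibility and valuation degrees are made up, and the λ-approximate (weak) bisimulations themselves are the paper's contribution and are not implemented here.

```python
# Fuzzy accessibility degrees R(w, v) and fuzzy valuation of proposition p (illustrative values).
R = {("w1", "w1"): 0.2, ("w1", "w2"): 0.9,
     ("w2", "w1"): 0.5, ("w2", "w2"): 1.0}
V = {"p": {"w1": 0.3, "w2": 0.8}}
worlds = ["w1", "w2"]

def godel_implies(x, y):
    return 1.0 if x <= y else y

def box(prop, w):
    """[[box p]](w) = inf over v of R(w, v) -> [[p]](v), with the Goedel implication."""
    return min(godel_implies(R.get((w, v), 0.0), V[prop][v]) for v in worlds)

def diamond(prop, w):
    """[[diamond p]](w) = sup over v of min(R(w, v), [[p]](v))."""
    return max(min(R.get((w, v), 0.0), V[prop][v]) for v in worlds)

print(box("p", "w1"), diamond("p", "w1"))   # 0.8 and 0.8 for these values
```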
{"title":"Approximate Hennessy-Milner type theorems for fuzzy multimodal logics over Heyting algebras","authors":"Marko Stanković , Stefan Stanimirović , Miroslav Ćirić","doi":"10.1016/j.ijar.2025.109362","DOIUrl":"10.1016/j.ijar.2025.109362","url":null,"abstract":"<div><div>In the present paper, we introduce <em>λ</em>-approximate weak simulations and bisimulations on a given set of modal formulae between two fuzzy Kripke models of fuzzy multimodal logics. The parameter <em>λ</em>, which is an element from the linearly ordered Heyting algebra, is used to quantify the approximation degree of modal equivalence between the two worlds from the different models, with respect to the given set of formulae, within the framework of linearly ordered Heyting algebras. In a recent paper, we introduced <em>λ</em>-approximate simulations and bisimulations between fuzzy Kripke models. This paper investigates the relationships between <em>λ</em>-approximate bisimulations and <em>λ</em>-approximate weak bisimulations, yielding three Approximate Hennessy-Milner Type Theorems. We also provide an algorithm that divides the real unit interval into subintervals with the same degree of modal equivalence for two given fuzzy Kripke models. Moreover, we extend the Approximate Hennessy-Milner Type Theorems to the class of witnessed and modally saturated fuzzy Kripke models.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109362"},"PeriodicalIF":3.2,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143093620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Statistical/soft approaches and computing for data analysis and classification
Pub Date: 2025-01-09 | DOI: 10.1016/j.ijar.2025.109361
Alfonso Gordaliza, Agustín Mayo-Íscar, María Asunción Lubiano, Beatriz Sinova
{"title":"Editorial: Statistical/soft approaches and computing for data analysis and classification","authors":"Alfonso Gordaliza, Agustín Mayo-Íscar, María Asunción Lubiano, Beatriz Sinova","doi":"10.1016/j.ijar.2025.109361","DOIUrl":"10.1016/j.ijar.2025.109361","url":null,"abstract":"","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109361"},"PeriodicalIF":3.2,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143104836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incremental cognitive learning approach based on concept reduction
Pub Date: 2025-01-07 | DOI: 10.1016/j.ijar.2024.109359
Taoju Liang, Yidong Lin, Jinjin Li, Guoping Lin, Qijun Wang
Concept-cognitive learning (CCL) offers an innovative approach to classification, and concept reduction serves as a powerful method for compressing data. Nonetheless, most existing CCLs encounter a significant issue when attempting to downscale the concept space: information loss. This loss leads to cognitive incompleteness and increased complexity. Meanwhile, preserving the native characterization of formal concepts ensures both validity and interpretability for CCL. On the other hand, current incremental CCLs have limited capacity to effectively utilize newly acquired knowledge. In view of these observations, in this article, we propose a novel incremental CCL method based on concept reduction for dynamic classification. To enhance the efficiency of knowledge acquisition, a recovery degree is developed to obtain a concept reduction from the granular concept space. Subsequently, the updating mechanism for concept reduction is explored in dynamic environments. For label recognition, a learning method based on concept reduction is discussed, and an incremental learning mechanism for dynamically increasing data is further constructed. Empirical studies on fifteen datasets reveal the feasibility and effectiveness of the proposed model.
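As classical background for the concept spaces involved, the sketch below enumerates the formal concepts of a small, hypothetical binary context via the two derivation operators; the paper's recovery degree, concept reduction and incremental update mechanism are its own constructions on top of such concept lattices.

```python
from itertools import combinations

# Hypothetical formal context: object -> set of attributes it possesses.
objects = {"o1": {"a", "b"}, "o2": {"b", "c"}, "o3": {"a", "b", "c"}}
attributes = {"a", "b", "c"}

def common_attrs(objs):                      # derivation: extent -> intent
    return set.intersection(*(objects[o] for o in objs)) if objs else set(attributes)

def common_objs(attrs):                      # derivation: intent -> extent
    return {o for o, own in objects.items() if attrs <= own}

def concepts():
    """All formal concepts (extent, intent), obtained by closing every object subset."""
    found = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            intent = common_attrs(set(objs))
            extent = common_objs(intent)
            found.add((frozenset(extent), frozenset(intent)))
    return found

for extent, intent in sorted(concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```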
{"title":"Incremental cognitive learning approach based on concept reduction","authors":"Taoju Liang , Yidong Lin , Jinjin Li , Guoping Lin , Qijun Wang","doi":"10.1016/j.ijar.2024.109359","DOIUrl":"10.1016/j.ijar.2024.109359","url":null,"abstract":"<div><div>Concept-cognitive learning (CCL) offers an innovative approach to classification, and concept reduction serves as a powerful method for compressing data. Nonetheless, most existing CCLs encounter a significant issue when attempting to downscale the concept space: information loss. This loss leads to cognitive incompleteness and increased complexity. Meanwhile, preserving the native characterization of formal concepts ensures both validity and interpretability for CCL. On the other hand, current incremental CCLs have limited capacity to effectively utilize newly acquired knowledge. In view of these observations, in this article, we propose a novel incremental CCL method based on concept reduction for dynamic classification. To enhance the efficiency of knowledge acquisition, recovery degree is developed to obtain concept reduction from granular concept space. Subsequently, the updating mechanism for concept reduction is explored in dynamic environments. For label recognition, a learning method based on concept reduction is discussed and an incremental learning mechanism for dynamic increased data is further constructed. Empirical studies on fifteen datasets reveal the feasibility and effectiveness of proposed model.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109359"},"PeriodicalIF":3.2,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143093626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}