
Latest publications in the International Journal of Approximate Reasoning

An attribute ranking method based on rough sets and interval-valued fuzzy sets
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-05-21 · DOI: 10.1016/j.ijar.2024.109215
Bich Khue Vo, Hung Son Nguyen

Feature importance is a complex issue in machine learning, as determining a superior attribute is vague, uncertain, and dependent on the model. This study introduces a rough-fuzzy hybrid (RAFAR) method that merges various techniques from rough set theory and fuzzy set theory to tackle uncertainty in attribute importance and ranking. RAFAR utilizes an interval-valued fuzzy matrix to depict preference between attribute pairs. This research focuses on constructing these matrices from datasets and identifying suitable rankings based on these matrices. The concept of interval-valued weight vectors is introduced to represent attribute importance, and their additive and multiplicative compatibility is examined. The properties of these consistency types and the efficient algorithms for solving related problems are discussed. These new theoretical findings are valuable for creating effective optimization models and algorithms within the RAFAR framework. Additionally, novel approaches for constructing pairwise comparison matrices and enhancing the scalability of RAFAR are suggested. The study also includes experimental results on benchmark datasets to demonstrate the accuracy of the proposed solutions.
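The abstract does not spell out the RAFAR construction in detail; purely as a hypothetical Python sketch of the kind of objects it mentions — an interval-valued pairwise comparison matrix and an interval-valued weight vector, with made-up numbers and a naive row-averaging ranking rather than the paper's optimization models:

```python
import numpy as np

# Hypothetical interval-valued preference matrix over 3 attributes.
# Entry (i, j) = [l, u] is the interval-valued degree to which attribute i
# is preferred to attribute j; the diagonal holds the neutral interval [0.5, 0.5].
P = np.array([
    [[0.5, 0.5], [0.6, 0.8], [0.7, 0.9]],
    [[0.2, 0.4], [0.5, 0.5], [0.5, 0.7]],
    [[0.1, 0.3], [0.3, 0.5], [0.5, 0.5]],
])

# Crude surrogate for an interval-valued weight vector: row averages,
# keeping the lower and upper bounds separate.
lower = P[:, :, 0].mean(axis=1)
upper = P[:, :, 1].mean(axis=1)

# Rank attributes by the midpoints of their interval weights.
midpoints = (lower + upper) / 2
print("interval weights:", list(zip(lower.round(2), upper.round(2))))
print("ranking (most important first):", np.argsort(-midpoints))
```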

Citations: 0
A possible worlds semantics for trustworthy non-deterministic computations
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-05-18 · DOI: 10.1016/j.ijar.2024.109212
Ekaterina Kubyshkina, Giuseppe Primiero

The notion of trustworthiness, central to many fields of human inquiry, has recently attracted the attention of various researchers in logic, computer science, and artificial intelligence (AI). Both conceptual and formal approaches for modeling trustworthiness as a (desirable) property of AI systems are emerging in the literature. To develop logics fit for this aim means both to analyze the non-deterministic aspect of AI systems and to offer a formalization of the intended meaning of their trustworthiness. In this work we take a semantic perspective on representing such processes, and provide a measure on possible worlds for evaluating them as trustworthy. In particular, we understand trustworthiness as the correspondence, within acceptable limits, between a model that expresses the theoretical probability of a process producing a given output and a model that measures the frequency with which that output is shown over a relevant number of tests. From a technical perspective, we show that our semantics characterizes the probabilistic typed natural deduction calculus introduced in D'Asaro and Primiero (2021) [12] and further extended in D'Asaro et al. (2023) [13]. This contribution connects those results on trustworthy probabilistic processes with the mainstream method in modal logic, thereby facilitating the understanding of this field of research for a larger audience of logicians, as well as setting the stage for an epistemic logic appropriate to the task.
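The formal possible-worlds semantics is developed in the paper itself; the snippet below is only a back-of-the-envelope illustration of the informal idea quoted above — a process counts as trustworthy when its observed output frequency over a number of tests stays within an acceptable distance of the theoretical probability (the tolerance value here is arbitrary):

```python
import random

def trustworthy(theoretical_p, outcomes, tolerance=0.05):
    """Toy check: does the empirical frequency of the output of interest
    stay within an acceptable distance of its theoretical probability?"""
    frequency = sum(outcomes) / len(outcomes)
    return abs(frequency - theoretical_p) <= tolerance

# A simulated non-deterministic process that should return 1 with probability 0.7.
random.seed(0)
runs = [1 if random.random() < 0.7 else 0 for _ in range(1000)]
print(trustworthy(0.7, runs))  # True when the test frequency is close enough to 0.7
```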

Citations: 0
Imprecision in martingale- and test-theoretic prequential randomness
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-05-16 · DOI: 10.1016/j.ijar.2024.109213
Floris Persiau, Gert de Cooman

In a prequential approach to algorithmic randomness, probabilities for the next outcome can be forecast ‘on the fly’ without the need for fully specifying a probability measure on all possible sequences of outcomes, as is the case in the more standard approach. We take the first steps in allowing for probability intervals instead of precise probabilities on this prequential approach, based on ideas borrowed from our earlier imprecise-probabilistic, standard account of algorithmic randomness. We define what it means for an infinite sequence (I_1, x_1, I_2, x_2, …) of successive interval forecasts I_k and subsequent binary outcomes x_k to be random, both in a martingale-theoretic and a test-theoretic sense. We prove that these two versions of prequential randomness coincide, we compare the resulting prequential randomness notions with the more standard ones, and we investigate where the prequential and standard randomness notions coincide.
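As a small, self-contained illustration of the betting picture behind martingale-theoretic randomness with interval forecasts (not the authors' formal definitions), the sketch below simulates a skeptic who stakes a fraction of current capital on the next outcome, priced at the upper bound of the forecast interval; this is an allowable gamble, since its expected gain is non-positive for every probability in the interval, and for outcomes actually generated by a probability inside the forecasts the capital process should remain bounded:

```python
import random

def capital_process(forecasts, outcomes, fraction=0.1):
    """Skeptic's capital when repeatedly staking a fraction of current capital
    on the event 'next outcome is 1', bought at the upper forecast up."""
    capital = 1.0
    for (low, up), x in zip(forecasts, outcomes):
        # (a symmetric gamble would sell at the lower bound low; not used here)
        stake = fraction * capital
        capital += stake * (x - up)  # gains stake*(1 - up) if x = 1, loses stake*up if x = 0
    return capital

# Interval forecasts I_k = [0.4, 0.6]; outcomes drawn with a probability inside the interval.
random.seed(1)
forecasts = [(0.4, 0.6)] * 5000
outcomes = [1 if random.random() < 0.5 else 0 for _ in forecasts]
print(capital_process(forecasts, outcomes))  # stays small rather than growing without bound
```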

Citations: 0
Distribution-free Inferential Models: Achieving finite-sample valid probabilistic inference, with emphasis on quantile regression
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-05-10 · DOI: 10.1016/j.ijar.2024.109211
Leonardo Cella

This paper presents a novel distribution-free Inferential Model (IM) construction that provides valid probabilistic inference across a broad spectrum of distribution-free problems, even in finite sample settings. More specifically, the proposed IM has the capability to assign (imprecise) probabilities to assertions of interest about any feature of the unknown quantities under examination, and these probabilities are well-calibrated in a frequentist sense. It is also shown that finite-sample confidence regions can be derived from the IM for any such features. Particular emphasis is placed on quantile regression, a domain where uncertainty quantification often takes the form of set estimates for the regression coefficients in applications. Within this context, the IM facilitates the acquisition of these set estimates, ensuring they are finite-sample confidence regions. It also enables the provision of finite-sample valid probabilistic assignments for any assertions of interest about the regression coefficients. As a result, regardless of the type of uncertainty quantification desired, the proposed framework offers an appealing solution to quantile regression.
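The IM construction itself is not reproduced here; as background on what finite-sample valid, distribution-free inference can look like for a single quantile, the sketch below uses the classical order-statistic argument — the count of observations below the τ-quantile is Binomial(n, τ), so an interval between two order statistics has guaranteed coverage with no assumptions on the sampling distribution. This is a standard textbook device, not the paper's quantile-regression method:

```python
from math import comb
import random

def quantile_interval(data, tau=0.5, alpha=0.05):
    """Distribution-free confidence interval for the tau-quantile: widen a
    symmetric range of order statistics around n*tau until the binomial
    probability mass between the chosen indices reaches 1 - alpha."""
    n = len(data)
    xs = sorted(data)
    pmf = [comb(n, k) * tau**k * (1 - tau) ** (n - k) for k in range(n + 1)]
    i = j = int(n * tau)
    coverage = pmf[i]
    while coverage < 1 - alpha and (i > 1 or j < n - 1):
        if i > 1:
            i -= 1
            coverage += pmf[i]
        if j < n - 1:
            j += 1
            coverage += pmf[j]
    return xs[i - 1], xs[j]  # order statistics X_(i) and X_(j+1)

random.seed(2)
sample = [random.expovariate(1.0) for _ in range(200)]
print(quantile_interval(sample, tau=0.5))  # covers the true median with prob. >= 0.95
```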

Citations: 0
Attribute reduction for heterogeneous data based on monotonic relative neighborhood granularity
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-05-08 · DOI: 10.1016/j.ijar.2024.109210
Jianhua Dai, Zhilin Zhu, Min Li, Xiongtao Zou, Chucai Zhang

The neighborhood rough set model serves as an important tool for handling attribute reduction tasks involving heterogeneous attributes. However, measuring the relationship between conditional attributes and decision in the neighborhood rough set model is a crucial issue. Most studies have utilized neighborhood information entropy to measure the relationship between attributes. When using neighborhood conditional information entropy to measure the relationships between the decision and conditional attributes, it lacks monotonicity, consequently affecting the rationality of the final attribute reduction subset. In this paper, we introduce the concept of neighborhood granularity and propose a new form of relative neighborhood granularity to measure the relationship between the decision and conditional attributes, which exhibits monotonicity. Moreover, our approach for measuring neighborhood granularity avoids the logarithmic function computation involved in neighborhood information entropy. Finally, we conduct comparative experiments on 12 datasets using two classifiers to compare the results of attribute reduction with six other attribute reduction algorithms. The comparison demonstrates the advantages of our measurement approach.
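The paper's relative neighborhood granularity is defined with respect to the decision attribute and is not reproduced here; the sketch below only illustrates the basic ingredient — δ-neighborhoods and a granularity score computed from neighborhood sizes, with no logarithms involved — using plain Euclidean distance in place of a proper heterogeneous distance:

```python
import numpy as np

def neighborhoods(X, delta):
    """Boolean matrix whose row i marks the delta-neighborhood of sample i
    (samples within distance delta; plain Euclidean distance here)."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return dist <= delta

def neighborhood_granularity(X, delta):
    """Average relative neighborhood size: larger values mean coarser granules.
    Only counting and division are needed, no logarithmic function."""
    nbh = neighborhoods(X, delta)
    n = X.shape[0]
    return nbh.sum() / (n * n)

rng = np.random.default_rng(0)
X = rng.random((100, 4))  # 100 samples, 4 numerical attributes
print(neighborhood_granularity(X, delta=0.3))
```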

Citations: 0
Tuning fuzzy SPARQL queries
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-04-30 · DOI: 10.1016/j.ijar.2024.109209
Jesús M. Almendros-Jiménez, Antonio Becerra-Terón, Ginés Moreno, José A. Riaza

In recent years, the study of fuzzy database query languages has attracted the attention of many researchers. In this line of research, our group has proposed and developed FSA-SPARQL (Fuzzy Sets and Aggregators based SPARQL), which is a fuzzy extension of the Semantic Web query language SPARQL. FSA-SPARQL works with fuzzy RDF datasets and allows the definition of fuzzy queries involving fuzzy conditions through fuzzy connectives and aggregators. However, there are two main challenges to be solved for the practical applicability of FSA-SPARQL. The first is the lack of fuzzy RDF data sources. The second is how to customize fuzzy queries on fuzzy RDF data sources. Our research group has also recently proposed a fuzzy logic programming language called FASILL that offers powerful tuning capabilities with applications in many fields. The purpose of this paper is to show how the FASILL tuning capabilities serve to address both challenges of FSA-SPARQL in a unified framework: data fuzzification and query customization. More concretely, through a transformation from FSA-SPARQL to FASILL, data fuzzification and query customization in FSA-SPARQL become FASILL tuning problems. We have validated the approach with queries against datasets from online communities.
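FSA-SPARQL's concrete syntax and the FASILL transformation are not reproduced here; the snippet below only illustrates, in generic Python, what data fuzzification amounts to — mapping a crisp value from an RDF source to a membership degree in [0, 1], where the breakpoints of the membership function are exactly the kind of parameter that tuning would adjust (the numbers and the "cheap" example are made up):

```python
def cheap(price, full=50.0, zero=120.0):
    """Membership degree of a price in the fuzzy set 'cheap': 1 up to `full`,
    decreasing linearly to 0 at `zero`; `full` and `zero` are tunable parameters."""
    if price <= full:
        return 1.0
    if price >= zero:
        return 0.0
    return (zero - price) / (zero - full)

# Fuzzifying crisp values: each price gets a truth degree that a fuzzy query
# condition such as "the hotel is cheap" could then combine with other degrees.
for price in (30, 80, 150):
    print(price, round(cheap(price), 2))
```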

Citations: 0
Conditional independence collapsibility for acyclic directed mixed graph models
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-04-29 · DOI: 10.1016/j.ijar.2024.109208
Weihua Li, Yi Sun, Pei Heng

Collapsibility refers to the property that, when marginalizing over some variables that are not of interest from the full model, the resulting marginal model of the remaining variables is equivalent to the local model induced by the subgraph on these variables. This means that when the marginal model satisfies collapsibility, statistical inference results based on the marginal model and the local model are consistent. This has significant implications for small-sample data, modeling latent variable data, and reducing the computational complexity of statistical inference. This paper focuses on studying the conditional independence collapsibility of acyclic directed mixed graph (ADMG) models. By introducing the concept of inducing paths in ADMGs and exploring its properties, the conditional independence collapsibility of ADMGs is characterized equivalently from both graph theory and statistical perspectives.

Citations: 0
Being Bayesian about learning Bayesian networks from ordinal data
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-04-29 · DOI: 10.1016/j.ijar.2024.109205
Marco Grzegorczyk

In this paper we propose a Bayesian approach for inferring Bayesian network (BN) structures from ordinal data. Our approach can be seen as the Bayesian counterpart of a recently proposed frequentist approach, referred to as the ‘ordinal structure expectation maximization’ (OSEM) method. As with the OSEM method, the key idea is to assume that each ordinal variable originates from a Gaussian variable that can only be observed in discretized form, and that the dependencies in the latent Gaussian space can be modeled by BNs; i.e. by directed acyclic graphs (DAGs). Our Bayesian method combines the ‘structure MCMC sampler’ for DAG posterior sampling, a slightly modified version of the ‘Bayesian metric for Gaussian networks having score equivalence’ (BGe score), the concept of the ‘extended rank likelihood’, and a recently proposed algorithm for posterior sampling the parameters of Gaussian BNs. In simulation studies we compare the new Bayesian approach and the OSEM method in terms of the network reconstruction accuracy. The empirical results show that the new Bayesian approach leads to significantly improved network reconstruction accuracies.
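As a minimal sketch of the generative assumption behind this line of work — each ordinal variable is a thresholded, i.e. discretized, latent Gaussian whose dependencies follow a DAG — with made-up coefficients and thresholds; the paper's actual contribution, sampling DAG structures and parameters from the posterior, is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Latent Gaussian data following a tiny DAG: X1 -> X2 -> X3 (coefficients are made up).
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)
x3 = -0.7 * x2 + rng.normal(scale=0.6, size=n)
latent = np.column_stack([x1, x2, x3])

# Only a discretized, ordinal version is observed: each latent value is cut
# at fixed thresholds into the ordered categories 0, 1, 2.
thresholds = [-0.5, 0.5]
ordinal = np.digitize(latent, thresholds)
print(ordinal[:5])
```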

Citations: 0
Synergies between machine learning and reasoning - An introduction by the Kay R. Amel group
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-04-25 · DOI: 10.1016/j.ijar.2024.109206
Ismaïl Baaj, Zied Bouraoui, Antoine Cornuéjols, Thierry Denœux, Sébastien Destercke, Didier Dubois, Marie-Jeanne Lesot, João Marques-Silva, Jérôme Mengin, Henri Prade, Steven Schockaert, Mathieu Serrurier, Olivier Strauss, Christel Vrain

This paper proposes a tentative and original survey of meeting points between Knowledge Representation and Reasoning (KRR) and Machine Learning (ML), two areas which have been developed quite separately in the last four decades. First, some common concerns are identified and discussed such as the types of representation used, the roles of knowledge and data, the lack or the excess of information, or the need for explanations and causal understanding. Then, the survey is organised in seven sections covering most of the territory where KRR and ML meet. We start with a section dealing with prototypical approaches from the literature on learning and reasoning: Inductive Logic Programming, Statistical Relational Learning, and Neurosymbolic AI, where ideas from rule-based reasoning are combined with ML. Then we focus on the use of various forms of background knowledge in learning, ranging from additional regularisation terms in loss functions, to the problem of aligning symbolic and vector space representations, or the use of knowledge graphs for learning. Then, the next section describes how KRR notions may benefit learning tasks. For instance, constraints can be used as in declarative data mining for influencing the learned patterns; or semantic features are exploited in low-shot learning to compensate for the lack of data; or yet we can take advantage of analogies for learning purposes. Conversely, another section investigates how ML methods may serve KRR goals. For instance, one may learn special kinds of rules such as default rules, fuzzy rules or threshold rules, or special types of information such as constraints or preferences. The section also covers formal concept analysis and rough sets-based methods. Yet another section reviews various interactions between Automated Reasoning and ML, such as the use of ML methods in SAT solving to make reasoning faster. Then a section deals with works related to model accountability, including explainability and interpretability, fairness and robustness. Finally, a section covers works on handling imperfect or incomplete data, including the problem of learning from uncertain or coarse data, the use of belief functions for regression, a revision-based view of the EM algorithm, the use of possibility theory in statistics, or the learning of imprecise models. This paper thus aims at a better mutual understanding of research in KRR and ML, and how they can cooperate. The paper is completed by an abundant bibliography.

Citations: 0
Editorial of the special issue “Synergies Between Machine Learning and Reasoning”
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-04-24 · DOI: 10.1016/j.ijar.2024.109207
Sébastien Destercke, Jérôme Mengin, Henri Prade
{"title":"Editorial of the special issue “Synergies Between Machine Learning and Reasoning”","authors":"Sébastien Destercke ,&nbsp;Jérôme Mengin ,&nbsp;Henri Prade","doi":"10.1016/j.ijar.2024.109207","DOIUrl":"10.1016/j.ijar.2024.109207","url":null,"abstract":"","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"171 ","pages":"Article 109207"},"PeriodicalIF":3.9,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140755910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0