
Semantic Web: Latest Publications

Using Wikidata lexemes and items to generate text from abstract representations
IF 3.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-06-13 | DOI: 10.3233/sw-243564
Mahir Morshed
Ninai/Udiron, a living function-based natural language generation system, uses knowledge in Wikidata lexemes and items to transform abstract representations of factual statements into human-readable text. The combined system first produces syntax trees based on those abstract representations (Ninai) and then yields sentences from those syntax trees (Udiron). The system relies on information about individual lexical units and links to the concepts those units represent, as well as rules encoded in various types of functions to which users may contribute, to make decisions about words, phrases, and other morphemes to use and how to arrange them. Various system design choices work toward using the information in Wikidata lexemes and items efficiently and effectively, making different components individually contributable and extensible, and making the overall resultant outputs from the system expectable and analyzable. These targets accompany the intentions for Ninai/Udiron to ultimately power the Abstract Wikipedia project as well as be hosted on the Wikifunctions project.
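The abstract includes no code; as a rough illustration of the kind of lexeme data such a generator consults when choosing word forms, the sketch below queries the public Wikidata SPARQL endpoint for the forms of one lexeme. The endpoint URL is real, but the query shape follows the general Wikidata lexeme RDF model and the placeholder lexeme ID and selection logic are assumptions for illustration, not part of Ninai/Udiron.

```python
# Illustrative only: fetch the forms of a Wikidata lexeme, the kind of
# lexical information a generator can draw on when realizing a syntax tree.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://query.wikidata.org/sparql"

def lexeme_forms(lexeme_id: str) -> list[dict]:
    """Return (form IRI, surface representation, grammatical feature) rows for one lexeme."""
    sparql = SPARQLWrapper(ENDPOINT, agent="lexeme-forms-demo/0.1")
    sparql.setQuery(f"""
        SELECT ?form ?rep ?feature WHERE {{
          wd:{lexeme_id} ontolex:lexicalForm ?form .
          ?form ontolex:representation ?rep .
          OPTIONAL {{ ?form wikibase:grammaticalFeature ?feature . }}
        }}
    """)
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return [
        {
            "form": r["form"]["value"],
            "representation": r["rep"]["value"],
            "feature": r.get("feature", {}).get("value"),
        }
        for r in rows
    ]

if __name__ == "__main__":
    # "L99" is a placeholder lexeme ID; substitute any lexeme of interest.
    # A generator would pick, e.g., the form whose grammatical features
    # match what the syntax tree requires at that position.
    for form in lexeme_forms("L99"):
        print(form)
```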
Citations: 3
Editorial: Special issue on Interactive Semantic Web
IF 3.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-05-23 | DOI: 10.3233/sw-243672
Bo Fu, Patrick Lambrix, Catia Pesquita
This special issue on the Interactive Semantic Web presents selected papers that contribute to the design, development, and refinement of all aspects of human interaction with the Semantic Web. The scope of this special issue is twofold: i) interactive techniques and visualizations that assist the human in tasks (e.g., browsing, inspecting, inferring) involving semantic data such as ontologies, linked data, knowledge graphs, etc.; and ii) intelligent interfaces such as those that are driven by semantic technologies and other forms of machine intelligence, as well as those that empower users with personalizable and adaptive features.
Citations: 0
Empowering the SDM-RDFizer tool for scaling up to complex knowledge graph creation pipelines1
IF 3.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-28 | DOI: 10.3233/sw-243580
Enrique Iglesias, Maria-Esther Vidal, D. Collarana, David Chaves-Fraga
The significant increase in data volume in recent years has prompted the adoption of knowledge graphs as valuable data structures for integrating diverse data and metadata. However, this surge in data availability has brought to light challenges related to standardization, interoperability, and data quality. Knowledge graph creation faces complexities from large data volumes, data heterogeneity, and high duplicate rates. This work addresses these challenges and proposes data management techniques to scale up the creation of knowledge graphs specified using the RDF Mapping Language (RML). These techniques are integrated into SDM-RDFizer, transforming it into a two-fold solution designed to address the complexities of generating knowledge graphs. Firstly, we introduce a reordering approach for RML triples maps, prioritizing the evaluation of the most selective maps first to reduce memory usage. Secondly, we employ an RDF compression strategy, along with optimized data structures and novel operators, to prevent the generation of duplicate RDF triples and optimize the execution of RML operators. We assess the performance of SDM-RDFizer through established benchmarks. The evaluation showcases the effectiveness of SDM-RDFizer compared to state-of-the-art RML engines, emphasizing the benefits of our techniques. Furthermore, the paper presents real-world projects where SDM-RDFizer has been utilized, providing insights into the advantages of declaratively defining knowledge graphs and efficiently executing these specifications using this engine.
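As a back-of-the-envelope illustration of the two techniques named in the abstract (evaluating the most selective triples maps first and suppressing duplicate triples before they are written), the sketch below uses toy data structures. It is not the SDM-RDFizer code base, and the selectivity estimate is a deliberately crude assumption.

```python
# Toy illustration: order triples maps by an estimated selectivity so the
# most restrictive ones run first, and keep a set of already-emitted triples
# so duplicates are never written twice.
from dataclasses import dataclass
from typing import Callable, Iterable, Tuple

Triple = Tuple[str, str, str]

@dataclass
class TriplesMap:
    name: str
    generate: Callable[[], Iterable[Triple]]   # stand-in for evaluating an RML triples map
    estimated_matches: int                     # crude selectivity proxy (assumed available)

def materialize(maps: list[TriplesMap]) -> list[Triple]:
    emitted: set[Triple] = set()
    output: list[Triple] = []
    # Most selective first = fewest estimated matches first.
    for tmap in sorted(maps, key=lambda m: m.estimated_matches):
        for triple in tmap.generate():
            if triple not in emitted:          # duplicate suppression
                emitted.add(triple)
                output.append(triple)
    return output
```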
Citations: 0
Special Issue on Semantic Web for Industrial Engineering: Research and Applications
IF 3.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-21 | DOI: 10.3233/sw-243623
Bahar Aameri, M. Poveda-Villalón, Emilio M. Sanfilippo, Walter Terkaj
{"title":"Special Issue on Semantic Web for Industrial Engineering: Research and Applications","authors":"Bahar Aameri, M. Poveda-Villalón, Emilio M. Sanfilippo, Walter Terkaj","doi":"10.3233/sw-243623","DOIUrl":"https://doi.org/10.3233/sw-243623","url":null,"abstract":"","PeriodicalId":48694,"journal":{"name":"Semantic Web","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140223539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
smart-KG: Partition-Based Linked Data Fragments for querying knowledge graphs
IF 3.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-20 | DOI: 10.3233/sw-243571
Amr Azzam, A. Polleres, Javier D. Fernández, Maribel Acosta
RDF and SPARQL provide a uniform way to publish and query billions of triples in open knowledge graphs (KGs) on the Web. Yet, provisioning of a fast, reliable, and responsive live querying solution for open KGs is still hardly possible through SPARQL endpoints alone: while such endpoints provide remarkable performance for single queries, they typically cannot cope with highly concurrent query workloads by multiple clients. To mitigate this, the Linked Data Fragments (LDF) framework sparked the design of different alternative low-cost interfaces such as Triple Pattern Fragments (TPF), which partially offload the query processing workload to the client side. On the downside, such interfaces still come with the expense of unnecessarily high network load due to the necessary transfer of intermediate results to the client, leading to query performance degradation compared with endpoints. To address this problem, in the present work, we investigate alternative interfaces, refining and extending the original TPF idea, which also aims at reducing server-resource consumption, by shipping query-relevant partitions of KGs from the server to the client. To this end, first, we align formal definitions and notations of the original LDF framework to uniformly present existing LDF implementations and such “partition-based” LDF approaches. These novel LDF interfaces retrieve, instead of the exact triples matching a particular query pattern, a subset of pre-materialized, compressed partitions of the original graph, containing all answers to a query pattern, to be further evaluated on the client side. As a concrete representative of partition-based LDF, we present smart-KG+, extending and refining our prior work (In WWW ’20: The Web Conference 2020 (2020) 984–994 ACM / IW3C2) in several respects. Our proposed approach is a step forward towards a better-balanced share of the query processing load between clients and servers by shipping graph partitions driven by the structure of RDF graphs to group entities described with the same sets of properties and classes, resulting in significant data transfer reduction. Our experiments demonstrate that smart-KG+ significantly outperforms existing Web SPARQL interfaces on both pre-existing benchmarks for highly concurrent query execution and an accustomed query workload inspired by query logs of existing SPARQL endpoints.
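A minimal sketch of the grouping idea mentioned at the end of the abstract: partitioning an RDF graph by grouping subjects that are described with the same set of predicates (their "characteristic set"), so that a whole pre-materialized partition can be shipped to the client. The rdflib calls are standard, but the partitioning below is a simplified stand-in for smart-KG+'s actual family-based partitioning, and the input file name is a placeholder.

```python
# Group subjects of an RDF graph by the exact set of predicates used to
# describe them; each group is a candidate partition to pre-materialize
# and ship to clients instead of answering triple patterns one by one.
from collections import defaultdict
from rdflib import Graph

def characteristic_set_partitions(g: Graph) -> dict:
    predicates_of = defaultdict(set)
    for s, p, _ in g:                      # iterate over all triples
        predicates_of[s].add(p)
    partitions = defaultdict(set)
    for subject, preds in predicates_of.items():
        partitions[frozenset(preds)].add(subject)
    return dict(partitions)

if __name__ == "__main__":
    g = Graph()
    g.parse("data.ttl")                    # placeholder path to any local RDF file
    for preds, subjects in characteristic_set_partitions(g).items():
        print(len(subjects), "subjects share a set of", len(preds), "predicates")
```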
Citations: 1
Empirical ontology design patterns and shapes from Wikidata
IF 3.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-20 | DOI: 10.3233/sw-243613
Valentina Anita Carriero, Paul Groth, Valentina Presutti
The ontology underlying the Wikidata knowledge graph (KG) has not been formalized. Instead, its semantics emerges bottom-up from the use of its classes and properties. Flexible guidelines and rules have been defined by the Wikidata project for the use of its ontology, however, it is still often difficult to reuse the ontology’s constructs. Based on the assumption that identifying ontology design patterns from a knowledge graph contributes to making its (possibly) implicit ontology emerge, in this paper we present a method for extracting what we term empirical ontology design patterns (EODPs) from a knowledge graph. This method takes as input a knowledge graph and extracts EODPs as sets of axioms/constraints involving the classes instantiated in the KG. These EODPs include data about the probability of such axioms/constraints happening. We apply our method on two domain-specific portions of Wikidata, addressing the music and art, architecture, and archaeology domains, and we compare the empirical ontology design patterns we extract with the current support present in Wikidata. We show how these patterns can provide guidance for the use of the Wikidata ontology and its potential improvement, and can give insight into the content of (domain-specific portions of) the Wikidata knowledge graph.
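The paper's method is richer than this, but a minimal sketch of the underlying intuition (deriving candidate axioms/constraints with attached probabilities from how a class is actually used in instance data) could look like the following. The use of rdf:type, and the naming, are assumptions for a generic KG; for Wikidata the instance-of property (P31) would play that role, and this is not the authors' implementation.

```python
# Sketch: for a given class, estimate how often each property co-occurs with
# its instances, yielding "empirical" candidate constraints of the form
# "instances of C have property P with probability 0.93".
from collections import Counter
from rdflib import Graph, URIRef
from rdflib.namespace import RDF

def property_probabilities(g: Graph, cls: URIRef) -> dict:
    instances = set(g.subjects(RDF.type, cls))   # for Wikidata, P31 would replace rdf:type
    if not instances:
        return {}
    counts = Counter()
    for inst in instances:
        for p in set(g.predicates(inst, None)):  # each predicate counted once per instance
            counts[p] += 1
    return {p: n / len(instances) for p, n in counts.items()}
```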
Citations: 0
Declarative generation of RDF-star graphs from heterogeneous data
IF 3.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-20 | DOI: 10.3233/sw-243602
Julián Arenas-Guerrero, Ana Iglesias-Molina, David Chaves-Fraga, Daniel Garijo, Óscar Corcho, Anastasia Dimou
RDF-star has been proposed as an extension of RDF to make statements about statements. Libraries and graph stores have started adopting RDF-star, but the generation of RDF-star data remains largely unexplored. To allow generating RDF-star from heterogeneous data, RML-star was proposed as an extension of RML. However, no system has been developed so far that implements the RML-star specification. In this work, we present Morph-KGCstar, which extends the Morph-KGC materialization engine to generate RDF-star datasets. We validate Morph-KGCstar by running test cases derived from the N-Triples-star syntax tests and we apply it to two real-world use cases from the biomedical and open science domains. We compare the performance of our approach against other RDF-star generation methods (SPARQL-Anything), showing that Morph-KGCstar scales better for large input datasets, but it is slower when processing multiple smaller files.
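RML-star and Morph-KGCstar themselves are not shown in the abstract; purely to illustrate the target format the engine generates, the sketch below emits Turtle-star style quoted triples from tabular rows, so a statement (a gene-disease association) can itself carry an annotation. The ex: vocabulary, the data rows, and the confidence field are invented placeholders.

```python
# Illustration of RDF-star output: a quoted triple becomes the subject of an
# annotation triple (here, a confidence score attached to each statement).
rows = [
    {"gene": "BRCA1", "disease": "BreastCancer", "confidence": 0.92},
    {"gene": "TP53",  "disease": "LungCancer",   "confidence": 0.71},
]

def to_turtle_star(rows) -> str:
    lines = ["@prefix ex: <http://example.org/> ."]
    for r in rows:
        quoted = f"<< ex:{r['gene']} ex:associatedWith ex:{r['disease']} >>"
        lines.append(f"{quoted} ex:confidence {r['confidence']} .")
    return "\n".join(lines)

print(to_turtle_star(rows))
```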
Citations: 4
The role of ontologies and knowledge in Explainable AI
IF 3.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-14 | DOI: 10.3233/sw-243529
Roberto Confalonieri, Oliver Kutz, Diego Calvanese, J. Alonso-Moral, Shang-Ming Zhou
These papers introduced domain-specific ontologies, providing a structured framework to facilitate understanding and explanation of the systems within each domain. The other group of papers took a more foundational approach by presenting logic-based methodologies that fostered the development of explainable-by-design systems. These papers emphasized the use of logical reasoning techniques to achieve explainability and offered frameworks for constructing systems that inherently prioritize interpretability. In summary, the accepted papers demonstrated the utilization of ontologies, knowledge graphs, and knowledge representation and reasoning in advancing the field of XAI. In the following, we provide a broad overview of all the accepted papers.
Citations: 0
A behaviouristic semantic approach to blockchain-based e-commerce
IF 3.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-14 | DOI: 10.3233/sw-243543
Giampaolo Bella, Domenico Cantone, Gianpietro Castiglione, Marianna Nicolosi Asmundo, Daniele Francesco Santamaria
Electronic commerce and finance are progressively supporting and including decentralized, shared and public ledgers such as the blockchain. This is reshaping traditional commercial activities by advancing them towards Decentralized Finance (DeFi) and Commerce 3.0, thereby supporting the latter’s potential to outpace the hurdles of central authority controllers and lawgivers. The quantity and entropy of the information that must be sought and managed to become active participants in such a relentlessly evolving scenario are increasing at a steady pace. For example, that information comprises asset or service description, general rules of the game, and specific technologies involved for decentralization. Moreover, the relevant information ought to be shared among innumerable and heterogeneous stakeholders, such as producers, buyers, digital identity providers, valuation services, and shipment services, to just name a few. A clear semantic representation of such a complex and multifaceted blockchain-based e-Commerce ecosystem would contribute dramatically to make it more usable, namely more automatically accessible to virtually anyone wanting to play the role of a stakeholder, thereby reducing programmers’ effort. However, we feel that reaching that goal still requires substantial effort in the tailoring of Semantic Web technologies, hence this article sets out on such a route and advances a stack of OWL 2 ontologies for the semantic description of decentralized e-commerce. The stack includes a number of relevant features, ranging from the applicable stakeholders through the supply chain of the offerings for an asset, up to the Ethereum blockchain, its tokens and smart contracts. Ontologies are defined by taking a behaviouristic approach to represent the various participants as agents in terms of their actions, inspired by the Theory of Agents and the related mentalistic notions. The stack is validated through appropriate metrics and SPARQL queries implementing suitable competency questions, then demonstrated through the representation of a real world use case, namely, the iExec marketplace.
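The abstract mentions validating the ontology stack with SPARQL queries that implement competency questions; a generic sketch of that validation step with rdflib is shown below. The ontology file name, the ex: namespace, the class and property names, and the competency question itself are made up for illustration and are not taken from the authors' ontology stack.

```python
# Sketch of competency-question validation: load an ontology and run a
# SPARQL query that should return answers if the ontology can express the
# question "which sellers offer which assets?". All names are placeholders.
from rdflib import Graph

g = Graph()
g.parse("ecommerce-ontology.owl", format="xml")   # placeholder path to one ontology of the stack

COMPETENCY_QUESTION = """
PREFIX ex: <http://example.org/ecommerce#>
SELECT ?seller ?asset WHERE {
  ?offer a ex:Offering ;
         ex:madeBy ?seller ;
         ex:concernsAsset ?asset .
}
"""

results = list(g.query(COMPETENCY_QUESTION))
print("Competency question answerable:", len(results) > 0)
```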
Citations: 3
Towards counterfactual explanations for ontologies
IF 3.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-08 | DOI: 10.3233/sw-243566
Matthieu Bellucci, Nicolas Delestre, Nicolas Malandain, Cecilia Zanni-Merk
Debugging and repairing Web Ontology Language (OWL) ontologies has been a key field of research since OWL became a W3C recommendation. One way to understand errors and fix them is through explanations. These explanations are usually extracted from the reasoner and displayed to the ontology authors as is. In the meantime, there has been a recent call in the eXplainable AI (XAI) field to use expert knowledge in the form of knowledge graphs and ontologies. In this paper, a parallel between explanations for machine learning and for ontologies is drawn. This link enables the adaptation of XAI methods to explain ontologies and their entailments. Counterfactual explanations have been identified as a good candidate to solve the explainability problem in machine learning. The CEO (Counterfactual Explanations for Ontologies) method is thus proposed to explain inconsistent ontologies using counterfactual explanations. A preliminary user study is conducted to ensure that using XAI methods for ontologies is relevant and worth pursuing.
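The CEO method itself is not reproduced here; the toy sketch below only conveys the general shape of a counterfactual explanation for an inconsistent ontology: search for the smallest set of asserted axioms whose removal restores consistency. The axiom encoding (plain strings) and the injected consistency check (in practice an OWL reasoner) are stand-ins, not the paper's algorithm.

```python
# Toy counterfactual search over an inconsistent axiom set: find the smallest
# subsets of assertions whose removal makes the ontology consistent again.
from itertools import combinations
from typing import Callable, FrozenSet, Set

def counterfactual_repairs(
    axioms: Set[str],
    is_consistent: Callable[[FrozenSet[str]], bool],
    max_changes: int = 2,
) -> list:
    repairs = []
    for k in range(1, max_changes + 1):
        for removed in combinations(sorted(axioms), k):
            remaining = frozenset(axioms) - frozenset(removed)
            if is_consistent(remaining):
                repairs.append(frozenset(removed))
        if repairs:                 # stop at the smallest change size found
            break
    return repairs

if __name__ == "__main__":
    axioms = {"Penguin SubClassOf Bird", "Bird SubClassOf Flies", "Penguin SubClassOf not Flies"}

    def toy_consistency(ax: FrozenSet[str]) -> bool:
        # Inconsistent only when all three toy axioms are present together.
        return not {"Penguin SubClassOf Bird", "Bird SubClassOf Flies",
                    "Penguin SubClassOf not Flies"} <= ax

    print(counterfactual_repairs(axioms, toy_consistency))
```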
Citations: 0