
Proceedings of the 12th International Conference on Semantic Systems: Latest Publications

Towards a Vocabulary Terms Discovery Assistant
Pub Date: 2016-09-12 DOI: 10.1145/2993318.2993347
Ioannis Stavrakantonakis, A. Fensel, D. Fensel
The Linked Open Vocabularies (LOV) curated directory of vocabularies has radically changed the way engineers explore the vocabulary space, allowing them to search for terms and vocabularies with the provided keyword-based search. Running a survey on how vocabulary terms are chosen to annotate a specific webpage, we realised the gap between the vocabulary creators' side and the vocabulary users. In this direction, the presented framework, the LOVR framework, aims to facilitate vocabulary term discovery by providing a Web service with a set of endpoints that can be invoked to obtain a list of recommended terms for a given webpage. Within this work, we present the framework architecture and the fundamental parts of the prototype that implements the methodology behind the LOVR framework, which leverages the LOV search. Furthermore, the various endpoints of the Web service are described by explaining their usage scenarios.
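As an illustration of the kind of keyword-based term lookup the LOVR prototype builds on, the Python sketch below queries a LOV term-search endpoint with requests. The URL, query parameters and response fields are assumptions modelled on the public LOV API, and the LOVR recommendation endpoints themselves are not reproduced here.

```python
import requests

# Assumed endpoint and response layout, modelled on the public LOV API;
# not the LOVR framework's own endpoints.
LOV_TERM_SEARCH = "https://lov.linkeddata.es/dataset/lov/api/v2/term/search"

def suggest_terms(keyword, term_type="class", limit=5):
    """Return up to `limit` vocabulary term URIs matching `keyword`."""
    resp = requests.get(LOV_TERM_SEARCH,
                        params={"q": keyword, "type": term_type},
                        timeout=10)
    resp.raise_for_status()
    hits = resp.json().get("results", [])[:limit]
    # Each hit is assumed to carry the term URI under a "uri" field.
    return [hit.get("uri") for hit in hits]

if __name__ == "__main__":
    print(suggest_terms("hotel"))
```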
{"title":"Towards a Vocabulary Terms Discovery Assistant","authors":"Ioannis Stavrakantonakis, A. Fensel, D. Fensel","doi":"10.1145/2993318.2993347","DOIUrl":"https://doi.org/10.1145/2993318.2993347","url":null,"abstract":"The Linked Open Vocabularies (LOV) curated directory list of vocabularies has changed radically the way that engineers are assisted to explore the vocabulary space by searching for terms and vocabularies using the provided keyword based search. Running a survey regarding the decision of the vocabulary terms that can be used to annotate a specific webpage, we realised the gap between the vocabulary creators side and the vocabulary users. In this direction, the presented framework, namely the LOVR framework, aims to facilitate the vocabulary terms discovery by providing a Web service with a set of endpoints that can be invoked to get a list of recommended terms for a given webpage. Within this work, we present the framework architecture and the fundamental parts of the prototype that implements the methodology behind the LOVR framework, which leverages the LOV search. Furthermore, the various endpoints of the Web service are described by explaining their usage scenarios.","PeriodicalId":177013,"journal":{"name":"Proceedings of the 12th International Conference on Semantic Systems","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129213939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MEX Interfaces: Automating Machine Learning Metadata Generation
Pub Date: 2016-09-12 DOI: 10.1145/2993318.2993320
Diego Esteves, Pablo N. Mendes, Diego Moussallem, J. C. Duarte, A. Zaveri, Jens Lehmann
Despite recent efforts to achieve a high level of interoperability of Machine Learning (ML) experiments, positively collaborating with the Reproducible Research context, we still run into problems created by the existence of different ML platforms: each of these has a specific conceptualization or schema for representing data and metadata. This scenario leads to extra coding effort to achieve both the desired interoperability and a better provenance level, as well as a more automated environment for obtaining the generated results. Hence, when using ML libraries, it is a common task to re-design specific data models (schemata) and develop wrappers to manage the produced outputs. In this article, we discuss this gap, focusing on the solution to the question: "What is the cleanest and lowest-impact solution, i.e., the minimal effort to achieve both higher interoperability and provenance metadata levels in the Integrated Development Environment (IDE) context, and how can the inherent data querying task be facilitated?". We introduce a novel and low-impact methodology specifically designed for code built in that context, combining Semantic Web concepts and reflection in order to minimize the gap for exporting ML metadata in a structured manner, allowing embedded code annotations that are, at run-time, converted into one of the state-of-the-art ML schemas for the Semantic Web: the MEX Vocabulary.
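The annotation-plus-reflection idea can be pictured with a small Python sketch: a decorator captures run metadata and writes it out as RDF with rdflib. The mex-core namespace, class and property names used here are assumptions for illustration; the actual MEX Interfaces implementation targets IDE code and the full MEX vocabulary.

```python
import time
from functools import wraps
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Assumed namespace, class and property names for illustration only; the
# real MEX vocabulary is split into modules (mex-core, mex-algo, mex-perf)
# with its own identifiers.
MEX = Namespace("http://mex.aksw.org/mex-core#")
EX = Namespace("http://example.org/run/")

def mex_logged(graph):
    """Decorator sketch: capture execution metadata of an ML run as RDF."""
    def decorate(fn):
        @wraps(fn)
        def inner(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            run = EX[f"{fn.__name__}-{int(start)}"]
            graph.add((run, RDF.type, MEX.Execution))     # assumed class
            graph.add((run, MEX.duration,                 # assumed property
                       Literal(time.time() - start, datatype=XSD.double)))
            return result
        return inner
    return decorate

g = Graph()
g.bind("mexcore", MEX)

@mex_logged(g)
def train_model():
    time.sleep(0.1)  # stand-in for an actual training step
    return "model"

train_model()
print(g.serialize(format="turtle"))
```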
{"title":"MEX Interfaces: Automating Machine Learning Metadata Generation","authors":"Diego Esteves, Pablo N. Mendes, Diego Moussallem, J. C. Duarte, A. Zaveri, Jens Lehmann","doi":"10.1145/2993318.2993320","DOIUrl":"https://doi.org/10.1145/2993318.2993320","url":null,"abstract":"Despite recent efforts to achieve a high level of interoperability of Machine Learning (ML) experiments, positively collaborating with the Reproducible Research context, we still run into problems created due to the existence of different ML platforms: each of those have a specific conceptualization or schema for representing data and metadata. This scenario leads to an extra coding-effort to achieve both the desired interoperability and a better provenance level as well as a more automatized environment for obtaining the generated results. Hence, when using ML libraries, it is a common task to re-design specific data models (schemata) and develop wrappers to manage the produced outputs. In this article, we discuss this gap focusing on the solution for the question: \"What is the cleanest and lowest-impact solution, i.e., the minimal effort to achieve both higher interoperability and provenance metadata levels in the Integrated Development Environments (IDE) context and how to facilitate the inherent data querying task?\". We introduce a novel and low-impact methodology specifically designed for code built in that context, combining Semantic Web concepts and reflection in order to minimize the gap for exporting ML metadata in a structured manner, allowing embedded code annotations that are, in run-time, converted in one of the state-of-the-art ML schemas for the Semantic Web: MEX Vocabulary.","PeriodicalId":177013,"journal":{"name":"Proceedings of the 12th International Conference on Semantic Systems","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128647205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Executing SPARQL queries over Mapped Document Store with SparqlMap-M
Pub Date: 2016-09-12 DOI: 10.1145/2993318.2993326
Jörg Unbehauen, Michael Martin
With the increasing adoption of NoSQL database systems like MongoDB or CouchDB, more and more applications store structured data according to a non-relational, document-oriented model. Exposing this structured data as Linked Data is currently inhibited by a lack of standards and tools and requires the implementation of custom solutions. While recent efforts aim at expressing transformations of such data models into RDF in a standardized manner, there is a lack of approaches that facilitate SPARQL execution over mapped non-relational data sources. With SparqlMap-M we show how dynamic SPARQL access to non-relational data can be achieved. SparqlMap-M is an extension of our SPARQL-to-SQL rewriter SparqlMap that performs a (partial) transformation of SPARQL queries by using a relational abstraction over a document store. Further, duplicate data in the document store is used to reduce the number of joins, and custom optimizations are introduced. Our showcase scenario employs the Berlin SPARQL Benchmark (BSBM) with different adaptations to a document data model. We use this scenario to demonstrate the viability of our approach and compare it to different MongoDB setups and native SQL.
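To make the rewriting idea concrete, the sketch below shows, purely as a conceptual illustration rather than SparqlMap-M's actual algorithm, how a single SPARQL basic graph pattern over a mapped collection could correspond to one MongoDB query issued with pymongo; the connection string, collection and field names are invented.

```python
from pymongo import MongoClient

# SPARQL query to answer, over a hypothetical mapping of the "products"
# collection to foaf:name / ex:country properties:
#
#   SELECT ?name WHERE { ?p foaf:name ?name . ?p ex:country "DE" . }
#
# A rewriter in the spirit of SparqlMap-M would turn the two triple
# patterns on the same subject into one filter plus a projection:
client = MongoClient("mongodb://localhost:27017")   # invented connection
collection = client["shop"]["products"]             # invented names

cursor = collection.find({"country": "DE"},          # pattern on ex:country
                         {"name": 1, "_id": 0})      # projection for ?name
for doc in cursor:
    print(doc["name"])
```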
{"title":"Executing SPARQL queries over Mapped Document Store with SparqlMap-M","authors":"Jörg Unbehauen, Michael Martin","doi":"10.1145/2993318.2993326","DOIUrl":"https://doi.org/10.1145/2993318.2993326","url":null,"abstract":"With the increasing adoption of NoSQL data base systems like MongoDB or CouchDB more and more applications store structured data according to a non-relational, document oriented model. Exposing this structured data as Linked Data is currently inhibited by a lack of standards as well as tools and requires the implementation of custom solutions. While recent efforts aim at expressing transformations of such data models into RDF in a standardized manner, there is a lack of approaches which facilitate SPARQL execution over mapped non-relational data sources. With SparqlMap-M we show how dynamic SPARQL access to non-relational data can be achieved. SparqlMap-M is an extension to our SPARQL-to-SQL rewriter SparqlMap that performs a (partial) transformation of SPARQL queries by using a relational abstraction over a document store. Further, duplicate data in the document store is used to reduce the number of joins and custom optimizations are introduced. Our showcase scenario employs the Berlin SPARQL Benchmark (BSBM) with different adaptions to a document data model. We use this scenario to demonstrate the viability of our approach and compare it to different MongoDB setups and native SQL.","PeriodicalId":177013,"journal":{"name":"Proceedings of the 12th International Conference on Semantic Systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115875880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
A Twitter Sentiment Gold Standard for the Brexit Referendum
Pub Date: 2016-09-12 DOI: 10.1145/2993318.2993350
M. Hürlimann, Brian Davis, Keith Cortis, A. Freitas, S. Handschuh, Sergio Fernández
In this paper, we present a sentiment-annotated Twitter gold standard for the Brexit referendum. The data set consists of 2,000 Twitter messages ("tweets") annotated with information about the sentiment expressed, the strength of the sentiment, and context dependence. This is a valuable resource for social media-based opinion mining in the context of political events.
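A resource like this is typically consumed as a flat annotation table. The sketch below, with an invented file name and column layout (not the published format of the gold standard), shows how the label distribution of such a corpus could be inspected with Python's csv module.

```python
import csv
from collections import Counter

# Hypothetical layout: one row per tweet with its sentiment label, strength,
# and a context-dependence flag. File and column names are assumptions.
def label_distribution(path="brexit_gold_standard.csv"):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[(row["sentiment"], row["context_dependent"])] += 1
    return counts

if __name__ == "__main__":
    for (label, ctx), n in label_distribution().most_common():
        print(f"{label} (context-dependent={ctx}): {n}")
```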
{"title":"A Twitter Sentiment Gold Standard for the Brexit Referendum","authors":"M. Hürlimann, Brian Davis, Keith Cortis, A. Freitas, S. Handschuh, Sergio Fernández","doi":"10.1145/2993318.2993350","DOIUrl":"https://doi.org/10.1145/2993318.2993350","url":null,"abstract":"In this paper, we present a sentiment-annotated Twitter gold standard for the Brexit referendum. The data set consists of 2,000 Twitter messages (\"tweets\") annotated with information about the sentiment expressed, the strength of the sentiment, and context dependence. This is a valuable resource for social media-based opinion mining in the context of political events.","PeriodicalId":177013,"journal":{"name":"Proceedings of the 12th International Conference on Semantic Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132789157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
Linked Open Vocabulary Ranking and Terms Discovery
Pub Date: 2016-09-12 DOI: 10.1145/2993318.2993338
Ioannis Stavrakantonakis, A. Fensel, D. Fensel
Searching among the more than 500 existing vocabularies has never been easier than today with the Linked Open Vocabularies (LOV) curated directory list. The LOV search provides one central point to explore the vocabulary term space. However, it can still be cumbersome for non-experts or semantic annotation experts to discover the appropriate terms for describing given website content. In this direction, the proposed approach is the cornerstone of a methodology that aims to facilitate the selection of the highest-ranked terms from the abundance of registered vocabularies based on a keyword search. Moreover, it introduces for the first time the role of the contributors' background, retrieved from the LOV repository, in the ranking of the vocabularies. With this addition, we aim to address the issue of very low scores for newly published vocabularies. The paper underlines, through a survey, the difficulty of selecting vocabulary terms and describes the approach that enables the ranking of vocabularies within the above-mentioned methodology.
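One way to picture the combined ranking is a score that mixes the keyword-match relevance of a term with a weight derived from its vocabulary's contributors. The linear combination below is a simplified stand-in, not the scoring function defined in the paper.

```python
def rank_terms(candidates, alpha=0.7):
    """Toy ranking: mix textual relevance with a contributor-based weight.

    `candidates` is a list of dicts with invented keys:
      - "term":              the vocabulary term (prefixed name or URI)
      - "match_score":       keyword relevance in [0, 1]
      - "contributor_score": reputation of the vocabulary's contributors in [0, 1]
    The linear combination is an illustrative assumption only.
    """
    scored = [(alpha * c["match_score"]
               + (1 - alpha) * c["contributor_score"], c["term"])
              for c in candidates]
    return [term for _, term in sorted(scored, reverse=True)]

print(rank_terms([
    {"term": "schema:Hotel", "match_score": 0.9, "contributor_score": 0.2},
    {"term": "acco:Hotel",   "match_score": 0.8, "contributor_score": 0.9},
]))
```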
{"title":"Linked Open Vocabulary Ranking and Terms Discovery","authors":"Ioannis Stavrakantonakis, A. Fensel, D. Fensel","doi":"10.1145/2993318.2993338","DOIUrl":"https://doi.org/10.1145/2993318.2993338","url":null,"abstract":"Searching among the existing 500 and more vocabularies was never easier than today with the Linked Open Vocabularies (LOV) curated directory list. The LOV search provides one central point to explore the vocabulary terms space. However, it can be still cumbersome for non-experts or semantic annotation experts to discover the appropriate terms for the description of given website content. In this direction, the proposed approach is the cornerstone part of a methodology that aims to facilitate the selection of the highest ranked terms from the abundance of the registered vocabularies based on a keyword search. Moreover, it introduces for the first time the role of the contributors' background, which is retrieved from the LOV repository, in the ranking of the vocabularies. With this addition, we aim to address the issue of very low scores for the newly published vocabularies. The paper underlines the difficulty of selecting vocabulary terms through a survey and describes the approach that enables the ranking of vocabularies within the above mentioned methodology.","PeriodicalId":177013,"journal":{"name":"Proceedings of the 12th International Conference on Semantic Systems","volume":"216 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134062415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Cross-Evaluation of Entity Linking and Disambiguation Systems for Clinical Text Annotation
Pub Date: 2016-09-12 DOI: 10.1145/2993318.2993345
Camilo Thorne, Stefano Faralli, H. Stuckenschmidt
In this paper we study whether state-of-the-art techniques for multi-domain and multilingual entity linking can be ported to the clinical domain. To do so, we compare two well-known entity linking systems, BabelFly and TagMe, which leverage Wikipedia and DBpedia, with the standard clinical semantic annotation and disambiguation system, MetaMap, on the SemRep clinical word sense disambiguation gold standard. We show that BabelFly, and especially TagMe, while achieving decent precision on clinical annotation, outperform MetaMap in F1-score.
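The cross-evaluation ultimately reduces to scoring each system's annotations against the SemRep gold standard. A minimal sketch of set-based precision, recall and F1 over (mention, concept) pairs follows, with invented toy annotations.

```python
def prf1(predicted, gold):
    """Set-based precision, recall and F1 over (mention, concept) annotation pairs."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented toy annotations: (text span, concept identifier) pairs.
gold_annotations = [("cold", "concept/1"), ("aspirin", "concept/2")]
system_annotations = [("cold", "concept/1"), ("aspirin", "concept/3")]
print(prf1(system_annotations, gold_annotations))
```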
{"title":"Cross-Evaluation of Entity Linking and Disambiguation Systems for Clinical Text Annotation","authors":"Camilo Thorne, Stefano Faralli, H. Stuckenschmidt","doi":"10.1145/2993318.2993345","DOIUrl":"https://doi.org/10.1145/2993318.2993345","url":null,"abstract":"In this paper we study whether state-of-the-art techniques for multi-domain and multilingual entity linking can be ported to the clinical domain. To do so, we compare two known entity linking systems, BabelFly and TagMe, that leverage on Wikipedia and DBpedia, with the standard clinical semantic annotation and disambiguation system, MetaMap, over the SemRep clinical word sense disambiguation gold standard. We show that BabelFly and especially TagMe, while achieving decent precision on clinical annotation, outmatch MetaMap's F1-score.","PeriodicalId":177013,"journal":{"name":"Proceedings of the 12th International Conference on Semantic Systems","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115809891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Top-level Ideas about Importing, Translating and Exporting Knowledge via an Ontology of Representation Languages
Pub Date: 2016-09-12 DOI: 10.1145/2993318.2993344
Philippe A. Martin, Jérémy Bénard
This article introduces KRLO, an ontology of knowledge representation languages (KRLs), the first to represent KRL abstract models in a uniform way and the first to represent KRL notations, i.e., concrete models. Thus, KRLO can help design tools that handle many KRLs and let their end-users design or adapt KRLs. KRLO also represents KRL import, translation and export methods in a declarative way, both via Datalog-like rules and via pure functions.
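A rough picture of what "an ontology of representation languages" means in practice: the rdflib sketch below describes a notation and the abstract model it serializes, using a hypothetical namespace and invented class and property names rather than KRLO's actual identifiers.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace and identifiers in the spirit of KRLO; the real
# ontology defines its own classes and properties.
KRLO = Namespace("http://example.org/krlo#")

g = Graph()
g.bind("krlo", KRLO)

rdf_model = KRLO["RDF_AbstractModel"]
turtle = KRLO["Turtle_Notation"]

g.add((rdf_model, RDF.type, KRLO.AbstractModel))  # assumed class
g.add((turtle, RDF.type, KRLO.Notation))          # assumed class
g.add((turtle, KRLO.notationOf, rdf_model))       # assumed property
g.add((turtle, RDFS.label, Literal("Turtle")))

print(g.serialize(format="turtle"))
```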
{"title":"Top-level Ideas about Importing, Translating and Exporting Knowledge via an Ontology of Representation Languages","authors":"Philippe A. Martin, Jérémy Bénard","doi":"10.1145/2993318.2993344","DOIUrl":"https://doi.org/10.1145/2993318.2993344","url":null,"abstract":"This article introduces KRLO, an ontology of knowledge representation languages (KRLs), the first to represent KRL abstract models in a uniform way and the first to represent KRL notations, i.e., concrete models. Thus, KRLO can help design tools handling many KRLs and letting their end-users design or adapt KRLs. KRLO also represent KRL import, translation and export methods in a declarative way, both via Datalog like rules and pure functions.","PeriodicalId":177013,"journal":{"name":"Proceedings of the 12th International Conference on Semantic Systems","volume":"33 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115029431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Linking Images to Semantic Knowledge Base with User-generated Tags
Pub Date: 2016-09-12 DOI: 10.1145/2993318.2993340
Shuangyong Song, Qingliang Miao, Yao Meng
Images account for an important part of Multimedia Linked Open Data, but currently most of the semantic relations between images and other entities are based on manual semantic annotation. With the popularity of image hosting websites such as Flickr, the plentiful tagging information attached to images makes it possible to automatically generate semantic relations between images and other semantic entities. In this paper, we propose a model for linking images to a semantic knowledge base (KB) using the user-generated tags of those images, while taking into account the topical semantic similarity between tags. The experimental results show that our approach can effectively realize this aim.
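The core step, mapping a noisy tag set to KB entities while respecting the topical coherence of the other tags, can be sketched as follows. The candidate lists and the similarity scores are invented toy data, and the greedy selection is an illustrative stand-in for the model proposed in the paper.

```python
# Invented toy data: candidate KB entities per tag, plus a precomputed
# topical similarity between candidate entities (e.g., from embeddings).
CANDIDATES = {
    "jaguar": ["dbpedia:Jaguar_(animal)", "dbpedia:Jaguar_Cars"],
    "rainforest": ["dbpedia:Rainforest"],
}
SIMILARITY = {
    ("dbpedia:Jaguar_(animal)", "dbpedia:Rainforest"): 0.8,
    ("dbpedia:Jaguar_Cars", "dbpedia:Rainforest"): 0.1,
}

def sim(a, b):
    return SIMILARITY.get((a, b), SIMILARITY.get((b, a), 0.0))

def link_tags(candidates):
    """Pick, per tag, the candidate most coherent with the other tags' candidates."""
    linked = {}
    for tag, cands in candidates.items():
        others = [c for t, cs in candidates.items() if t != tag for c in cs]
        linked[tag] = max(cands, key=lambda c: sum(sim(c, o) for o in others))
    return linked

print(link_tags(CANDIDATES))
```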
{"title":"Linking Images to Semantic Knowledge Base with User-generated Tags","authors":"Shuangyong Song, Qingliang Miao, Yao Meng","doi":"10.1145/2993318.2993340","DOIUrl":"https://doi.org/10.1145/2993318.2993340","url":null,"abstract":"Images account for an important part of Multimedia Linked Open Data, but currently most of the semantic relations between images and other entities are based on manual semantic annotation. With the popularity of image hosting websites, such as Flickr, plentiful tagging information of images makes it possible to automatically generate semantic relations between images and other semantic entities. In this paper, we propose a model for linking images to semantic knowledge base (KB) with user-generated tags of those images, while taking into account topical semantic similarity between tags. The experimental results show that our approach can effectively realize the mentioned aim.","PeriodicalId":177013,"journal":{"name":"Proceedings of the 12th International Conference on Semantic Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116299563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Modeling and Enforcing Access Control Obligations for SPARQL-DL Queries
Pub Date: 2016-09-12 DOI: 10.1145/2993318.2993337
Nicoletta Fornara, Fabio Marfia
Different access control models for semantic data are presented in the literature, allowing the expression and enforcement of access policies that are usually based on roles and other attributes of the requesting user. In the present work we investigate a different access control perspective, allowing a Policy Administrator to define system obligations that focus on the enhanced semantics, with particular reference to the information that can be inferred from the starting knowledge representation using DL reasoning. This is done by applying a paradigm for the specification and enforcement of access control obligations to the SPARQL-DL query model for OWL ontologies. The presented approach allows more than a simple permit/deny control on inferred data (e.g., data can be returned, but only after an anonymization process), together with the possibility of specifying very expressive policies.
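The "return the data, but only after an obligation such as anonymization is fulfilled" idea can be pictured as a post-processing hook over query solutions. The obligation registry and the anonymization rule below are invented placeholders, not the paper's SPARQL-DL machinery.

```python
import hashlib

# Invented obligation registry: variable name -> obligation to apply
# before results may be released to the requester.
OBLIGATIONS = {"patientName": "anonymize"}

def anonymize(value):
    """Replace a value with a stable pseudonym (illustrative only)."""
    return "anon-" + hashlib.sha256(value.encode("utf-8")).hexdigest()[:8]

def enforce_obligations(bindings):
    """Apply registered obligations to each solution of a (SPARQL-DL) query."""
    released = []
    for row in bindings:
        out = dict(row)
        for var, obligation in OBLIGATIONS.items():
            if obligation == "anonymize" and var in out:
                out[var] = anonymize(out[var])
        released.append(out)
    return released

print(enforce_obligations([{"patientName": "Alice", "diagnosis": "flu"}]))
```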
{"title":"Modeling and Enforcing Access Control Obligations for SPARQL-DL Queries","authors":"Nicoletta Fornara, Fabio Marfia","doi":"10.1145/2993318.2993337","DOIUrl":"https://doi.org/10.1145/2993318.2993337","url":null,"abstract":"Different access control models are presented in literature for semantic data, allowing the expression and enforcement of access policies that are based on roles and other attributes of the requesting user usually. We investigate a different access control perspective in the present work, allowing a Policy Administrator to define system obligations that are focused on the enhanced semantics, with a particular reference to the information that can be inferred from the starting knowledge representation, using DL reasoning. That is done by applying a paradigm for the specification and enforcement of access control obligations to the SPARQL-DL query model for OWL ontologies. The presented approach allows more than a simple permit/deny control on inferred data (e.g., data can be returned, but after an anonymization process), together with the possibility of specifying very expressive policies.","PeriodicalId":177013,"journal":{"name":"Proceedings of the 12th International Conference on Semantic Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126419812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Verifiability and Traceability in a Linked Data Based Messaging System
Pub Date: 2016-09-12 DOI: 10.1145/2993318.2993342
F. Kleedorfer, Yana Panchenko, C. Busch, C. Huemer
When linked data applications communicate, they commonly use messaging technologies in which the message exchange itself is not represented as linked data, since it takes place on a different architectural level. When a message cannot be verified and traced on the linked data level, trust in data is moved from message originators to service providers. However, there are use cases in which the actual message exchange and its verifiability are of importance. In such situations, the separation between application data and communication data is not desirable. To address this, we propose messaging based on linked data, where communicating entities and their messages are represented as interconnected Web resources, and we show how conversations can be made verifiable using digital signatures.
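A minimal sketch of the verifiability idea: sign the bytes of a serialized RDF message with an Ed25519 key, using rdflib and the cryptography package. The message vocabulary is invented, and sorting N-Triples lines is only a crude stand-in for the canonical graph serialization a real system would need.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

EX = Namespace("http://example.org/msg/")   # invented message namespace

# Build a toy "message" graph.
g = Graph()
msg = URIRef(EX["42"])
g.add((msg, RDF.type, EX.ConversationMessage))           # invented class
g.add((msg, EX.text, Literal("I accept your offer.")))   # invented property

# Serialize and sign. Sorting the N-Triples lines is a crude stand-in for
# proper RDF canonicalization.
payload = "\n".join(sorted(g.serialize(format="nt").splitlines())).encode()

key = Ed25519PrivateKey.generate()
signature = key.sign(payload)

# The receiver verifies with the sender's public key; a tampered payload
# raises cryptography.exceptions.InvalidSignature.
key.public_key().verify(signature, payload)
print("message verified")
```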
{"title":"Verifiability and Traceability in a Linked Data Based Messaging System","authors":"F. Kleedorfer, Yana Panchenko, C. Busch, C. Huemer","doi":"10.1145/2993318.2993342","DOIUrl":"https://doi.org/10.1145/2993318.2993342","url":null,"abstract":"When linked data applications communicate, they commonly use messaging technologies in which the message exchange itself is not represented as linked data, since it takes place on a different architectural level. When a message cannot be verified and traced on the linked data level, trust in data is moved from message originators to service providers. However, there are use cases in which the actual message exchange and its verifiability are of importance. In such situations, the separation between application data and communication data is not desirable. To address this, we propose messaging based on linked data, where communicating entities and their messages are represented as interconnected Web resources, and we show how conversations can be made verifiable using digital signatures.","PeriodicalId":177013,"journal":{"name":"Proceedings of the 12th International Conference on Semantic Systems","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127379883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9