P. Haase, K. Hose, Ralf Schenkel, Michael Schmidt, A. Schwarte. "Federated Query Processing over Linked Data." In Linked Data Management. DOI: 10.1201/b16859-19.
Francesca Bugiotti, Jesús Camacho-Rodríguez, François Goasdoué, Zoi Kaoudi, I. Manolescu, Stamatis Zampetakis. "SPARQL Query Processing in the Cloud." In Linked Data Management. DOI: 10.1201/b16859-11.
Jürgen Umbrich, Marcel Karnstedt, A. Polleres, K. Sattler. "Index-Based Source Selection and Optimization." In Linked Data Management. DOI: 10.1201/b16859-17.
{"title":"Efficient Query Processing in RDF Databases","authors":"Andrey Gubichev, Thomas Neumann","doi":"10.1201/b16859-8","DOIUrl":"https://doi.org/10.1201/b16859-8","url":null,"abstract":"","PeriodicalId":252334,"journal":{"name":"Linked Data Management","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126656601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Hogan. "Linked Data & the Semantic Web Standards." In Linked Data Management. DOI: 10.1201/b16859-3.

On the traditional World Wide Web we all know and love, machines act as brokers of content: they store, organize, request, route, transmit, receive, and display content encapsulated as documents. For machines to process the content of documents automatically, for whatever purpose, they primarily require two things: machine-readable structure and semantics. Unfortunately, despite advances in Natural Language Processing (NLP) over the decades, modern computers still struggle to meaningfully process the idiosyncratic structure and semantics of natural language, owing to ambiguities in grammar, coreference, and word sense. Hence, machines require a more "formal" notion of structure and semantics based on unambiguous grammar, referencing, and vocabulary.
J. Calbimonte, Óscar Corcho. "Evaluating SPARQL Queries over Linked Data Streams." In Linked Data Management. DOI: 10.1201/b16859-9.

So far we have addressed different aspects of RDF and Linked Data management, from modeling to query processing and reasoning. In most cases, however, these tasks and operations are applied to static data. For streaming data, which is highly dynamic and potentially infinite, the data management paradigm is quite different: it focuses on the evolution of data over time rather than on storage and retrieval. Despite these differences, data streams on the Web can also benefit from the exposure of machine-readable semantic content as seen in the previous chapters. Semantic Web technologies such as RDF and SPARQL have been applied to data streams over the years, in what can be broadly called Linked Data Streams. Querying data streams is a core operation in any streaming data application. From environmental and weather station observations to real-time patient health monitoring, the availability of data streams is dramatically changing the types of applications being developed and made available in many domains. Many of these applications pose complex requirements for data management and query processing. For example, streams produced by sensors can help study and forecast hurricanes and prevent natural disasters in vulnerable regions: barometric pressure measured at sea level can be combined with wind speed measurements and satellite imaging to better predict extreme weather conditions. Another example comes from the health domain, where industry has produced affordable devices that track caloric burn, blood glucose, or heart rate, among others, allowing live monitoring of the activity, metabolism, and sleep patterns of any person [226]. Moreover, data streams fit naturally with applications that store or publish them in the cloud, allowing ubiquitous access, aggregation, comparison,
Sebastian Speiser, M. Junghans, A. Haller. "Linked Data Services." In Linked Data Management. DOI: 10.1201/b16859-24.

Information services are commonly provided via Web APIs based on Representational State Transfer (REST) principles [196,452] or via Web Services based on the WS-* technology stack [182,429]. Currently deployed information services use HTTP as a transport protocol but return data as JSON or XML, which requires glue code to combine data from different APIs with information provided as Linked Data. Linked Data interfaces for services have been created, e.g., in the form of the book mashup [97], which returns RDF data about books based on Amazon's API, or twitter2foaf, which encodes the Twitter follower network of a given user based on the API provided by Twitter. However, these interfaces are not formally described, so the link between services and data has to be established manually or by service-specific algorithms. For example, to establish a link between person instances (e.g., described using the FOAF vocabulary) and their Twitter accounts, one has to hard-code which property relates people to their Twitter username, and the fact that the URI of the person's Twitter representation is created by appending the username to http://twitter2foaf.appspot.com/id/. In this chapter, we present the LInked Data Services (LIDS) approach for creating Linked Data interfaces to information services. The approach incorporates formal service descriptions that enable (semi-)automatic service discovery and integration. Specifically, we present the following components: an access mechanism for LIDS interfaces based on generic Web architecture
{"title":"Using read-write Linked Data for Application Integration","authors":"A. L. Hors, Steve Speicher","doi":"10.1201/b16859-25","DOIUrl":"https://doi.org/10.1201/b16859-25","url":null,"abstract":"","PeriodicalId":252334,"journal":{"name":"Linked Data Management","volume":"3 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115615133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"P2P-Based Query Processing over Linked Data","authors":"Marcel Karnstedt, K. Sattler, M. Hauswirth","doi":"10.1201/b16859-18","DOIUrl":"https://doi.org/10.1201/b16859-18","url":null,"abstract":"","PeriodicalId":252334,"journal":{"name":"Linked Data Management","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124158862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Juan Sequeda, Daniel P. Miranker. "Mapping Relational Databases to Linked Data." In Linked Data Management. DOI: 10.1201/b16859-7.

To live up to its promise of web-scale data integration, the Semantic Web will have to include the content of existing relational databases. One study determined that there is 500 times as much data in the hidden or deep web as in crawlable, indexable web pages, and most of that hidden data is stored in relational databases [79]. Starting with a 2007 workshop titled "RDF Access to Relational Databases", the W3C sponsored a series of activities to address this issue. At that workshop the acronym RDB2RDF (Relational Database to Resource Description Framework) was coined. In September 2012, these activities culminated in the ratification of two W3C standards, colloquially known as Direct Mapping [43] and R2RML [165]. By design, both standards avoid any content that speaks about implementation, directly or indirectly; their concern is the syntactic transformation of the contents of rows in relational tables into RDF. The R2RML language includes statements that specify which columns and tables are mapped to which properties and classes of a domain ontology. The language thus empowers a developer to examine the contents of a relational database and write a mapping specification. For relational databases with large schemas, manually developing a mapping is a commensurately large undertaking. A standard direct mapping is therefore also defined: an automatic mapping of the relational data to an RDF graph that reflects the structure of the database schema, with URIs generated automatically from the names of database schema elements.