Big data analysis and query optimization improve HadoopDB performance
Cherif A. A. Bissiriou, H. Chaoui
DOI: 10.1145/2660517.2660529

High performance and scalability are two essential requirements for data analytics systems, as the amount of data being collected, stored and processed continues to grow rapidly. In this paper, we propose a new approach based on HadoopDB. Our main goal is to improve HadoopDB's performance by adding several components: we incorporate a fast and space-efficient data placement structure for MapReduce-based warehouse systems together with an alternative SQL-to-MapReduce translator, we replace the database originally embedded in HadoopDB with a column-oriented database, and we add a security mechanism to protect the integrity of MapReduce processing.
{"title":"Big data analysis and query optimization improve HadoopDB performance","authors":"Cherif A. A. Bissiriou, H. Chaoui","doi":"10.1145/2660517.2660529","DOIUrl":"https://doi.org/10.1145/2660517.2660529","url":null,"abstract":"High performance and scalability are two essentials requirements for data analytics systems as the amount of data being collected, stored and processed continue to grow rapidly. In this paper, we propose a new approach based on HadoopDB. Our main goal is to improve HadoopDB performance by adding some components. To achieve this, we incorporate a fast and space-efficient data placement structure in MapReduce-based Warehouse systems and another SQL-to-MapReduce translator. We also replace the initial Database implemented in HadoopDB with other column oriented Database. In addition we add security mechanism to protect MapReduce processing integrity.","PeriodicalId":344435,"journal":{"name":"Joint Conference on Lexical and Computational Semantics","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126761913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Semantically faceted navigation with topic pies
Tilman Deuschel, Christian Greppmeier, B. Humm, W. Stille
DOI: 10.1145/2660517.2660522
Faceted search allows navigating through large collections along different dimensions in order to find relevant objects efficiently. Traditional faceted search systems often suffer from poor usability; furthermore, their facets are often static and independent of the search result set. In this paper, we present a dynamic semantic topical faceting approach. It uses a pie menu called the topic pie that visualises facets and supports user interaction. Depending on the search query, the topic pie presents a set of topics and major topics that help the user drill down the search result set to relevant objects efficiently, as well as browse exploratively through the collection. The underlying algorithm optimises the conflicting goals of relevance and diversity while avoiding information overload, and it performs well on large data sets. As our use case, we chose literature research in scientific libraries. An evaluation shows major advantages of our approach compared to state-of-the-art faceted search techniques in today's library portals.
{"title":"Semantically faceted navigation with topic pies","authors":"Tilman Deuschel, Christian Greppmeier, B. Humm, W. Stille","doi":"10.1145/2660517.2660522","DOIUrl":"https://doi.org/10.1145/2660517.2660522","url":null,"abstract":"Faceted search allows navigating through large collections along different dimensions in order to find relevant objects efficiently. Traditional faceted search systems often suffer from a lack of usability; furthermore facets are often static and independent from the search result set. In this paper, we present a dynamic semantic topical faceting approach. It uses a pie menu called topic pie that allows visualisation of facets and user interaction. Depending on the search query, the topic pie presents a set of topics and major topics which help the user to drill down the search result set to relevant objects efficiently as well as to browse exploratively through the collection. The underlying algorithm optimises the conflicting goals relevance and diversity while avoiding information overload. It reveals a good performance on large data sets. As our use-case, we chose literature research in scientific libraries. An evaluation shows major advantages of our approach compared to state-of-the-art faceted search techniques in nowadays library portals.","PeriodicalId":344435,"journal":{"name":"Joint Conference on Lexical and Computational Semantics","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117113491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Towards an open question answering architecture
Edgard Marx, Ricardo Usbeck, A. N. Ngomo, Konrad Höffner, Jens Lehmann, S. Auer
DOI: 10.1145/2660517.2660519
Billions of facts pertaining to a multitude of domains are now available on the Web as RDF data. However, accessing this data is still a difficult endeavour for non-expert users. To improve access to this data, approaches that impose minimal hurdles on their users are required. Although many question answering systems over Linked Data have been proposed, retrieving the desired data remains significantly challenging. In addition, developing and evaluating question answering systems remains a very complex task. To overcome these obstacles, we present a modular and extensible open-source question answering framework. We demonstrate how the framework can be used by integrating two state-of-the-art question answering systems. Our evaluation shows that better overall results can be achieved by combining the systems than by using either stand-alone version individually.
{"title":"Towards an open question answering architecture","authors":"Edgard Marx, Ricardo Usbeck, A. N. Ngomo, Konrad Höffner, Jens Lehmann, S. Auer","doi":"10.1145/2660517.2660519","DOIUrl":"https://doi.org/10.1145/2660517.2660519","url":null,"abstract":"Billions of facts pertaining to a multitude of domains are now available on the Web as RDF data. However, accessing this data is still a difficult endeavour for non-expert users. In order to meliorate the access to this data, approaches imposing minimal hurdles to their users are required. Although many question answering systems over Linked Data have being proposed, retrieving the desired data is still significantly challenging. In addition, developing and evaluating question answering systems remains a very complex task. To overcome these obstacles, we present a modular and extensible open-source question answering framework. We demonstrate how the framework can be used by integrating two state-of-the-art question answering systems. As a result our evaluation shows that overall better results can be achieved by the use of combination rather than individual stand-alone versions.","PeriodicalId":344435,"journal":{"name":"Joint Conference on Lexical and Computational Semantics","volume":"2014 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132080746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Toward matching the relation instantiation from DBpedia ontology to Wikipedia text: fusing FrameNet to Korean
YoungGyun Hahm, Youngsik Kim, Yousung Won, Jongsung Woo, Jiwoo Seo, Jiseong Kim, Seong-Bae Park, D. Hwang, Key-Sun Choi
DOI: 10.1145/2660517.2660534
Nowadays, there is much ongoing research on constructing knowledge bases from unstructured data. This process requires an ontology with enough properties to cover the various attributes of knowledge elements. As a huge encyclopedia, Wikipedia is a typical unstructured corpus of knowledge. DBpedia, a structured knowledge base constructed from Wikipedia, is based on the DBpedia ontology, which was created to represent the knowledge in Wikipedia. However, the DBpedia ontology is driven by Wikipedia infoboxes: although it is well suited to representing the essential knowledge of Wikipedia, it does not cover all of the knowledge in Wikipedia text. To overcome this problem, resources representing the semantics or relations of words, such as WordNet and FrameNet, are considered useful. In this paper we determine whether the DBpedia ontology covers a sufficient amount of the natural-language knowledge written in Wikipedia. We focus mainly on the Korean Wikipedia and calculate its coverage rate with two methods: by the DBpedia ontology and by FrameNet frames. To do this, we extracted sentences containing extractable knowledge from Wikipedia text, and extracted natural language predicates by part-of-speech tagging. We generated Korean lexicons for DBpedia ontology properties and frame indexes, and used these lexicons to measure the Korean Wikipedia coverage ratio of the DBpedia ontology and of the frames. By our measurements, FrameNet frames cover 73.85% of the Korean Wikipedia sentences, which is a sufficient portion of the Wikipedia text. We finally discuss the limitations of DBpedia and FrameNet briefly, and outline how knowledge bases could be constructed based on these experimental results.
{"title":"Toward matching the relation instantiation from DBpedia ontology to Wikipedia text: fusing FrameNet to Korean","authors":"YoungGyun Hahm, Youngsik Kim, Yousung Won, Jongsung Woo, Jiwoo Seo, Jiseong Kim, Seong-Bae Park, D. Hwang, Key-Sun Choi","doi":"10.1145/2660517.2660534","DOIUrl":"https://doi.org/10.1145/2660517.2660534","url":null,"abstract":"Nowadays, there are many ongoing researches to construct knowledge bases from unstructured data. This process requires an ontology that includes enough properties to cover the various attributes of knowledge elements. As a huge encyclopedia, Wikipedia is a typical unstructured corpora of knowledge. DBpedia, a structured knowledge base constructed from Wikipedia, is based on DBpedia ontology which was created to represent knowledge in Wikipedia well. However, DBpedia ontology is a Wikipedia-Infobox-driven ontology. This means that although it is suitable to represent essential knowledge of Wikipedia, it does not cover all of the knowledge in Wikipedia text. In overcoming this problem, resources representing semantics or relations of words such as WordNet and FrameNet are considered useful. In this paper we determined whether DBpedia ontology is enough to cover a sufficient amount of natural language written knowledge in Wikipedia. We mainly focused on the Korean Wikipedia, and calculated the Korean Wikipedia coverage rate with two methods, by the DBpedia ontology and by FrameNet frames. To do this, we extracted sentences with extractable knowledge from Wikipedia text, and also extracted natural language predicates by Part-Of-Speech tagging. We generated Korean lexicons for DBpedia ontology properties and frame indexes, and used these lexicons to measure the Korean Wikipedia coverage ratio of the DBpedia ontology and frames. By our measurements, FrameNet frames cover 73.85% of the Korean Wikipedia sentences, which is a sufficient portion of Wikipedia text. We finally show the limitations of DBpedia and FrameNet briefly, and propose the outlook of constructing knowledge bases based on the experiment results.","PeriodicalId":344435,"journal":{"name":"Joint Conference on Lexical and Computational Semantics","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122802403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Shape expressions: an RDF validation and transformation language
E. Prud'hommeaux, Jose Emilio Labra Gayo, H. Solbrig
DOI: 10.1145/2660517.2660523
RDF is a graph-based data model which is widely used for semantic web and linked data applications. In this paper we describe a Shape Expression definition language which enables RDF validation through the declaration of constraints on the RDF model. Shape Expressions can be used to validate RDF data, communicate expected graph patterns for interfaces, and generate user interface forms. We describe the syntax and the formal semantics of Shape Expressions using inference rules. Shape Expressions can be seen as a domain-specific language for defining shapes of RDF graphs based on regular expressions.

Attached to Shape Expressions are semantic actions, which provide an extension point for validation or for arbitrary code execution, such as those in parser generators. Using semantic actions, it is possible to augment the validation expressiveness of Shape Expressions and to transform RDF graphs easily.

We have implemented several validation tools that check whether an RDF graph matches a Shape Expressions schema and infer the corresponding shapes. We have also implemented two extensions, called GenX and GenJ, that leverage the predictability of the graph traversal to create ordered, closed-content XML/JSON documents, providing a simple, declarative mapping from RDF data to XML and JSON documents.
{"title":"Shape expressions: an RDF validation and transformation language","authors":"E. Prud'hommeaux, Jose Emilio Labra Gayo, H. Solbrig","doi":"10.1145/2660517.2660523","DOIUrl":"https://doi.org/10.1145/2660517.2660523","url":null,"abstract":"RDF is a graph based data model which is widely used for semantic web and linked data applications. In this paper we describe a Shape Expression definition language which enables RDF validation through the declaration of constraints on the RDF model. Shape Expressions can be used to validate RDF data, communicate expected graph patterns for interfaces and generate user interface forms. In this paper we describe the syntax and the formal semantics of Shape Expressions using inference rules. Shape Expressions can be seen as domain specific language to define Shapes of RDF graphs based on regular expressions.\u0000 Attached to Shape Expressions are semantic actions which provide an extension point for validation or for arbitrary code execution such as those in parser generators. Using semantic actions, it is possible to augment the validation expressiveness of Shape Expressions and to transform RDF graphs in a easy way.\u0000 We have implemented several validation tools that check if an RDF graph matches against a Shape Expressions schema and infer the corresponding Shapes. We have also implemented two extensions, called GenX and GenJ that leverage the predictability of the graph traversal and create ordered, closed content, XML/Json documents, providing a simple, declarative mapping from RDF data to XML and Json documents.","PeriodicalId":344435,"journal":{"name":"Joint Conference on Lexical and Computational Semantics","volume":"489 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123158277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A service-oriented search framework for full text, geospatial and semantic search
A. Both, A. N. Ngomo, Ricardo Usbeck, Denis Lukovnikov, Christian Lemke, Maximilian Speicher
DOI: 10.1145/2660517.2660528
Over the last decade, search engines have steadily grown in importance. An increasing amount of knowledge is exposed and connected within the Linked Open Data Cloud, which raises users' expectations of being able to search for any information that is directly or indirectly contained in it. However, diverse data types require tailored search functionality, such as semantic, geospatial and full-text search. Hence, a single data management system will not provide the required functionality at the expected level. In this paper, we describe search services that provide specific search functionality via a generalized interface inspired by RDF. In addition, we introduce an application layer on top of these services that enables querying them in a unified way. This allows for the implementation of a distributed search that identifies the optimal search service for each query and subquery. This is achieved by connecting powerful tools such as Openlink Virtuoso, ElasticSearch and PostGIS within a single framework.
{"title":"A service-oriented search framework for full text, geospatial and semantic search","authors":"A. Both, A. N. Ngomo, Ricardo Usbeck, Denis Lukovnikov, Christian Lemke, Maximilian Speicher","doi":"10.1145/2660517.2660528","DOIUrl":"https://doi.org/10.1145/2660517.2660528","url":null,"abstract":"Over the last decade, a growing importance of search engines could be observed. An increasing amount of knowledge is exposed and connected within the Linked Open Data Cloud, which raises users' expectations to be able to search for any information that is directly or indirectly contained. However, diverse data types require tailored search functionalities---such as semantic, geospatial and full text search.\u0000 Hence, using only one data management system will not provide the required functionality at the expected level. In this paper, we will describe search services that provide specific search functionality via a generalized interface inspired by RDF. In addition, we introduce an application layer on top of these services that enables to query them in a unified way. This allows for the implementation of a distributed search that leverages the identification of the optimal search service for each query and subquery. This is achieved by connecting powerful tools like Openlink Virtuoso, ElasticSearch and PostGIS within a single framework.","PeriodicalId":344435,"journal":{"name":"Joint Conference on Lexical and Computational Semantics","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121383517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

LD viewer - linked data presentation framework
Denis Lukovnikov, Claus Stadler, Jens Lehmann
DOI: 10.1145/2660517.2660539

With the growing interest in publishing data according to the Linked Data principles, it becomes increasingly important to provide intuitive tools for users to view and interact with those resources. The characteristics of Linked Data pose several challenges for user-friendly presentation of information. In this work, we present the LD Viewer, a customizable framework that can easily be adapted to different datasets while addressing these presentation challenges. With this framework, we aim to give dataset maintainers an easy means of exposing their RDF resources, and to make the interface intuitive and engaging for both expert and lay users.
{"title":"LD viewer - linked data presentation framework","authors":"Denis Lukovnikov, Claus Stadler, Jens Lehmann","doi":"10.1145/2660517.2660539","DOIUrl":"https://doi.org/10.1145/2660517.2660539","url":null,"abstract":"With the growing interest in publishing data according to the Linked Data principles, it becomes more important to provide intuitive tools for users to view and interact with those resources. The characteristics of Linked Data pose several challenges for user-friendly presentation of information. In this work, we present the LD Viewer as a customizable framework that can easily be fitted for different datasets while addressing Linked Data presentation challenges. With this framework, we aim to provide dataset maintainers with easy means to expose their RDF resources. Moreover, we aim to make the interface intuitive and engaging for both expert users and lay users.","PeriodicalId":344435,"journal":{"name":"Joint Conference on Lexical and Computational Semantics","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117045341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Expanded citations and projections of concepts
M. Skulimowski
DOI: 10.1145/2660517.2660537

In a recent paper, we proposed a new kind of citation, called the expanded citation, which links scientific papers to the concepts they contain. Expanded citations are represented in RDF and can be processed by machines. In this paper, we use expanded citations to introduce projections of concepts, which can be useful when searching for publications. The analysis of these projections and their evolution over time yields knowledge about the role and significance of a concept in a given domain.
{"title":"Expanded citations and projections of concepts","authors":"M. Skulimowski","doi":"10.1145/2660517.2660537","DOIUrl":"https://doi.org/10.1145/2660517.2660537","url":null,"abstract":"In our recent paper, we proposed a new kind of citations, called the expanded citations, which link scientific papers and concepts from them. The expanded citations are represented in RDF and can be processed by machines. In this paper, we use the expanded citations to introduce projections of concepts which can be useful in searching for publications. The analysis of the projections and their time evolution gives a knowledge about the role and the significance of the concept in a given domain.","PeriodicalId":344435,"journal":{"name":"Joint Conference on Lexical and Computational Semantics","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127933819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Information content based ranking metric for linked open vocabularies
G. Atemezing, Raphael Troncy
DOI: 10.1145/2660517.2660533

It is widely accepted that controlling metadata makes it easier to publish high-quality data on the web. Metadata, in the context of Linked Data, refers to the vocabularies and ontologies used for describing data. With more and more data published on the web, reusing controlled taxonomies and vocabularies is increasingly a necessity. Catalogues of vocabularies are generally the starting point for finding vocabularies from search terms. Some recent studies recommend reusing terms from "popular" vocabularies [4]. However, there is no agreement yet on what makes a vocabulary popular, since it depends on diverse criteria such as the number of properties, the number of datasets using part or all of the vocabulary, etc. In this paper, we propose a method for ranking vocabularies based on an information content metric which combines three features: (i) the datasets using the vocabulary, (ii) the outlinks from the vocabulary, and (iii) the inlinks to the vocabulary. We applied this method to 366 vocabularies described in the LOV catalogue. The results are then compared with other catalogues that provide alternative rankings.
{"title":"Information content based ranking metric for linked open vocabularies","authors":"G. Atemezing, Raphael Troncy","doi":"10.1145/2660517.2660533","DOIUrl":"https://doi.org/10.1145/2660517.2660533","url":null,"abstract":"It is widely accepted that by controlling metadata, it is easier to publish high quality data on the web. Metadata, in the context of Linked Data, refers to vocabularies and ontologies used for describing data. With more and more data published on the web, the need for reusing controlled taxonomies and vocabularies is becoming more and more a necessity. Catalogues of vocabularies are generally a starting point to search for vocabularies based on search terms. Some recent studies recommend that it is better to reuse terms from \"popular\" vocabularies [4]. However, there is not yet an agreement on what makes a popular vocabulary since it depends on diverse criteria such as the number of properties, the number of datasets using part or the whole vocabulary, etc. In this paper, we propose a method for ranking vocabularies based on an information content metric which combines three features: (i) the datasets using the vocabulary, (ii) the outlinks from the vocabulary and (iii) the inlinks to the vocabulary. We applied this method to 366 vocabularies described in the LOV catalogue. The results are then compared with other catalogues which provide alternative rankings.","PeriodicalId":344435,"journal":{"name":"Joint Conference on Lexical and Computational Semantics","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128058089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The semantic model editor: efficient data modeling and integration based on OWL ontologies
A. Grünwald, D. Winkler, M. Sabou, S. Biffl
DOI: 10.1145/2660517.2660526

Semantic Web and Linked Data are widely considered effective and powerful technologies for integrating heterogeneous data models and data sources. However, there is still a gap between promising research results and prototypes and their practical acceptance in industry. Among our industry partners we observed a lack of tool support that (a) enables efficient modeling of OWL ontologies and (b) supports querying and visualization of query results for non-experts as well. Selecting and applying existing semantic programming libraries and editors is challenging, and this hinders software engineers, who are familiar with modeling approaches such as UML, in applying semantic concepts in their solutions. In this paper we introduce the Semantic Model Editor (SMEd), which supports engineers who are not experts in semantic technologies in designing ontologies using the well-known UML class diagram notation. SMEd, a web-based application, enables an efficient integration of heterogeneous data models, i.e., the designing, populating, and querying of ontologies. First results of a pilot application at industry partners showed that SMEd was found useful in an industrial context, supported the derivation of reusable artifacts, and significantly accelerated the development and configuration of data integration scenarios.
{"title":"The semantic model editor: efficient data modeling and integration based on OWL ontologies","authors":"A. Grünwald, D. Winkler, M. Sabou, S. Biffl","doi":"10.1145/2660517.2660526","DOIUrl":"https://doi.org/10.1145/2660517.2660526","url":null,"abstract":"Semantic Web and Linked Data are widely considered as effective and powerful technologies for integrating heterogeneous data models and data sources. However, there is still a gap between promising research results and prototypes and their practical acceptance in industry contexts. In context of our industry partners we observed a lack of tool-support that (a) enables efficient modeling of OWL ontologies and (b) supports querying and visualization of query results also for non-experts. The selection and application of existing semantic programming libraries and editors is challenging and hinders software engineers, who are familiar with modeling approaches such as UML, in applying semantic concepts in their solutions. In this paper we introduce the Semantic Model Editor (SMEd) to support engineers who are non-experts in semantic technologies in designing ontologies based on well-known UML class diagram notations. SMEd -- a Web-based application -- enables an efficient integration of heterogeneous data models, i.e., designing, populating, and querying of ontologies. First results of a pilot application at industry partners showed that SMEd was found useful in industry context, leveraged the derivation of reusable artifacts, and significantly accelerated development and configuration of data integration scenarios.","PeriodicalId":344435,"journal":{"name":"Joint Conference on Lexical and Computational Semantics","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134641218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}